How much space is used per inode on the MDT in a production installation?
What is the recommended size of the MDT?
I'm presently at about 10 KB/inode, which seems too high compared with ldiskfs.
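For anyone who wants to reproduce the per-inode figure, a rough way to measure it is to compare dataset space against inode count on the mounted MDT. This is only a sketch; the pool/dataset name `mdtpool/mdt0` and mount point `/mnt/mdt` are placeholders for your actual configuration.

```shell
# Space consumed by the MDT dataset (substitute your pool/dataset name):
zfs list -o name,used,avail mdtpool/mdt0

# Inodes in use on the mounted MDT (substitute your mount point):
df -i /mnt/mdt

# Bytes-per-inode is roughly USED (from zfs list) / IUsed (from df -i).
```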
I ran out of inodes on a ZFS MDT in my tests and ZFS got "locked": the MDT zpool had used up all of its space.
We have the zpool created as a stripe of mirrors (mirror s0 s1 mirror s3 s3). Total size is ~940 GB; it got stuck at about 97 million files.
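For reference, a stripe-of-mirrors (RAID10-style) pool like the one above would be created along these lines. Pool and device names here are illustrative placeholders, not the actual configuration.

```shell
# Create a pool striped across two 2-way mirror vdevs
# (s0..s3 stand in for the real disk device names):
zpool create mdtpool mirror s0 s1 mirror s2 s3

# Verify both mirror vdevs are ONLINE:
zpool status mdtpool
```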
ZFS v0.6.4.1, default 128 KB recordsize. Fragmentation went to 83% when things locked up at 98% capacity; now I'm at 62% fragmentation after I removed some files (down to 97% space capacity).
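The capacity and fragmentation figures above can be read straight from the pool (the fragmentation column requires ZFS >= 0.6.4, which matches the version in use; `mdtpool` is again a placeholder name):

```shell
# Show size, allocation, capacity, and fragmentation for the MDT pool:
zpool list -o name,size,alloc,free,cap,frag mdtpool
```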
Shall we use a smaller ZFS recordsize on the MDT, say 8 KB or 16 KB? If an inode is ~10 KB and the ZFS record is 128 KB, we are dropping caches and reading data we do not need.
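If the answer is yes, changing the recordsize is a one-line property change on the MDT dataset. Note that it only applies to newly written files; existing data keeps the recordsize it was written with. The dataset name below is an example.

```shell
# Set a smaller recordsize on the MDT dataset (affects new writes only):
zfs set recordsize=16K mdtpool/mdt0

# Confirm the property took effect:
zfs get recordsize mdtpool/mdt0
```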
On May 5, 2015, at 10:43 AM, Stearman, Marc <stearman2@llnl.gov> wrote:
> We are using the HGST S842 line of 2.5" SSDs. We have them configured as a raid10 setup in ZFS. We started with SAS drives and found them to be too slow, and were bottlenecked on the drives, so we upgraded to SSDs. The nice thing with ZFS is that it's not just a two-device mirror. You can do an n-way mirror, so we added the SSDs to each of the vdevs with the SAS drives, let them resilver online, and then removed the SAS drives. Users did not have to experience any downtime.
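The n-way-mirror migration described above can be sketched as follows (pool and device names are placeholders, one mirror vdev shown):

```shell
# Attach an SSD to each existing SAS mirror member, turning the
# 2-way mirror into a 3-way mirror; resilvering runs online:
zpool attach pool sas0 ssd0
zpool attach pool sas1 ssd1

# Once "zpool status" shows the resilver has completed,
# detach the SAS drives, leaving an all-SSD mirror:
zpool status pool
zpool detach pool sas0
zpool detach pool sas1
```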
> We have about 100PB of Lustre spread over 10 file systems. All of them are using SSDs. We have a couple using OCZ SSDs, but I'm not a fan of their RMA policies. That has changed since they were bought by Toshiba, but I still prefer the HGST drives.
> We configure them as 10 mirror pairs (20 drives total), spread across two JBODs so we can lose an entire JBOD and still have the pool up.
> D. Marc Stearman
> Lustre Operations Lead
> On May 4, 2015, at 11:18 AM, Kevin Abbey <***@rutgers.edu> wrote:
>> For a single node OSS I'm planning to use a combined MGS/MDS. Can anyone recommend an enterprise ssd designed for this workload? I'd like to create a raid10 with 4x ssd using zfs as the backing fs.
>> Are there any published/documented systems using zfs in raid 10 using ssd?
>> Kevin Abbey
>> Systems Administrator
>> Rutgers University
>> lustre-discuss mailing list