Although I can't confirm it... From my understanding there are two different limits people cite for EXT4: a 16TB file size limit, and a 1 exabyte volume size limit on paper but a 16TB partition limit in practice.
Sorry I can't be of much help, but wouldn't XFS be better suited for such a large partition?
ext4 is limited to 16TB in partition size because of e2fsprogs. I hear there is a new e4fsprogs, but it still won't allow anything greater than 16TB for the same reason. There is supposedly a workaround, but it requires the beta release of e2fsprogs. XFS can be grown but never shrunk, so I would only use XFS if you don't need to shrink the volume later on.
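If you want to experiment anyway, this is roughly what it looks like once you have a 64-bit-capable e2fsprogs (the 64bit feature landed in 1.42; the device name here is just a placeholder):

# Print the installed e2fsprogs version; 64-bit support
# (needed for >16TiB volumes) arrived in 1.42.
mke2fs -V

# Create an ext4 file-system with the 64bit feature enabled
# (placeholder device; needs e2fsprogs 1.42 or newer).
mkfs.ext4 -O 64bit /dev/sdb1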
*Has a reputation for losing data during a power interruption (and I personally have had this happen).
*I have heard it requires roughly 1 GB of RAM per TB of data for the fsck.
*Lots of bugs back when I used it (kernel panics when too fragmented, when accessing a specific file, etc.).
*Relatively slow fsck times (at least from what I have experienced).
Some of these issues are from experience at least 4 years old, and I am sure it's better now in multiple respects, but needless to say I don't really trust XFS with my data anymore.
The reasons I use JFS:
*Hardly any memory usage for fsck.
*fsck is very fast (much faster than ext3). It takes around 12-13 minutes to fsck my 36 TB volume. This depends mainly on inode usage, i.e. the number of files/directories on the file-system; in my case that is about 6 million, since I store very large media files. (The read-only check shown after this list is a safe way to measure your own.)
*Very fast. Just like XFS, it gives near-raw disk I/O performance. XFS might be just slightly faster on this front.
*Low CPU usage. It uses less CPU than any other file-system AFAIK.
*I have been running JFS for many years now on very large file-systems. I have yet to lose data when the file-system was not unmounted cleanly due to power loss, kernel panic, or anything else.
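If anyone wants to reproduce the fsck numbers above, a read-only check is a safe way to do it (device name is just a placeholder):

# Read-only JFS check: reports problems but repairs nothing,
# so it is safe on a healthy volume. The time prefix shows
# how long the full pass takes.
time jfs_fsck -n /dev/sdb1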
So your only real choices are pretty much between those two (IMHO).
As others have said, there is no fsck (and other important tooling) for 64-bit ext4, which is needed for >16TiB support.
I am very aware of this issue. The *only* two native/mature Linux file-systems that support >16TiB are JFS and XFS.
I don't consider btrfs, as it's not considered stable, nor ZFS, as it's not really native to Linux.
I personally use JFS on my system:...
My client is convinced to use JFS for the single 24TB volume. However, JFS is not native to CentOS/RHEL6, and googling isn't yielding much info about how to install JFS on CentOS6...
16x Seagate 3TB Constellation ES.2 RAID-10 on 3ware 9750-4i with SAS expansion backplane
x86_64 CentOS 6
1GB (/dev/sda1) carved from 24TB array in 3ware BIOS for "/boot" (EXT4)
~24TB (/dev/sdb) is already verified by 3ware to be OK
usual small partitions for swap, root, /tmp (all EXT4)
the rest to be one big "/home" on JFS
Will you OS experts out there please post a brief guide on how to do this?
GFS2 is supported through at least 25TB, with 8EB as its max size. There is some additional overhead running a clustered FS, for sure. XFS under Linux has come a long way over the years. In any event, hacking Red Hat/CentOS to get JFS working will probably be a long-term nightmare.
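For what it's worth, on a single box GFS2 can skip the cluster locking entirely; a rough sketch (the device is a placeholder, and gfs2-utils is the CentOS 6 package that provides mkfs.gfs2):

# Install the GFS2 userspace tools.
yum install gfs2-utils

# lock_nolock avoids the cluster stack on a single node;
# -j 1 creates one journal for the one node.
mkfs.gfs2 -p lock_nolock -j 1 /dev/sdb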
CentOS has really old kernels that they just backport **** to (at least this is what used to be the case). I would want at least a 2.6.24 kernel for JFS. Also, you would need the newest jfsutils to properly create anything over 32 TiB:
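Roughly, building it from source looks like this (1.1.15 is just an example version; grab whatever the newest release is from jfs.sourceforge.net):

# Build the latest jfsutils from source. --sbindir=/sbin puts
# the tools where the boot scripts expect to find fsck.jfs.
tar xzf jfsutils-1.1.15.tar.gz
cd jfsutils-1.1.15
./configure --sbindir=/sbin
make
make install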
I think if the fsck.jfs link is created (which make install should do), the file-system should fsck properly at boot even on a distro that wasn't specifically designed around JFS, but maybe not if they are being lame?
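To spell the rest out for the OP, the remaining steps would look something like this. The big assumption is a kernel that actually builds the jfs module: stock CentOS 6 kernels don't, so you would first need something like a mainline kernel from ELRepo (check its config for CONFIG_JFS_FS before committing to it):

# Confirm the running kernel can mount JFS at all.
modprobe jfs
grep jfs /proc/filesystems

# Make the file-system on the whole 24TB device and mount it
# (device name taken from the OP's layout above).
mkfs.jfs /dev/sdb
mount -t jfs /dev/sdb /home

# fstab entry so it mounts and fscks (pass 2) at boot.
echo '/dev/sdb  /home  jfs  defaults  1 2' >> /etc/fstab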