-
09-08-2011, 10:14 PM #1 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
16TB limit GPT partition with EXT4?
trying to partition a 16x 3TB RAID-10 (24TB array volume) with the EXT4 file system that comes with CentOS 6.0 here. is it true that even with GPT, EXT4 has a 16TB limit?
a quick googling suggests that the EXT4 userspace tools still track block numbers in 32-bit variables, so even on x86_64 CentOS 6/RHEL 6 distros the file system is limited to 2^32 blocks, hence the "16TB limit" (with the default 4KiB block size).
can any linux OS expert here confirm this? is there any way around this limit?
Last edited by cwl@apaqdigital; 09-08-2011 at 10:20 PM.
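For what it's worth, a quick back-of-the-envelope check of where that figure comes from; this is just arithmetic, assuming ext4's default 4 KiB block size and 32-bit block numbers in the tools:

```shell
# ext4 (without a 64-bit on-disk format, and with the userspace tools of
# the time) addresses blocks with 32-bit numbers. With the default 4 KiB
# block size, the largest addressable volume is therefore:
block_size=4096                       # bytes per block
max_blocks=$((2**32))                 # 32-bit block numbers
max_bytes=$((block_size * max_blocks))
echo "$((max_bytes / 1024**4)) TiB"   # prints: 16 TiB
```

So the 16TB ceiling is a block-addressing limit, not anything to do with GPT; GPT only removes the 2TB MBR partition-table limit.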
-
09-08-2011, 10:42 PM #2 Web Hosting Master
- Join Date
- Mar 2010
- Posts
- 4,533
Although I can't confirm it... from my understanding people cite two different limits: a 16TB maximum file size for EXT4, but a 1 exabyte maximum volume size in the on-disk format, with a practical 16TB limit on the partitions the current tools can create.
Sorry I can't be of much help, but wouldn't XFS be better suited for such a large partition?
-
09-08-2011, 10:48 PM #3 Web Hosting Evangelist
- Join Date
- Apr 2010
- Posts
- 493
That's correct; they are planning on raising it, but that's the current limit. XFS is a nice, mature FS that deals with large partitions well; just do not use it on a 32-bit OS.
-
09-08-2011, 11:25 PM #4 Web Hosting Industry Expert
- Join Date
- Dec 2007
- Location
- Indiana, USA
- Posts
- 19,206
XFS is good for large partitions that handle large files, but not so much for large partitions with a LOT of small files. It would largely depend on what you're going to do with the storage.
-
09-09-2011, 01:02 AM #6 Web Hosting Master
- Join Date
- Oct 2007
- Location
- United States
- Posts
- 1,182
ext4 is limited to 16TB in partition size because of e2fsprogs: the userspace tools can't create or check anything larger. I hear there is a newer set of tools in the works (sometimes referred to as e4fsprogs), but the stable release still won't allow anything greater than 16TB. There is supposedly a workaround, but it would mean using a beta release of e2fsprogs. I would only use XFS if you don't need to shrink the volume later on (XFS can grow, but not shrink).
-
09-09-2011, 07:53 AM #7 Web Hosting Master
- Join Date
- Mar 2008
- Location
- Los Angeles, CA
- Posts
- 555
As others have said, there is no fsck (and other important tooling) for 64-bit ext4, which is needed for >16TiB support.
I am very aware of this issue. The *only* two native/mature Linux file systems that support >16TiB are JFS and XFS.
I don't consider btrfs, as it's not considered stable, nor ZFS, as it's not really native Linux.
I personally use JFS on my system:
Code:
root@dekabutsu: 04:37 AM :~# df -H
Filesystem      Size  Used Avail Use% Mounted on
rootfs          129G   90G   40G  70% /
/dev/root       129G   90G   40G  70% /
udev             11M  238k   11M   3% /dev
/dev/sda1       129G   78G   52G  61% /winxp
/dev/sdd1        36T   29T  7.5T  80% /data
/dev/sde1        84T   23G   84T   1% /data2
tmpfs            13G     0   13G   0% /dev/shm
root@dekabutsu: 04:37 AM :~#
Issues I have had with XFS:
* Has a reputation for losing data during a power interruption (and I have personally had this happen).
* I have heard its fsck requires 1 GB of RAM for each TB of data.
* Lots of bugs back when I used it (kernel panics when too fragmented, when accessing a specific file, etc.).
* Relatively slow fsck times (at least in my experience).
Now, some of these issues are from experience at least 4 years ago, and I am sure it's better now in multiple respects, but needless to say I don't really trust XFS with my data anymore.
The reasons I use JFS:
* Hardly any memory usage for fsck.
* fsck is very fast (much faster than ext3): it takes around 12-13 minutes for my 36 TB volume. This mainly depends on the inode usage (number of files and directories) on the file system; in my case that is 6 million, as I store very large media files.
* Very fast. Just like XFS, it gives near raw disk I/O performance; XFS might be just slightly faster on this front.
* Low CPU usage: uses less CPU than any other file system AFAIK.
* I have been running JFS for many years now on very large file systems, and I have yet to lose data when the file system was not unmounted cleanly due to power loss, kernel panic, or anything else.
So your only real choices are pretty much between those two (IMHO).
-
09-12-2011, 12:58 PM #8 Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
my client is convinced to use JFS for the single 24TB volume. however, JFS is not native to CentOS/RHEL 6, and googling isn't yielding much info about how to install JFS on CentOS 6...
array configuration:
16x Seagate 3TB Constellation ES.2, RAID-10 on a 3ware 9750-4i with SAS expander backplane
partition scheme:
x86_64 CentOS 6
1GB (/dev/sda1) carved from the 24TB array in the 3ware BIOS for "/boot" (EXT4)
~24TB (/dev/sdb), already verified OK by 3ware
usual small partitions for swap, root, /tmp (all EXT4)
the rest to be one big "/home" on JFS
will you OS experts out there please post a brief guide on how to do this?
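Not an expert guide, but here is a rough sketch of the steps for the big volume only, assuming JFS kernel support and jfsutils are already installed; device names (/dev/sdb) and sizes are illustrative, so save it as a script and review before running anything against the real array (the small EXT4 partitions would be set up as usual by the installer):

```shell
# Sketch: write the partition/format steps to a script and syntax-check
# it. Nothing here touches a real disk until you run the script yourself.
cat > /tmp/make-home-jfs.sh <<'EOF'
#!/bin/bash
set -e
# GPT label is required for a >2TB disk
parted -s /dev/sdb mklabel gpt
# one partition spanning the whole array for /home
parted -s /dev/sdb mkpart primary 1MiB 100%
# create the JFS file system (requires jfsutils)
mkfs.jfs -q /dev/sdb1
# mount it and persist it in fstab (fsck pass 2 at boot)
mkdir -p /home
mount -t jfs /dev/sdb1 /home
echo '/dev/sdb1 /home jfs defaults 1 2' >> /etc/fstab
EOF
bash -n /tmp/make-home-jfs.sh && echo "script OK"
```

The real sticking point on CentOS 6 is the first assumption: the stock kernel doesn't ship the jfs module, which is what the rest of this thread is about.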
-
09-12-2011, 01:03 PM #9 Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
Convince your client to use an OS with (real) JFS support like Debian or Ubuntu. With ancient CentOS you'll either be rolling the kernel and tools from scratch or relying on a 'testing/plus' repo.
It's old but see here:
http://lists.centos.org/pipermail/ce...er/065822.html
CentOS support for JFS is beta/unsupported/buggy at best.
-
09-12-2011, 01:15 PM #10 Web Hosting Master
- Join Date
- Aug 2009
- Location
- Orlando, FL
- Posts
- 1,063
How about GFS or OCFS? Not sure what the limits are there, but they are considered high performance file systems.
-
09-12-2011, 02:16 PM #11 Web Hosting Evangelist
- Join Date
- Apr 2010
- Posts
- 493
GFS2 is supported up to at least 25TB, with 8EB as its theoretical max size. There is some additional overhead running a clustered FS, for sure. XFS under Linux has come a long way over the years. In any event, hacking Red Hat/CentOS to get JFS working will probably be a long-term nightmare.
-
09-13-2011, 09:54 AM #12 Web Hosting Master
- Join Date
- Mar 2008
- Location
- Los Angeles, CA
- Posts
- 555
CentOS has really old kernels that they just backport **** to (at least this used to be the case). I would want at least a 2.6.24 kernel for JFS. Also, you would need the newest jfsutils to properly create file systems over 32 TiB:
http://jfs.sourceforge.net/
I think if the fsck.jfs link is created (which make install should do), then the file system should fsck properly at boot even on a distro that wasn't specifically designed around JFS, but maybe not if they are being lame?
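To illustrate the link being described (using a throwaway directory instead of the real /sbin; the exact install paths on your box are an assumption): fsck(8) dispatches to fsck.<fstype>, so boot-time checking of a JFS volume needs an fsck.jfs on the PATH, which jfsutils normally provides as a link to jfs_fsck.

```shell
# Demo in a temp dir rather than /sbin, so nothing system-wide changes.
# Done by hand, the link "make install" creates is just:
mkdir -p /tmp/jfs-demo/sbin
ln -sf /sbin/jfs_fsck /tmp/jfs-demo/sbin/fsck.jfs
readlink /tmp/jfs-demo/sbin/fsck.jfs   # prints: /sbin/jfs_fsck
```

With that link in place and a non-zero fs_passno in fstab, any distro's boot scripts should check the volume, whether or not they "know" about JFS.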