  1. #1
    Join Date
    Feb 2014
    Posts
    168

    RAID 5 or RAID 6 for 24-HD Server?

    I'm mainly focused on read performance. Write performance is irrelevant.

    The server has 24x 4TB HDs (http://imgur.com/a/kpzdc) using an Adaptec 71605Q RAID card with an SSD cache pool on a 2x 100GB SSD RAID 0 array, which also hosts the OS.

    I will have lots of large files (300MB avg.) for download while the server streams MP4s to users. Existing storage usage is 50TB.

    My question: RAID 5 or RAID 6?

    Note that this is 1 out of 2 servers I have like this, and they're exact mirrors.
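
    For reference, here is the straight parity math for the two options I'm weighing. A rough sketch of my own, assuming one 24-drive group and ignoring controller overhead:

    Code:
    # Straight parity math for 24x 4TB drives in RAID 5 vs RAID 6.
    # Assumes a single 24-drive group; real controllers reserve a bit of
    # extra space, so treat these as ballpark figures.

    DRIVES = 24
    DRIVE_TB = 4

    def usable_tb(drives, drive_tb, parity_drives):
        """Usable capacity = (number of drives - parity drives) * drive size."""
        return (drives - parity_drives) * drive_tb

    for level, parity in (("RAID 5", 1), ("RAID 6", 2)):
        print(f"{level}: {usable_tb(DRIVES, DRIVE_TB, parity)} TB usable, "
              f"survives {parity} concurrent drive failure(s)")

    # RAID 5: 92 TB usable, survives 1 failure
    # RAID 6: 88 TB usable, survives 2 failures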

  2. #2
    Join Date
    Aug 2004
    Location
    Kauai, Hawaii
    Posts
    3,799
    RAID 6 over RAID 5 for sure. You want the extra parity for redundancy and rebuild times. 24 disks in RAID 5 is just too risky. I assume that even with a mirror you want the server performing well while it's degraded; RAID 6 will give you better performance during a rebuild than RAID 5, though not as good as RAID 10.

    Also, if you're using SSD caching via the hardware RAID card, I believe you'll find the ratio of SSD cache size to HDD RAID size is too low. We have found that undersized caching drives can result in worse performance than no caching drives at all, depending on your active data set of course. I would switch to 4x 240GB SSDs in RAID 10 for caching. Then again, do you really even need SSD caching?
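
    To put that cache ratio in numbers, a rough sketch of my own (the ~88TB figure assumes 24x 4TB in RAID 6; whether any ratio is "enough" depends entirely on your hot data set):

    Code:
    # Rough cache-to-array ratio for the two SSD cache options mentioned above.

    ARRAY_GB = 88 * 1000   # assumed usable RAID 6 capacity, in GB

    caches = {
        "2x 100GB SSD RAID 0 (current)": 2 * 100,          # striped, no redundancy
        "4x 240GB SSD RAID 10 (suggested)": 4 * 240 // 2,  # mirrored pairs
    }

    for name, cache_gb in caches.items():
        print(f"{name}: {cache_gb} GB cache = {cache_gb / ARRAY_GB:.2%} of the array")

    # 2x 100GB RAID 0:  200 GB = ~0.23% of 88 TB
    # 4x 240GB RAID 10: 480 GB = ~0.55% of 88 TB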

  3. #3
    Join Date
    Feb 2014
    Posts
    168
    Quote Originally Posted by gordonrp View Post
    RAID 6 over RAID 5 for sure. You want the extra parity for redundancy and rebuild times. 24 disks in RAID 5 is just too risky. I assume that even with a mirror you want the server performing well while it's degraded; RAID 6 will give you better performance during a rebuild than RAID 5, though not as good as RAID 10.

    Also, if you're using SSD caching via the hardware RAID card, I believe you'll find the ratio of SSD cache size to HDD RAID size is too low. We have found that undersized caching drives can result in worse performance than no caching drives at all, depending on your active data set of course. I would switch to 4x 240GB SSDs in RAID 10 for caching. Then again, do you really even need SSD caching?
    Gotcha. I'll go with RAID6 for sure. I don't mind sacrificing 4TB for another layer of comfort.

    Not sure if I need the SSD caching. That's TBD. But at least this way I can rest assured knowing the capacity to grow is there. And from past experience disk IO overload has always been my bottleneck.

    We tend to have these hot files, and if they are stored just on a single disk, odds are that disk will be overloaded. Think of these hot files as hit releases.

  4. #4
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    555
    RAID 6 has a WAY lower chance of failure than RAID 5, especially with a large 20+ disk array. Another +1 for RAID 6; this is a no-brainer IMHO. Read performance on RAID 6 and RAID 5 would be identical. Write performance should also be nearly identical with a decent RAID controller.

  5. #5
    Join Date
    Aug 2000
    Location
    Sheffield, South Yorks
    Posts
    3,627
    You're off your rocker if you put 24 x 4TB drives in RAID 5 or RAID 6. That's an array size of 88TB - the rebuild times will be very lengthy, and you've got a really good chance of hitting an unrecoverable read error on rebuild, especially if those are just standard desktop-grade SATA drives.
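
    To put a rough number on the URE risk, a back-of-envelope sketch of my own using the usual datasheet URE specs (1 per 10^14 bits for desktop drives, 1 per 10^15 for enterprise):

    Code:
    import math

    # Chance of hitting at least one unrecoverable read error (URE) while
    # rebuilding a degraded 24x 4TB RAID 5, where all 23 surviving drives must
    # be read end to end. Uses the usual datasheet URE specs; real rates vary.

    bits_read = 23 * 4e12 * 8   # ~7.4e14 bits read from the surviving drives

    for label, ure_per_bit in (("desktop-class, 1 per 1e14 bits", 1e-14),
                               ("enterprise-class, 1 per 1e15 bits", 1e-15)):
        p = 1 - math.exp(-bits_read * ure_per_bit)   # Poisson approximation
        print(f"{label}: P(at least one URE during rebuild) ~ {p:.0%}")

    # desktop-class:    ~100% (roughly 7 expected UREs over the rebuild)
    # enterprise-class: ~52%

    With RAID 6, a URE hit during a single-drive rebuild can still be repaired from the second parity, which is a big part of the argument for it here.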

  6. #6
    Quote Originally Posted by bitmarket View Post
    I'm mainly focused on read performance. Write performance is irrelevant.

    The server has 24x 4TB HDs (http://imgur.com/a/kpzdc) using an Adaptec 71605Q RAID card with an SSD cache pool on a 2x 100GB SSD RAID 0 array, which also hosts the OS.

    I will have lots of large files (300MB avg.) for download while the server streams MP4s to users. Existing storage usage is 50TB.

    My question: RAID 5 or RAID 6?

    Note that this is 1 out of 2 servers I have like this, and they're exact mirrors.
    Certainly RAID 6, but I think RAID 10 is the best option. However, it's up to you how you want to run your server.

  7. #7
    Join Date
    Feb 2014
    Posts
    168
    Quote Originally Posted by Host Dingle View Post
    Certainly RAID 6, but I think RAID 10 is the best option. However, it's up to you how you want to run your server.

    I already have servers like this in a RAID 10 config. Performance is fine and not the bottleneck there. However, I am running at 91% capacity on those, so I need new servers in RAID 5/6 configs to expand. I'm definitely going with 6.

  8. #8
    Quote Originally Posted by KDAWebServices View Post
    You're off your rocker if you put 24 x 4TB drives in RAID 5 or RAID 6. That's an array size of 88TB - the rebuild times will be very lengthy, and you've got a really good chance of hitting an unrecoverable read error on rebuild, especially if those are just standard desktop-grade SATA drives.
    What is a typical/safe number of spinning disks in a RAID 6 array?
    What difference does it make if you use enterprise-level disks, e.g. ES.3 or RE4, etc.?

  9. #9
    Yes, you've selected the right thing. Good luck.

  10. #10
    Quote Originally Posted by bitmarket View Post
    Gotcha. I'll go with RAID6 for sure. I don't mind sacrificing 4TB for another layer of comfort.

    Not sure if I need the SSD caching. That's TBD. But at least this way I can rest assured knowing the capacity to grow is there. And from past experience disk IO overload has always been my bottleneck.

    We tend to have these hot files, and if they are stored just on a single disk, odds are that disk will be overloaded. Think of these hot files as hit releases.
    I've set up more than a few large nodes like this, and I'm curious: what filesystem do you plan to use, and do you plan on putting all the drives in a single array?

    I've set up 12-, 24-, and 48-drive servers, and went with RAID 50 w/ 1-2 hot spares over RAID 6 on the 48-drive box. The rebuild time of 12x 4TB is long, and much longer on RAID 6 for 24 drives. You don't get much more performance unless you have 10G or faster NICs; bonding doesn't mean much.

  11. #11
    Join Date
    Aug 2000
    Location
    Sheffield, South Yorks
    Posts
    3,627
    Quote Originally Posted by bitmarket View Post
    So I need new servers in RAID 5/6 configs to expand. I'm definitely going with 6.
    No, you don't. You need to re-think how your application works and is designed. 24 drives in a RAID 5/6 set is just nuts when they're 4TB drives.

    With 24 drives, use 3x RAID-6 sets of 8 drives - that'd be my absolute maximum with RAID-6 and 4TB drives, and even then I'd still want a few hot spares kicking about.
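
    To make the trade-off concrete, rough parity math of my own for the two layouts (ignoring hot spares and controller overhead):

    Code:
    # Parity math for one 24-drive RAID 6 group vs three 8-drive RAID 6 groups
    # (24x 4TB total). "Read per rebuild" is the data read from surviving
    # members of the affected group when one drive is replaced.

    DRIVE_TB = 4

    layouts = {
        "1x RAID 6 of 24 drives": [(24, 2)],
        "3x RAID 6 of 8 drives":  [(8, 2)] * 3,
    }

    for name, groups in layouts.items():
        usable = sum((n - p) * DRIVE_TB for n, p in groups)
        rebuild_read = max(n - 1 for n, _ in groups) * DRIVE_TB
        print(f"{name}: {usable} TB usable, ~{rebuild_read} TB read per rebuild")

    # 1x 24: 88 TB usable, ~92 TB read per rebuild
    # 3x 8:  72 TB usable, ~28 TB read per rebuild (only the affected group)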

  12. #12
    Join Date
    Feb 2014
    Posts
    168
    Quote Originally Posted by makoulis View Post
    What is a typical/safe number of spinning disks in a RAID 6 array?
    What difference does it make if you use enterprise-level disks, e.g. ES.3 or RE4, etc.?
    Not sure about the first part, but for the second: I am using WD enterprise disks, as those are rated for RAID use, are high-vibration tolerant, and come with a 5-year warranty. Can't complain.


    Quote Originally Posted by compITent View Post
    I've set up more than a few large nodes like this, and I'm curious: what filesystem do you plan to use, and do you plan on putting all the drives in a single array?

    I've set up 12-, 24-, and 48-drive servers, and went with RAID 50 w/ 1-2 hot spares over RAID 6 on the 48-drive box. The rebuild time of 12x 4TB is long, and much longer on RAID 6 for 24 drives. You don't get much more performance unless you have 10G or faster NICs; bonding doesn't mean much.
    XFS. My bandwidth usage is < 1Gbps, so I should be fine there. But I do have 50TB+ in content. How long is a typical rebuild time on a 24x 4TB RAID 6 setup?


    Quote Originally Posted by KDAWebServices View Post
    No, you don't. You need to re-think how your application works and is designed. 24 drives in a RAID 5/6 set is just nuts when they're 4TB drives.

    With 24 drives, use 3x RAID-6 sets of 8 drives - that'd be my absolute maximum with RAID-6 and 4TB drives, and even then I'd still want a few hot spares kicking about.
    Well, I need to work with the given hardware. I have 50TB+ of data. Perhaps I should split my streamable files and downloadable files between two servers, which would split the storage usage roughly in half (50-50 for both), and have both servers in RAID 10 setups. Then worry about it a year later...

    What's wrong with RAID 6 on a 24-drive array? How long would a rebuild take? And what are the serious issues I should worry about with RAID 6 setups?

  13. #13
    Join Date
    Aug 2000
    Location
    Sheffield, South Yorks
    Posts
    3,627
    At the moment you're trying to keep scaling vertically - this always ends in pain, way more pain than sitting down, looking at how you're doing things, and making the changes you need to scale out horizontally.

    Rebuild time of 24 x 4TB drives in a RAID-6 is going to be days.
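
    A rough back-of-envelope of my own, assuming the rebuild is limited by how fast the controller can rewrite the replacement drive (the rates are illustrative assumptions, not measurements):

    Code:
    # Floor on rebuild time: the full 4TB of the replacement drive has to be
    # rewritten, so time = drive size / sustained rebuild rate.

    DRIVE_BYTES = 4e12

    for label, mb_per_s in (("near-idle array, ~100 MB/s", 100),
                            ("busy array, ~15 MB/s", 15)):
        hours = DRIVE_BYTES / (mb_per_s * 1e6) / 3600
        print(f"{label}: ~{hours:.0f} hours (~{hours / 24:.1f} days)")

    # near-idle: ~11 hours
    # busy:      ~74 hours (~3 days), the same order as the 71-hour rebuild
    #            reported under heavy I/O later in the thread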

  14. #14
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    555
    Quote Originally Posted by KDAWebServices View Post
    You're off your rocker if you put 24 x 4TB drives in RAID 5 or RAID 6. That's an array size of 88TB - the rebuild times will be very lengthy, and you've got a really good chance of hitting an unrecoverable read error on rebuild, especially if those are just standard desktop-grade SATA drives.
    Use the right drives and it's not a problem... A drive had a weird issue after I started using a new controller and expander (not a problem with the drive, I believe), and this is how long my rebuild took:

    Code:
    2014-04-28 00:19:30  DATA 2 VOLUME    Complete Rebuild      071:05:34
    2014-04-25 01:13:55  DATA 2 VOLUME    Start Rebuilding
    2014-04-25 01:13:53  90TB RAID SET    Rebuild RaidSet
    2014-04-25 01:13:53  E3 Slot#16       Unknown Event
    2014-04-25 01:13:52  E3 Slot#16       Unknown Event
    2014-04-25 01:12:55  001.001.001.003  HTTP Log In
    2014-04-25 01:11:42  Enc#3 Slot#16    Device Inserted
    2014-04-21 19:45:23  Enc#3 Slot#16    Device Removed
    I just had to eject/re-insert the drive, and I also saw some weird unknown events, which I have never seen before.

    Anyway, 71 hours for a 90TB array (30x 3TB Coolspin HGST drives) is pretty damn good. A normal rebuild would have been way, way quicker, but this was under heavy disk I/O.
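
    As a sanity check on that log, my own arithmetic (not output from the controller):

    Code:
    # What sustained rate does a 71:05:34 rebuild of one 3TB member imply?

    rebuild_seconds = 71 * 3600 + 5 * 60 + 34   # 071:05:34 from the event log
    drive_bytes = 3e12                          # one 3TB member is rewritten

    print(f"~{drive_bytes / rebuild_seconds / 1e6:.1f} MB/s sustained under heavy I/O")
    # ~11.7 MB/s, versus well over 100 MB/s a quiet array could sustain --
    # which lines up with the "less than 1 day" estimate for an idle rebuild below.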

    Over 2 petabytes of data read/written in 27 days:

    Code:
    root@dekabutsu: 10:36 AM :~# uptime
     10:37:59 up 27 days,  7:44,  6 users,  load average: 63.78, 53.63, 41.54
    root@dekabutsu: 10:37 AM :~# iostat -m sda
    Linux 3.10.14 (dekabutsu)       05/02/2014
    
    avg-cpu:  %user   %nice %system %iowait  %steal   %idle
               1.64   43.22    0.98    1.03    0.00   53.12
    
    Device:            tps    MB_read/s    MB_wrtn/s    MB_read    MB_wrtn
    sda            1742.52       445.34       425.55 1051303581 1004590472
    Yes, this is a combined average read + write speed of roughly 7 gigabits per second sustained over the 27 days.

    If it was closer to idle the rebuild would have taken less than 1 day.

    Also, even under such heavy read/write conditions, I have not had one of these 3TB Coolspin drives fail (out of 50 drives on multiple machines). The disks in this machine all have almost 3 years of powered-on time and not a single re-allocated or pending sector; the SMART stats are the same as the day I bought them. They have held up amazingly well. There are no re-allocations/pending sectors on any of the 50x 3TB drives I have.

    Code:
                            ARC-1880x Enclosure #2
    Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
    1        HDS5C3030ALA630/MEAOA580        0/0/62/906              36
    2        HDS5C3030ALA630/MEAOA580        0/0/58/906              37
    3        HDS5C3030ALA630/MEAOA580        0/0/56/906              37
    4        HDS5C3030ALA630/MEAOA580        0/0/58/906              37
    5        HDS5C3030ALA630/MEAOA580        0/0/56/905              39
    6        HDS5C3030ALA630/MEAOA580        0/0/55/905              39
    7        HDS5C3030ALA630/MEAOA580        0/0/55/905              38
    8        HDS5C3030ALA630/MEAOA580        0/0/55/905              37
    9        HDS5C3030ALA630/MEAOA580        0/0/57/905              37
    10       HDS5C3030ALA630/MEAOA580        0/0/57/905              38
    11       HDS5C3030ALA630/MEAOA580        0/0/55/905              39
    12       HDS5C3030ALA630/MEAOA580        0/0/57/905              37
    13       HDS5C3030ALA630/MEAOA580        0/0/55/903              37
    14       HDS5C3030ALA630/MEAOA580        0/0/55/903              37
    15       HDS5C3030ALA630/MEAOA580        0/0/55/903              36
    
                            ARC-1880x Enclosure #3
    Port     Model Number/Firmware           Re-aloc/Pend/PS/DaysOn  Temp:
    1        HDS5C3030ALA630/MEAOA580        0/0/45/899              37
    2        HDS5C3030ALA630/MEAOA580        0/0/47/899              36
    3        HDS5C3030ALA630/MEAOA580        0/0/45/899              36
    4        HDS5C3030ALA630/MEAOA580        0/0/45/899              36
    5        HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    6        HDS5C3030ALA630/MEAOA580        0/0/45/898              39
    7        HDS5C3030ALA630/MEAOA580        0/0/45/898              38
    8        HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    9        HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    10       HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    11       HDS5C3030ALA630/MEAOA580        0/0/45/898              39
    12       HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    13       HDS5C3030ALA630/MEAOA580        0/0/45/898              37
    14       HDS5C3030ALA630/MEAOA580        0/0/45/896              37
    15       HDS5C3030ALA630/MEAOA580        0/0/46/893              37
    I literally just ordered 24x 4TB Coolspin HGST drives yesterday. I have been a proponent of HGST since long before Backblaze's article. In the test lab I manage, there has not been a single failure out of 200 drives bought, while the Seagates they are replacing have had a 50% failure rate after a few years.

  15. #15
    Join Date
    Mar 2008
    Location
    Los Angeles, CA
    Posts
    555
    Quote Originally Posted by Host Dingle View Post
    Certainly RAID 6, but I think RAID 10 is the best option. However, it's up to you how you want to run your server.
    In real-life practice, RAID 10 is less reliable and has a higher chance of data loss during a rebuild than RAID 6. Although more data has to be read, and thus more stress is put on the array as a whole during a rebuild, RAID 6, unlike RAID 10, has two sets of parity for double redundancy and is the only hardware RAID level that can recover from read errors that happen during a rebuild. I also find that, very often, the surviving mirror drive can't handle the stress of the rebuild and fails.

    I have seen >50 RAID 10 arrays fail over the years when using crappy Seagate disks. In all but maybe one case, the array would have survived had it been RAID 6. I've only ever seen one case where a machine actually survived 3 disk failures and would have failed had it been RAID 6 but was saved by RAID 10. This is actual experience in the field, not just theory.
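
    To illustrate that with simple combinatorics, a sketch of my own (whole-drive failures only, ignoring correlated failures):

    Code:
    # With 24 drives and one already failed, which second failures are fatal?

    N = 24

    # RAID 10 (12 mirror pairs): a second whole-drive failure is fatal only if
    # it hits the dead drive's mirror partner, i.e. 1 of the 23 survivors.
    p_raid10_fatal = 1 / (N - 1)

    # RAID 6 (one 24-drive group): any single second failure is survivable;
    # it takes a third concurrent failure to lose the array.
    p_raid6_fatal = 0.0

    print(f"RAID 10: P(second failure is fatal) = {p_raid10_fatal:.1%}")
    print(f"RAID 6:  P(second failure is fatal) = {p_raid6_fatal:.1%}")
    # RAID 10: ~4.3%, and a read error on the surviving partner during the
    #          mirror resync also loses data.
    # RAID 6:  0%, and a read error during rebuild can still be repaired from
    #          the second parity -- the point made above.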

