  1. #1
    Join Date
    Aug 2002
    Posts
    68

    RAID 1 vs Software RAID performance/reliability

    on this thread, bqinternet states that:

    For a simple RAID configuration such as RAID1, software RAID works very well. With modern CPUs, the overhead is minimal. In many cases, software RAID mirroring is faster than hardware RAID! RAID5 is where you usually want hardware, but it's not necessary for RAID1.
    Two questions:

    While comparing to using a decent RAID controller (3ware for example), does the above still hold true?

    Can you hot-swap a software raid setup?

    Thanks in advance
    Last edited by crashnet; 02-09-2011 at 10:19 AM.

  2. #2
    Join Date
    Aug 2007
    Location
    Belgium
    Posts
    4,183
    Can't really say, we only work with hardware raid for our customers. I prefer that over software RAID.
    www.InstantDedicated.com - Online in no time
    Dedicated Servers in [EU] Netherlands with DAILY support, also on weekends
    DDOS Protected network - 100% Money Back if it doesn't work for you
    Streaming / IPTV allowed | Up to 10 Gbit ports | 100% Network Uptime

  3. #3
    Join Date
    Jun 2002
    Location
    Waco, TX
    Posts
    5,292
    Yes you can hotswap software raid just fine.

    Up until last week I was a pretty firm believer in hardware RAID cards, but last week we had two hardware RAID failures in a short period. One happened while changing a drive in a 4-drive RAID10: almost immediately, two additional drives failed and would not come back no matter what we tried.

    The other was a very simple RAID1 on a 2-port 3ware card (we run many 3ware cards and had only one prior failure before these two). Not one but BOTH drives went offline with 'smart errors' without prior warning, and the machine would not boot at all. I suppose it is possible both drives really failed at the same time; I just find it highly improbable.

    What's worse, after trying to simply bring the drives online without RAID, the data was mostly gone, so that one looks like a card failure to me.

    Software RAID, I'm sure, can have similar issues, but RAID1 shouldn't really fail in this manner, hardware or software.

    The main benefit of hardware RAID comes with RAID5/6/50/60, where heavy XOR calculations run over a large number of spindles. That is where you can really see the difference. I did a bit of performance testing recently on arrays of this type with 10+ drives and found much better rebuild performance and stability under high I/O with an Adaptec card than with software RAID.
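    A toy illustration of the XOR work involved (the hex values are arbitrary, my own example, not from any benchmark): every RAID5 write computes parity across the stripe, and recovery XORs the surviving members back together — this is exactly what a hardware card's XOR engine offloads.

```shell
# Parity for a two-data-disk stripe: parity = d1 XOR d2
d1=$(( 0xA5 )); d2=$(( 0x3C ))
parity=$(( d1 ^ d2 ))

# Lose d1? XOR the surviving members to rebuild it.
printf 'parity=0x%02X recovered_d1=0x%02X\n' "$parity" "$(( parity ^ d2 ))"
# -> parity=0x99 recovered_d1=0xA5
```

    RAID1 skips all of this, which is why mirroring costs so little CPU.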

  4. #4
    Join Date
    Jul 2009
    Location
    Indiana
    Posts
    2,193
    I prefer software RAID myself: it's cheaper, and it's more flexible since it's not bound to a single RAID card chipset. The performance difference is negligible, and considering that a good RAID card is usually $200+, if I were worried about performance I could just put that money towards a more powerful CPU.
    If I ever did need to use RAID5/6 I would probably use a card, but for simple mirroring or striping I'd stick with the software.
    Sam Barrow - CEO @ SQUIDIX (1-855-SQUIDIX)
    Ask Us About Sponsoring Your Web Site (High Traffic Sites Only)
    Squidix - Shared, Reseller, Semi-Dedicated, Managed VPS and Managed Dedicated Hosting
    Midwestern Web - Web Design & Development Services

  5. #5
    Join Date
    Jun 2001
    Location
    Texas
    Posts
    1,245
    I'd be interested in some other opinions here. We run hardware based RAID on all of our servers too but we've had a few end users request software RAID on their dedicated boxes with us.
    ThePrimeHost LLC - Serving Websites Since 2001.
    Fully Managed VPS Hosting w/ Cpanel + WHM
    Fully Managed Dedicated Servers w/ Cpanel + WHM
    Reseller Hosting with End User Support

  6. #6
    Join Date
    Feb 2003
    Location
    Detroit
    Posts
    836
    We run both setups and each has its advantages. Depending on the OS, a software RAID can be a pain to repair or recover from a failure. The same can be said for hardware RAID too: we have had a few controller failures, and if an identical RAID controller isn't available you may be looking at restoring from backups.

    There are other situations as well, such as the RAID5 write hole. With a battery-backed controller, in-flight writes are preserved and parity can be recalculated after recovery. With software RAID5, you have the potential for data loss.

    Overall, I try to evaluate each situation independently. You can't say either one is right for every deployment. In some instances a software RAID is much, much faster. In others, a hardware RAID is much safer or easier to manage. Most of the time it comes down to what our client wants.
    managedway
    WE BUILD CLOUDS

    Cloud Computing | Fiber Optic Internet | Colocation

  7. #7
    Join Date
    Dec 2005
    Posts
    3,077
    Linux software RAID (mdadm) is fantastic, and much more stable than onboard RAID chips or cheap RAID cards, which are software/driver based.

    For RAID1, unless you need hot-swap functionality, there is nothing wrong with software RAID. For RAID5/10 I would always go for a good-quality hardware RAID card (3ware, Adaptec, LSI, etc.)

  8. #8
    Join Date
    Apr 2009
    Posts
    1,143
    Sorry for stealing the topic here, but what's your experience with software RAID when it comes to RAID10? I'm thinking that with heavy I/O on the disks, won't it need to read all the time to figure out where the data is going, and won't that kill I/O and CPU load?

  9. #9
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,612
    Quote Originally Posted by crashnet View Post
    While comparing to using a decent RAID controller (3ware for example), does the above still hold true?
    If you're using a bad controller, software RAID can be faster. If you're using a good controller, they should be about the same.

    Quote Originally Posted by crashnet View Post
    Can you hot-swap a software raid setup?
    It depends on how you have things set up. If you do it right, you can.
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  10. #10
    Join Date
    Jul 2009
    Location
    The backplane
    Posts
    1,790
    Quote Originally Posted by mazedk View Post
    Sorry for stealing the topic here, but what's your experience with software RAID when it comes to RAID10? I'm thinking that with heavy I/O on the disks, won't it need to read all the time to figure out where the data is going, and won't that kill I/O and CPU load?
    RAID 10 under Linux/mdadm seems to work well. A good hardware card with cache will be faster in some scenarios, but if you're on a budget it's worth considering. I haven't done extremely extensive testing, but I've put it through its paces without any problems, and resource utilization was acceptable.

    Here is a nice comparison of hardware vs. software RAID -- http://www.linux.com/news/hardware/s...-software-raid

  11. #11
    Join Date
    Aug 2002
    Posts
    68
    What about alerts when an array starts to degrade? I know there is no 100% reliable way to detect/predict hard drive failures, but the RAID card alerts come in handy. I wouldn't want to rely solely on SMART for this purpose...

    Since it seems that SW RAID is comparable in performance, the question now becomes: can I get hot-swap AND degraded-array alerts using a CentOS 5.5 x64 SW RAID manager?

  12. #12
    Software RAID for RAID 0 (no fault tolerance) and RAID 1 (mirror) works perfectly fine... no need for the more expensive hardware RAID cards, as the overhead is quite low on modern processors.

    If you are going RAID 5, the hardware solutions are certainly the way to go, as the parity calculations are quite intensive in software RAID and will cause tangible CPU overhead.
    KiloServe Hosting - 48 Core Nodes and specializing in high disk I/O VPS
    www.KiloServe.com Providing Quality Hosting since 2007
    Specializing in Dedicated Servers, OpenVZ VPS, Xen VPS, and Windows VPS.

  13. #13
    Join Date
    Jan 2008
    Location
    Jax, FL
    Posts
    2,707
    Quote Originally Posted by crashnet View Post
    What about alerts when an array starts to degrade? I know there is no 100% reliable way to detect/predict hard drive failures, but the RAID card alerts come in handy. I wouldn't want to rely solely on SMART for this purpose...

    Since it seems that SW RAID is comparable in performance, the question now becomes: can I get hot-swap AND degraded-array alerts using a CentOS 5.5 x64 SW RAID manager?
    Yes, you can get alerts with SW RAID... you just need to edit your mdadm.conf file to do so.
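    For reference, the relevant mdadm.conf lines look something like this (the address is a placeholder and the PROGRAM path is hypothetical; exact monitor setup varies by distro):

```
# /etc/mdadm.conf
MAILADDR admin@example.com              # where degraded-array mail goes
# Optionally run a script on every event (path is a placeholder):
# PROGRAM /usr/local/sbin/md-event-handler
```

    With the monitor daemon running (mdadm --monitor --scan, which most distros start for you), a failed member generates a Fail event and an email; mdadm --monitor --scan --oneshot --test sends a test message for each array so you can confirm delivery works before you need it.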

    Hot swap is also possible if your board/OS supports it.
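    The swap itself is only a few commands. A sketch with made-up device names (md0 mirrored across sda1/sdb1), assuming the controller is in AHCI mode so the kernel tolerates the unplug — don't run this verbatim:

```
mdadm --manage /dev/md0 --fail   /dev/sdb1   # mark the member faulty
mdadm --manage /dev/md0 --remove /dev/sdb1   # detach it from the array
# ...pull the drive, insert the replacement, partition it to match sda...
mdadm --manage /dev/md0 --add    /dev/sdb1   # the mirror resyncs automatically
```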

  14. #14
    Join Date
    Nov 2005
    Location
    Minneapolis, MN
    Posts
    1,648
    Of course mdadm sends out alerts for array integrity events, as does smartmontools (smartd).

    I have a couple of minor gripes with mdadm; for example, --grow can actually extend or shrink an array, and there is no warning or prompt before a shrink. (Check your syntax carefully.)

    That said, the nice thing about working with software RAID is that the tools available for working with arrays are limited only by the combined programming ability of the open-source community and yourself.

    I recently upgraded a RAID5 array, replacing 1.5TB drives with 2TB drives and letting the array resync at each step. Once the array was complete I hit a snag and was concerned about data loss. With mdadm I was still able to force assembly of the array using the 1.5TB drives that had been removed at different points in time, and after some e2fsck work I recovered the data intact. Most hardware RAID controllers I've used would have just barked that the volumes weren't consistent and locked me out from doing this.
    Eric Spaeth
    Enterprise Network Engineer :: Hosting Hobbyist :: Master of Procrastination
    "The really cool thing about facts is they remain true regardless of who states them."

  15. #15
    Join Date
    Apr 2006
    Posts
    919
    I've always worked with hardware RAID controllers, but after having two customers on servers with software RAID1 without any problems for more than a year, I am planning to use it for web hosting.

    The idea is to host 130 domains and 100GB of data in the following configuration.

    Intel Quad-Core HT
    8GB RAM
    2 x 750GB RAID1 (software)
    1.5 TB Backups

    Is there really any difference in performance?

  16. #16
    Quote Originally Posted by crashnet View Post
    on this thread, bqinternet states that:



    Two questions:

    While comparing to using a decent RAID controller (3ware for example), does the above still hold true?

    Can you hot-swap a software raid setup?

    Thanks in advance
    Yes, I would say it's true. Software RAID works great. Also, if you think about what is involved in RAID 1, it's just sending a copy of your data to each of two drives. Hardly very CPU-intensive stuff.

    I prefer software RAID because I know what I'm getting, and it's well tied in to the operating system. I can run "cat /proc/mdstat" to see the status of my RAID arrays, and use simple commands to do whatever I want there.

    With a RAID card, you have to install the drivers into the OS, which is not always a simple matter, as there are many different cards out there with varying compatibilities with different operating systems. Oftentimes it can be hard to even track the drivers down. I would consider Linux's software RAID a more "mature" product than the hardware RAID cards I've come across, as the software has been tested more meticulously in more environments than the firmware or management interfaces on any RAID card I've seen. Even forgetting performance, I just don't trust hardware RAID. I've had too many bad or mediocre experiences.
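    To make the /proc/mdstat habit concrete: a degraded array shows an underscore in its [UU] status field, which is easy to scan for. A sketch run against a canned sample (the array names and members below are invented):

```shell
# Fake /proc/mdstat contents for illustration; md1 has lost a member.
cat > /tmp/mdstat.sample <<'EOF'
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      976630464 blocks [2/2] [UU]
md1 : active raid1 sdb2[1]
      976630464 blocks [2/1] [_U]
EOF

# An "_" inside the status brackets means a missing member; print that array.
grep -B1 '\[.*_.*\]' /tmp/mdstat.sample | grep '^md'
# -> md1 : active raid1 sdb2[1]
```

    On a live box you'd read /proc/mdstat itself instead of the sample file.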
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  17. #17
    Join Date
    Aug 2006
    Location
    Ashburn VA, San Diego CA
    Posts
    4,571
    Quote Originally Posted by SC-Daniel View Post
    Hot swap is also possible if your board/OS supports it.
    Any board that supports AHCI mode should allow hot swap. The problem is most boards come with IDE mode as the default setting, which will throw a huge fit if you yank or insert a drive. There is also some performance degradation in IDE mode. That said, a modern dual- or quad-core server with Linux RAID1 is perfectly fine and performs very well if managed correctly.
    Fast Serv Networks, LLC | AS29889 | Fully Managed Cloud, Streaming, Dedicated Servers, Colo by-the-U
    Since 2003 - Ashburn VA + San Diego CA Datacenters

  18. #18
    Quote Originally Posted by FastServ View Post
    Any board that supports AHCI mode should allow hot swap. The problem is most boards come with IDE mode as the default setting, which will throw a huge fit if you yank or insert a drive. There is also some performance degradation in IDE mode. That said, a modern dual- or quad-core server with Linux RAID1 is perfectly fine and performs very well if managed correctly.
    which is why it's important to make sure all your bios settings are the way you want before you put a system live.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  19. #19
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    988
    Quote Originally Posted by crashnet View Post
    Can you hot-swap a software raid setup?
    For Linux, it depends, and basically the answer is "maybe".
    If you're using SATA, good luck with that.
    If you're using SAS, probably.

    For FreeBSD, the answer is basically yes.

    In terms of software config: you cannot easily swap out an mdadm (Linux) software-RAIDed root disk, but you can swap out anything else fine.

    For FreeBSD, you can hot-swap a RAIDed root disk very simply, and it's very fast. If you're using ZFS, it's even easier and the rebuild time is amazingly short.
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  20. #20
    Join Date
    Jun 2007
    Posts
    99
    How does software RAID1 compare to hardware in terms of read speed / reading both disks at the same time?

    For example, 3ware Twinstor vs mdadm's algorithms.
    Last edited by reasonpolice; 02-20-2011 at 09:17 PM.

  21. #21
    Join Date
    Apr 2006
    Posts
    919
    Quote Originally Posted by reasonpolice View Post
    How does software RAID1 compare to hardware in terms of read speed / reading both disks at the same time?

    For example, 3ware Twinstor vs gmirror's algorithms.
    A write test on a software RAID1 (2x750GB) with an Intel X58 chipset:

    dd if=/dev/zero of=testfile bs=1024k count=10000

    10485760000 bytes (10 GB) copied, 85.0528 seconds, 123 MB/s
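    One caveat on numbers like that: without conv=fdatasync (or oflag=direct), dd partly measures the page cache rather than the disks, so write tests tend to flatter the array. A variant that flushes before reporting (file size kept small here just to show the flag):

```shell
# conv=fdatasync makes dd flush to disk before printing its rate,
# so the reported MB/s reflects the drives, not the page cache.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fdatasync
```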

  22. #22
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    988
    Quote Originally Posted by reasonpolice View Post
    How does software RAID1 compare to hardware in terms of read speed / reading both disks at the same time?

    For example, 3ware Twinstor vs gmirror's algorithms.
    I actually don't know how the Twinstor algorithm stacks up against gmirror internally, but I have done some benchmarks and seen some pretty ripping performance from both the 3ware and gmirror. By eyeballing, I occasionally see gmirror being faster for reads, though some of that could be system load. Using Bonnie, I see gmirror being a bit faster on mostly identical HW/OS.
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  23. #23
    Join Date
    Nov 2003
    Posts
    538
    I have systems that use both, but personally, when dealing with customer data, I prefer to have a hardware RAID controller so that I have someone to hold accountable if it doesn't work properly (i.e. the server vendor or the RAID controller vendor).

    You save a little money with software RAID, but it only takes one wacky software RAID failure to pay for the cost difference.
    XLHost.com
    Dedicated Servers, Virtual Private Servers, and more since 1995.
    drew @ xlhost.com

  24. #24
    Quote Originally Posted by XLHost View Post
    I have systems that use both but personally for me when dealing with customer data I prefer to have a hardware RAID controller so that I have someone to hold accountable for it if it doesn't work properly. (I.e. the server vendor or the raid controller vendor).

    You save a little money dealing with software RAID but it only takes one wacky software RAID failure to pay for the cost difference.
    Ah, see, I understand your point, but personally feel the opposite. If a hardware raid card screws up, I'm at the mercy of some company in Taiwan that manufactured the thing to come up with a firmware update just for me? No thanks, one screwup and that card goes in the dumpster. And then what? Try to find some other card that also gets generally good reviews but also has immature firmware drivers? No thanks. If s/w raid goes wrong, it's my own fault for configuring it wrong, and I need to do better next time; at least the code base is solid.

    Don't even get me started on stripe sizes. What is the max on most hardware cards? 64k?
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  25. #25
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by funkywizard View Post
    Ah, see, I understand your point, but personally feel the opposite. If a hardware raid card screws up, I'm at the mercy of some company in Taiwan that manufactured the thing to come up with a firmware update just for me? No thanks, one screwup and that card goes in the dumpster. And then what? Try to find some other card that also gets generally good reviews but also has immature firmware drivers? No thanks. If s/w raid goes wrong, it's my own fault for configuring it wrong, and I need to do better next time; at least the code base is solid.
    Agree here.

    I personally run/manage software RAID on around 60 systems, RAID 1 and 0, and have never had any problems. It's simple to check the status of the RAID, add and remove drives, etc. (and there's no chance of a RAID card dying).

    For RAID 10 I'd definitely use hardware RAID, but other than that, there's no reason not to use software RAID and save yourself some good $$.

  26. #26
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    988
    Quote Originally Posted by funkywizard View Post
    If a hardware raid card screws up, I'm at the mercy of some company in Taiwan that manufactured the thing to come up with a firmware update just for me?
    Versus an unaccountable person who may or may not know what they're doing, but is doing it anyway? Companies like 3ware/LSI and Adaptec basically sell only storage hardware, and have scant few real problems compared to the hassle of SW RAID. Yes, some HW is half baked - so don't buy it.

    ZFS and Veritas are great examples of why not to use a HW RAID controller; the flexibility and features are well worth it. For the most part, though, the only reason to use SW RAID is that you're cheap.

    Don't even get me started on stripe sizes. What is the max on most hardware cards? 64k?
    I have a couple where the max stripe size is 512KB...
    Last edited by appliedops; 02-21-2011 at 06:29 PM. Reason: typo
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  27. #27
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by appliedops View Post
    For the most part though, the only reason to use SW raid is that you're cheap.
    Cheap or on a very tight budget

  28. #28
    Quote Originally Posted by appliedops View Post
    Versus an unaccountable person who may or may not know what they're doing, but is doing it anyway? Companies like 3ware/LSI and Adaptec basically sell only storage hardware, and have scant few real problems compared to the hassle of SW RAID. Yes, some HW is half baked - so don't buy it.

    ZFS and Veritas are great examples of why not to use a HW RAID controller; the flexibility and features are well worth it. For the most part, though, the only reason to use SW RAID is that you're cheap.



    I have a couple where the max stripe size is 512KB...
    512 is barely adequate. If you tell me 2MB, then we're talking. To get decent performance, the RAID stripe needs to be at least double your typical read request size, preferably 4x as big. The default Linux readahead is 128k, so a 512k stripe will minimally suffice.

    That said, for maximum performance with things like streaming video, you want to set the Linux readahead to 512k. With a 512k readahead and a 512k stripe, you have about a 100% chance that every read request makes two disks seek; if one request could be served by a single disk seek, you'd double your throughput in a multi-user environment. Using 512k readaheads instead of 128k gave me about a 4x improvement streaming out large files to many users, and readahead didn't seem to help much beyond 512k, so that's where I'd set it as an optimal value. As such, with a stripe smaller than 1-2MB you're throwing performance away and may as well just use a 2-drive RAID1 instead of a 4-drive RAID10.
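    The arithmetic behind that claim can be sketched in a few lines (my own back-of-envelope model, not a benchmark): a read of readahead KB landing at a random offset inside a stripe touches roughly readahead/stripe + 1 member disks, so the stripe should be several times the readahead.

```shell
readahead=512   # KB per read request
for stripe in 64 512 2048; do
  # floor(readahead/stripe) extra chunks, plus the chunk the read starts in
  echo "stripe=${stripe}K -> ~$(( readahead / stripe + 1 )) disk seeks per read"
done
# -> stripe=64K -> ~9 disk seeks per read
#    stripe=512K -> ~2 disk seeks per read
#    stripe=2048K -> ~1 disk seeks per read
```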

    Also, I'm a bit confused, as it sounds like you're saying you'll trust someone to set up a hardware RAID but wouldn't trust the same staff to set up a software RAID? Either way you need to know what you're doing.

    Also, to bhavicp: I'm with you most of the way, but I'm confused why you'd want hardware for RAID 10 when you're happy to use software for RAID 1. Software RAID 10 performs great; RAID 5 is where you can see performance advantages going with hardware over software.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  29. #29
    Quote Originally Posted by bhavicp View Post
    Cheap or on a very tight budget
    Yeah. The kind of hardware cards I'd trust cost as much as a pretty powerful server by themselves, which doesn't sound like a great value to me. Sure, there's being cheap vs doing it right, but not wanting to throw money down the toilet doesn't make you cheap.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  30. #30
    Join Date
    Jul 2008
    Location
    New Zealand
    Posts
    1,208
    Quote Originally Posted by funkywizard View Post
    Also, to bhavicp: I'm with you most of the way, but I'm confused why you'd want hardware for RAID 10 when you're happy to use software for RAID 1. Software RAID 10 performs great; RAID 5 is where you can see performance advantages going with hardware over software.
    I've mostly read/heard that RAID 10 in software RAID isn't good because it uses a lot of CPU, so I was just assuming hardware RAID would be better for it. But if it performs OK with decent CPUs in software RAID, I'd certainly like to know!

  31. #31
    Join Date
    Jan 2005
    Location
    San Francisco/Hot Springs
    Posts
    988
    Quote Originally Posted by funkywizard View Post
    512 is barely adequate. If you tell me 2mb, then we're talking.
    That would be terrible for OLAP/OLTP... If you're looking for raw serving speed, I can see your point there.

    I did some more digging, and it looks like some of the Adaptec cards can do 1MB stripes, so I'd imagine others offer 2MB, but I'm not sure I'd really want that for a transactional disk.
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  32. #32
    Quote Originally Posted by bhavicp View Post
    I've mostly read/heard that RAID 10 in software RAID isn't good because it uses a lot of CPU, so I was just assuming hardware RAID would be better for it. But if it performs OK with decent CPUs in software RAID, I'd certainly like to know!
    Ah, OK. For what it's worth, s/w RAID 10 uses basically no CPU. RAID 5 is where you might see a difference.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

  33. #33
    Quote Originally Posted by appliedops View Post
    That would be terrible for OLAP/OLTP... If you're looking for raw serving speed, I can see your point there.

    I did some more digging and it looks like some of the Adaptec cards can do 1MB stripes so I'd imagine some others do offer 2MB but I'm not sure I'd really want to use that for a transactional disk.
    I see your point there. For a database you still want the stripe at 4x the readahead, but you'll want to leave the readahead at 128k, so a 512k stripe would be adequate in that case. RAID 5 is probably not best for OLTP anyway; to keep up write performance you want RAID 10. A battery-backed write cache can be good for data safety on a database, but without one there's not much point in hardware RAID 10 in general. And cards like that run around $1000, which makes them something I'd really only recommend on a high-end database server.
    Phoenix Dedicated Servers -- IOFLOOD.com
    Email: sales [at] ioflood.com
    Skype: iofloodsales
    Backup Storage VPS -- 1TBVPS.com

