Is software RAID becoming a real and reliable option?
For some time now I've been seeing a growing number of hosting companies that use this type of RAID for Linux systems. I myself have customers running software RAID on their servers without problems, and I even run a hosting server with a SATA software RAID1.
If you weigh the reliability risks of a hardware controller (firmware problems, temperature, possible hardware failure, no BBU) against its purchase price (at least €300), I see less and less need for hardware RAID in servers with modest resource demands.
It is true that software RAID uses some CPU and RAM, but on current servers with Intel Nehalem / Sandy Bridge processors and the excellent I/O of Intel chipsets, this is a minor problem.
Software RAID works very well under Linux; we have been using it in production for some time. A RAID level that needs to do parity calculations will use some CPU. Everything we have deployed is RAID1 and RAID10, so I can't really comment on what that utilization is.
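For anyone who hasn't tried it, here is a minimal sketch of how a Linux software RAID1 array is typically created with mdadm. The device names (/dev/sdb1, /dev/sdc1) and the array name /dev/md0 are placeholders; substitute your own partitions.

```shell
# Create a two-disk mirror; mdadm will start an initial sync in the background.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the sync progress and check array health.
cat /proc/mdstat
mdadm --detail /dev/md0
```

From that point the array is just one block device (/dev/md0) that you format and mount like any other disk.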
Software RAID1 or RAID10 has been solid on *NIX for quite a while now. For RAID5 and RAID6, the calculations are more involved and it is probably worth the time to use a hardware card.
Besides performance, the ability to add a BBU and the simplicity of setup are also nice (the array shows up as a single block device, rather than you having to set up each md device individually). Also, if a port is faulty, it is easier to replace the RAID card than the whole motherboard.
Software RAID's biggest advantage, in my book, is the ability to move the drives to another chassis and know they will come up, without worrying about whether the RAID card is compatible or not.
We generally recommend software RAID for RAID1 (for the redundancy), and hardware RAID for RAID10 (for the disk I/O) where you'll see a bit more of a performance advantage.
We don't typically recommend RAID5/6 for servers, as situations where you need a lot of storage but disk I/O is not an issue are pretty rare. There are some excellent software solutions in this area as well, such as ZFS with raidz/raidz2. As machines in this type of configuration are usually dedicated to storage anyway, CPU usage isn't a significant concern, so I'd probably lean towards software solutions here as well.
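As a sketch of how simple the ZFS route is: a raidz2 pool (which, like RAID6, survives two disk failures) is one command. The pool name "tank" and the device names are placeholders for illustration.

```shell
# Create a double-parity pool across five whole disks (names are examples).
zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# Verify the pool layout and health.
zpool status tank
```

No controller firmware, no BBU, and the pool can be imported on any machine with a compatible ZFS implementation.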
I have seen cases of data loss caused by replacing a hardware RAID controller that had failed. With software RAID that would be much harder to hit, because there are no physical controller parts involved. Is this a common problem for you?
I have used software RAID with Linux on my servers and have never once had a problem.
If you are using them for webservers, as I am, I doubt you need the hardware version, since you will be load balancing across other servers should demand get that high. In other words, the webserver would need to be load balanced long before software RAID's CPU/RAM overhead becomes an issue.
If you're not using it as a webserver, I can't say, but software RAID has been such a breeze. Cheaper and better... much cheaper in so many ways.
I have never heard of data loss with software RAID and have never experienced it myself. I have test-failed some parts to see what happens, and it is very easy to recover from... and to move the array to other servers.
Hardware RAID... well, I do not want to have to be a hardware technician for that vendor just to try to get some data back. Software RAID: cheaper, safer, easier, better, in my opinion.
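For reference, test-failing a disk in an md array and recovering from it looks roughly like this (array and device names are placeholders):

```shell
# Mark one member as failed and remove it from the running array.
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1

# After swapping in a replacement disk, add it back and watch the resync.
mdadm --manage /dev/md0 --add /dev/sdb1
cat /proc/mdstat
```

The array keeps serving I/O in degraded mode the whole time, which is what makes this kind of fire drill so painless.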