I was just wondering what the pros and cons are of a software RAID 1 setup versus a hardware RAID 1 setup? Are there any big issues with either of them?
Do you mean comparing Software RAID to Hardware RAID?
Of course you'll get fewer performance issues by using a (quality) hardware RAID setup.
Regards,
Yes comparing the two.
Are performance issues common with software RAID?
Yes, as the work is done by the processor instead of a dedicated controller chip.
You also won't get features like hot-swap and hot-spare with software.
Con of hardware - Cost
Other than that, hardware is best.
I would disagree with hardware being best. In my experience, I've seen many hardware cards crap out and screw up entire arrays where data could not be recovered; I've never seen that once with software.
And IMO the performance difference is negligible with RAID1.
Razorblue is right, same experience here: adding a RAID card raises the chances of hardware problems. You need to weigh the pros and cons in your specific situation, and they are not limited to cost.
That is what I am looking for, the pros/cons of both setups. I'm not really worried about the cost; to me that's not a con if it is a better setup. I am trying to figure out which method would be better for disk mirroring.
1- What hardware will you use and what will you use the server for?
2- Will you use IDE or SCSI hard disks?
3- Will there be a LOT of disk activity?
4- Is a 15 minute downtime to change a hard disk critical in your situation?
I would say that the pros and cons depend on your particular situation. I would say software for RAID 1 solutions, and hardware RAID for RAID 5 SCSI or other advanced solutions. Personal opinion based on my personal experience.
Regards,
1) dual xeon, 2 GB ram, it will be a web/mysql server
2) IDE drives
3) most likely, it will be the only server I have
4) 15 minutes of downtime is fine; more than that might start to cause issues.
Thanks for your opinion.
I would choose software RAID 1. The performance loss of software RAID should not affect you, the cost is lower, and you will avoid possible hardware RAID adapter problems. Make sure that someone is available quickly to swap the drives if a drive fails. Also, do not forget to have an external backup, because you cannot rely on your RAID mirror as your only backup: if data corruption occurs on the first drive, there is a good chance it will be mirrored to the second drive as well.
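For what it's worth, a Linux software RAID 1 mirror along those lines takes only a few commands with mdadm. This is a rough sketch, not a recipe: the device names (/dev/hda1, /dev/hdc1) and the mount point are assumptions for a two-IDE-drive box, and everything here needs root.

```shell
# Create a two-disk RAID 1 mirror from two IDE drives
# (device names are examples -- substitute your own partitions)
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1

# Watch the initial mirror sync progress
cat /proc/mdstat

# Put a filesystem on the mirror and mount it (ext3 as an example)
mkfs.ext3 /dev/md0
mount /dev/md0 /var/www
```

The array is usable immediately; the initial sync just runs in the background.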
What hardware RAID cards are people using that they're having problems with? We've been using a good number of 3ware cards for well over a year now and we have yet to see any issue with them.
I agree. Plus, with most cards you can back up the RAID config offline, which means if the card dies you can put a replacement in and have the RAID config back.
As for hardware RAID actually mangling the drives/contents, no, that's been very rare. It maybe happened once, on an old DEC server where we put the wrong driver on, and that's in about 16 years of working with hardware RAID.
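Software RAID has a rough equivalent of that offline config backup: the md superblocks live on the disks themselves, and you can also record the array layout in a config file so a rebuilt or rescue system can reassemble it. A sketch (device names such as /dev/hda1 and the replacement /dev/hde1 are assumptions):

```shell
# Record the current array layout so it can be reassembled later
mdadm --detail --scan >> /etc/mdadm.conf

# If a member drive fails, mark it failed and remove it,
# then add the replacement drive to resync the mirror
mdadm /dev/md0 --fail /dev/hda1 --remove /dev/hda1
mdadm /dev/md0 --add /dev/hde1
```

Progress of the resync onto the new drive shows up in /proc/mdstat.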
Quote:
Originally posted by KarlZimmer
What hardware RAID cards are people using that they're having problems with? We've been using a good number of 3ware cards for well over a year now and we have yet to see any issue with them.

I was just thinking the same. We've been using RAID for coming up to 5 years now in our servers and certainly aren't seeing the issues many are pointing out. With the 3ware cards, they store the RAID config on the drives, so if the card dies, you just put another one in and away you go.
Ditto.
It's also very handy if another component fails in a server. Just swap out all the hard disks to your spare chassis, and the 3ware controller in that chassis picks up all of the array data off the hard disks.
If you're worried about extra redundancy or performance, look at using RAID5 or RAID10 instead.
Performance (particularly write, compared to the 9500) on the new 9550SX cards is pretty amazing as well.
Raid1 adds huge performance benefits when using a proper hardware card (like a 3ware). Read performance is much better with raid1 :)
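If you want to sanity-check read-throughput claims like that on your own box, a quick and dirty way is the hdparm timing test. A rough sketch only: it needs root, results vary run to run, and /dev/md0 and /dev/hda are example device names.

```shell
# Rough sequential read benchmark on the mirror vs. a single drive
# (-t reads from the device, bypassing the filesystem cache)
hdparm -t /dev/md0   # the RAID 1 device
hdparm -t /dev/hda   # a single member drive, for comparison
```

Run each a few times and average; a single run can be off by a fair margin.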
Areca makes the best SATA cards.
Quote:
Originally posted by theacolyte
Areca makes the best SATA cards.

3ware's cards are the best supported on any Linux distribution. It can be a nightmare trying to get other brands to work, which is why most people tend to recommend them. From Windows-based benchmarks, however, Areca does seem to shine.
You are wrong about Areca support in Linux; they have native support. And their cards are way better (and more expensive) than 3ware.
Areca aren't in the kernel yet, are they?
The 3w-xxxx driver has been in for ages, and the 3w-9xxx driver has even made it into the standard Red Hat kernels now. I was speaking to the 3ware guys at the Storage Expo in London last week and they said they're hoping to get the latest 9.3 drivers (which support the new 9550SX cards) into the next kernel release.
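If you're unsure which of those drivers your own kernel is using, a quick check (module names 3w-xxxx and 3w_9xxx are taken from the discussion above; whether either is present depends on your kernel build):

```shell
# See whether a 3ware driver module is currently loaded
lsmod | grep 3w

# Check the version of the driver shipped with the running kernel
modinfo 3w_9xxx | grep -i version
```

If the module is built into the kernel rather than loaded as a module, it won't show in lsmod; dmesg output at boot will mention it instead.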
We've got both Areca and 3ware here, and given the performance of the 9550SX cards, we'll be sticking with 3ware for the foreseeable future.
I think it was added to the kernel as of 2.6.11. I'm surprised that the Areca doesn't outperform your 3ware; I'd look into that, as the Areca cards consistently bench higher than 3ware.
Which 3ware cards are you talking about? Areca are better than the 9500 range, but the new 3ware cards are very impressive indeed...
I can't find any links at the moment, and IIRC the comparisons I've seen are mostly for the 9500 series; I'm not sure I've seen anything about the 9550SX. But I do know that, at least in CentOS, the stock kernels don't support it.
I admit I don't know much about the lowest end of the RAID world, but in the midrange and high-end part of it, software RAID is not even an option. I don't know what kind of cards some of the posters here had problems with, but they sound very strange. Any halfway professional RAID adapter has been storing RAID array data on the disk drives for the last half-decade or so (the really professional ones much longer than that), and at least in the SCSI RAID area you can swap the defective controller not only for the same model but for any other (older or newer) model with the same interface, and as soon as you switch the server on again you've got your RAID arrays back.
I'd go hardware:
Performance - Not a huge difference, but known to be better than software
Capacity - You aren't clogging your onboard IDE slots
OS independent - It doesn't matter what OS you use or how messed up your OS gets. The card's BIOS has its own interface, so you can rebuild the array, and you can even pull it up from a Knoppix CD or other rescue method