  1. #1
    Join Date: Mar 2004 | Posts: 426

    Software RAID 1 question

    I am setting up some servers using 2 hard drives and RAID 1 (software). If a hard drive fails, the server will continue to operate, correct? And when I replace the bad drive, it will rebuild the drive to match the good one? I understand the basics of RAID, but this is my first experience actually using it.

    Also, from what I am reading, it is better to use software RAID for RAID 1 rather than hardware (the integrated motherboard type, not 3Ware, etc.) due to stability? I'm sure hardware RAID 1 via a 3Ware card is definitely better, just not economical for my application.

    Thanks !

  2. #2
    Join Date: Nov 2005 | Location: Michigan, USA | Posts: 3,872
    Well, software RAID uses more system resources, while hardware RAID offloads the work to the RAID card. For basic RAID 1 you don't really need an expensive 3Ware card; software RAID would probably be fine. It's when you get to RAID 5 and RAID 10 that you will need hardware RAID.

    And yes, RAID 1 will keep working if one drive fails; it mirrors data from one drive to the other. You need to do something about the boot partition, though, because if the drive that holds the boot partition fails, how will your system boot up? Maybe someone else can enlighten you on that.

  3. #3
    Quote Originally Posted by themicro
    I am setting up some servers using 2 hard drives and RAID 1 (software). If a hard drive fails, the server will continue to operate, correct? And when I replace the bad drive, it will rebuild the drive to match the good one? I understand the basics of RAID, but this is my first experience actually using it.

    Also, from what I am reading, it is better to use software RAID for RAID 1 rather than hardware (the integrated motherboard type, not 3Ware, etc.) due to stability? I'm sure hardware RAID 1 via a 3Ware card is definitely better, just not economical for my application.

    Thanks !
    Here is a decent explanation of RAID (specifically software RAID under the Linux kernel):

    http://tldp.org/HOWTO/Software-RAID-0.4x-HOWTO-2.html

    Software RAID is typically frowned upon in enterprise environments. Hardware RAID is your best bet for an easy recovery during drive failure.

    The cost of RAID controllers should not keep you from the data integrity provided by hardware RAID setups.
    Thomas Brenneke | Network Redux, LLC | http://www.networkredux.com
    • Proud sponsors of the SimpleMachines, ImageMagick and AdiumX projects.

  4. #4
    The main advantage of hardware RAID, in my mind, is ease of replacement. If a drive goes down, you ask your colo provider to swap the one with the red light for the one sitting on the shelf, and it'll rebuild itself with no downtime.

    Software RAID, otoh, can be a pain in the patootie to get to rebuild properly, and is not a no-downtime kind of proposition. It's better than nothing, and you can migrate your users to another box while you rebuild the mirror to minimize service interruption, but it's still nowhere near hardware RAID in terms of low-maintenance repairs.

    That's IMHO, and I've only dealt with about a half-dozen RAID drive failure issues, but the software mirror was by far the worst to deal with.

  5. #5
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    I'll second dzeanah. Software RAID is a PIA... with hardware RAID, just have a tech swap the drive and let it rebuild, or pop into the SCSI BIOS and tell it to rebuild.

    Software RAID can be much more labor-intensive and difficult to get back online in comparison. Considering the cost of your entire server, I've never felt this was a part worth skimping on. The better performance you get with the added hardware is just icing on the cake.

  6. #6
    Thanks for the advice. I have been on the fence between hardware and software RAID. I'm convinced: hardware is the way to go.

    Jeff

  7. #7
    Join Date: Mar 2004 | Posts: 426
    Thanks for the help !

    Sounds like hardware RAID is worth it. Now, is the RAID built into motherboards fairly reliable and better performing than software RAID, or are separate controller cards better? I'm definitely using a 3Ware 9550SX for my RAID 10 server, no doubt, but I'm curious about motherboards that have RAID built in for RAID 1 on lower-end systems.

  8. #8
    Unless you are running Windows, you can't use on-board HostRAID, because there is no HostRAID driver for Linux. You can only use the Linux software RAID driver (md.o) with the on-board SATA controller, whether it is configured as HostRAID or not.

    With Linux software RAID 1, you need to be somewhat of a file-system expert in order to resync the array after the defective drive is replaced. With hardware-based RAID 1, all you have to do is replace the drive, then instruct the RAID card to rebuild.
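
    (For what it's worth, the resync itself boils down to a couple of mdadm commands. A minimal sketch, assuming the replaced disk is /dev/sdb with member partition /dev/sdb1 in array /dev/md0; device names are illustrative:)

        sfdisk -d /dev/sda | sfdisk /dev/sdb   # clone the partition layout from the surviving drive
        mdadm /dev/md0 --add /dev/sdb1         # re-add the member; the resync starts automatically
        cat /proc/mdstat                       # watch the resync progress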

  9. #9
    Join Date: Nov 2004 | Location: Switzerland | Posts: 855
    Are all hardware RAID cards bootable? I would like to buy an entry-level one on eBay, but I am not sure if my server will recognize an added PCI-X card at boot time.
    .:. Enterprise SAN Consultant .:.

  10. #10
    Quote Originally Posted by edelweisshosting
    Are all hardware RAID cards bootable? I would like to buy an entry-level one on eBay, but I am not sure if my server will recognize an added PCI-X card at boot time.
    Yes, they are bootable. In the motherboard's BIOS, the RAID card is classified as a "SCSI controller", and you need to make sure it's included in the boot device list in your desired boot order.

  11. #11
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    Quote Originally Posted by themicro
    Thanks for the help !

    Sounds like hardware RAID is worth it. Now, is the RAID built into motherboards fairly reliable and better performing than software RAID, or are separate controller cards better? I'm definitely using a 3Ware 9550SX for my RAID 10 server, no doubt, but I'm curious about motherboards that have RAID built in for RAID 1 on lower-end systems.
    That would depend. Are you talking about a true server motherboard with an on-board RAID card, or a consumer PC motherboard with a RAID chip on board?

    If it's a true server MB with an LSI Logic or Adaptec controller, you should be fine, as long as it has all the drive connections and RAID levels that you require.

    If it's a consumer PC MB with some cheap RAID chip, it's definitely not going to be as reliable as, or perform better than, a 3ware, LSI, or Adaptec. Plus, most of those chips don't offer anything above RAID 1, have poor performance, and may not even allow hot swap. Again, for someone who was planning on using software RAID, one of these bundled RAID chips on the MB would at least be better than nothing (i.e. software RAID), but personally it's not my preference.

  12. #12
    Join Date: Nov 2005 | Location: Michigan, USA | Posts: 3,872
    Most of the consumer PC boards with onboard RAID don't have drivers for Linux, so if you're trying to run Linux with RAID you need to buy a RAID card that is supported.

    Promise would be the most affordable, but 3Ware, LSI and Adaptec would be the better performers. They should all support Linux.

  13. #13
    Join Date: Mar 2004 | Posts: 426
    Some GREAT feedback. I've been in the hosting business since 2003, but since my decision to go colo, I have REALLY learned a lot. Sometimes nerve-wracking, but I am really learning the business like I never thought I would. And I thought my switch from reseller to dedicated a couple of years back was a nervous experience !!
    Anyway, my situation is that I wanted to set up all of my customers' dedicated servers with 2 drives and RAID 1. Oftentimes I am a one-man show, and this would add to the reliability as well as be a good selling feature. But I am looking at converting several servers from leased to owned, and adding a $300+ RAID card to each server is not feasible, and maybe not even smart in the long run: if you had 1 drive failure in 100 servers, you would have spent $30,000 to prevent the headaches of that one situation. This is why I was considering software RAID. The Intel boards I will be using do not have a RAID controller built in, but I do see lots of RAID controller cards from $15 up to several hundred dollars. I'm just wondering if a medium-duty RAID card would be smart in this situation. The servers will be Pentium D machines with SATA II (3.0 Gb/s) hard drives, some higher-end. Anyone have any experience with a cheaper RAID card?
    But in reality, since I am using the "server" grade hard drives from WD (the ones with "YS" at the end of their P/N), a single drive, or dual drives with backups on-server and/or off-server, may be good enough without the extra headache and expenditure.
    Now, for my servers that house my shared hosting accounts: yes, 3Ware RAID 10 with hot swap all the way! A drive failure in one of those could have a much larger effect.

  14. #14
    Join Date: Mar 2004 | Posts: 426
    Thanks devonblzx! You posted that just as I was asking !

  15. #15
    Just some feedback on software raid.

    We've used it on a few hundred servers.

    We find it easy to use and manage. You'd need to be familiar with the mdadm tool, but that is pretty straightforward.
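
    (For anyone new to it, a minimal sketch of the basics, assuming a two-disk mirror at /dev/md0 built from /dev/sda1 and /dev/sdb1; the names are illustrative:)

        mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # build the mirror
        mdadm --detail /dev/md0                                                  # check array health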

    The software RAID overhead doesn't noticeably affect server performance. In fact, since you're probably going to be running a server with a modern CPU, you're likely to find it faster than a RAID card with a simple controller processor.

    Software raid is flexible.

    Software RAID lets you mix and match disk sizes, e.g. we can pair up a 40GB and a 41GB drive (or a 60/80/120/200/etc. drive) and it will work (as a 40GB RAID 1 array).

    Software RAID lets you mix and match drive types, SATA and PATA.

    Software RAID lets you move disks from one machine to another without having to move the RAID controller card as well.

    Linux driver support for software RAID is good. No need to get a custom kernel module installed before you can use your hardware RAID controller.

    Software RAID lets you use as many disks as your server will hold. And you have access to any RAID level you desire, not just what the RAID controller card offers.

    The software RAID array can be rebuilt after boot-up while the server is operating as normal (see the snippet below).
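
    (Illustrative only: roughly what a rebuild in progress looks like in /proc/mdstat; the sizes and speeds here are made up:)

        md0 : active raid1 sdb1[1] sda1[0]
              40064192 blocks [2/2] [UU]
              [==>..................]  resync = 12.4% (4968448/40064192) finish=35.1min speed=16640K/sec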

    You don't need to go into the BIOS at all to configure it (which would otherwise be tricky if you are remote from the server).

    Cheers, Peter
    RimuHosting.com - VPS Hosting and Dedicated Server Hosting since 2003
    Pingability.com - Peace of Mind Web Site Monitoring

  16. #16
    Quote Originally Posted by themicro
    Some GREAT feedback. I've been in the hosting business since 2003, but since my decision to go colo, I have REALLY learned a lot. Sometimes nerve-wracking, but I am really learning the business like I never thought I would. And I thought my switch from reseller to dedicated a couple of years back was a nervous experience !!
    Anyway, my situation is that I wanted to set up all of my customers' dedicated servers with 2 drives and RAID 1. Oftentimes I am a one-man show, and this would add to the reliability as well as be a good selling feature. But I am looking at converting several servers from leased to owned, and adding a $300+ RAID card to each server is not feasible, and maybe not even smart in the long run: if you had 1 drive failure in 100 servers, you would have spent $30,000 to prevent the headaches of that one situation. This is why I was considering software RAID. The Intel boards I will be using do not have a RAID controller built in, but I do see lots of RAID controller cards from $15 up to several hundred dollars. I'm just wondering if a medium-duty RAID card would be smart in this situation. The servers will be Pentium D machines with SATA II (3.0 Gb/s) hard drives, some higher-end. Anyone have any experience with a cheaper RAID card?
    But in reality, since I am using the "server" grade hard drives from WD (the ones with "YS" at the end of their P/N), a single drive, or dual drives with backups on-server and/or off-server, may be good enough without the extra headache and expenditure.
    Now, for my servers that house my shared hosting accounts: yes, 3Ware RAID 10 with hot swap all the way! A drive failure in one of those could have a much larger effect.
    In my experience the cheaper cards simply use the same chipsets as the big brand names; they do the job cheaply. Of course, the quality isn't as good, but I haven't had any failures or issues using cheaper RAID cards in the past.

    RAID cards costing several hundred dollars really are not necessary.
    BeeServe
    * Rock solid shared & reseller UK webhosting. No downtime™ *
    Now offering fully managed VPS servers

  17. #17
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    You need to weigh the pros and cons of your backup strategy based on your budget, needs, and requirements.

    #1. You need to realize that RAID is not disaster recovery. Make sure your servers have full backups that are being taken and stored on a separate server at a minimum, preferably off-site.

    #2. If a full server-grade RAID card is too expensive and you're leaning towards software RAID or a backup hard drive, you need to account for how much time it will take a tech to restore that backup drive as the primary drive, or to rebuild the software RAID, versus simply popping in a replacement drive.

    It's not all black and white. Most times you can't just look at it and say that $300 is too much for a hardware RAID option that may only realize its value one time in twenty by recovering from a drive failure.

    You need to look at the total value of the service it provides in hands-on fees saved: perhaps you don't pay a tech $100+/hr for after-hours support restoring a single system from the OS level. Perhaps you and your customer don't lose hundreds or thousands of dollars in profit from downtime and the resulting issues.

    Work long enough in this industry and in IT generally, and you tend to become a true believer in Murphy's Law: what can go wrong, will.

    As for selecting a mid-level card, you might want to look at some of the servers that well-known system resellers like apaqdigital sell at their various price points. They definitely stake their business on knowing which MBs and chipsets work for their clientele and which don't, just as much as most hosts know from experience what does and doesn't work.

    I've had some bad experiences with Promise chipsets in the past, but that was with their lower-end offerings and on-board chips. I haven't tried any of their higher-end cards.

  18. #18
    Join Date: Oct 2004 | Location: Southwest UK | Posts: 1,159
    I can safely say that, while a good hardware RAID card with a hot-swap backplane is the best (but expensive) solution, a software RAID setup is better (and very cost-effective) compared to a single drive.

    The performance is (they say) better than the cheap BIOS-style RAID cards, but obviously not as good as the expensive cards with dedicated processors on them.

    Chances are a drive will never die and you'll never need to worry about the RAID array, but if one does, you'll be very happy you had something, and the cost of recovery is significantly lower than with no RAID at all.

    The time taken to repair is reasonably small: shut the server down, replace the drive, bring it back up. (Note: there won't be any flashing red lights to tell you which one died, so try to get the serial numbers of the drives through SMART so it's easy to tell; see the example below.)
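
    (For example, with smartmontools, assuming the drives are /dev/sda and /dev/sdb; adjust the device names to your setup:)

        smartctl -i /dev/sda | grep -i serial   # record these when you build the box
        smartctl -i /dev/sdb | grep -i serial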

    The time the software array takes to rebuild is about the same as the time a hardware card will take to rebuild anyway, so you'll suffer the same amount of degraded performance.

    The only caveat to software RAID is that you NEED to install grub on both drives. If one dies, you do not want to reboot only to find 'no os found' on the boot screen! To do this, go into the grub shell and run setup on both hd0 and hd1, as in the sketch below. Software RAID does not mirror the MBRs (or rather, grub doesn't get installed on the second drive after the array is created).
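
    (A minimal sketch from the grub shell, assuming grub legacy with the disks visible as hd0 and hd1 and /boot on the first partition; adjust to your layout:)

        grub> root (hd0,0)
        grub> setup (hd0)
        grub> root (hd1,0)
        grub> setup (hd1)
        grub> quit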
    Do not meddle in the affairs of Dragons, for you are crunchy and taste good.

  19. #19
    Join Date: Apr 2002 | Location: Auckland - New Zealand | Posts: 1,572
    Having managed close to 100 servers running software RAID 1 for the past few years, I would wholeheartedly recommend software RAID as a viable, cost-effective and redundant solution.

    I have had more problems in that time with hardware RAID than I did with software.

    You can hot-swap hard drives running Linux, which means NO downtime with software RAID. When a drive fails, you just need to remove the device from the array, add the new one, and rebuild (see the sketch below).
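
    (A minimal sketch with mdadm, assuming the failed disk is /dev/sdb with member partition /dev/sdb1 in /dev/md0; device names are illustrative:)

        mdadm /dev/md0 --fail /dev/sdb1       # mark the dying member as failed
        mdadm /dev/md0 --remove /dev/sdb1     # pull it out of the array
        # ...swap the disk and re-create the partition layout, then:
        mdadm /dev/md0 --add /dev/sdb1        # re-add; the rebuild starts automatically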

    You do need to install grub on both drives. This isn't hard: when you set a server up, just run grub-install (or use the grub shell) to install the bootloader on both devices.
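
    (Something like this, assuming the two disks are /dev/sda and /dev/sdb:)

        grub-install /dev/sda
        grub-install /dev/sdb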

    The beauty of grub is that, in case of a large problem, you can boot the system up on an sd* device rather than md* by simply editing the root=/dev/xxx kernel parameter at boot (see the example below).
    That has saved my bacon, and customers' data, a few times.
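
    (For instance, a hypothetical kernel line edited at the grub boot menu; the kernel version and device names are illustrative:)

        kernel /vmlinuz-2.6.18 ro root=/dev/md0    # normal boot from the array
        kernel /vmlinuz-2.6.18 ro root=/dev/sda1   # emergency boot from one half of the mirror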

    Software RAID rocks and, for most people, will perform absolutely fine.

  20. #20
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    I understand where a few of you are coming from with "software RAID works for us". Personally, I think it's a different kind of technician and company that typically utilizes it as a solution, i.e. ones with staff who are more likely to spend time at the command line tweaking Linux, and where the scale of operation and on-site staffing minimize the drawbacks of software RAID while still realizing cost savings long term.

    From what I read, the OP not only may be unable to justify the full cost of hardware RAID, he also doesn't have the staff or on-site support, and may or may not be able to personally deal with software RAID issues in a timely fashion following a failure at a remote colo. I certainly wouldn't feel comfortable recommending it in a situation like that.

  21. #21
    Quote Originally Posted by sshepherd
    I understand where a few of you are coming from with "software RAID works for us". Personally, I think it's a different kind of technician and company that typically utilizes it as a solution, i.e. ones with staff who are more likely to spend time at the command line tweaking Linux, and where the scale of operation and on-site staffing minimize the drawbacks of software RAID while still realizing cost savings long term.

    From what I read, the OP not only may be unable to justify the full cost of hardware RAID, he also doesn't have the staff or on-site support, and may or may not be able to personally deal with software RAID issues in a timely fashion following a failure at a remote colo. I certainly wouldn't feel comfortable recommending it in a situation like that.
    That's an awfully good point to make!

    Truth be told, a lot of the customers we have dealt with didn't even know how to set up Linux software RAID 1, let alone rebuild one if a drive should fail. Hardware RAID 1 is still much easier to deal with for the inexperienced user. A 3ware 8006-2LP HW RAID 1 card costs about $130, but a DC can charge you $60 per 15 minutes to rebuild a software RAID 1 if you don't have the know-how.

  22. #22
    Join Date: Apr 2002 | Location: Auckland - New Zealand | Posts: 1,572
    Quote Originally Posted by [email protected]
    that's awfully good point to make!

    true to be told, lot of customers we have dealt with didn't even know how to set up Linux software RAID1, let alone rebuilding one if one drive should fail. hardware RAID1 is still much easier to deal with for "in-experienced' user. 3ware 8006-2LP HW RAID1 card costs about $130, but DC can charge you $60 per 15-minute to rebuild software RAID1 if you don't have the know-how.
    Granted

    A lot of colo places don't have staff skilled in software RAID, and that is a factor I guess.

    It's really not hard to manage once you are comfortable with it, but it can turn to custard quite easily if you're not. The same applies to hardware RAID too, but most techs know how to handle hardware RAID since it's so common.

  23. #23
    Join Date: Oct 2004 | Location: Southwest UK | Posts: 1,159
    Some of the posters here are deluding themselves that a hardware card is the answer to all RAID issues. If a drive dies, it requires physical intervention at your server. Even if you have a RAID card, unless you have the corresponding hot-swap hardware, you will have the same amount of downtime as you'd have with software RAID.

    I hope no-one reading this thread will think they can just open up the case and unplug a drive from a running server. Worse - plugging the new drive in might result in you needing another drive...
    Do not meddle in the affairs of Dragons, for you are crunchy and taste good.

  24. #24
    Join Date: Apr 2002 | Location: Auckland - New Zealand | Posts: 1,572
    Quote Originally Posted by gbjbaanb
    Some of the posters here are deluding themselves that a hardware card is the answer to all RAID issues. If a drive dies, it requires physical intervention at your server. Even if you have a RAID card, unless you have the corresponding hot-swap hardware, you will have the same amount of downtime as you'd have with software RAID.

    I hope no-one reading this thread will think they can just open up the case and unplug a drive from a running server. Worse - plugging the new drive in might result in you needing another drive...
    Agreed, but you can hot-swap even if your hardware card or system doesn't support hot-swapping. This is possible with Linux, but you do need to know what you are doing.

  25. #25
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    Quote Originally Posted by gbjbaanb
    Some of the posters here are deluding themselves that a hardware card is the answer to all RAID issues. If a drive dies, it requires physical intervention at your server. Even if you have a RAID card, unless you have the corresponding hot-swap hardware, you will have the same amount of downtime as you'd have with software RAID.

    I hope no-one reading this thread will think they can just open up the case and unplug a drive from a running server. Worse - plugging the new drive in might result in you needing another drive...
    It's an assumption with hardware RAID that someone is buying a properly built server with external hard drive bays that can be hot-swapped, such as the standard servers from Dell, HP, IBM, and nearly all third-party vendor offerings. With those, it's as simple as pulling the drive, replacing it with a compatible or larger drive, and letting the RAID rebuild it over the next couple of hours. It takes no more than a few minutes at the server in question. In that case, there's a huge difference.

    If you're suggesting someone out there take the advice of hardware RAID for some budget server with internal drives, no hot-swap support, and likely an on-board RAID chipset that doesn't have full functionality... no, there would be very little point to that over software RAID. At the very least it'd be a wash between the two options in that case.

  26. #26
    Join Date: Jan 2007 | Location: Nashville area | Posts: 11
    Interesting. I have never used software RAID, only HW (Compaq/HP) built-in RAID, usually set up with RAID 5. What software RAID are you using?

    -daniel

  27. #27
    Quote Originally Posted by retep
    Just some feedback on software raid.

    The software RAID overhead doesn't noticeably affect server performance. In fact, since you're probably going to be running a server with a modern CPU, you're likely to find it faster than a RAID card with a simple controller processor.
    I'd agree with that. RAID 1 uses hardly any extra system resources. We were going to go with software RAID 1 in the latest server builds but had bad luck with it, so for a little over $100 each we added 3ware 8006-2LP hardware RAID cards on Hitachi 500GB drives. It turned out to be a good move.

  28. #28
    Join Date: Mar 2004 | Posts: 426
    I noticed the 3ware 8006-2LP is SATA-150, and I am running SATA-300 drives. Is there a comparable card at a similar price that is SATA II? I can't seem to find one without getting into some higher $$$.

  29. #29
    Join Date: Jan 2005 | Location: San Francisco/Hot Springs | Posts: 988
    Quote Originally Posted by themicro
    I am setting up some servers using 2 hard drives and RAID 1 (software). If a hard drive fails, the server will continue to operate, correct?
    It depends on the type of failure really.

    In general, HW RAID is much better at detecting and qualifying a failure.
    SW RAID is neat and cheap, but it pretty much requires downtime to fix a real problem, and there are several situations in which I have yet to see Linux SW RAID perform properly.
    With SW RAID you're relying on the OS's driver to control the drive and the reads/writes to it; this can cause a problem when the bus hangs or resets.

    The bottom line is that if you care about raid, buy the HW.
    If you're doing it as a cheap gimmick, by all means, SW.
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  30. #30
    Join Date: Mar 2004 | Posts: 426
    It's not a gimmick; RAID 1 is certainly a good selling feature. With the budget-to-medium-scale servers I am offering, I could easily just offer single drives, or dual drives with one used for backups. But first, I want something with a little more reliability should a drive failure occur, something that would keep downtime to a minimum if access to the server is delayed.
    Second, it must be economical, to help keep initial costs down on server hardware. From what I'm reading, I think software RAID is a good option for this situation versus no RAID at all.
    I will also have a backup server connected to the servers on a private LAN for backups.
    I will offer 3Ware hardware RAID to my customers if desired, for an extra charge of course.

  31. #31
    Join Date: Jan 2005 | Location: San Francisco/Hot Springs | Posts: 988
    Quote Originally Posted by themicro
    But first, I want something with a little more reliability should a drive failure occur, something that would keep downtime to a minimum if access to the server is delayed.
    See, though, that's the thing: with software RAID, any drive failure can cause a complete channel failure, which may not be handled properly and can hang the whole array.
    SW raid really only helps you in certain situations and in certain failure cases...
    AppliedOperations - Premium Service
    Bandwidth | Colocation | Hosting | Managed Services | Consulting
    www.appliedops.net

  32. #32
    Join Date: Oct 2004 | Location: Southwest UK | Posts: 1,159
    It's hardly a cheap gimmick. It's there for drive redundancy, like all RAID systems. If a drive fails, you still have your data. You may need a reboot, and you may need to power down to replace a drive, but you're still better off than losing everything and restoring from backups.

    HW RAID is just more of the same: add expensive backplanes for hot-swap and dedicated XOR processing and you'll be happier with it, but it's still only there to save you time if (when) a drive fails.

    Considering the MTBF of modern hard drives, SW RAID offers outstanding price/performance for what you're trying to achieve by using it, and it is more than suitable for 99% of the readers of this forum.
    Do not meddle in the affairs of Dragons, for you are crunchy and taste good.

  33. #33
    Greetings:

    In 12 years of experience in this business, software RAID does not equal hardware RAID.

    And if you want peace of mind, get hardware raid.

    Thank you.
    ---
    Peter M. Abraham
    LinkedIn Profile

  34. #34
    Join Date: Jan 2004 | Location: Chicago | Posts: 984
    Quote Originally Posted by gbjbaanb
    Considering the MTBF of modern hard drives, SW RAID offers outstanding price/performance for what you're trying to achieve by using it, and it is more than suitable for 99% of the readers of this forum.
    Count me in the 1% that won't compromise with SW RAID. Been there, done that. It doesn't fit my applications, and the cost of HW RAID isn't excessive for the benefits you get.

  35. #35
    Quote Originally Posted by themicro
    From what I'm reading, I think software RAID is a good option for this situation versus no RAID at all.
    You might want to test that theory out in the real world for a couple of months. I find that no RAID at all is better than software RAID as far as reliability and recovery time go. Recovery time in each instance is the same whether the array dies 4 times in 4 months or the single drive dies once in 4 years.

    Here's my experience with software RAID.

    Live server: 4 drives in RAID 5, lost power, booted back up. Next morning, complaints from customers about password rejections; logging in via SSH gave an error about the passwd file missing; logging in via KVM showed 3 corrupted drives. Rebooting gave errors that there were not enough drives to even restore the array.

    Another LIVE server rebooted and gave file system errors from hell during fsck. After 2 hours of pressing "Y", I rebooted it, but it wouldn't come back up.

    At least 4 other servers that never made it to being live had similar problems. Different mobos, different CPUs. All CentOS.

    Contrast that with the same servers on hardware RAID. We have had 4-5 drives go bad in the last two months, one yesterday. After replacing the drive, and with not even 5% yet mirrored back into the array, we lost power at least 4 times due to battery backup breaker switch issues. When that was fixed, the degraded drive was fully restored without error.

    Other drive failures on hardware RAID have been the same: walk over, put the drive in, walk away (on the 9000-series 3ware); on the 8000 series you have to manually tell it to add the drive back in during boot.

  36. #36
    Join Date: Apr 2003 | Location: San Jose, CA. | Posts: 1,622
    While I probably wouldn't do a software RAID 5...

    On a dual-drive system... with no need for more than a few GB of space... I'll more than likely software-RAID 1 the two drives.

    Never had any problems from doing so... and it beats having to do a minimal OS install & restore from backups in the event of a failure.

    P.S. Not a big fan of 3ware cards though... seen way too many have the exact same issues your software RAIDs had.
    P.P.S. Big fan of Areca and RAID 6
