  1. #1
    Join Date
    Jul 2009
    Posts
    71

    Raid Setups - Double Drive Failure

    Just thinking out loud. With increased disk densities, what RAID levels are you running on your boxes? We are primarily virtualized, with three boxes that remain physical. On our virtualized infrastructure we are running RAID 50. On top of that, we have another box that is an exact replica of our primary SAN.

    However, just to throw this out there, my beef is that larger-density drives mean longer rebuild times, etc. We recently saw one box have a double disk failure, and it was great that we are CDP'ing most of our data.

    When was the last time y'all saw a double disk failure?

  2. #2
    Join Date
    Apr 2002
    Location
    Auckland - New Zealand
    Posts
    1,572
    Had one recent one: a RAID 0 stripe, both drives nuked, along with the RAID card. That was an Xserve. I've had plenty of RAID card failures, way more than hard drive failures in RAID arrays. Some recoverable, some not. Seems you need redundant and failover RAID cards more than hard drives!

    In 4 years of running a NetApp, we only had 1 drive fail in that time.

    Got a couple of filers using RAID 50 as well.

  3. #3
    Join Date
    Sep 2008
    Location
    Dallas, TX
    Posts
    4,552
    We use only RAID 10. I've had a double drive failure. Then a third drive died the next day :O
    Jacob Wall - GetCloak.com

  4. #4
    Join Date
    Jul 2009
    Posts
    71
    Heh. Humor me: what were you running on the RAID 0 on the Xserve?
    We have tons and tons of G-RAIDs on RAID 0, but those are just swap drives, per se.


    If I had to take a wild guess, a lot of 2U boxes are commonly loaded with RAID 5 and a hot spare, which gets risky with high-density drives and long rebuild times, IMO. But one's experience generally shapes decisions like that ;-)
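
    The rebuild-window worry above can be put in rough numbers. A minimal back-of-envelope sketch (my own illustration, assuming independent exponential drive failures; the drive count, MTTF, and rebuild-time figures are made-up examples, not anything stated in this thread):

```python
# Back-of-envelope: probability that a second drive in a RAID 5 group
# fails while the array is rebuilding, assuming each surviving drive
# fails independently at a constant (exponential) rate of 1/MTTF.
import math

def p_second_failure(n_drives: int, mttf_hours: float, rebuild_hours: float) -> float:
    """P(at least one of the n-1 surviving drives fails during the rebuild window)."""
    rate = (n_drives - 1) / mttf_hours      # combined failure rate of the survivors
    return 1 - math.exp(-rate * rebuild_hours)

# Hypothetical example: 8-drive RAID 5, 1,000,000-hour MTTF per drive.
# A longer rebuild (bigger drives) means a proportionally bigger risk window.
for rebuild in (24, 72):
    print(f"{rebuild:>3}h rebuild: {p_second_failure(8, 1_000_000, rebuild):.4%}")
```

    Note this model ignores correlated failures (same batch, shared controller, rebuild stress), which the posts above suggest are the real killer, so real-world risk is higher than these figures.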

  5. #5
    Join Date
    Feb 2008
    Location
    Houston, Texas, USA
    Posts
    2,955
    I tend to have nightmares about dying controllers and/or failed HDs. We don't use large arrays anymore, only 1U boxes with 4 hot-swap disks. It keeps the density in check. Arrays of 12 or more shared spindles are to be avoided!

    Keep it simple and manageable.

    Regards
    UNIXy - Fully Managed Servers and Clusters - Established in 2006
    [ cPanel Varnish Nginx Plugin ] - Enhance LiteSpeed and Apache Performance
    www.unixy.net - Los Angeles | Houston | Atlanta | Rotterdam
    Love to help pro bono (time permitting). joe > unixy.net

  6. #6
    Join Date
    Jul 2009
    Posts
    71
    Dedicated I can see on the 1U. But VPSes, which in our world would be VMs... Are any of you guys running big beefy boxes onto the SAN fabric, from a web host's perspective? It's a $$ hit on the front end, but much more scalable when you grow. Plus some things, like snapshots, come with it, depending on the type of box you get.

  7. #7
    Join Date
    Apr 2002
    Location
    Auckland - New Zealand
    Posts
    1,572
    OpenBase on the Xserve, RAID 0. It was financial data as well, stuff that you don't want to lose. There was a replica, but it was out of sync.
    That cluster just got a 100K upgrade.

