  1. #1

Question: RAID10 much slower than a single drive, why?

    Just got a server

250 GB SATA x 4 with hardware RAID 10

I expected dd results to be over 200 MB/s, but actually I get:
    =======================================
    dd if=/dev/zero of=test bs=64k count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    67108864 bytes (67 MB) copied, 3.65184 s, 18.4 MB/s
    ======================================

It is much slower than a single drive (I usually get 70-80 MB/s). Can anyone explain this?

  2. #2
    Do a larger test like 3GB
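For example, something along these lines will push 3GB through (the filename and sizes are just an example):

dd if=/dev/zero of=test bs=1M count=3k conv=fdatasync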

  3. #3
    Quote Originally Posted by gordonrp View Post
    Do a larger test like 3GB
    Tried the following:

    bs=512k count=1k
    bs=64k count=16k

Same results, around 18 MB/s. Any ideas?

Also, with unixbench-5.1.2, the results are:

    System Benchmarks Index Values BASELINE RESULT INDEX
    Dhrystone 2 using register variables 116700.0 37882306.5 3246.1
    Double-Precision Whetstone 55.0 4044.8 735.4
    Execl Throughput 43.0 4504.8 1047.6
    File Copy 1024 bufsize 2000 maxblocks 3960.0 1260839.9 3183.9
    File Copy 256 bufsize 500 maxblocks 1655.0 348217.2 2104.0
    File Copy 4096 bufsize 8000 maxblocks 5800.0 3042293.8 5245.3
    Pipe Throughput 12440.0 2394478.0 1924.8
    Pipe-based Context Switching 4000.0 238976.6 597.4
    Process Creation 126.0 18666.1 1481.4
    Shell Scripts (1 concurrent) 42.4 11710.1 2761.8
    Shell Scripts (8 concurrent) 6.0 4864.9 8108.2
    System Call Overhead 15000.0 4356821.9 2904.5
    ========
    System Benchmarks Index Score 2149.7

The IO scores are much better than on my single-drive dedi; I'm totally confused.

  4. #4
Maybe the array is busy resyncing while you are doing your tests? Can you monitor the status of the array, and also check whether there is other I/O going on at the time of your tests?

  5. #5
Yeah, I would agree. Either a resync is happening at the same time, there is background usage on the array, or one or more drives are faulty.

  6. #6
    Quote Originally Posted by rds100 View Post
Maybe the array is busy resyncing while you are doing your tests? Can you monitor the status of the array, and also check whether there is other I/O going on at the time of your tests?
How do I monitor the status of the array?
I don't even know how to detect whether there's a RAID at all, because it is hardware RAID.

I guess it is not busy; it's a freshly set up server, this was the first time I logged on, and the dd test was the first thing I ran.

  7. #7
The array could be doing a background initialization. Get with your provider and ask them.

  8. #8
If you know which RAID card it is, you can download a utility from the card manufacturer (e.g. Adaptec) and use it to monitor the status.
And if it is a newly provisioned server, like an hour ago, it is probably still resyncing. Give it some time.
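If you're not sure which card is in the box, something like this will usually reveal it (assuming the pciutils package is installed):

lspci | grep -i raid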

  9. #9
    Do you know which RAID card it is?

    Edit: a few seconds faster than me

  10. #10
Thanks all. It's been 5 hours since I received the welcome letter; would resyncing take that long?

    I will submit a support ticket about this

  11. #11
    Quote Originally Posted by observerss View Post
How do I monitor the status of the array?
I don't even know how to detect whether there's a RAID at all, because it is hardware RAID.

I guess it is not busy; it's a freshly set up server, this was the first time I logged on, and the dd test was the first thing I ran.
This is a major reason I don't like hardware RAID. With software RAID you would know just by doing cat /proc/mdstat.

  12. #12
    Quote Originally Posted by funkywizard View Post
This is a major reason I don't like hardware RAID. With software RAID you would know just by doing cat /proc/mdstat.
Hardware RAID can easily be monitored via the CLI. You can even set it up to send a status report to your email daily.

It will tell you if the array is degraded, rebuilding, etc.

    Adaptec has a really nice CLI.
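As a rough sketch (the exact arcconf syntax may differ, and the path/address are just placeholders), a daily status email could be a cron entry like:

0 8 * * * /usr/sbin/arcconf getconfig 1 ld | mail -s "RAID status" you@example.com

Other vendors' CLI tools can be wired up the same way.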

  13. #13
You will need to install the client-side utilities anyway. It is all well and good having hardware RAID, but if you have no method of monitoring it then it's pointless, because a failed drive could go unnoticed.

Get the client utilities installed as a priority to save yourself heartache in the future. As well as letting you monitor the array, certain cards have settings that change the performance/reliability of the RAID sets.

  14. #14
All right, I got the CLI utilities; the RAID card is a 3ware 9500S.
After a brief read of the manual, I checked the status with the following command:
    /c0/u0 show all
    /c0/u0 status = OK
    /c0/u0 is not rebuilding, its current state is OK
    /c0/u0 is not verifying, its current state is OK
    /c0/u0 is initialized.
    /c0/u0 Write Cache = on
    /c0/u0 volume(s) = 1
    /c0/u0 name =
    /c0/u0 serial number = 0ZC11001AF1E44005640
    /c0/u0 Ignore ECC policy = off
    /c0/u0 Auto Verify Policy = off

    Unit UnitType Status %RCmpl %V/I/M Port Stripe Size(GB)
    ------------------------------------------------------------------------
    u0 RAID-10 OK - - - 64K 465.641
    u0-0 RAID-1 OK - - - - -
    u0-0-0 DISK OK - - p2 - 232.82
    u0-0-1 DISK OK - - p3 - 232.82
    u0-1 RAID-1 OK - - - - -
    u0-1-0 DISK OK - - p0 - 232.82
    u0-1-1 DISK OK - - p1 - 232.82
The array looks good. Any ideas?

  15. #15
    /c0 show all
    /c0 Driver Version = 2.26.02.014
    /c0 Model = 9500S-4LP
    /c0 Available Memory = 112MB
    /c0 Firmware Version = FE9X 2.08.00.009
    /c0 Bios Version = BE9X 2.03.01.052
    /c0 Boot Loader Version = BL9X 2.02.00.001
    /c0 Serial Number = L19004A5211112
    /c0 PCB Version = Rev 019
    /c0 PCHIP Version = 1.50
    /c0 ACHIP Version = 3.20
    /c0 Number of Ports = 4
    /c0 Number of Drives = 4
    /c0 Number of Units = 1
    /c0 Total Optimal Units = 1
    /c0 Not Optimal Units = 0
    /c0 JBOD Export Policy = off
    /c0 Disk Spinup Policy = 1
    /c0 Spinup Stagger Time Policy (sec) = 2
    /c0 Cache on Degrade Policy = Follow Unit Policy

    Unit UnitType Status %RCmpl %V/I/M Stripe Size(GB) Cache AVrfy
    ------------------------------------------------------------------------------
    u0 RAID-10 OK - - 64K 465.641 ON OFF

    Port Status Unit Size Blocks Serial
    ---------------------------------------------------------------
    p0 OK u0 232.83 GB 488281250 S2B5J90ZC11001
    p1 OK u0 232.83 GB 488281250 S2B5J90ZC10818
    p2 OK u0 232.83 GB 488281250 S2B5J90ZC10822
    p3 OK u0 232.83 GB 488281250 S2B5J90ZC10986

  16. #16
Do you have a BBU on that card? I see "/c0/u0 Write Cache = on", which could lead to problems as well. What drives are in there? If you've got some cheap WD Blue drives in there instead of "RE" drives, that could explain it too.

    I'd have the DC check the hardware.

  17. #17
    Quote Originally Posted by observerss View Post
All right, I got the CLI utilities; the RAID card is a 3ware 9500S.
After a brief read of the manual, I checked the status with the following command:


The array looks good. Any ideas?
The 64K stripe is going to make the array run like crap. Linux's default readahead is 128K, so every read request will hit both drives. That doesn't account for the bad sequential read speeds, but I would expect the RAID 10 not to run any better than a single drive for random I/O.
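If you want to see or change the readahead yourself, blockdev does it (the device name here is just an example; the value is in 512-byte sectors):

# show the current readahead; 256 means 128K
blockdev --getra /dev/sda
# set a 512K readahead
blockdev --setra 1024 /dev/sda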

  18. #18
    Quote Originally Posted by gordonrp View Post
Do you have a BBU on that card? I see "/c0/u0 Write Cache = on", which could lead to problems as well. What drives are in there? If you've got some cheap WD Blue drives in there instead of "RE" drives, that could explain it too.

    I'd have the DC check the hardware.
    No BBU

Hard drives are 4x Samsung:

    /c0/p0 show all
    /c0/p0 Status = OK
    /c0/p0 Model = SAMSUNG HE253GJ
    /c0/p0 Firmware Version = 1AJ30001
    /c0/p0 Serial = S2B5J90ZC11001
    /c0/p0 Capacity = 232.83 GB (488281250 Blocks)
    /c0/p0 Belongs to Unit = u0
    (0x0B:0x0002): Feature not supported
    Thanks for the advice

  19. #19
    Quote Originally Posted by funkywizard View Post
    the 64k stripe is going to make the array run like crap. linux's default readahead is 128k, so every read request will hit both drives. doesn't account for the bad sequential read speeds, but I would expect that the raid 10 wouldn't run any better than a single drive for random i/o
So what should I do? Change Linux's default readahead to 64K,
or change the stripe size to 256K?

The manual says that for RAID 10 the 9500S can only use a stripe size of 16K, 64K, or 256K.

  20. #20
    Quote Originally Posted by observerss View Post
So what should I do? Change Linux's default readahead to 64K,
or change the stripe size to 256K?

The manual says that for RAID 10 the 9500S can only use a stripe size of 16K, 64K, or 256K.
Make the stripe size as big as you can. I personally prefer a 1M or 2M stripe and a 512K readahead, but if 256K is the highest you can do for the stripe, a 256K stripe / 128K readahead might be acceptable. With a 128K readahead and a 256K stripe, about half your reads are going to hit 2 disks in a RAID 10, which isn't ideal, but it's not as bad as 100%.

The bigger the Linux readahead, the better your performance will be for multiple simultaneous sequential reads (e.g. serving medium to large files to a large number of connected users), but if the stripe isn't at least twice as big as the readahead (preferably 4x as big), you'll shoot yourself in the foot on performance.

Just another reason I don't like hardware RAID cards. The default stripe is often 64K, which is terrible for performance, and in many cases like yours even the maximum stripe size is too small. For Linux software RAID the default is 256K (acceptable but not ideal), and you can bump it up to a much more ideal value of 1-2M.
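For reference, with Linux software RAID the chunk size is picked when you create the array; a rough sketch with made-up device names:

mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=1024 /dev/sd[abcd]1

(--chunk is in KB, so 1024 here means a 1M chunk.)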

  21. #21
    Quote Originally Posted by gordonrp View Post
Do you have a BBU on that card? I see "/c0/u0 Write Cache = on", which could lead to problems as well. What drives are in there? If you've got some cheap WD Blue drives in there instead of "RE" drives, that could explain it too.

    I'd have the DC check the hardware.
The write cache actually helps performance a lot:

    write cache off
    dd if=/dev/zero of=test bs=64k count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    67108864 bytes (67 MB) copied, 13.763 s, 4.9 MB/s
    write cache on
    # dd if=/dev/zero of=test bs=64k count=1k conv=fdatasync
    1024+0 records in
    1024+0 records out
    67108864 bytes (67 MB) copied, 4.11232 s, 16.3 MB/s
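For reference, the unit write cache can be toggled with tw_cli, roughly like this (check the 9500S manual for the exact syntax):

tw_cli /c0/u0 set cache=off
tw_cli /c0/u0 set cache=on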

  22. #22
    Quote Originally Posted by funkywizard View Post
Make the stripe size as big as you can. I personally prefer a 1M or 2M stripe and a 512K readahead, but if 256K is the highest you can do for the stripe, a 256K stripe / 128K readahead might be acceptable. With a 128K readahead and a 256K stripe, about half your reads are going to hit 2 disks in a RAID 10, which isn't ideal, but it's not as bad as 100%.

The bigger the Linux readahead, the better your performance will be for multiple simultaneous sequential reads (e.g. serving medium to large files to a large number of connected users), but if the stripe isn't at least twice as big as the readahead (preferably 4x as big), you'll shoot yourself in the foot on performance.

Just another reason I don't like hardware RAID cards. The default stripe is often 64K, which is terrible for performance, and in many cases like yours even the maximum stripe size is too small. For Linux software RAID the default is 256K (acceptable but not ideal), and you can bump it up to a much more ideal value of 1-2M.
How do I change the stripe size?
Do I have to delete the unit and create a new one? Will this destroy the files on the drives?

  23. #23
Again, I would suspect bad drives here (or, I suppose, possibly a lousy RAID card).

I bought 20 500GB Samsung drives and 20 500GB WD drives for 10 Squid proxy servers a few years back. The Samsungs were slower from the start and had a higher failure rate than the WDs. The failures usually weren't outright failures; the performance just became abysmal. I had to stop using quite a few of the Samsungs for cache storage because they became very slow, but only a couple failed outright, so I was able to keep using them as boot drives.

  24. #24
    Quote Originally Posted by observerss View Post
How do I change the stripe size?
Do I have to delete the unit and create a new one? Will this destroy the files on the drives?
Depends on the controller. Most of the time you'll have to destroy the array (and all files on it) and build a new one.

Linux software RAID 5 can change the stripe size on an active array, but I don't believe that works for RAID 10. And of course for hardware RAID it depends on the card; for a card that only accepts a maximum 256K stripe, I'd be surprised if it had a feature to resize the stripe on the fly.
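For what it's worth, that software RAID 5 reshape looks roughly like this (the md device and backup file path are just placeholders):

mdadm --grow /dev/md0 --chunk=512 --backup-file=/root/md0-reshape.bak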

  25. #25
    Quote Originally Posted by observerss View Post
The write cache actually helps performance a lot

    write cache off

    write cache on
Of course it helps performance! But as soon as the power goes out, whatever data was sitting in the RAID card's cache is lost (and presumably whatever was in the individual drive caches too), resulting in possible array/filesystem corruption.

Have the DC check the drives/card; something isn't right with your system's performance, that's for sure.

This is the performance of a heavily loaded RAID 10 (a busy 500k uniques/day server) with an Adaptec 2405 and 4x 1TB RE3 drives:
    dd if=/dev/zero of=test bs=64k count=15k conv=fdatasync
    15360+0 records in
    15360+0 records out
    1006632960 bytes (1.0 GB) copied, 9.1519 seconds, 110 MB/s

  26. #26
    Quote Originally Posted by funkywizard View Post
Depends on the controller. Most of the time you'll have to destroy the array (and all files on it) and build a new one.

Linux software RAID 5 can change the stripe size on an active array, but I don't believe that works for RAID 10. And of course for hardware RAID it depends on the card; for a card that only accepts a maximum 256K stripe, I'd be surprised if it had a feature to resize the stripe on the fly.
Thanks a lot.
Seems the only thing I can do now is update the support ticket and wait...

