  1. #26
    Join Date
    Aug 2000
    Location
    Sheffield, South Yorks
    Posts
    3,627
    We run LSI MegaRAID cards in most of our virtual hosting boxes, most with RAID1, but our mail and MySQL servers run RAID5 and we don't have any significant problems. Sure, the write speed is poor compared to a single drive, but that's always going to be the case, as the controller has to calculate the data distribution and the parity. We're just not seeing any major problems like the ones you seem to be getting.
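    For anyone wondering why the write penalty exists: RAID5 parity is just an XOR across the stripe, so a small random write has to read the old data block and the old parity block, recompute, then write both back - roughly four disk I/Os for one logical write. A rough sketch of the arithmetic (the byte values are made up):

        # RAID5 parity is the XOR of the data blocks in a stripe.
        # A small write = read old data + read old parity
        #               + write new data + write new parity (4 I/Os).
        old_data=0xA5; new_data=0x5A; old_parity=0x3C   # hypothetical values
        new_parity=$(( old_parity ^ old_data ^ new_data ))
        printf 'new parity: 0x%02X\n' "$new_parity"     # prints 0xC3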
    Karl Austin :: KDAWS.com
    The Agency Hosting Specialist :: 0800 5429 764
    Partner with us and free-up more time for income generating tasks

  2. #27
    Join Date
    Nov 2003
    Posts
    385
    I gave up... I just ordered a TC-box with 3 single HDs. I'll have the site use both as master/slave DBs and compare iowait etc.

    I also tried the 2.4.26 kernel, but I'm pretty sure 2.6 performs better (megaraid2), at least in the tests I ran. If you find anything to tune the MegaRAID, let me know and I'll do the same - good luck!

  3. #28
    Join Date
    Aug 2002
    Location
    Seattle
    Posts
    5,525
    Do an offsite backup and have them switch your RAID to 0. It will offer no redundancy, but it should speed things up nicely.

  4. #29
    Join Date
    Apr 2004
    Posts
    1,834
    Dell has crappy RAID solutions. Part of the problem is that they monkey with the OEM BIOS, and secondly, they disable write-back caching under advanced performance (on Windows; I don't know about Linux) unless the controller card comes with a backup battery.

    How do I know this? I have $5,000 worth of their hardware in house that is taking a one way ticket to Round Rock via "Brown" first thing Monday morning.

    I have benched their stuff using Iometer, and have developed a P4 SATA RAID5 package on 64-bit PCI that will do 180 MB/s read and 60 MB/s write.

    I can copy a 540 MB file (a Windows Enterprise file) to the same location in 1:21.
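    For context, copying a file to the same location means the array reads and writes simultaneously, so that time works out to under 7 MB/s in each direction (a quick sanity check, assuming 540 MB and 81 seconds):

        # 540 MB copied in 1:21 (81 s); a same-disk copy reads and writes at once
        echo "scale=1; 540 / 81" | bc   # -> 6.6 MB/s each way, ~13 MB/s combined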

  5. #30
    Join Date
    Nov 2003
    Posts
    385
    Random writes (DB) are actually "OK" - comparable to using 2 disks and splitting the load, putting *.MYD on the first and *.MYI on the second. So I ordered 2 more of these boxes - IMHO the best ratio in terms of redundancy/RAM/disk space, plus 2TB transfer @ TP. Greetings to "Mr. Brown"
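    In case anyone wants to try the same split: MyISAM keeps data (*.MYD) and indexes (*.MYI) as plain files, so you can move the indexes to a second disk and symlink them back. A rough sketch, assuming the second disk is mounted at /disk2 and the database is called mydb (both made-up names); stop mysqld before touching the files:

        # move the MyISAM index files to a second spindle, symlink them back
        mkdir -p /disk2/mysql/mydb
        for f in /var/lib/mysql/mydb/*.MYI; do
            mv "$f" /disk2/mysql/mydb/
            ln -s /disk2/mysql/mydb/"$(basename "$f")" "$f"
        done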

  6. #31
    Join Date
    Apr 2004
    Posts
    1,834
    Yeah, that's too bad.

    I'm just finishing most of the benchmarking on SATA machines, and I've gone through the Dells pretty thoroughly. It wasn't very hard to find solutions that beat their stock 90 MB/s read and 8 MB/s write.

    I am enclosing a live photo concluding nearly 3 weeks of benchmarking different cards/mobos/etc.

    http://psf.biz/images/raid.jpg

    Only thing missing is the beer can on top of the chassis!

    I would be interested in having you run Iometer on your machine. I would like to see how the SATA Raptors compare.

  7. #32
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,681
    We resolved this issue by loading up a server with 2.4.26 and disabling HT in the BIOS.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  8. #33
    Join Date
    Aug 2003
    Location
    Florida
    Posts
    181


    Went from crippling loads of 15 under a light rsync process to being able to tar my entire home directory with a load of less than 1.

    Output of hdparm -Tt /dev/sda1 jumped from 9 MB/s on the second result (buffered disk reads) to 60.
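    For anyone who wants to run the same check: -T measures cached (RAM) reads and -t measures buffered disk reads, and the second figure is the one that actually reflects the array. Sample invocation (the numbers below are illustrative, not from this box):

        # -T: cached reads (RAM); -t: buffered disk reads (the array itself)
        hdparm -Tt /dev/sda1
        #  Timing cached reads:        1024 MB in 2.00 seconds = 512.00 MB/sec
        #  Timing buffered disk reads:  180 MB in 3.00 seconds =  60.00 MB/sec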

    Poweredge 1600SC with an LSI Logic RAID card under RHE.
    Tyler
    www.AdminZoom.com
    "Server Administration Done Right"

    Server setup, hardening, migrations and more

  9. #34
    Join Date
    Feb 2004
    Posts
    963
    Originally posted by freeflight2
    What makes you think that? Their 'photo' on the order page shows Dell boxes.

    It also says "Powered by Dell": http://theplanet.com/control/pro/p2800sr5x_details.html
    How can I check if it's a Dell?
    Dells are extremely cheap servers.

    Their RAID cards are by far the worst I've ever used... The only thing as bad is their Broadcom NICs. Horrible.

    RAID5 on 4 drives - expect to get what, a 90% increase in speed, something like that, unless they're using the LSI cards. MegaRAIDs are bad. Check the drivers - use at least 1.18j (I think 1.18h is out now). Run megamgr and see what the read and write settings are (cached is better than write-through for the read settings). Also see how much cache is on it.
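    If you're not sure which megaraid driver revision you're actually running, something like this should tell you (the exact wording of the messages varies by driver version):

        # the driver announces its version as it loads
        dmesg | grep -i megaraid
        # or query the module directly:
        modinfo megaraid | grep -i version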

    Plain and simple, Dell sucks.

  10. #35
    Join Date
    Dec 2001
    Location
    Toronto, Ontario, Canada
    Posts
    6,896
    Originally posted by mhalligan
    Dells are extremely cheap servers.

    Their RAID cards are by far the worst I've ever used... The only thing as bad is their Broadcom NICs. Horrible.

    RAID5 on 4 drives - expect to get what, a 90% increase in speed, something like that, unless they're using the LSI cards. MegaRAIDs are bad. Check the drivers - use at least 1.18j (I think 1.18h is out now). Run megamgr and see what the read and write settings are (cached is better than write-through for the read settings). Also see how much cache is on it.

    Plain and simple, Dell sucks.
    Man, do you ever need to get your facts straight. You typically will *not* get a 90% "increase in speed" using RAID5; it depends completely on file sizes, OS, caching, etc. RAID5 is designed (as discussed in this thread) for redundancy, not speed. LSIs are some of the better RAID cards out there, and by no means the cheapest either. The 16MB LSI RAID card (their smallest one, I believe) outruns the Adaptec 128MB 2100S, to say the least.
    Myles Loosley-Millman - admin@prioritycolo.com
    Priority Colo Inc. - Affordable Colocation & Dedicated Servers.
    Two Canadian facilities serving Toronto & Markham, Ontario
    http://www.prioritycolo.com

  11. #36
    Join Date
    Mar 2004
    Location
    New York City
    Posts
    995
    If the original poster is still having this issue, have them span the RAID across multiple channels. Looking at your specifics, all 4 drives are on channel 0. This is bad because it means all 4 drives are sharing either 160 or 320 MB/s, depending on what type of RAID card you have installed. Ask them to move 2 of the drives to the second channel if your RAID card supports it (and it should...).
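    The arithmetic is simple: four drives streaming on one shared channel each get about a quarter of the bus, which is less than a single modern drive can push (rough numbers, assuming an Ultra160 channel and ignoring protocol overhead):

        # 4 drives on one U160 channel vs. 2 drives per channel
        echo $(( 160 / 4 ))   # -> 40 MB/s per drive, all on channel 0
        echo $(( 160 / 2 ))   # -> 80 MB/s per drive, split 2+2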

  12. #37
    Dell machines come with CERC (Cost Effective RAID Controller) cards or the ???????????????

    They Suck, Suck, Suck, Suck.

  13. #38
    Join Date
    Feb 2004
    Posts
    963
    Originally posted by porcupine
    Man, do you ever need to get your facts straight. You typically will *not* get a 90% "increase in speed" using RAID5; it depends completely on file sizes, OS, caching, etc. RAID5 is designed (as discussed in this thread) for redundancy, not speed. LSIs are some of the better RAID cards out there, and by no means the cheapest either. The 16MB LSI RAID card (their smallest one, I believe) outruns the Adaptec 128MB 2100S, to say the least.
    RAID5 over 4 drives should get you roughly a 90% write increase, when you factor in the metadata overhead. That's just a normal rule of thumb, and one that usually holds true when using quality components.

    And the idea of LSI being some of the better RAID cards out there is a joke. I've deployed hundreds of them (against my will) and thousands of Vortex cards, as well as hundreds of Adaptec and Mylex cards. LSI is junk: high failure rate, dumbed-down interface, and their one "benefit" is that they're supposedly really forgiving when swapping out failures. Supposedly. The runbook procedure at my last full-time gig was to back up daily and, if you lost a RAID card, put a new one in and rebuild, because we were about 5/50 in terms of the LSI actually recovering the volume from disk.

    LSI is typical of the hardware that comes with a Dell: a low-end piece of junk touted as an upper-midrange piece of hardware.

  14. #39
    Join Date
    Nov 2003
    Posts
    385
    RAID5 over 4 drives should get you roughly a 90% write increase
    That's what I expected as well... at least not a decrease. And I don't think I'm a complete idiot, especially since the concept of RAID5 is pretty straightforward.

    One of these single drives gives me about 25 MB/sec write performance (off the top of my head... somewhere between 20 and 30 MB/sec). I got about 6.5-7 MB/sec write performance the last time I benchmarked the RAID5 (copying from a cached file in RAM to disk).
    You don't have to be a genius to see that something is wrong. What's even worse: tarring (tar -c) huge files (10GB+) locks the machine up pretty badly (the MySQL slave gets very, very slow at replicating). Can a competent person from ThePlanet please comment on that?
    (Orbit support was not helpful.)
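    For a repeatable version of that write test, a plain dd from /dev/zero with a final sync gives a comparable number (the path and size here are arbitrary):

        # write 1 GB of zeros, then sync so the cache flush is counted too
        time sh -c 'dd if=/dev/zero of=/tmp/ddtest bs=1024k count=1024 && sync'
        rm /tmp/ddtest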

  15. #40
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,681
    Originally posted by freeflight2
    That's what I expected as well... at least not a decrease. And I don't think I'm a complete idiot, especially since the concept of RAID5 is pretty straightforward.

    One of these single drives gives me about 25 MB/sec write performance (off the top of my head... somewhere between 20 and 30 MB/sec). I got about 6.5-7 MB/sec write performance the last time I benchmarked the RAID5 (copying from a cached file in RAM to disk).
    You don't have to be a genius to see that something is wrong. What's even worse: tarring (tar -c) huge files (10GB+) locks the machine up pretty badly (the MySQL slave gets very, very slow at replicating). Can a competent person from ThePlanet please comment on that?
    (Orbit support was not helpful.)

    We are seeing 45+ MB/sec on a RAID5 setup now. We tarred an entire /home directory and the load never got past 1.0.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  16. #41
    Join Date
    Nov 2003
    Posts
    385
    lg: you mentioned that you disabled HT in the BIOS - do you think that might have been causing the slow writes?

  17. #42
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,681
    Originally posted by freeflight2
    lg: you mentioned that you disabled HT in the BIOS - do you think that might have been causing the slow writes?

    Well, HT creates a lot of I/O overhead most of the time, in my experience. Once we disabled HT, the server got a lot more stable and the writes became really nice.
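    A quick way to confirm HT is really off after the BIOS change: with HT on, the kernel sees twice as many logical processors as there are physical ones (this assumes a 2.6-era /proc/cpuinfo layout):

        # with HT enabled, 'siblings' is double the physical count
        grep -E 'physical id|siblings' /proc/cpuinfo
        # or just count the logical CPUs the kernel sees:
        grep -c ^processor /proc/cpuinfo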
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  18. #43
    Hi,

    I'm having the same problems with my megaraid card.
    I have an LSI MegaRAID 150-6 SATA card with six Maxtor 160GB SATA drives attached in a RAID10 config.

    I see performance differences between different kernels/drivers.
    With kernel 2.6.7-mm1 I get a throughput of ~60 MB/sec for reads and ~14 MB/sec for writes. System load (iowait) is high during writes.

    With kernel 2.4.25 and the megaraid2 driver I get a throughput of ~85 MB/sec for reads and ~20 MB/sec for writes. iowait is a lot better with this kernel/driver.

    With both kernels I experience lockups during heavy file writes (file transfers from a client, or from another IDE disk).

    I can live with the read/write speeds, but I can't live with the system lockups.
    I hope someone will find a fix for this, because it drives me crazy!

    system specs:
    cpu  : Athlon 2600+ (1900MHz)
    mobo : MSI Delta (nForce2)
    ram  : 1 GB
    *RAID card is in a 32-bit PCI slot

  19. #44
    I was looking for clues and found the following on the Linux Kernel Mailing List.

    alan pearson writes:

    > On 2.6 the iowait jumps to around 70%, while 2.4 on
    > both tests it is firmly zero.

    The 2.4 kernel lumps iowait into idle, so you
    won't see iowait on a 2.4 kernel.

    > On disk read, I'm losing 30 MB/sec of bandwidth PER
    > DISK, compared to 2.4.20.
    > I've tried using both the deadline and anticipatory (as) I/O schedulers,
    > but no difference.
    >
    >
    > Under real conditions (ie our application running
    > which reads from all the disks simultaneously) on
    > 2.6.4, the system performance is around 1/3 of 2.4.20)
    >
    > Summary MB/sec:
    >
    >          dd if=x    dd if=/dev/zero
    >   2.4    64         35.6
    >   2.6    30.34      35.9

    Well, that looks serious, but unfortunately you can't tell what the iowait was on the 2.4 kernel; only the 2.6 kernel provides this information. That's why the 2.4.x kernel seems to have a much lower iowait.
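    The iowait number that top and vmstat show on 2.6 comes from a field in /proc/stat that 2.4 doesn't have (the time is counted as idle there), so you can read it directly (a sketch based on the 2.6 cpu-line format):

        # 2.6 /proc/stat cpu line: user nice system idle iowait irq softirq
        awk '/^cpu /{print "iowait jiffies:", $6}' /proc/stat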

  20. #45
    Join Date
    Nov 2003
    Posts
    385
    Thanks Nossie. I have 3 of these boxes running as MySQL masters/slaves, and they actually haven't given me any problems in the month they've been up... 'DB IO' is good/fair under 2.6.6 at 300+ SQL requests/second. I'm doing nightly DB backups with rsync --bwlimit=8192, which prevents the machine from locking up.
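    In case it helps anyone fighting the same lockups: the bandwidth cap is the important part, since it keeps the backup from saturating the array (--bwlimit is in KB/s, so 8192 is roughly 8 MB/s). A nightly cron entry along these lines (paths and hostname are placeholders):

        # /etc/crontab - throttled nightly DB backup so the array stays responsive
        30 3 * * * root rsync -a --bwlimit=8192 /var/lib/mysql/ backuphost:/backups/db/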

    I hope (and am confident) the kernel guys will get 2.6 up to speed.

  21. #46
    Join Date
    Mar 2003
    Location
    California USA
    Posts
    13,681
    2.6 helps the boxes a lot.
    Steven Ciaburri | Industry's Best Server Management - Rack911.com
    Software Auditing - 400+ Vulnerabilities Found - Quote @ https://www.RACK911Labs.com
    Fully Managed Dedicated Servers (Las Vegas, New York City, & Amsterdam) (AS62710)
    FreeBSD & Linux Server Management, Security Auditing, Server Optimization, PCI Compliance

  22. #47
    I have done some more testing/tweaking.
    If I change the write policy from 'write through' to 'write back', the system is a bit more responsive during writes.
    The max throughput also increased from 20 to 27 MB/sec for writes. Read throughput is the same (as expected).

    I tried kernel 2.6.7-mm2 and, to my surprise, it gave the same results as kernel 2.4.26.
    The only problem with kernel 2.6.7-mm2 is that my Ethernet card (Intel Gbit, e1000 driver) maxes out at 40kb/sec, so I can't really use 2.6.7-mm2.

    Greetz,
    Nossie

  23. #48
    Join Date
    Nov 2001
    Location
    New York / New Jersey
    Posts
    753
    Run while you can. I just got 3 SATA-port MegaRAID cards and they SUCK beyond belief.

    It took over 15 hours to format an 80GB SATA drive.

  24. #49
    Join Date
    Nov 2003
    Posts
    385
    Nossie: congrats on the 40kb/sec for the gigabit card. How did you switch the write policy from write-through to write-back?

  25. #50
    Join Date
    Nov 2003
    Posts
    385
    megaraid.h (in the 2.6 kernels; it's megaraid2.h in the 2.4s) says:
    #define WRMODE_WRITE_THRU 0
    #define WRMODE_WRITE_BACK 1

    so write-back seems to be the default
