  1. #26
    Join Date
    Nov 2001
    Location
    Atlanta, GA
    Posts
    633
    One thing to remember about disk I/O on a VPS system is that it's federated: each VPS's filesystem ends up on a different area of the disk, provided you space things out right. With an 8-drive RAID 10 setup you've basically got 4 RAID 1 sets working in parallel. So disk I/O can be parallelized with a VPS setup much more easily than with a standard server setup, simply because of how the file access patterns run.

    So that would be the logic for why the SATA setup outpaces the SCSI setup in Apaq's example: there are simply more spindles to spread random reads over, so latency is reduced. Provided the RAID controller does good command queuing and reordering, it can almost turn those random reads into what are basically sequential ones, given enough spindles.
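
    To put rough numbers on the spindle argument, here's a minimal back-of-the-envelope sketch in Python (the seek times and RPMs are illustrative assumptions, not measurements from these boxes):

    Code:
    # Back-of-the-envelope random-read IOPS estimate (illustrative assumptions only).
    # Per-drive random IOPS ~ 1000 / (avg seek ms + avg rotational latency ms),
    # where rotational latency averages half a revolution: 30000 / rpm (in ms).

    def drive_iops(avg_seek_ms, rpm):
        rotational_latency_ms = 30000 / rpm
        return 1000 / (avg_seek_ms + rotational_latency_ms)

    def raid10_read_iops(n_drives, per_drive_iops):
        # In RAID 10 every member disk can service an independent random read.
        return n_drives * per_drive_iops

    sata_7200 = drive_iops(avg_seek_ms=8.5, rpm=7200)    # typical 7200rpm SATA drive
    scsi_15k = drive_iops(avg_seek_ms=3.5, rpm=15000)    # typical 15k SCSI drive

    print(f"8x SATA RAID 10: ~{raid10_read_iops(8, sata_7200):.0f} random read IOPS")
    print(f"4x SCSI RAID 10: ~{raid10_read_iops(4, scsi_15k):.0f} random read IOPS")

    With these illustrative numbers (roughly 79 IOPS per 7200rpm spindle and 182 per 15K spindle), the extra spindles nearly close the raw random-read gap before the controller's command queuing and cache are even counted.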
    Former owner of A Small Orange
    New owner of <COMING SOON>

  2. #27
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by devonblzx
    Lee, if I were to run a 4x SATAII RAID 10, how much performance would I lose by just using the built-in RAID functions on my motherboard compared to a PCI-X RAID controller? Would it be fairly noticeable or do you think it would be all right?
    Linux or Windows?
    If Linux, there is basically no RAID driver for the on-board RAID, either from the kernel or from the manufacturer, so you can't set up RAID-10 that way at all. Linux only sees the on-board SATA controller as a plain 'host' controller, not a RAID controller.

    I can't really tell you much about on-board 'hostRAID' (Windows only!) vs. real hardware RAID; at least I couldn't find any benchmark comparison on the web. Supposedly hostRAID (software RAID) taxes the system CPU a great deal, so it can never be a good thing.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  3. #28
    Join Date
    Dec 2004
    Location
    San Francisco, CA
    Posts
    1,912
    Quote Originally Posted by (Stephen)
    No it is new drives we have been buying in bulk, OEM 15k SCSI u320 80 pin drives Maxtor brand.
    They are 10K, not 15K.
    Small mistake by Stephen; 15K disks are priced quite a bit higher.
    init.me - Build, Share & Embed

    JodoHost.com - Windows VPS Hosting, ASP.NET and SQL Server Hosting
    8th year in Business, 200+ Servers. Microsoft Gold Certified Partner

  4. #29
    Join Date
    Dec 2004
    Location
    San Francisco, CA
    Posts
    1,912
    Quote Originally Posted by cwl@apaqdigital
    One factor is that the dedicated RAID XOR CPUs on popular SCSI RAID cards (Adaptec or LSI) are really at least two-year-old Intel IOP processors; they may have a hard time competing with the new breed of 500MHz~800MHz IOP processors found on state-of-the-art SATA-II PCI-E RAID cards. Also, RAID cards like the Areca ARC-1x60 can have 1GB of cache installed, and the newest ARC-1260ML is based on an 800MHz RAID processor with a whopping 2GB of DDR2 cache! I don't see how an old Adaptec 2130SLP or LSI 320x could compete with that.

    Single SCSI vs. single SATA: hands down, SCSI will win every time on everything! But in a large-scale array, performance has a lot more to do with the RAID card than with the drives themselves.

    Not true.
    1) Most popular SCSI RAID cards these days have Intel processors at 400MHz+. We are using 128MB/400MHz+ SCSI RAID cards.
    2) The reason SATA RAID cards require more power is that the RAID card acts as the controller for the disks; SCSI drives have their own controllers. That is why SATA RAID cards need more power and are more expensive: not because they perform better than SCSI RAID cards, but because they need that extra horsepower.

    Also, all the tests you are doing are on sequential files. If you have 8 SCSI disks and 8 SATA disks in RAID 10, SATA and SCSI would match each other in read/write performance.

    But that doesn't happen in server environments; you have tens of thousands of small files being written and read from the disk every minute. A SCSI RAID system will handle this I/O much better, with less I/O latency, and that gives it huge improvements. Remember: a SATA RAID card cannot improve how the SATA drive itself works; it cannot control how the disk rotates to achieve optimal performance, while SCSI drives can. The best a RAID card can do is command queuing and some optimisation; SATA2 does that with NCQ, and SATA2 still doesn't match SCSI.

    There is a very fundamental difference between SATA and SCSI. That doesn't disappear with RAID.
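
    To put a rough figure on "tens of thousands of small files every minute", a minimal sketch (my own illustrative numbers, reusing the per-spindle estimates from the earlier sketch, not benchmarks from this thread):

    Code:
    # Convert a small-file workload into the sustained random IOPS the array must deliver.
    # Per-spindle figures reuse the earlier illustrative estimates:
    # ~182 IOPS for a 15k SCSI drive, ~79 IOPS for a 7200rpm SATA drive.

    ops_per_minute = 30_000      # "tens of thousands of small files ... every minute"
    required_iops = ops_per_minute / 60
    print(f"Workload demands ~{required_iops:.0f} sustained IOPS")

    for label, per_spindle_iops in [("15k SCSI", 182), ("7200rpm SATA", 79)]:
        spindles_needed = required_iops / per_spindle_iops
        print(f"{label}: ~{spindles_needed:.1f} busy spindles needed to keep up")

    Either lower per-op latency or more spindles absorbs that load; the two arguments in this thread are really about which side of that trade you pay for.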
    Last edited by Yash-JH; 10-13-2006 at 03:11 PM.
    init.me - Build, Share & Embed

    JodoHost.com - Windows VPS Hosting, ASP.NET and SQL Server Hosting
    8th year in Business, 200+ Servers. Microsoft Gold Certified Partner

  5. #30
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    All hardware SATA RAID cards come with their own SATA controller chips and a separate RAID engine on a single card.

    FOLKS, please! I'm not arguing that a SATA array is definitely "better" than a SCSI array. I just reported the real-world array performance of the VPS nodes: the 8x 250GB RAID 10 reportedly outruns the 4x 15K SCSI RAID 10. I want to know why too! Tim/ASO's explanation makes sense to me!

    tweakers.net has this SCSI vs. SATA array benchmark:
    http://tweakers.net/reviews/557/29
    It concluded that the SCSI array shows its muscle on random read/write and database workloads, while the SATA array performs better on video streaming and file serving. So it really depends on what applications the array is built for. If files read and write better with SATA RAID on a VPS node with real-world accounts, then it just makes no sense for me to recommend a SCSI array to my customers buying VPS nodes. The point is you can't just say in a blanket statement that a SCSI array is ALWAYS better than a SATA array, or vice versa!

    Nowadays, unless money is no object to you, most folks also need to consider the cost/performance ratio. It's definitely possible that 8x 15K SCSI RAID-10 will gun down 8x SATA RAID-10 on both sequential and random IO, but at what cost!?

    8x 74GB/15K + LSI 320-2X = $3,150 (for 8x SCSI you realistically need a 2-channel SCSI RAID card)
    8x 250GB RE + 3ware 9550SX-8LP = $1,110
    Do you really want to spend $2K extra to get a debatable performance boost?
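
    For the arithmetic, a tiny sketch using just the prices quoted above (RAID 10 leaves half the raw capacity usable):

    Code:
    # Cost comparison built only from the street prices quoted above.
    # RAID 10 usable capacity = half the raw capacity (4 of the 8 drives).
    arrays = {
        "8x 74GB 15k SCSI + LSI 320-2X": {"cost_usd": 3150, "usable_gb": 4 * 74},
        "8x 250GB SATA RE + 3ware 9550SX-8LP": {"cost_usd": 1110, "usable_gb": 4 * 250},
    }

    for name, a in arrays.items():
        print(f"{name}: ${a['cost_usd']}, {a['usable_gb']} GB usable, "
              f"${a['cost_usd'] / a['usable_gb']:.2f} per usable GB")

    Roughly $10.64 vs. $1.11 per usable GB, which is the $2K question in a nutshell.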
    Last edited by cwl@apaqdigital; 10-13-2006 at 04:05 PM.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  6. #31
    Join Date
    Jul 2002
    Location
    New York, USA
    Posts
    467
    Quote Originally Posted by cwl@apaqdigital
    One factor is that the dedicated RAID XOR CPUs on popular SCSI RAID cards (Adaptec or LSI) are really at least two-year-old Intel IOP processors; they may have a hard time competing with the new breed of 500MHz~800MHz IOP processors found on state-of-the-art SATA-II PCI-E RAID cards. Also, RAID cards like the Areca ARC-1x60 can have 1GB of cache installed, and the newest ARC-1260ML is based on an 800MHz RAID processor with a whopping 2GB of DDR2 cache! I don't see how an old Adaptec 2130SLP or LSI 320x could compete with that.

    Single SCSI vs. single SATA: hands down, SCSI will win every time on everything! But in a large-scale array, performance has a lot more to do with the RAID card than with the drives themselves.
    I have to agree with apaqdigital on this one; the LSI controllers are garbage! They are old and slow. We have seen more CPU I/O wait with the LSI 320-1 than with the 3ware 9550SX. The 9550SX is a much better card and performs much better overall. We are discussing not going with SCSI anymore because of this. It seems like the hardware vendors are spending all of their time on SATA rather than SCSI/SAS.

    The speed of the XOR processing and cache are more important factors than pure RPM of the drive.

    The other factor, IMHO, is the overall cost for your bottom line, with SATA prices much lower than their SCSI counterparts. For the cost/performance ratio it might be better to go with SATA in most situations.
    Last edited by empoweri; 10-13-2006 at 11:31 PM.
    Larry Ludwig
    Empowering Media
    HostCube - Proactively Managed Xen based VPSes
    Empowering Media - The Dev Null Blog

  7. #32
    Greetings everyone:

    Thank you for all of your input.

    For those who want to buy vs. build, do you have any vendor recommendations? Do you have recommendations for specific models and configurations from those vendors?

    For those of you running Xen, do you have any initial hard drive partition recommendations (i.e. /tmp 3 GB, /boot 250 MB, etc.)?

    Thank you.
    ---
    Peter M. Abraham
    LinkedIn Profile

  8. #33
    Join Date
    Jul 2001
    Location
    Northern VA
    Posts
    400

    IOPS

    Read up on IOPS, and you'll get your answer...spindles > RPM.

    The Arecas are awesome cards, but until the drivers go mainstream it is hit and miss with kernels and such. 3ware finally went mainstream (again) with RHEL/CentOS 4.4; not sure why they were ever removed.

    ZCRs are horrible cards and do get crushed under heavy I/O; I'm surprised your SWsoft folks haven't told you this yet. I can't give away my ZCR cards to anyone who knows anything about I/O tuning and performance.

    I'm a big fan of the LSI MegaRAID-2 PCI-X cards; they are rock solid and perform great across many applications (Xen, VZ, etc.).

    Adaptec lacks many of the management tools you need to have in place when you have a big deployed base of nodes: things like alerting, BIOS access from within the running OS, etc.
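
    If you want to check what a given kernel actually ships before committing to a card, here's a minimal Python sketch (the module name patterns are the common upstream ones for 3ware 9xxx, Areca, Adaptec and LSI; the path assumes a standard /lib/modules layout, so adjust for your distro):

    Code:
    # List which RAID controller drivers the running kernel provides.
    # Module name patterns are the usual upstream ones; adjust as needed.
    import glob
    import platform

    candidates = {
        "3ware 9xxx (e.g. 9550SX)": "3w-9xxx",
        "Areca ARC-1xxx": "arcmsr",
        "Adaptec aacraid (ZCR/SAS)": "aacraid",
        "LSI MegaRAID": "megaraid*",
    }

    kernel = platform.uname().release
    moddir = f"/lib/modules/{kernel}/kernel/drivers/scsi"

    for label, pattern in candidates.items():
        hits = glob.glob(f"{moddir}/**/{pattern}.ko*", recursive=True)
        status = "available" if hits else "not found"
        print(f"{label:28s} {status} for kernel {kernel}")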

  9. #34
    Join Date
    Jul 2001
    Location
    Northern VA
    Posts
    400
    Quote Originally Posted by dynamicnet
    Greetings everyone:

    Thank you for all of your input.

    For those who want to buy vs. build, do you have any vendor recommendations? Do you have recommendations for specific models and configurations from those vendors?

    For those of you running Xen, do you have any initial hard drive partition recommendations (i.e. /tmp 3 GB, /boot 250 MB, etc.)?

    Thank you.
    Also, Peter, if you are going to use Xen, you should select hardware that has the virtualization extensions, Intel VT or AMD Pacifica.

  10. #35
    Greetings Tom:

    Can you recommend any specific makes or models from brand name vendors such as Dell, HP, etc.?

    Or are you and most of the providers in the space custom-building your VPS physical servers?

    Thank you.
    ---
    Peter M. Abraham
    LinkedIn Profile

  11. #36
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    SCSI will yield faster seeks/reads; SATA will give greater data transfer rates.

    So with that being said, why not go SAS? Best of both worlds.
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

  12. #37
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Spudstr
    SCSI will yield faster seeks/reads; SATA will give greater data transfer rates.

    So with that being said, why not go SAS? Best of both worlds.
    SAS drives are nice, but SAS RAID cards are still not "tuned" for the mainstream:
    1. Hardware-based SAS RAID cards come with 8 ports minimally, and needless to say they are expensive as hell too! They are also full-height cards, which can be difficult to fit in a 2U depending on the chassis. Dealing with SAS cabling to a SAS backplane can also be a nightmare; there are so many different types of SAS cables, so make sure you get the right one!
    2. There is basically no good driver support for Linux/BSD. Before you commit to SAS RAID, hostRAID or hardware-RAID based, make sure you have a DRIVER available for your chosen OS! Usually the driver for hostRAID is for Windows only; you won't have any luck with Linux/BSD. Even the hardware-based Adaptec 4800SAS (8-port) offers a driver only for RHEL4 update 1 (CentOS 4.1)...
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  13. #38
    Join Date
    Nov 2005
    Posts
    3,944
    What about a RAID 10 of 4x Raptors? How do those compare to the 4x SCSI drives? I see the seek and read times are about equal to those of a 10K Cheetah, but can the SATA150 interface compare to the U320? I'm not a whiz on hard drives or anything, so thanks in advance.

    Also Lee, about Linux with RAID 10, are you saying it's not compatible with most boards or most cards, or how am I supposed to set it up in Linux?

  14. #39
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    Quote Originally Posted by cwl@apaqdigital
    SAS drives are nice, but SAS RAID cards are still not "tuned" for the mainstream:
    1. Hardware-based SAS RAID cards come with 8 ports minimally, and needless to say they are expensive as hell too! They are also full-height cards, which can be difficult to fit in a 2U depending on the chassis. Dealing with SAS cabling to a SAS backplane can also be a nightmare; there are so many different types of SAS cables, so make sure you get the right one!
    2. There is basically no good driver support for Linux/BSD. Before you commit to SAS RAID, hostRAID or hardware-RAID based, make sure you have a DRIVER available for your chosen OS! Usually the driver for hostRAID is for Windows only; you won't have any luck with Linux/BSD. Even the hardware-based Adaptec 4800SAS (8-port) offers a driver only for RHEL4 update 1 (CentOS 4.1)...
    Have you looked at the Supermicro AOC-LPZCR2 cards? Zero-channel SCSI/SAS/SATA RAID with 256MB cache. The card can be had for around $400. I have one that was said to work with an older Supermicro motherboard, but due to a misprint on their website the card isn't supported after all. So I'll be getting a system that does support this card and seeing how it works.

    I don't build servers like you do, so I'm sure you know more than I do in this area, but this zero-channel card has caught my attention.
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

  15. #40
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by devonblzx
    What about a RAID 10 of 4x Raptors? How do those compare to the 4x SCSI drives? I see the seek and read times are about equal to those of a 10K Cheetah, but can the SATA150 interface compare to the U320? I'm not a whiz on hard drives or anything, so thanks in advance.

    Also Lee, about Linux with RAID 10, are you saying it's not compatible with most boards or most cards, or how am I supposed to set it up in Linux?
    4x Raptor/10K RAID-10 does have a 25% advantage in sequential write over 4x SATA RAID-10. However, 4x Raptors + a 4-port SATA RAID card will cost just about the same as 4x SCSI/10K + a SCSI RAID card, so you may be better off doing the SCSI RAID-10.

    Keep in mind that the so-called 320MB/sec transfer rate of the U320 SCSI interface is "per channel", meaning all SCSI drives running on the same channel share that 320MB/sec of bandwidth. That's why it is much better to use a dual-channel SCSI RAID card (such as the Adaptec 2230SLP or LSI 320-2X) if you need to install more than 4 SCSI drives in a large-scale array.
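
    A quick sanity check on the per-channel sharing (simple arithmetic, assuming every drive streams flat out at the same time):

    Code:
    # U320 bandwidth is shared per SCSI channel, not per drive.
    channel_bw_mb_s = 320

    for drives, channels in [(4, 1), (8, 1), (8, 2)]:
        per_drive = channels * channel_bw_mb_s / drives
        print(f"{drives} drives on {channels} channel(s): "
              f"~{per_drive:.0f} MB/s of bus per drive when all stream at once")

    With 8 drives on a single channel each drive gets only ~40 MB/s of bus, which is why the dual-channel card matters once you go past 4 drives.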

    It's true that RAID-1/5/10 via on-board SATA "hostRAID" (BIOS software RAID, "fake" RAID, whatever you want to call it) is basically NOT supported by Linux, because no driver means no drive. On the other hand, hardware-based SATA RAID cards, such as those offered by 3ware, Areca, Adaptec, and LSI, do have RAID drivers (kernel built-in or manufacturer supplied) available for Linux/BSD.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  16. #41
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by Spudstr
    Have you looked at the Supermicro AOC-LPZCR2 cards? Zero-channel SCSI/SAS/SATA RAID with 256MB cache. The card can be had for around $400. I have one that was said to work with an older Supermicro motherboard, but due to a misprint on their website the card isn't supported after all. So I'll be getting a system that does support this card and seeing how it works.

    I don't build servers like you do, so I'm sure you know more than I do in this area, but this zero-channel card has caught my attention.
    I know they exist, but I haven't tried one on a SAS array before...

    Again, the Linux/BSD driver can still be a big issue. Let us know whether you can get Linux or FreeBSD installed.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  17. #42
    Join Date
    Nov 2001
    Location
    Vancouver
    Posts
    2,422
    C.W., is there a similar lack of cost differential for a 4 X SAS RAID 10 solution?

    I'm trying to determine where to go with a 1U 4 X SAS or SATA RAID 10 on either Woodcrest or Opteron, as compared to 2U (or 3U) 8 X SATA RAID 10 (assuming greater number of disks tips the balance towards using truly low-cost disk, all things considered) on the same two platforms. On FreeBSD.

    Are there any driver concerns that would steer me one way or the other? (edit, only saw spudstr's message after posting)
    “Even those who arrange and design shrubberies are under
    considerable economic stress at this period in history.”

  18. #43
    Join Date
    Mar 2006
    Location
    Reston, VA
    Posts
    3,131
    Quote Originally Posted by mwatkins
    C.W., is there a similar lack of cost differential for a 4 X SAS RAID 10 solution?

    I'm trying to determine where to go with a 1U 4 X SAS or SATA RAID 10 on either Woodcrest or Opteron, as compared to 2U (or 3U) 8 X SATA RAID 10 (assuming greater number of disks tips the balance towards using truly low-cost disk, all things considered) on the same two platforms. On FreeBSD.

    Are there any driver concerns that would steer me one way or the other? (edit, only saw spudstr's message after posting)
    The searching I've done says that R2 card is *supposedly* supported under the generic aacraid module. I haven't tested it yet, but that's just what I've found in my research.
    Yellow Fiber Networks
    http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
    Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net

  19. #44
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Quote Originally Posted by mwatkins
    C.W., is there a similar lack of cost differential for a 4 X SAS RAID 10 solution?

    I'm trying to determine where to go with a 1U 4 X SAS or SATA RAID 10 on either Woodcrest or Opteron, as compared to 2U (or 3U) 8 X SATA RAID 10 (assuming greater number of disks tips the balance towards using truly low-cost disk, all things considered) on the same two platforms. On FreeBSD.

    Are there any driver concerns that would steer me one way or the other? (edit, only saw spudstr's message after posting)
    Only if FreeBSD offers a kernel driver for a ZCR RAID card can a 4x SAS RAID-10 be evaluated as a cost-efficient solution. In fact, if there is no driver for either an inexpensive ZCR or a super-expensive 8-port SAS hardware RAID card, it's a moot point to consider a SAS array at all, because you can't even install FreeBSD.

    If Tim's (A Small Orange) explanation holds true regarding file read/write patterns on VPS nodes, then sequential-write performance should be a good indication of how well a particular array performs on VPS servers (not to be confused with a database server, where a SCSI array wins every time). In that case I'd say 8x 7200rpm SATA RAID-10 on a 3ware 9550SX-8LP still gives you the best balanced, yet cost-efficient, solution.

    Again, check VPS software compatibility with Woodcrest before you commit to the platform! We have seen production proof that Virtuozzo runs on the Woodcrest platform, but not Xen yet.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  20. #45
    Join Date
    Nov 2001
    Location
    Vancouver
    Posts
    2,422
    C.W. - thanks for the response. Actually my question wasn't specific to VPS, so compatibility with Virtuozzo and Xen isn't a requirement for *all* my needs, although I am investigating hosting a VPS at the same time.

    I just happened to notice the thread, and cost-efficient storage arrays are on my mind of late.
    “Even those who arrange and design shrubberies are under
    considerable economic stress at this period in history.”

  21. #46
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    Since the OP asked for VPS recommendations specifically, we don't really want to steer off track too much!

    If you run a database server or anything that requires lots and lots of random IO on small files, then 4x SCSI/10K RAID-10 (or 4x 15K if you can afford it) is probably your best choice. If it were me, I would rule out 4x Raptor RAID-10, though, because it costs too close to 4x SCSI/10K RAID-10.
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com

  22. #47
    Join Date
    Nov 2001
    Location
    Vancouver
    Posts
    2,422
    I agree re hijacking the thread, not my intent! I started another here for a similar but non-VPS-specific discussion: http://www.webhostingtalk.com/showthread.php?t=555072

    On VPS, I've got a lot invested in FreeBSD but haven't looked at offering VPS solutions as yet. It's on my list to look at, which is why I'm watching this thread among others. I would probably avoid going a non-FreeBSD route if I can't find a VPS software solution compatible with both the OS and reasonable hardware choices.
    “Even those who arrange and design shrubberies are under
    considerable economic stress at this period in history.”

  23. #48
    Greetings:

    Getting back on track for the thread, do Dell, HP, Gateway, etc. make any systems that would work well as physical VPS servers, taking into account the RAID-10, VT chipset, and other suggestions in this thread?

    If so, what makes / models?

    Does anyone have a hard drive partition guide (i.e. /boot 250 MB, /tmp 3 GB, etc.) for Xen?

    Thank you.
    ---
    Peter M. Abraham
    LinkedIn Profile

  24. #49
    Join Date
    Jul 2002
    Location
    New York, USA
    Posts
    467
    Quote Originally Posted by dynamicnet
    Does anyone have a hard drive partition guide (i.e. /boot 250 MB, /tmp 3 GB, etc.) for Xen?
    I'll take a stab at it. These are general recommendations for partitions (I've never used a Xen VPS), but the guidelines below should apply.

    / (root) - Don't use a separate /boot partition any more; it really doesn't give you much, and everything is included in the / (root) partition. Make that typically 10GB, which should be plenty of space to grow. I imagine with Xen no VPS files are stored in the /usr directory. The separate /boot partition was for old drives, where the boot loader had to live within the first 1024 cylinders; that restriction no longer applies with a modern BIOS. So IMHO it is a waste of space and a partition, and it makes it harder to restore from backup. - primary partition

    /tmp - 3-4 GB; any more is unnecessary - primary partition

    /var - depends on where Xen stores the VPSes, but at least 7-8GB based on logs and other files - extended partition

    swap - 1.5 times actual memory. I imagine this is important for a VPS install and should include Xen. Any larger and you are swapping out too much and thrashing, which isn't worthwhile. On servers where I plan on adding more memory in the future I tend to go 2.5 times, just in case; that way I can add memory later and not have to worry about resizing other partitions to add more swap. - primary partition

    /home - The rest of the space. - extended partition
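
    If you want to turn those guidelines into actual numbers, a minimal sketch (the sizes and the 1.5x swap rule are just the figures above; the disk and RAM sizes are hypothetical examples, not recommendations):

    Code:
    # A small sketch of the partition guidelines above (all sizes in GB).
    # disk_gb and ram_gb are hypothetical example inputs.

    def suggest_layout(disk_gb, ram_gb, future_ram=False):
        layout = {
            "/ (root, incl. /boot)": 10,                    # primary
            "/tmp": 4,                                      # primary
            "swap": (2.5 if future_ram else 1.5) * ram_gb,  # primary
            "/var": 8,                                      # extended; depends on where Xen keeps VPSes
        }
        layout["/home"] = disk_gb - sum(layout.values())    # extended; the rest of the space
        return layout

    for mount, size_gb in suggest_layout(disk_gb=500, ram_gb=8).items():
        print(f"{mount:24s} {size_gb:6.1f} GB")

    Pass future_ram=True to get the 2.5x swap sizing mentioned above.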
    Last edited by empoweri; 10-15-2006 at 08:45 PM.
    Larry Ludwig
    Empowering Media
    HostCube - Proactively Managed Xen based VPSes
    Empowering Media - The Dev Null Blog

  25. #50
    Join Date
    May 2004
    Location
    Atlanta, GA
    Posts
    3,872
    For a standard Virtuozzo VPS node, this seems to be the "standard" partition scheme that our clients request:
    /boot: 100M
    swap = RAM size
    / (root): 10G
    /vz: the rest
    Note that no fixed-size /tmp, /var, or /usr is specified, so they are simply created under root and their sizes can be allocated flexibly.

    I can't recall the exact partition scheme used on the few Xen nodes we've shipped, but I do recall that the customer just passed along whatever partition layout the Xen documentation recommends as "standard".
    C.W. LEE, Apaq Digital Systems
    http://www.apaqdigital.com
    sales@apaqdigital.com
