Results 26 to 50 of 51
-
10-13-2006, 01:43 PM #26Formerly orange-y
- Join Date
- Nov 2001
- Location
- Atlanta, GA
- Posts
- 633
One thing to remember about disk I/O on a VPS system is that it's federated. That is, each VPS's filesystem is going to be based on different areas of the disk, provided you space it right. With an 8x RAID 10 setup, you've basically got 4 RAID 1 sets and can go that fast. So, disk I/O can be parallelized with a VPS setup much more easily than your standard server setup, simply because of how the file access patterns run.
So, that would be the logic for why the SATA setup outpaces the SCSI setup in Apaq's example. There are simply more spindles to spread random reads over, so latency is reduced. Provided the RAID controller does good command queuing and reordering, it can almost turn those random reads into what are basically sequential ones, provided enough spindles are available.
Former owner of A Small Orange
New owner of <COMING SOON>
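The spindle argument above can be put into rough numbers. A minimal sketch, using assumed per-drive IOPS figures (typical-era estimates, not measurements) for the two arrays discussed in this thread:

```python
# Back-of-envelope model: random IOPS for a RAID 10 array scales with the
# number of spindles the reads can be spread over. Per-drive IOPS values
# below are assumed, illustrative figures, not benchmark results.

def array_random_iops(spindles: int, per_drive_iops: int) -> int:
    """Aggregate random-read IOPS across all spindles in the array."""
    return spindles * per_drive_iops

sata8 = array_random_iops(8, 75)    # 8x 7200rpm SATA
scsi4 = array_random_iops(4, 175)   # 4x 15k SCSI
print(sata8, scsi4)                 # 600 700 -> the same ballpark
```

With twice the spindles, the slower drives land in the same ballpark as the 15k array, which is why the 8-drive SATA setup can keep up on a random-I/O-heavy VPS node.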
-
10-13-2006, 02:16 PM #27Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by devonblzx
If Linux, there is basically no RAID driver for on-board SATA RAID, either from the kernel or the manufacturer, so you can't set up RAID-10 at all. Linux only sees the on-board SATA controller as a plain 'host' controller, not a RAID controller.
I can't really tell you much about on-board 'hostRAID' (Windows only!) vs. real hardware RAID; at least I couldn't find any benchmark comparison on the web. Supposedly, hostRAID (software RAID) taxes the system CPU a great deal, so it can never be a good thing.
-
10-13-2006, 03:00 PM #28Web Hosting Master
- Join Date
- Dec 2004
- Location
- San Francisco, CA
- Posts
- 1,912
Originally Posted by (Stephen)
Small mistake by Stephen. 15K disks are priced quite a bit higher.
init.me - Build, Share & Embed
JodoHost.com - Windows VPS Hosting, ASP.NET and SQL Server Hosting
8th year in Business, 200+ Servers. Microsoft Gold Certified Partner
-
10-13-2006, 03:06 PM #29Web Hosting Master
- Join Date
- Dec 2004
- Location
- San Francisco, CA
- Posts
- 1,912
Originally Posted by cwl@apaqdigital
Not true.
1) Most popular SCSI RAID cards these days have Intel processors at 400MHz+. We are using 128MB/400MHz+ SCSI RAID cards.
2) The reason SATA RAID cards require more power is because the RAID card acts as the controller for the disks; SCSI drives have their own controllers. That is why SATA RAID cards need more power and are more expensive: not because they perform better than SCSI RAID cards, but because they need that extra processing.
Also, all the tests you are doing are on sequential files. If you have 8 SCSI disks and 8 SATA disks in RAID10, SATA and SCSI would match each other in read/write performance.
But that doesn't happen in server environments; you have tens of thousands of small files being written and read from the disk every minute. A SCSI RAID system will handle this I/O much better; there would be less I/O latency, and that gives it huge improvements. Remember, a SATA RAID card cannot improve how the SATA drive works: it cannot determine how the disk rotates to achieve optimal performance. SCSI drives can. The best a RAID card can do is command queuing and some optimisation. SATA2 does that with NCQ, and SATA2 doesn't match SCSI.
There is a very fundamental difference between SATA and SCSI. That doesn't disappear with RAID...
Last edited by Yash-JH; 10-13-2006 at 03:11 PM.
init.me - Build, Share & Embed
JodoHost.com - Windows VPS Hosting, ASP.NET and SQL Server Hosting
8th year in Business, 200+ Servers. Microsoft Gold Certified Partner
-
10-13-2006, 03:50 PM #30Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
All hardware SATA RAID cards come with their own SATA controller chips and RAID engine on a single card.
FOLKS, please! I'm not arguing the SATA array is definitely "better" than the SCSI array. I just reported the real-world array performance of the VPS nodes: the 8x 250G RAID10 outruns the 4x SCSI 15k RAID10, purportedly! I want to know why too! Tim/ASO's explanation makes sense to me!
tweakers.net has this SCSI vs SATA array benchmark:
http://tweakers.net/reviews/557/29
It concluded that the SCSI array shows its muscle on random read/write and database work, while the SATA array performs better on video streaming/file serving. So it really depends on what applications the array is created for. If files read/write better with SATA RAID on a VPS with real-world accounts, then it just makes no sense to me to recommend a SCSI array to my customers buying VPS nodes. The point is you can't just say in a blanket statement that a SCSI array is ALWAYS better than a SATA array, or vice versa!
Nowadays, unless money is no object to you, most folks also need to consider the cost/performance ratio. It's definitely possible 8x 15K SCSI RAID-10 will gun down 8x SATA RAID-10 on both sequential and random IO, but at what cost!?
8x 74G/15k + LSI 320-2X = $3150 (for 8x SCSI, realistically you need 2-channel SCSI RAID card)
8x 250G RE + 3ware 9550SX-8LP = $1110
Do you really want to spend $2K extra to get some debatable performance boost?
Last edited by cwl@apaqdigital; 10-13-2006 at 04:05 PM.
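The price gap above works out to a stark cost-per-usable-GB difference. A quick sketch using the quoted prices (RAID-10 usable capacity is half the raw capacity):

```python
# Cost per usable GB for the two RAID-10 builds quoted above.
# RAID-10 = striped mirrors, so usable capacity is half of raw.

scsi_cost, scsi_usable_gb = 3150, 4 * 74     # 8x 74G/15k + LSI 320-2X
sata_cost, sata_usable_gb = 1110, 4 * 250    # 8x 250G RE + 3ware 9550SX-8LP

print(round(scsi_cost / scsi_usable_gb, 2))  # 10.64 $/usable GB
print(round(sata_cost / sata_usable_gb, 2))  # 1.11 $/usable GB
print(scsi_cost - sata_cost)                 # 2040 -> the ~$2K premium
```

Nearly a 10x difference per usable gigabyte, which is the cost/performance argument in a nutshell.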
-
10-13-2006, 11:23 PM #31Web Hosting Evangelist
- Join Date
- Jul 2002
- Location
- New York, USA
- Posts
- 467
Originally Posted by cwl@apaqdigital
The speed of the XOR processing and cache are more important factors than pure RPM of the drive.
The other factor, IMHO, is the overall cost for your bottom line, with SATA prices much lower than their SCSI counterparts. For the cost/performance ratio it might be better to go with SATA in most situations.
Last edited by empoweri; 10-13-2006 at 11:31 PM.
Larry Ludwig
Empowering Media
HostCube - Proactively Managed Xen based VPSes
Empowering Media - The Dev Null Blog
-
10-14-2006, 07:36 AM #32Web Hosting Master
- Join Date
- Dec 2001
- Posts
- 5,221
Greetings everyone:
Thank you for all of your input.
For those who want to buy vs. build, do you have any vendor recommendations? Do you have recommendations for specific models and configurations from those vendors?
For those of you running Xen, do you have any initial hard drive partition recommendations (i.e. /tmp 3 GB, /boot 250 MB, etc.)?
Thank you.
-
10-14-2006, 11:33 AM #33Aspiring Evangelist
- Join Date
- Jul 2001
- Location
- Northern VA
- Posts
- 400
IOPS
Read up on IOPS, and you'll get your answer...spindles > RPM.
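"Spindles > RPM" follows from drive mechanics: a random I/O costs an average seek plus half a rotation, and total array IOPS scales with the spindle count. A sketch with assumed (typical-era) seek times:

```python
# Per-drive random IOPS from mechanics: each random I/O costs an average
# seek plus half a rotation. Seek times below are assumed typical figures.

def drive_iops(rpm: int, avg_seek_ms: float) -> int:
    half_rotation_ms = 60_000 / rpm / 2     # half a revolution, in ms
    return round(1000 / (avg_seek_ms + half_rotation_ms))

print(drive_iops(15000, 3.5))   # 15k SCSI  -> 182 IOPS per drive
print(drive_iops(7200, 8.5))    # 7200 SATA -> 79 IOPS per drive
# Eight 7200rpm spindles (~630 IOPS) rival four 15k spindles (~730).
```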
The Arecas are awesome cards, but until the drivers go mainstream, it is hit and miss with kernels and such. 3ware finally went mainstream (again) with RHEL/CentOS 4.4; not sure why they were ever removed.
ZCRs are horrible cards and do get crushed under heavy IO; I'm surprised your SWsoft folks haven't told you this yet. I can't give away my ZCR cards to anyone who knows anything about IO tuning and performance.
I'm a big fan of the LSI MegaRAID-2 PCI-X cards; they are rock solid and perform great across many applications (Xen, VZ, etc.).
Adaptec lacks many of the management tools you need to have in place when you have a big deployed base of nodes: things like alerting, BIOS access from within the running OS, etc.
-
10-14-2006, 11:46 AM #34Aspiring Evangelist
- Join Date
- Jul 2001
- Location
- Northern VA
- Posts
- 400
Originally Posted by dynamicnet
-
10-14-2006, 08:50 PM #35Web Hosting Master
- Join Date
- Dec 2001
- Posts
- 5,221
Greetings Tom:
Can you recommend any specific makes or models from brand name vendors such as Dell, HP, etc.?
Or are you and most of the providers in the space custom building their VPS physical servers?
Thank you.
-
10-15-2006, 11:26 AM #36Master of the Truth
- Join Date
- Mar 2006
- Location
- Reston, VA
- Posts
- 3,131
SCSI will yield faster seeks/reads; SATA will give greater data transfer.
So with that being said, why not go SAS? Best of both worlds.
Yellow Fiber Networks
http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net
-
10-15-2006, 11:59 AM #37Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by Spudstr
1. Hardware-based SAS RAID cards come with 8 ports minimally! Needless to say, expensive as hell too! They are also full-height cards, which can be difficult to install in 2U depending on the chassis! Dealing with SAS cables to the SAS backplane can also be a nightmare! There are so many different types of SAS cables; just make sure you get the right one!
2. Basically no good driver support for Linux/BSD. Before you commit to SAS RAID, hostRAID or hardware-RAID based, make sure you have a DRIVER available for your chosen OS! Usually, the driver for hostRAID is for Windows only; you won't have any luck with Linux/BSD. Even the hardware-based Adaptec 4800SAS (8-port) offers a driver only for RHEL4 + update 1 (CentOS 4.1)...
-
10-15-2006, 12:41 PM #38Web Hosting Master
- Join Date
- Nov 2005
- Posts
- 3,944
What about a RAID10 of 4x Raptors? How do those compare to the 4x SCSI drives? I see the seek and read times are about equal to those of a 10k Cheetah, but can SATA150 compare to U320? I'm not a whiz on hard drives or anything, so thanks in advance.
Also, Lee, about Linux with RAID10: are you saying it's not compatible with most boards, or most cards, or how am I supposed to set it up in Linux?
-
10-15-2006, 01:05 PM #39Master of the Truth
- Join Date
- Mar 2006
- Location
- Reston, VA
- Posts
- 3,131
Originally Posted by cwl@apaqdigital
I don't build servers like you do, so I'm sure you know more than me in this area, but this zero-channel card has caught my attention.
Yellow Fiber Networks
http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net
-
10-15-2006, 01:45 PM #40Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by devonblzx
Keep in mind that the so-called 320M/sec transfer rate for the U320 SCSI interface is "per channel", meaning all SCSI drives running from the same channel share the whole 320M/sec bandwidth. That's why it's much better to use a dual-channel SCSI RAID card (such as the Adaptec 2230SLP or LSI 320-2X) if you need to install more than 4x SCSI drives in a large-scale array.
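The per-channel sharing is easy to quantify; the drive counts below are illustrative:

```python
# U320 SCSI offers 320 MB/s per CHANNEL, shared by every drive on it,
# so per-drive bandwidth at full load drops as drives are added.
U320_MBPS = 320

for drives in (4, 8):
    print(drives, U320_MBPS / drives)   # 4 -> 80.0 MB/s each, 8 -> 40.0
```

With 8 drives on one channel, each drive gets at most 40 MB/s under full load, which is why a dual-channel card makes sense past four drives.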
It's true that all RAID-1/5/10 by on-board SATA "hostRAID" (BIOS software RAID, "fake" RAID, whatever you want to call it) is basically NOT supported by Linux, because no driver ---> no array. On the other hand, hardware-based SATA RAID cards, such as those offered by 3ware, Areca, Adaptec, and LSI, do have RAID drivers (kernel built-in or manufacturer supplied) available for Linux/BSD.
-
10-15-2006, 01:54 PM #41Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by Spudstr
Again, the Linux/BSD driver can still be a big issue. Let us know whether you can get Linux or FreeBSD installed.
-
10-15-2006, 01:58 PM #42Web Hosting Master
- Join Date
- Nov 2001
- Location
- Vancouver
- Posts
- 2,422
C.W., is there a similar lack of cost differential for a 4 X SAS RAID 10 solution?
I'm trying to determine where to go with a 1U 4 X SAS or SATA RAID 10 on either Woodcrest or Opteron, as compared to 2U (or 3U) 8 X SATA RAID 10 (assuming greater number of disks tips the balance towards using truly low-cost disk, all things considered) on the same two platforms. On FreeBSD.
Are there any driver concerns that would steer me one way or the other? (edit: only saw spudstr's message after posting)
“Even those who arrange and design shrubberies are under
considerable economic stress at this period in history.”
-
10-15-2006, 02:04 PM #43Master of the Truth
- Join Date
- Mar 2006
- Location
- Reston, VA
- Posts
- 3,131
Originally Posted by mwatkins
Yellow Fiber Networks
http://www.yellowfiber.net : Managed Solutions - Colocation - Network Services IPv4/IPv6
Ashburn/Denver/NYC/Dallas/Chicago Markets Served zak@yellowfiber.net
-
10-15-2006, 02:21 PM #44Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Originally Posted by mwatkins
If Tim's (A Small Orange) explanation holds true regarding file read/write patterns on VPS nodes, then 'sequential write' performance should be a good indication of how well a particular array performs on VPS servers (not to be confused with a database server, where a SCSI array wins every time). In that case, I will say 8x 7200rpm SATA RAID-10 on a 3ware 9550SX-8LP still gives you the best balanced, yet cost-efficient solution.
Again, check VPS compatibility with Woodcrest before you commit to the platform! We have seen production proof that Virtuozzo runs on the Woodcrest platform, but not Xen!
-
10-15-2006, 02:28 PM #45Web Hosting Master
- Join Date
- Nov 2001
- Location
- Vancouver
- Posts
- 2,422
C.W. - thanks for the response. Actually my question wasn't specific to VPS so compatibility with Virtuozzo and Xen are not requirements for *all* my needs, although I am investigating hosting a VPS at the same time.
Just happened to notice the thread, and cost-efficient storage arrays are on my mind of late.
“Even those who arrange and design shrubberies are under
considerable economic stress at this period in history.”
-
10-15-2006, 02:40 PM #46Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
Since the OP asked for VPS recommendations specifically, we don't really want to steer off track too much!
If you run a database server or anything that requires lots and lots of random IO on small files, then 4x SCSI/10k RAID-10 (or 4x 15K if you can afford it) is prolly your best choice. Were it me, I would rule out 4x Raptor RAID-10 though, because it costs too close to 4x SCSI/10k RAID-10.
-
10-15-2006, 03:16 PM #47Web Hosting Master
- Join Date
- Nov 2001
- Location
- Vancouver
- Posts
- 2,422
I agree re hijacking the thread, not my intent! Started another here for similar but non VPS specific discussion: http://www.webhostingtalk.com/showthread.php?t=555072
On VPS, I've got a lot invested in FreeBSD but haven't looked at offering VPS solutions as yet. It's on my list to look at, which is why I'm watching this thread among others. I would probably avoid going a non-FreeBSD route if I can't find a VPS software solution compatible both with the OS and reasonable hardware choices.
“Even those who arrange and design shrubberies are under
considerable economic stress at this period in history.”
-
10-15-2006, 04:25 PM #48Web Hosting Master
- Join Date
- Dec 2001
- Posts
- 5,221
Greetings:
Getting back on track for the thread, does Dell, HP, Gateway, etc. make any systems that would work well as physical VPS servers that take into account the RAID-10, VT chipset, etc. suggestions in this thread?
If so, what makes / models?
Does anyone have a hard drive partition guide (i.e. /boot 250 MB, /tmp 3 GB, etc.) for Xen?
Thank you.
-
10-15-2006, 08:41 PM #49Web Hosting Evangelist
- Join Date
- Jul 2002
- Location
- New York, USA
- Posts
- 467
Originally Posted by dynamicnet
Don't use /boot as a separate partition any more; it really doesn't give you much, and everything is included in the / (root) partition anyway. Make / typically 10GB; that should be plenty of space to grow. I imagine with Xen no VPS files are stored in the /usr directory. The /boot partition was for old drives where the boot sector had to be within the first 1024 cylinders; that restriction no longer applies with new BIOSes, so IMHO it's a waste of space and a partition, and it makes restoring from backup harder. - primary partition
/tmp - 3-4 GB; any more is wasted - primary partition
/var - depends upon where Xen stores VPSes, but at least 7-8GB based upon logs and other files - extended partition
swap - 1.5 times actual memory. I imagine this is important for a VPS install and should account for Xen. Any larger and you are swapping out to disk and thrashing too much to make it worthwhile. On servers where I plan on adding more memory in the future I tend to go 2.5 times, just in case; that way I can add memory and not have to worry about resizing other partitions to add more swap. - primary partition
/home - the rest of the space - extended partition
Last edited by empoweri; 10-15-2006 at 08:45 PM.
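The layout above can be turned into a quick sizing sketch. The partition sizes and the 1.5x-RAM swap rule come from the post; the 500GB disk and 4GB RAM figures are purely illustrative:

```python
# Xen node partition plan following the scheme above (all sizes in GB):
# fixed /, /tmp, /var, swap = 1.5x RAM, and /home takes the remainder.

def xen_partition_plan(disk_gb: int, ram_gb: int) -> dict:
    plan = {
        "/": 10,                        # root (includes /usr and /boot)
        "/tmp": 4,
        "/var": 8,                      # logs and other files
        "swap": round(1.5 * ram_gb),    # rule of thumb from the post
    }
    plan["/home"] = disk_gb - sum(plan.values())    # the rest
    return plan

print(xen_partition_plan(disk_gb=500, ram_gb=4))
```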
Larry Ludwig
Empowering Media
HostCube - Proactively Managed Xen based VPSes
Empowering Media - The Dev Null Blog
-
10-15-2006, 10:34 PM #50Web Hosting Master
- Join Date
- May 2004
- Location
- Atlanta, GA
- Posts
- 3,872
for a standard Virtuozzo VPS node, this seems to be a "standard" partition scheme that our clients requested:
/boot: 100M
swap = RAM size
/ (root): 10G
/vz: the rest
Note that there are no fixed-size /tmp, /var, or /usr partitions specified, so they are simply created under root and their space can be allocated flexibly.
I can't recall what exact partition scheme was done on the few Xen nodes we've shipped, but I do recall that the customer just passed along whatever partition layout the Xen documentation recommends as 'standard'.