Thread: VPS Disk I/O test
-
01-26-2013, 12:31 PM #1Newbie
- Join Date
- May 2009
- Posts
- 19
VPS Disk I/O test
Hello, I have a hybrid VPS at FutureHosting and decided to test it, starting with the disk. What do you think of these results? Thanks.
root@xxx [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 36.0586 s, 29.8 MB/s
-
01-26-2013, 12:33 PM #2Disabled
- Join Date
- Dec 2012
- Location
- Preston, England
- Posts
- 159
It's average, not amazing really.
-
01-26-2013, 12:39 PM #3Web Hosting Master
- Join Date
- Aug 2009
- Posts
- 3,207
-
01-26-2013, 12:55 PM #4Web Hosting Master
- Join Date
- Dec 2005
- Posts
- 3,110
That's a very poor result, especially for a hybrid.
One of our Xen vps:
[root@test ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 3.17292 s, 338 MB/s
-
01-26-2013, 01:59 PM #5Newbie
- Join Date
- Jul 2010
- Posts
- 25
29.8 MB/s is not too bad.
-
01-26-2013, 02:06 PM #6Web Hosting Guru
- Join Date
- Aug 2012
- Location
- UK
- Posts
- 291
That's not really good, to be honest; not even reasonable.
The point of a hybrid is that you get more power/IO and fewer people on the node to abuse it. Unless there is someone else using the disk at full throttle, you should not be getting such speeds, unless I'm missing something.
But then again, these are just numbers; unless you actually need to write 25 MB/s+ sustained, it shouldn't be an issue in reality.
-
01-26-2013, 02:32 PM #7The VPS Specialist
- Join Date
- Aug 2003
- Location
- Edinburgh/London
- Posts
- 5,789
-
01-26-2013, 02:35 PM #8Disabled
- Join Date
- Dec 2012
- Location
- Preston, England
- Posts
- 159
Having said that, you need to run it a few times, because a single pass isn't 100% accurate. For instance:
root@pine [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 10.2123 s, 105 MB/s
root@pine [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 9.70176 s, 111 MB/s
root@pine [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 8.13292 s, 132 MB/s
-
01-26-2013, 02:40 PM #9Web Hosting Master
- Join Date
- Dec 2010
- Posts
- 694
-
01-26-2013, 02:45 PM #10Newbie
- Join Date
- May 2009
- Posts
- 19
I should say that I am in Spain, but the VPS is in the UK.
I have another VPS with a Spanish company (2 GB RAM) with better results, such as:
xx@xxxx [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1,1 GB) copied, 7,94856 seconds, 135 MB/s
-
01-26-2013, 03:49 PM #11Randy
- Join Date
- Aug 2006
- Location
- Ashburn VA, San Diego CA
- Posts
- 4,615
A sequential test like this means almost nothing, especially with network-based storage. What matters is IOPS. For example, a cheap SATA disk can do 180 MB/s sequentially but a measly 80 IOPS. Test your IOPS if you want a better indication of performance.
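As a rough shell-only sketch of what an IOPS-oriented test looks like (the filename test_iops is arbitrary, and a proper tool such as fio or ioping is far better): each tiny synchronous write must reach the disk before the next one begins, so the write count divided by the elapsed time approximates write IOPS.

```shell
# 1000 synchronous 512-byte writes; oflag=dsync forces each write to disk
# before the next starts, so the elapsed time reflects per-write latency.
# IOPS ~= count / elapsed seconds (dd prints the elapsed time when done).
dd if=/dev/zero of=test_iops bs=512 count=1000 oflag=dsync
```

On a disk managing only ~100 IOPS this takes around 10 seconds; on a decent SSD it finishes almost instantly, which is exactly the gap a sequential dd run hides.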
-
01-26-2013, 04:14 PM #12Web Hosting Master
- Join Date
- May 2011
- Posts
- 586
I get about a 1 GB/s transfer rate on RAMNode SSD servers. Does anybody know how to check the IOPS?
Also, how is everyone copying their results? I use the BitVise SSH client, and I cannot copy anything from the terminal, only paste commands into it.
-
01-26-2013, 04:28 PM #13Solid State
- Join Date
- Aug 2010
- Posts
- 1,687
-
01-26-2013, 05:15 PM #14Web Hosting Evangelist
- Join Date
- Apr 2006
- Posts
- 460
FutureHosting hybrid
root@test [~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 5.41795 s, 198 MB/s
Jaguar VPS
[root@test ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
16384+0 records in
16384+0 records out
1073741824 bytes (1.1 GB) copied, 12.1865 seconds, 88.1 MB/s
-
01-26-2013, 07:14 PM #15The VPS Specialist
- Join Date
- Aug 2003
- Location
- Edinburgh/London
- Posts
- 5,789
Just to add to the pot here, it also depends on the parameters.
Code:
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
dd if=/dev/zero of=test bs=512k count=1k conv=fdatasync
dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync
-
01-26-2013, 07:32 PM #16The Linux Specialist
- Join Date
- Mar 2003
- Location
- /root
- Posts
- 23,991
-
01-26-2013, 07:50 PM #17Aspiring Evangelist
- Join Date
- Apr 2008
- Location
- Tulsa, OK, USA
- Posts
- 376
The results of "ioping -D -s 1M -i 0 -w 2s $HOME" may be more interesting, as random seek performance and latency guarantees are actually more important than linear throughput.
Filesystems are not normally contiguous chunks of data, so seek performance and host scheduler latencies are far more important metrics than contiguous read/write performance.
Also, conv=fdatasync is unfairly biased toward containers, because it induces a flush of the filesystem metadata journal to disk, and containers do not have their own physical filesystem. conv=fdatasync may also show problems on servers where there is no problem, if the server has recently written data to disk.
As a result, conv=fsync may yield better results depending on how much data is in the filesystem journal.
-
01-26-2013, 07:52 PM #18Web Hosting Master
- Join Date
- Aug 2012
- Location
- localhost
- Posts
- 1,495
-
01-26-2013, 08:12 PM #19Aspiring Evangelist
- Join Date
- Apr 2008
- Location
- Tulsa, OK, USA
- Posts
- 376
Other thoughts:
If you're going to do this silly test, you should do it in a way that avoids inducing iosched dequeues, which means using writes the length of the drive's LBA size.
So, you probably want:
dd if=/dev/zero bs=4096 count=256k of=1gb.bin conv=fdatasync
Also, as a correction to my previous post: fdatasync just flushes the data part of the journal; fsync includes metadata (and always induces a dequeue).
-
01-27-2013, 07:48 AM #20Junior Guru
- Join Date
- Jun 2012
- Posts
- 235
Mine is much slower then, I guess...
dd if=/dev/zero bs=4096 count=256k of=1gb.bin conv=fdatasync
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 52.044 s, 20.6 MB/s
-
01-27-2013, 03:08 PM #21Junior Guru Wannabe
- Join Date
- Aug 2012
- Location
- Sweden
- Posts
- 58
running
dd if=/dev/zero bs=4096 count=256k of=1gb.bin conv=fdatasync
OpenITC KVM (UK) (their low-end-box)
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 39.8 s, 27.0 MB/s
Gridlane KVM (SE)
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 14.8657 s, 72.2 MB/s
Inception XEN (UK)
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 5.13088 s, 209 MB/s
Must say that my Inception VPS has very good I/O; it boots in like 5 nanoseconds.
The OpenITC box is great value for money.
I'm using Gridlane for production because of their location (Stockholm) and reliability.
-
01-27-2013, 05:16 PM #22Newbie
- Join Date
- Oct 2011
- Location
- Prague, Czech Rep.
- Posts
- 17
# dd if=/dev/zero bs=4096 count=256k of=1gb.bin conv=fdatasync
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB) copied, 12.2085 s, 88.0 MB/s
my personal VPS from vpsfree.cz (Prague)
-
01-27-2013, 05:40 PM #23Web Hosting Master
- Join Date
- Nov 2005
- Posts
- 3,944
dd testing is a joke. We have some older SSD drives that sometimes only put out about 70 MB/s, as opposed to our SATA III drives that put out 120 MB/s in dd; but under real-world load, that same SSD can handle about 50x the load of the SATA drive, since it puts out about 10,000 IOPS as opposed to 200.