  1. #41
I had a VPS with Delimiter for two years. One month ago Delimiter disappeared, or who knows what happened; I lost my customers because my VPS was down for many days without any answer from Delimiter, without any response to my tickets. I am fed up now, so I need a trusted provider. I hope I find one...

  2. #42
    Join Date
    Apr 2009
    Location
    United Kingdom
    Posts
    136
If I log in to my VPS and run

'dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync', will it kill my machine? I read somewhere that someone ran something like this and it overwrote 10GB of his disk!

  3. #43
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
    That's because he ran "dd if=/dev/zero of=/dev/sda bs=1M count=1000"

Notice the "of=/dev/sda", which is his raw drive. The command you posted uses "of=test", which just writes a regular file named "test" in the current directory.
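
For reference, a minimal sketch of the harmless form, with the test file cleaned up afterwards (the filename "test" is arbitrary):

Code:
# writes ~1 GiB to a regular file in the current directory, then deletes it
dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync && rm -f test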

  4. #44
    Join Date
    Apr 2009
    Location
    United Kingdom
    Posts
    136
Ah, I see. So the command I posted is safe.

  5. #45
    Join Date
    Apr 2009
    Location
    United Kingdom
    Posts
    136
    hmm - VPSLatch

    [email protected] [~]# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 11.9935 seconds, 22.4 MB/s

  6. #46
    Join Date
    May 2009
    Location
    US
    Posts
    2,502
    Quote Originally Posted by lilrichieh View Post
    hmm - VPSLatch

    [email protected] [~]# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 11.9935 seconds, 22.4 MB/s
Our nodes have been known to report such numbers due to a BIOS/RAID card setting, I believe related to write cache; we are already working to rectify that. However, your sites should not be loading slowly at all. Even if we ran it on a new node with 0 VMs, it would report similar speeds.
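
For those curious, the write-cache flag on a plain drive can be checked from the host with something like this (a sketch; hardware RAID controllers need their own vendor CLI instead, and from inside a VPS this won't reflect the real controller):

Code:
hdparm -W /dev/sda   # query the drive's write-cache flag (1 = on, 0 = off)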

  7. #47
    Join Date
    Apr 2009
    Location
    United Kingdom
    Posts
    136
Adam, I was neither suggesting the node was poor nor that my site was loading slowly; I was merely posting MY results. I am happy with my VPS and its loading times...

Happy to hear you are working on a fix, though.

  8. #48
    Hi,

Shouldn't random reads and writes, like in fio, be used to test disk I/O speed instead of dd?
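
For example, a minimal random-write fio sketch (flag names assume a reasonably standard fio build; adjust size and runtime to taste):

Code:
# 4k random writes to a 256 MB test file, bypassing the page cache
fio --name=randwrite --rw=randwrite --bs=4k --size=256m --direct=1 \
    --ioengine=libaio --runtime=30 --time_based --group_reporting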

  9. #49
    Join Date
    Oct 2007
    Location
    United States
    Posts
    1,175
    Quote Originally Posted by VL-Adam View Post
Our nodes have been known to report such numbers due to a BIOS/RAID card setting, I believe related to write cache; we are already working to rectify that. However, your sites should not be loading slowly at all. Even if we ran it on a new node with 0 VMs, it would report similar speeds.
Most RAID controllers are around 150Mbps to 300Mbps sustained read/write like that, so the ~20-30MB/s is just fine and you probably won't get better than that. The real test will be when you have a full node; if he can still push 25MB/s then there is nothing wrong, and you won't be able to fix that unless you remove the RAID card entirely. RAID cards that can push more generally cost a lot more ($400+), which most datacenters unfortunately don't bother to carry.

  10. #50

    Thumbs up

    clubuptime.com just got 1


    [email protected]:/# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 3.14158 s, 85.4 MB/s

  11. #51
    Join Date
    Mar 2009
    Location
    NL
    Posts
    571
    This is in our cloud, so it isn't very fair (because storage is on a SAN):

    [[email protected] ~]# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
268435456 bytes (268 MB) copied, 2.36134 seconds, 114 MB/s (limited to a 1Gbit line because it is a single thread)

Still, on an empty node you should easily get > 20MB/s.

  12. #52
    Join Date
    Apr 2009
    Posts
    1,320
    What does that command do?
    dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync

Can it be run on a live server? Will it wipe out any data?

  13. #53
    Join Date
    Mar 2009
    Location
    NL
    Posts
    571
    Quote Originally Posted by chasebug View Post
    What does that command do?
    dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync

Can it be run on a live server? Will it wipe out any data?
It only writes a file named "test"; you can run it on a production server.
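
One caveat: the test file takes up ~268 MB until you delete it, so check free space first and clean up afterwards:

Code:
df -h .     # make sure there's room for the test file
rm -f test  # remove it when you're done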

  14. #54
    Join Date
    Sep 2006
    Location
    Toronto
    Posts
    158
    vpslatch (managed24 server)

    dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 26.8052 seconds, 40.1 MB/s

    ===================

    Directspace $2 server:

    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    1073741824 bytes (1.1 GB) copied, 20.5059 seconds, 52.4 MB/s

  15. #55
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288

    My Results

    For the sake of comparison:

    PHOTON VPS:

    Code:
    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 13.1446 seconds, 20.4 MB/s
    Racksrv.com VPS

    Code:
    [email protected] [~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 0.774849 seconds, 346 MB/s
2host.com VPS (actually surprised it even ran, lol)

    Code:
    [[email protected] ~]#dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 1.82974 seconds, 147 MB/s
My own VPS (Xen) on a crappy desktop system that is totally overloaded, without any RAID; I am shocked it even still runs

    Code:
    [[email protected] ~]#dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 4.51338 seconds, 59.5 MB/s
So Racksrv wins hands down. I ran each test 5 times and took the middle result in each case.

  16. #56
    Join Date
    Jun 2003
    Location
    Los Angeles, CA
    Posts
    1,506
We have a few nodes undergoing RAID rebuilds; if you PM me your IP, I can check what's going on with your VPS.

  17. #57
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
PM sent. Let me know when the rebuilds are done (if my node is one of them) and I will run it again.

  18. #58
    Join Date
    Jan 2006
    Location
    Charlotte,NC
    Posts
    138
One thing I would caution people about is using raw short-term numbers like these to judge overall performance. Would you rather have 20MB/s of extremely stable storage on an HA array off the box, or 400MB/s on a RAID 0 array? I always recommend that the people I consult with take an overall view of performance and availability, and try not to get caught up in benchmarks. And to be perfectly honest, real-world disk usage doesn't come close to saturating the interface the way dd can. So I would also recommend understanding what TYPE of storage a host is using, rather than chasing raw scores.

Case in point: it would take 2, maybe 3, 1G fiber drops to hit half of that 346MB/s, but you could fairly easily hit that number with 4-6 consumer-grade SATA disks in a single-server config. Would you rather have an HA fiber storage cluster that does 80MB/s and survives multiple full server failures, or 400MB/s that goes completely down when a single server fails? It's a balancing act, and I think it's our job as providers to better educate consumers if we want to see the industry grow in the right direction.

  19. #59
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
It's a fair point you make, Linology. I don't know what hardware they all use, although according to the 2host website, my VPS that got 147MB/s is using SATA II drives in RAID 10.

Is there a query I can run from the VPS that would give me a hint, or would that need to be run on the node itself?
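
I guess something like this might hint at it from inside, though it probably only shows the virtual device rather than the real controller:

Code:
lspci | grep -i -E 'raid|sata|scsi'   # may list a (virtual) storage controller on Xen/KVM
cat /proc/mdstat                      # software RAID status, if any is visible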

  20. #60
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
    Quote Originally Posted by PhotonVPS-Jim View Post
We have a few nodes undergoing RAID rebuilds; if you PM me your IP, I can check what's going on with your VPS.
Have you finished the RAID rebuild yet?

    Code:
    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync && rm test
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 17.9879 seconds, 14.9 MB/s

  21. #61
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
Oh, I forgot to add my $1.05 OpenVZ VPS from HostRail:

    Code:
    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 7.17027 seconds, 37.4 MB/s
    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 6.85732 seconds, 39.1 MB/s
    [[email protected] ~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 6.93566 seconds, 38.7 MB/s

  22. #62
    Join Date
    Mar 2009
    Location
    NL
    Posts
    571
    Many people seem to only check the throughput, but random reads/writes are more important. For example the following VMs both do ~100MB/s, but there is a huge difference in performance:

    Xenserver vm:
    Results: 172 seeks/second, 13.79 ms random access time

    Cloud:
    Results: 1954 seeks/second, 0.51 ms random access time

    The last one is on a VM on our new cloud platform. Seeks can be tested using:

    http://www.linuxinsight.com/how_fast_is_your_disk.html

    Commands:
    wget http://www.linuxinsight.com/files/seeker
    chmod +x seeker
    ./seeker /dev/sda (change this to your disk)

Another test I can recommend is bonnie++. But please stop focusing on these maximum throughput stats.
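
If you want to try it, a typical invocation looks something like this (a rough sketch; set -s to roughly twice your RAM so the page cache can't mask the disk, and -u is required when running as root):

Code:
bonnie++ -d /tmp -s 2048 -n 16 -u root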

  23. #63
    Join Date
    May 2007
    Posts
    1,969
    Quote Originally Posted by Rens View Post
    Many people seem to only check the throughput, but random reads/writes are more important. For example the following VMs both do ~100MB/s, but there is a huge difference in performance:

    Xenserver vm:
    Results: 172 seeks/second, 13.79 ms random access time

    Cloud:
    Results: 1954 seeks/second, 0.51 ms random access time

    The last one is on a VM on our new cloud platform. Seeks can be tested using:

    http://www.linuxinsight.com/how_fast_is_your_disk.html

    Commands:
    wget http://www.linuxinsight.com/files/seeker
    chmod +x seeker
    ./seeker /dev/sda (change this to your disk)

Another test I can recommend is bonnie++. But please stop focusing on these maximum throughput stats.


    ./seeker /dev/sda1
    Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
    Benchmarking /dev/sda1 [15360MB], wait 30 seconds..............................
    Results: 133 seeks/second, 7.50 ms random access time

    yardvps

  24. #64
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
    Quote Originally Posted by Rens View Post
    Many people seem to only check the throughput, but random reads/writes are more important. For example the following VMs both do ~100MB/s, but there is a huge difference in performance:

    Xenserver vm:
    Results: 172 seeks/second, 13.79 ms random access time

    Cloud:
    Results: 1954 seeks/second, 0.51 ms random access time

    The last one is on a VM on our new cloud platform. Seeks can be tested using:

    http://www.linuxinsight.com/how_fast_is_your_disk.html

    Commands:
    wget http://www.linuxinsight.com/files/seeker
    chmod +x seeker
    ./seeker /dev/sda (change this to your disk)

Another test I can recommend is bonnie++. But please stop focusing on these maximum throughput stats.
I think that's a fair point you make; however, it is still a good indication.

I have re-run a few using bonnie++ for consistency, as seeker does not work on OpenVZ.

    2host.com VPS (XEN):
    Code:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    blahblahblah 1G   320  97 109246  26 34810   2   796  96 131428   5 646.2   0
    Latency             34052us     360ms     164ms   18492us   28017us    1719ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    blahblahblah.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  7513  14 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
    Latency             10920us     870us   25300us     597us     363us     677us
    Hostrail $1.05 VPS (openvz)

    Code:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    storage3.incepti 1G    43  10 17053   4 40982   7    96  10 175315  10  1137   7
    Latency              1469ms    5782ms     145ms     237ms     145ms    6648us
    Version  1.96       ------Sequential Create------ --------Random Create--------
    blahblah.blahblah -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16  2201   3 +++++ +++  6776   8  9548  12 +++++ +++  1157   1
    Latency               145ms     657us     172ms     144ms     144ms     305ms
    PhotonVPS (openVZ)


    Code:
    Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
    Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
    Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
    blahbl.ahbalh. 2G   484  97 18222   4  4413   0   333  22  9231   0 154.0   1
    Latency             34280us    3735ms   20117ms     503ms   13816ms    3721ms
    Version  1.96       ------Sequential Create------ --------Random Create--------
    blahbl.blahbl.com -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
                  files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                     16   513   1 +++++ +++   428   0  2008   3 +++++ +++   390   0
    Latency              1026ms    1473us     330ms    1147ms      77us     120us

  25. #65
    Join Date
    Jul 2010
    Location
    ~/
    Posts
    1,288
Not sure that's the best test, as the result output is hard to read, but if you look closely you can see that they do OK in some areas and not so well in others.

I only tested a few VPSes above, but assuming I am reading that right, the little $1.05 VPS from HostRail actually seems to do better than the PhotonVPS in most areas, and the 2host one wins overall (shocked).

  26. #66
    Join Date
    Jun 2003
    Location
    Los Angeles, CA
    Posts
    1,506
    Quote Originally Posted by backtogeek View Post
Not sure that's the best test, as the result output is hard to read, but if you look closely you can see that they do OK in some areas and not so well in others.

I only tested a few VPSes above, but assuming I am reading that right, the little $1.05 VPS from HostRail actually seems to do better than the PhotonVPS in most areas, and the 2host one wins overall (shocked).
I did some checking, and it appears you're on one of our older legacy systems with 10K RPM disks in RAID1. The writes on these servers are slower; however, the reads will outperform. If you wish to move to a RAID10 node, please open a ticket.

On the second note, you're comparing OpenVZ and Xen, and those results will vary. To be fair, you should compare similar virtualization platforms across different providers.

  27. #67
    Join Date
    Oct 2010
    Posts
    1,784
    dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    How long does this take to run? I tried it twice and it appeared to just hang. I waited about a minute. No results or prompt afterward. Can a host block this command?

  28. #68
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
The time it takes depends on how fast your server can write the file, which is what this command is meant to test: sequential throughput. Most likely, your disks are extremely slow and it is still writing the file rather than timing out.
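
If you want to make sure it can't hang forever, you could bound it with coreutils' timeout (a sketch, assuming timeout is available on your distro):

Code:
timeout 120 dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync; rm -f test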

  29. #69
    Join Date
    Oct 2010
    Posts
    1,784
    My VPS is with KnownHost so I don't think there would be a disk issue. My VPS is running fine. I'm just doing this for haha's. What happens when it's done writing the file? Does a command prompt display, or something else? Thanks.

  30. #70
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
    Look at the original post.

  31. #71
    Join Date
    Oct 2010
    Posts
    1,784
    Well it worked immediately this time so I don't know. Maybe I did something wrong. Anyway how is this speed? Thanks.

    [email protected] [~]# dd if=/dev/zero of=test bs=64 count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    262144 bytes (262 kB) copied, 0.00844 seconds, 31.1 MB/s

  32. #72
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
I think you're missing a "k" in bs:

dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync

With bs=64k, the 4096 blocks come to 64 KiB x 4096 = 268,435,456 bytes (268 MB), which is a better indicator than writing 262 kB.

  33. #73
    Join Date
    Oct 2010
    Posts
    1,784
    LOL. Thanks. I'm such a noob. This is better.

    [email protected] [~]# dd if=/dev/zero of=test bs=64k count=4k conv=fdatasync
    4096+0 records in
    4096+0 records out
    268435456 bytes (268 MB) copied, 1.53869 seconds, 174 MB/s

    How do I delete the file?
    Last edited by TheJoker; 12-16-2010 at 10:05 PM.

  34. #74
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
    rm -f test

    Nice result!

  35. #75
    Join Date
    Mar 2006
    Location
    Australia
    Posts
    771
Well, an update on my VRTServer cloud. I had low expectations because of all their poor reviews, but I didn't expect it to be this bad. I was kind of hoping it would be half decent; ah well.

It's gone from ~10MB/s, which was already awful, to:


    [[email protected] ~]# dd if=/dev/zero of=test bs=16k count=16k conv=fdatasync
    16384+0 records in
    16384+0 records out
    268435456 bytes (268 MB) copied, 176.499 seconds, 1.5 MB/s


    [[email protected] ~]# ./seeker /dev/sda
    Seeker v2.0, 2007-01-15, http://www.linuxinsight.com/how_fast_is_your_disk.html
    Benchmarking /dev/sda [51200MB], wait 30 seconds.............................
    Results: 322 seeks/second, 3.10 ms random access time

    Asked them to look into it, never heard back. It's cancelled now.

  36. #76
    Join Date
    Oct 2010
    Posts
    1,784
    Quote Originally Posted by WickedFactor View Post
    rm -f test

    Nice result!
    Thanks man. Happy Holidays.

  37. #77
    Join Date
    Jan 2010
    Location
    San Francisco
    Posts
    1,799
    Same to you.

  38. #78
    Join Date
    Sep 2008
    Location
    New York City
    Posts
    528
Considering VPSes are getting cheaper and cheaper, I'm not surprised at such slow speeds. However, I've never found these tests to be very accurate. As long as your applications are running well, I don't care much about disk speed.

  39. #79
From my new VPS; no wonder I feel lag when typing commands in the console:
    Code:
    Benchmarking /dev/sda1 [25600MB], wait 30 seconds......................
    Results: 4 seeks/second, 218.98 ms random access time
    From my old VPS
    Code:
    Benchmarking /dev/sda1 [10240MB], wait 30 seconds..............................
    Results: 195 seeks/second, 5.10 ms random access time

  40. #80
    Join Date
    Aug 2007
    Posts
    118
The WHT VPS community needs to stop taking snapshots of dd, unixbench, or bonnie at a single point in time. These benchmarks are almost useless when you have 20 people on a node and only 2x single-drive speed in RAID 10 or RAID 1. Depending on your neighbors and how overloaded the node is, your top HD speed of 200 MB/s may slow down to 5 or 10 during the day.

Here is an example of hard drive performance over time for ThrustVPS's OpenVZ.

    http://www.codexon.com/wht/review_files/image015.gif
    http://www.codexon.com/wht/review_files/image025.jpg
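
If you want to see that curve for your own VPS, one rough sketch is to log a small dd run from cron every hour instead of taking a single snapshot (file and log names here are just examples):

Code:
# dd prints its stats on stderr; grab the last line and timestamp it
echo "$(date): $(dd if=/dev/zero of=ddtest bs=64k count=4k conv=fdatasync 2>&1 | tail -n1)" >> dd_log.txt
rm -f ddtest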
