  1. #1

    KVM Virtualization - mysterious performance problems

    Hello all,

    I've been following this forum for a while, and would like to discuss a few problems I've been facing for quite a long time.
    I have some servers running a few KVM virtual machines.
    Since day 1 I've been facing performance issues that I could not explain.
    After ignoring them for half a year, I'm going to give it another try.

    This "new" server runs CentOS and SolusVM. I used Proxmox and Ubuntu + virt-manager before, but since a lot of people over here are running SolusVM, I might have a bit more luck ;-)

    The setup is a Dell server with 2 brand new Seagate ST2000DM001's in software RAID 1, plus a small SSD for caching (flashcache-powered).
    Previous setups used WD/Hitachi disks without an SSD cache, but showed exactly the same issues.

    I think my problem is caused by LVM or the virtualization layer. Inside the virtual machines I get poor write speeds.
    The host itself runs great, and I'm maxing out the disks.
    When running a dd test (dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync), I get average speeds like these:
    Host: 130MB/s - 140MB/s
    VM: 70MB/s - 90MB/s with sometimes peaks to 110MB/s (VirtIO disk driver)
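    (In case anyone wants to reproduce this: below is a small wrapper around the dd command above, meant to be run identically on the host and inside the VM. The file name and size arguments are just illustrative defaults, not part of my actual setup.)

```shell
#!/bin/sh
# Sequential-write benchmark; run the same way on the host and inside the VM.
# conv=fdatasync makes dd flush to disk before reporting, so the number
# reflects real write throughput instead of page-cache speed.
TESTFILE=${1:-ddtest.bin}
SIZE_MB=${2:-1024}   # 1 GiB by default, matching bs=1M count=1k above

dd if=/dev/zero of="$TESTFILE" bs=1M count="$SIZE_MB" conv=fdatasync 2>&1 |
    tail -n 1        # the last line carries the MB/s figure
rm -f "$TESTFILE"
```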

    When doing this test inside the VM, iostat on the host looks like this:
    Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await  svctm  %util
    sda               0.00  9575.00    0.00 8877.00     0.00 73268.00    16.51    16.71    1.86   0.10  87.90
    sdc               0.00   485.00    0.00 17958.00     0.00 73772.00     8.22     2.32    0.13   0.03  57.20
    sdb               0.00  9598.00    0.00 8836.00     0.00 72572.00    16.43    16.30    1.83   0.10  86.30
    md1               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    md2               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    md0               0.00     0.00    0.00    0.00     0.00     0.00     0.00     0.00    0.00   0.00   0.00
    md127             0.00     0.00    0.00 18443.00     0.00 73772.00     8.00     0.00    0.00   0.00   0.00
    dm-0              0.00     0.00    0.00 18443.00     0.00 73772.00     8.00   216.90   11.16   0.05  97.50
    dm-1              0.00     0.00    0.00 18443.00     0.00 73772.00     8.00   216.96   11.17   0.05  97.70
    Most of the time the dm devices show utilization up to 100%, but the individual SATA disks bounce
    between 80% and 93% most of the time.
    When running the same test on the host, both SATA disks max out at 100%.
    CPU load is not the problem: the host is pretty much idle, and the dd process inside the VM only uses about 20% CPU.
    But I/O wait inside the VM climbs to 75%+ after the test has been running for two seconds.
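    (Side note for anyone who wants to check iowait without extra tools: here's a rough sketch that samples /proc/stat over a one-second interval. The field order follows the standard Linux "cpu" line layout: user, nice, system, idle, iowait.)

```shell
#!/bin/sh
# Print the iowait percentage over a 1-second interval using /proc/stat.
# The 5th numeric field of the "cpu" line is cumulative iowait jiffies.
read -r _ u1 n1 s1 i1 w1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 _ < /proc/stat
total=$(( (u2 + n2 + s2 + i2 + w2) - (u1 + n1 + s1 + i1 + w1) ))
iow=$(( w2 - w1 ))
echo "iowait: $(( 100 * iow / total ))%"
```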

    I have tried the same test on another server with a RAID 10 array (software RAID, no SSD cache),
    and the LVM (dm) devices also touch 100%, but the individual disks show only 50-75% utilization.
    So for some reason the virtual machines don't use all the "juice" the SATA disks have.

    On the box with the RAID 10 array, I lose all my performance as soon as multiple VMs generate even a little random I/O.
    The virtual machines become horribly slow, while the host disks aren't even 75% loaded.
    Sometimes even installing a minimal CentOS/Debian VM takes over an hour.

    Even on a JBOD (single disk) setup, a VM doesn't come close to the speeds I get on the host.
    So I don't think this problem is RAID-related.

    I'm using the KVM cache setting "none", which seems to give the best performance overall.
    The hosts I'm testing now all run CentOS 6.4 (up to date) and SolusVM.
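    (For anyone comparing configs: the disk stanza SolusVM generates can be inspected with virsh dumpxml <vmname>. A typical virtio + cache=none disk section looks roughly like this; the LV path and device names are illustrative, not taken from my setup.)

```xml
<disk type='block' device='disk'>
  <!-- cache='none' bypasses the host page cache -->
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg_host/kvm101_img'/>  <!-- illustrative LV path -->
  <target dev='vda' bus='virtio'/>
</disk>
```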

    Has anyone else ever had problems like these?

  2. #2
    Join Date: Dec 2011
    Location: Tulsa, OK
    Sounds like you've already set the cache to none. Can you provide any additional settings you've tweaked?
    OCOSA Communications | Since 2003
    Hosting, Connectivity, Professional Services

  3. #3
    Haven't done many tweaks actually; it's pretty much "straight out of the box".

    cache=none and the VirtIO driver are the only changes I made; the rest of the config was created by SolusVM.

    Has anyone here ever tested their (SolusVM) nodes to see how the disk I/O speeds compare between the host and a VM?

