I've been following this forum for a while now, and I'd like to discuss a few problems I've been facing for a pretty long time.
I have some servers running a few KVM virtual machines.
Since day 1 I've been seeing performance issues that I cannot explain.
After ignoring it for half a year, I'm going to give it another try.
This "new" server runs on CentOS and SolusVM. I used Proxmox and Ubuntu + virt-manager before, but since a lot of people here are running SolusVM I might have a bit more luck ;-)
The setup is a Dell server with 2 brand new Seagate ST2000DM001's in software RAID1 + a small SSD for caching (flashcache powered).
Previous setups used WD/Hitachi disks without an SSD cache, but had exactly the same issues.
I think my problem is caused by LVM or the virtualization layer. Inside the virtual machines I get poor write speeds.
The host itself runs great, and I'm maxing out the disks.
When doing a dd test (dd if=/dev/zero of=test bs=1M count=1k conv=fdatasync), I get average speeds like these:
Host: 130MB/s - 140MB/s
VM: 70MB/s - 90MB/s with sometimes peaks to 110MB/s (VirtIO disk driver)
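For anyone who wants to reproduce the comparison, this is the same test in a small loop, so one lucky or unlucky run doesn't skew the host/VM numbers (I used a smaller file here so it finishes quickly; the results above were with count=1k, i.e. a 1 GB file):

```shell
# Same sequential write test as above, run 3 times in a row.
# conv=fdatasync forces a flush to disk before dd reports the speed,
# so the page cache can't inflate the numbers.
for i in 1 2 3; do
    dd if=/dev/zero of=ddtest.bin bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
    rm -f ddtest.bin
done
```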
When doing this test inside the VM, iostat on the host looks like this:
Most of the time the dm device shows utilization up to 100%, but the individual SATA disks are bouncing
between 80% and 93%.
When doing the same test on the host, both SATA disks are maxing out @ 100%.
CPU load is not the problem - the host is pretty much idling, and the dd process inside the VM only uses 20% CPU.
But I/O wait inside the VM goes to 75%+ after the test has been running for 2 seconds.
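For reference, the utilization numbers come from `iostat -x` on the host. If sysstat isn't installed, the counter that %util is derived from can also be read straight from /proc/diskstats (this is generic Linux, not specific to my setup; the device picked below is just whatever is listed first):

```shell
# %util in iostat -x is basically stats field 10 of /proc/diskstats
# ("time spent doing I/Os", in ms, which is column 13 of the file)
# sampled over an interval. Demo with the first device listed:
dev=$(awk 'NR==1 { print $3 }' /proc/diskstats)
t1=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
sleep 1
t2=$(awk -v d="$dev" '$3 == d { print $13 }' /proc/diskstats)
echo "$dev was busy $((t2 - t1)) ms of the last 1000 ms (1000 = 100% util)"
```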
I have tried the same test on another server with RAID-10 disks (software RAID with no SSD cache),
and the LVM (dm) devices are also touching 100%, but the individual disks only show 50 - 75% disk utilization.
So for some reason the virtual machines don't use all the "juice" the SATA disks have.
On the box with the RAID-10 array, I lose all my performance as soon as multiple VMs generate even a little bit of random I/O.
The virtual machines are horribly slow, but the host disks are not even 75% loaded.
Sometimes even installing a minimal CentOS/Debian VM takes 1+ hour.
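To reproduce that "a few VMs doing light random I/O" scenario in a controlled way, something like the fio job below could be used (fio is not part of my setup above, just a suggestion, and the numbers are only a starting point):

```ini
; Hypothetical fio job: three parallel workers doing light random
; writes, roughly imitating a few VMs each issuing a bit of random I/O.
[global]
ioengine=libaio
direct=1
rw=randwrite
bs=4k
size=256m
runtime=30
time_based
rate_iops=50

[vm1]
[vm2]
[vm3]
```

Run it with `fio jobfile.fio` on the host and inside a VM and compare the completion latencies.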
Even on JBOD (a single disk) I'm not reaching the speeds in a VM that I can get on the host.
So I don't think this problem is RAID related.
I'm using the KVM cache setting "none", which seems to give the best performance overall.
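For completeness, this is roughly what that looks like in the libvirt XML for a guest (SolusVM generates this itself; the LV path and device names here are just examples):

```xml
<!-- Example libvirt disk definition: virtio bus, cache=none so the
     host page cache is bypassed and writes go straight to the LV -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none'/>
  <source dev='/dev/vg0/vm101_img'/>
  <target dev='vda' bus='virtio'/>
</disk>
```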
The hosts I'm now testing are all running CentOS 6.4 (up to date) and SolusVM.