QEMU, and in effect KVM and HVM Xen, boot a VM from a particular kind of OS image that includes the kernel and a boot sector, similar to a bootable CD. A paravirt Xen OS image is different: it contains only the userspace parts of the OS (the guest kernel is supplied separately) and is a raw copy of a volume holding a root filesystem.
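To make the difference concrete, here is a minimal Python sketch (the heuristic, the offsets used, and the script name are illustrative simplifications) that guesses which kind of image a file is: a bootable disk image carries an MBR boot signature (0x55AA at byte offset 510), while a raw root-filesystem volume such as an ext2/ext3 copy carries the filesystem's superblock magic (0xEF53 at byte offset 1080) instead.

```python
import struct
import sys

MBR_SIGNATURE_OFFSET = 510          # last two bytes of the first 512-byte sector
EXT_SUPERBLOCK_MAGIC_OFFSET = 1080  # ext2/3 superblock starts at 1024, magic at +56


def classify_image(path):
    """Rough heuristic: full bootable disk image vs. raw ext2/3 root filesystem."""
    with open(path, "rb") as f:
        f.seek(MBR_SIGNATURE_OFFSET)
        boot_sig = f.read(2)
        f.seek(EXT_SUPERBLOCK_MAGIC_OFFSET)
        ext_magic = f.read(2)

    if boot_sig == b"\x55\xaa":
        return "bootable disk image (boot sector present, QEMU/KVM/HVM Xen style)"
    if len(ext_magic) == 2 and struct.unpack("<H", ext_magic)[0] == 0xEF53:
        return "raw root filesystem volume (no boot sector, paravirt Xen style)"
    return "unknown image type"


if __name__ == "__main__":
    # Usage: python classify_image.py /path/to/image
    print(classify_image(sys.argv[1]))
```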
In KVM, a VM runs as a normal process on top of a normal Linux kernel. For example, terminating a KVM process with the 'kill' command kills the corresponding VM.
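As an illustration, the following Python sketch treats VMs purely as processes: it scans /proc for command lines that look like a KVM/QEMU process and sends them SIGTERM. The process-name hint is an assumption; the actual name (qemu-kvm, qemu-system-x86_64, kvm, ...) varies by distribution.

```python
import os
import signal


def find_vm_pids(name_hint="qemu"):
    """Scan /proc for processes whose command line contains the hint.

    The real process name depends on the distribution, so the default
    hint here is only an assumption.
    """
    pids = []
    for entry in os.listdir("/proc"):
        if not entry.isdigit():
            continue
        try:
            with open("/proc/%s/cmdline" % entry, "rb") as f:
                cmdline = f.read().replace(b"\x00", b" ").decode(errors="replace")
        except OSError:
            continue  # process already exited or permission denied
        if name_hint in cmdline:
            pids.append(int(entry))
    return pids


if __name__ == "__main__":
    for pid in find_vm_pids():
        print("Terminating VM process", pid)
        os.kill(pid, signal.SIGTERM)  # the whole VM dies with the process
```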
There is no logical network between the VMs and the VMM's physical network interface. This can be an advantage or a disadvantage:
On the one hand, there is less isolation between the VMs and the VMM, which can affect stability and security.
On the other hand, in terms of performance it is hard to say whether one approach is better than the other: a virtual network driver between the VMs and the physical interface carries a performance penalty, while direct access to hardware is always faster. In both KVM and Xen it should be possible to bypass the driver domain (dom0 in Xen, the VMM in KVM); with Xen paravirt this has already been done, giving VMs direct access to hardware. Bypassing the driver domain is, however, harder to implement under full virtualization (Xen HVM and KVM), although newer virtualization hardware extensions (e.g. VT-d) may solve this.
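For context on where such a virtual network driver sits, the sketch below sets up the usual software path between a VM's network interface and the physical one on a Linux host: a tap device attached to a bridge together with the physical NIC. It uses generic iproute2 commands via Python, needs root, and the interface names (br0, tap0, eth0) are assumptions; it is not KVM- or Xen-specific tooling.

```python
import subprocess


def run(cmd):
    """Run an iproute2 command and fail loudly if it returns an error (needs root)."""
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)


def setup_bridge(bridge="br0", tap="tap0", phys="eth0"):
    # Create a software bridge and a tap device, then enslave both the tap
    # (the host end of the VM's virtual NIC) and the physical interface to it.
    run(["ip", "link", "add", "name", bridge, "type", "bridge"])
    run(["ip", "tuntap", "add", "dev", tap, "mode", "tap"])
    run(["ip", "link", "set", "dev", tap, "master", bridge])
    run(["ip", "link", "set", "dev", phys, "master", bridge])
    run(["ip", "link", "set", "dev", bridge, "up"])
    run(["ip", "link", "set", "dev", tap, "up"])


if __name__ == "__main__":
    setup_bridge()
```

Every packet from the VM then crosses the tap device and the bridge before reaching the physical interface, which is the extra hop that direct hardware access avoids.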
KVM does not yet support SMP in VMs.
KVM is included in the vanilla Linux 2.6.20 kernel and is available as a patch for older kernels. The KVM code is less intrusive on the kernel source and should be easier to backport to older versions of Linux.
KVM relies on the availability of the Intel VT or AMD SVM hardware virtualization extensions of x86 CPUs, which should be present in most new PCs.
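A quick way to check for these extensions on a Linux host is to look at the CPU flags: 'vmx' indicates Intel VT and 'svm' indicates AMD SVM. The sketch below assumes the usual /proc/cpuinfo layout.

```python
def hardware_virt_support(cpuinfo_path="/proc/cpuinfo"):
    """Return the hardware virtualization extension the CPU advertises, if any.

    'vmx' is the CPU flag for Intel VT, 'svm' the flag for AMD SVM.
    """
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "Intel VT (vmx)"
                if "svm" in flags:
                    return "AMD SVM (svm)"
    return None


if __name__ == "__main__":
    support = hardware_virt_support()
    print(support or "no VT/SVM support detected; KVM cannot be used")
```

Note that the flag only shows what the CPU advertises; the extension can still be disabled in the BIOS, in which case KVM will refuse to run.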