KVM causes high CPU load when cache='none'


OK, with the additional option io='native' in the disk section and the cfq I/O scheduler on the host system, I get the best results for my system. The I/O rate is nearly the same for all values of the io option in the guest XML and for all I/O schedulers on host and guest; only cache='unsafe' gives significantly more performance. But the lowest CPU load comes only with io='native', the noop scheduler in the guest, and the cfq scheduler on the host.
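For reference, a disk section combining cache='none' with io='native' might look like this (the LVM source path and target device are placeholders, not taken from the original setup):

```xml
<disk type='block' device='disk'>
   <!-- cache='none' bypasses the host page cache; io='native' uses Linux AIO -->
   <driver name='qemu' type='raw' cache='none' io='native'/>
   <!-- example LVM volume path; adjust to your volume group -->
   <source dev='/dev/vg0/guest-disk'/>
   <target dev='vda' bus='virtio'/>
</disk>
```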


Author: rabudde

Updated on September 18, 2022

Comments

  • rabudde
    rabudde over 1 year

I've followed the instructions on http://www.linux-kvm.org/page/Tuning_KVM. The host is Debian Squeeze, kernel 3.2, QEMU 1.0, libvirt 0.9.12 (all from squeeze-backports). All 4 guests are also Debian Squeeze with kernel 3.2.

    So my settings in Guest XML are

    <cpu model='host-passthrough'/>
<disk [...]>
       <driver name='qemu' type='raw' cache='none'/>
       <target [...] bus='virtio'/>
    </disk>
    <interface [...]>
       <model type='virtio'/>
    </interface>
    

The I/O scheduler on the guests is set to noop. On the host I tried noop/deadline/cfq with no significant performance difference, for me. All guests' storage is provided by LVM. With cache='none', and while none of the guests have any noteworthy load, the 15-minute average CPU load on the host goes up to 3-4. But with cache='writeback', the host's CPU load drops to less than 1. Can anyone explain why the settings suggested for LVM cause the higher load on the host?
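The active scheduler can be checked and changed through sysfs; a minimal sketch (the device name vda is an assumption for a virtio disk, and writing to sysfs needs root):

```shell
# List each block device's available I/O schedulers; the active one is in brackets.
grep -H . /sys/block/*/queue/scheduler 2>/dev/null || echo "no block devices visible"

# To switch a guest disk (e.g. vda) to noop for the current boot (as root):
#   echo noop > /sys/block/vda/queue/scheduler
```

Note that a sysfs change does not survive a reboot; on Squeeze-era systems it was typically made persistent via the elevator= kernel boot parameter.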

BTW: When running disk benchmarks, cache='none' results in higher I/O performance than cache='writeback'.
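Such a benchmark can be sketched with dd inside the guest (file path and size are placeholders; conv=fdatasync makes dd flush data to disk before reporting, so the guest page cache doesn't inflate the throughput number):

```shell
# Sequential write test; dd prints the throughput after fdatasync completes.
dd if=/dev/zero of=/tmp/kvm-bench.img bs=1M count=64 conv=fdatasync
rm -f /tmp/kvm-bench.img
```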

    • Michael Hampton
      Michael Hampton over 11 years
      Your disks are slow?
    • rabudde
      rabudde over 11 years
No, on host and guests there's an average write rate of 170-180 MB/s. I've now set io='native' and the CPU load on the host has stayed below 1 for more than an hour; I'll keep an eye on it.
  • Sebastian Marsching
    Sebastian Marsching over 7 years
    Thanks for the hint regarding the "native" IO mode. In my case this was sufficient to reduce the load, even when keeping the default (deadline) scheduler on the host. I found a presentation from RedHat (slideshare.net/pradeepkumarsuvce/…) comparing the two modes, and it seems like the "native" mode is better for most cases involving an HDD, while "threads" mode is better for some cases involving an SSD.