Performance comparison of e1000 and virtio-pci drivers

You performed a bandwidth test, which does not stress the PCI path.

You need to simulate an environment with many concurrent sessions; there you should see a difference.

With iperf, something like -P 400 might simulate that kind of load.
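
For example, a rough sketch (the server address is the one from the test below, and 400 streams is only a starting point):

    # Many concurrent small-payload streams instead of one bulk transfer:
    # -P 400 opens 400 parallel TCP connections, -l 64 keeps each write tiny,
    # so per-packet overhead (and the virtual NIC path) dominates.
    iperf -c 192.168.0.126 -t 30 -l 64 -P 400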

Comments

  • comeback4you
    comeback4you almost 2 years

    I made the following setup to compare the performance of the virtio-pci and e1000 drivers:

    [virtio test-setup diagram]

    I expected to see much higher throughput with virtio-pci than with e1000, but they performed identically.

    Test with virtio-pci (192.168.0.126 is configured on T60 and 192.168.0.129 on PC1):

    root@PC1:~# grep hype /proc/cpuinfo
    flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid
    root@PC1:~# lspci -s 00:03.0 -v
    00:03.0 Ethernet controller: Red Hat, Inc Virtio network device
        Subsystem: Red Hat, Inc Device 0001
        Physical Slot: 3
        Flags: bus master, fast devsel, latency 0, IRQ 11
        I/O ports at c000 [size=32]
        Memory at febd1000 (32-bit, non-prefetchable) [size=4K]
        Expansion ROM at feb80000 [disabled] [size=256K]
        Capabilities: [40] MSI-X: Enable+ Count=3 Masked-
        Kernel driver in use: virtio-pci
    
    root@PC1:~# iperf -c 192.168.0.126 -d -t 30 -l 64
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    ------------------------------------------------------------
    Client connecting to 192.168.0.126, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.0.129 port 41573 connected with 192.168.0.126 port 5001
    [  5] local 192.168.0.129 port 5001 connected with 192.168.0.126 port 44480
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-30.0 sec   126 MBytes  35.4 Mbits/sec
    [  5]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
    root@PC1:~# 
    

    Test with e1000 (192.168.0.126 is configured on T60 and 192.168.0.129 on PC1):

    root@PC1:~# grep hype /proc/cpuinfo
    flags       : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic hypervisor lahf_lm tpr_shadow vnmi flexpriority ept vpid
    root@PC1:~# lspci -s 00:03.0 -v
    00:03.0 Ethernet controller: Intel Corporation 82540EM Gigabit Ethernet Controller (rev 03)
        Subsystem: Red Hat, Inc QEMU Virtual Machine
        Physical Slot: 3
        Flags: bus master, fast devsel, latency 0, IRQ 11
        Memory at febc0000 (32-bit, non-prefetchable) [size=128K]
        I/O ports at c000 [size=64]
        Expansion ROM at feb80000 [disabled] [size=256K]
        Kernel driver in use: e1000
    
    root@PC1:~# iperf -c 192.168.0.126 -d -t 30 -l 64
    ------------------------------------------------------------
    Server listening on TCP port 5001
    TCP window size: 85.3 KByte (default)
    ------------------------------------------------------------
    ------------------------------------------------------------
    Client connecting to 192.168.0.126, TCP port 5001
    TCP window size: 85.0 KByte (default)
    ------------------------------------------------------------
    [  3] local 192.168.0.129 port 42200 connected with 192.168.0.126 port 5001
    [  5] local 192.168.0.129 port 5001 connected with 192.168.0.126 port 44481
    [ ID] Interval       Transfer     Bandwidth
    [  3]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
    [  5]  0.0-30.0 sec   126 MBytes  35.1 Mbits/sec
    root@PC1:~# 
    

    With large packets the bandwidth was ~900 Mbit/s with both drivers.

    When does the theoretically higher performance of virtio-pci come into play? Why did I see equal performance with e1000 and virtio-pci?

    • drHogan
      drHogan over 7 years
      Could you watch the CPU usage of the host and the VM while doing this benchmark? Maybe non-virtio needs more CPU, but yours is fast enough.
    • comeback4you
      comeback4you over 7 years
      I did watch the CPU usage, and for both drivers it was pretty much the same. I executed iperf -c 192.168.0.126 -d -t 300 -l 64; uptime with both drivers; with e1000 the result was load average: 0.04, 0.07, 0.05 and with virtio-pci it was load average: 0.23, 0.11, 0.05. CPU usage on the host machine was also basically the same (I checked this with top).
    • phk
      phk over 7 years
      Number of connections, number of IPs?
    • drHogan
      drHogan over 7 years
      Another guess: maybe virtio is able to pass through "hardware features" of the host's NIC (like segmentation and checksum offloading) if the host NIC supports them. In other words, if the host NIC does not have those advanced features, then virtio cannot be better than e1000 (see the sketch after these comments).
    • phk
      phk over 7 years
      @rudimeier If this were the case, it would only apply to traffic going through the host's NICs, I guess.
    • sean
      sean almost 6 years
      Can you please post your qemu config?
    • ceving
      ceving almost 5 years
      See here for another comparison: linux-kvm.org/page/Using_VirtIO_NIC
    • jrglndmnn
      jrglndmnn over 3 years
      Did you install the KVM kernel module, or are you running bare QEMU?
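
As a follow-up to the offloading guess in the comments, a minimal sketch of how one could compare offload features between the host NIC and the guest interface (eth0 is a placeholder interface name):

    # List segmentation and checksum offload settings; run on both host and guest.
    # "eth0" is a placeholder -- substitute the real interface name.
    ethtool -k eth0 | grep -E 'segmentation|checksum'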