vDPA (virtio data path acceleration) is a method to achieve high-performance (NIC wire-speed) and flexible I/O in virtual machine and container environments. There is not much information about it in Japanese yet. I haven’t actually tried it myself, so there may be misunderstandings.

vDPA Kernel Framework

In March 2020, the vDPA kernel framework was merged into Linux 5.7. A vDPA device, as handled by this framework, is a device whose data plane follows the virtio specification while its control plane is vendor-specific. To the guest it appears as an ordinary virtio-net device: the data plane accesses the physical NIC directly, bypassing the host, while the control plane goes through the host’s vDPA framework (and a vendor-specific driver).
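
This split is visible in the callbacks a vendor driver registers with the framework. The sketch below is a simplified excerpt modeled on struct vdpa_config_ops in include/linux/vdpa.h; only a few representative callbacks are shown, and exact names and signatures vary between kernel versions.

    /* Control-plane callbacks a vendor vDPA driver implements.
     * Simplified from include/linux/vdpa.h (Linux 5.7 era). */
    struct vdpa_config_ops {
            /* virtqueue setup: ring addresses, size, enable */
            int  (*set_vq_address)(struct vdpa_device *vdev, u16 idx,
                                   u64 desc_area, u64 driver_area,
                                   u64 device_area);
            void (*set_vq_num)(struct vdpa_device *vdev, u16 idx, u32 num);
            void (*set_vq_ready)(struct vdpa_device *vdev, u16 idx, bool ready);

            /* virtio feature negotiation and device status */
            u64  (*get_features)(struct vdpa_device *vdev);
            int  (*set_features)(struct vdpa_device *vdev, u64 features);
            u8   (*get_status)(struct vdpa_device *vdev);
            void (*set_status)(struct vdpa_device *vdev, u8 status);

            /* ... plus config space access, DMA mapping, etc. */
    };

Inside these callbacks the vendor driver translates generic virtio control operations into its own hardware commands; the data path itself never passes through this layer.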

The vDPA kernel framework was originally built on top of mdev. mdev mediates the control plane in VFIO passthrough and is responsible for translating between vendor-specific commands and an emulated PCI device.

However, for the following reasons, the mdev-based approach was abandoned and the vDPA kernel framework was designed as a new subsystem independent of VFIO.

  • VFIO and mdev abstract at a lower layer than vDPA, so it is unnatural to bring higher-layer APIs into VFIO
  • Not all NICs in the world are suited to the VFIO IOMMU design

The vDPA kernel framework exposes vhost character devices (/dev/vhost-vdpa-*), so a vDPA device can be driven as a vhost device by user-space drivers such as QEMU.
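
As a minimal illustration, the sketch below opens such a character device from user space and queries it with the standard vhost/vhost-vdpa ioctls defined in <linux/vhost.h>. The node name /dev/vhost-vdpa-0 and the lack of cleanup on error are simplifications for the example.

    /* Minimal sketch: probe a vhost-vdpa character device from user space.
     * Assumes a kernel built with vhost-vdpa support and a bound vDPA
     * device; the node name depends on the host configuration. */
    #include <stdio.h>
    #include <stdint.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    int main(void)
    {
        int fd = open("/dev/vhost-vdpa-0", O_RDWR);
        if (fd < 0) {
            perror("open /dev/vhost-vdpa-0");
            return 1;
        }

        uint32_t dev_id = 0;    /* virtio device ID (1 = network device) */
        uint64_t features = 0;  /* virtio feature bits offered by the device */

        if (ioctl(fd, VHOST_VDPA_GET_DEVICE_ID, &dev_id) == 0)
            printf("virtio device id: %u\n", dev_id);
        if (ioctl(fd, VHOST_GET_FEATURES, &features) == 0)
            printf("device features:  0x%016llx\n",
                   (unsigned long long)features);

        close(fd);
        return 0;
    }

QEMU’s vhost-vdpa backend talks to the same device node through this ioctl interface, which is why the device can be treated like any other vhost backend.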

As an independent subsystem, it can keep up with new hardware features. For example, for live migration, device state can be saved and restored via the vhost API. It also supports SVA (Shared Virtual Address) and PASID (Process Address Space ID).
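
As a hedged sketch of what “saved and restored via the vhost API” can look like, the helpers below read and write a virtqueue’s last-available index through the generic vhost ring ioctls on a vhost-vdpa file descriptor. A real live migration also needs dirty-memory tracking and device config state; this shows only the ring-index part.

    #include <sys/ioctl.h>
    #include <linux/vhost.h>

    /* Save the virtqueue index state on the source host. */
    int save_vq_state(int vdpa_fd, unsigned int vq, unsigned int *last_avail)
    {
        struct vhost_vring_state state = { .index = vq };

        if (ioctl(vdpa_fd, VHOST_GET_VRING_BASE, &state) < 0)
            return -1;
        *last_avail = state.num;  /* last available index seen by the device */
        return 0;
    }

    /* Restore it on the destination host before restarting the device. */
    int restore_vq_state(int vdpa_fd, unsigned int vq, unsigned int last_avail)
    {
        struct vhost_vring_state state = { .index = vq, .num = last_avail };

        return ioctl(vdpa_fd, VHOST_SET_VRING_BASE, &state);
    }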

A user-space layer such as QEMU is still required to mediate between host and guest, but compared with simply passing a device through with SR-IOV, the attack surface can be kept small. It is also compatible with Intel Scalable IOV and Sub Function (SF).

vDPA DPDK Framework

There is also a second vDPA framework, the vDPA DPDK framework, which uses DPDK on the host. From QEMU’s perspective, it only needs to connect to a vhost-user backend. vDPA DPDK drivers exist for most major NICs. However, the vDPA DPDK framework had the following limitations:

  • Since vhost-user is a user-space API, it cannot manipulate host kernel subsystems. For example, it cannot work in coordination with features like eBPF.
  • Since DPDK focuses on the data plane, it doesn’t provide tools to manipulate hardware.

To remove these limitations, the vDPA kernel framework was needed. The article “How deep does the vDPA rabbit hole go?” on redhat.com contains more detailed information.

Comparison with Existing Approaches

Let’s compare vDPA with existing approaches.

Item                               vhost-net   vhost-user   virtio full HW offload   vDPA
Performance                        Low         Medium       High                     High
Data plane support on NIC side     No          No           Yes                      Yes
Control plane support on NIC side  No          No           Yes                      No
Live migration                     Yes         Yes          No                       Yes
Maturity                           High        High         High                     Medium?

Source: “Achieving network wirespeed in an open standard manner: introducing vDPA”, redhat.com

References