User Guide


A virtio-mem device manages a memory region in the guest physical address space of a virtual machine and provides a dynamic amount of memory via this memory region to the virtual machine.

The memory region is partitioned into memory blocks of fixed size, such as 2 MiB, that can be either in the plugged or in the unplugged state. Once plugged, a memory block can be used like ordinary RAM by the virtual machine. The guest driver selects memory blocks to (un)plug and requests the device to perform the (un)plug. The maximum size of a virtio-mem device corresponds to the size of the managed memory region.

The hypervisor requests the guest to change the amount of memory consumed via a virtio-mem device by adjusting the requested size of that device. Such resize requests correspond to memory hot(un)plug requests. It is up to the guest to fulfill such requests by (un)plugging device memory blocks. Once the plugged size of a virtio-mem device is greater than or equal to its requested size, the guest cannot plug any more memory blocks.
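The interplay of requested size, plugged size, and fixed-size memory blocks described above can be sketched as a small model. This is an illustrative simplification, not the real device or driver implementation; the class and method names are invented for this example, and the 2 MiB block size is just the one mentioned above.

```python
# Minimal sketch (NOT the real device model) of virtio-mem resize
# semantics: the hypervisor adjusts a requested size, and the guest
# driver (un)plugs fixed-size blocks until the plugged size matches.
BLOCK_SIZE = 2 * 1024 * 1024  # example block size: 2 MiB


class VirtioMemDevice:
    def __init__(self, region_size):
        assert region_size % BLOCK_SIZE == 0
        self.region_size = region_size  # maximum size of the device
        self.plugged = 0                # plugged size, in bytes
        self.requested = 0              # requested size, set by the hypervisor

    def resize_request(self, requested):
        """Hypervisor side: request the guest to grow/shrink."""
        assert requested <= self.region_size
        self.requested = requested

    def guest_plug_block(self):
        """Guest driver: plug one block while below the requested size."""
        if self.plugged < self.requested:
            self.plugged += BLOCK_SIZE
            return True
        return False  # plugged size reached the requested size

    def guest_unplug_block(self):
        """Guest driver: unplug one block while above the requested size."""
        if self.plugged > self.requested:
            self.plugged -= BLOCK_SIZE
            return True
        return False


dev = VirtioMemDevice(region_size=1024 * BLOCK_SIZE)
dev.resize_request(4 * BLOCK_SIZE)  # hotplug request: grow to 8 MiB
while dev.guest_plug_block():       # guest fulfills the request block by block
    pass
print(dev.plugged // BLOCK_SIZE)    # prints 4 (blocks plugged)
```

Note how the guest drives the actual (un)plug operations: the hypervisor only states a target, and a guest without a driver simply never plugs anything.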

On initial start, and after a system reset, usually all memory blocks are unplugged; exceptions include rebooting while migrating. Consequently, if the guest isn't able to consume any/all memory (e.g., missing virtio-mem driver), this is usually reflected in the plugged size of the device.

The device-managed memory region is not exposed as RAM via other hardware / firmware interfaces, such as the e820 BIOS memory map on x86-64. The virtio-mem driver in the guest is always responsible for detecting memory, plugging it, and exposing it to the operating system.
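As an illustration of how such a device might be configured, here is a sketch of a QEMU command line (exact flag names and defaults may differ between QEMU versions, and the sizes are arbitrary): a VM with 4 GiB of boot memory plus a virtio-mem device managing a 4 GiB region, of which 1 GiB is initially requested.

```shell
# Hypothetical example: 4 GiB boot RAM, maxmem leaves room for the
# device-managed region; the virtio-mem device starts with 1 GiB requested.
qemu-system-x86_64 \
    -m 4G,maxmem=8G \
    -object memory-backend-ram,id=vmem0,size=4G \
    -device virtio-mem-pci,id=vm0,memdev=vmem0,requested-size=1G \
    ...
```

The `requested-size` property can later be adjusted at runtime to trigger the resize requests described above.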


Virtio-mem was designed to combine the advantages of memory ballooning and DIMM-based memory hot(un)plug, avoiding known issues.

  • Growing a VM beyond its initial size and shrinking it again without having to care about DIMMs (e.g., count, size, alignment, migration, selection); resizing is purely guided by a requested target size.
  • Growing/shrinking a VM in small granularity (e.g., 4 MiB on x86-64).
  • Supports transparent huge pages in the hypervisor.
  • Supports differing page sizes between host/guest.
  • Supports vNUMA.
  • Supports kexec-style reboots.
  • Supports device passthrough / vfio / mdev.
  • Supports detecting guests which don't support virtio-mem and guests which cannot address all memory provided by virtio-mem.
  • Allows for supporting huge and gigantic pages (WIP).
  • Allows for detecting malicious guests (DRAFT).
  • Provides a uniform, flexible, and easy-to-use mechanism across architectures (and hypervisors) to dynamically resize VM memory.

Important Current Limitations

Virtio-mem is still under heavy development. It is considered tech-preview. Testing is very welcome.

In addition, there is no guest operating system support except for Linux kernels >= 5.8.

Passthrough of virtio-mem-pci devices to nested VMs

Similar to virtio-pmem (and virtio-balloon), a virtio-mem device provided by the hypervisor (L0) to a VM (L1) is not designed to be passed by the VM (L1) to a nested VM (L2) - e.g., using vfio-pci in L1. Passthrough will not harm L1, but L2 will not work as expected.
