=====================
VFIO device Migration
=====================

Migration of a virtual machine involves saving the state of each device that
the guest is running on the source host and restoring this saved state on the
destination host. This document details how saving and restoring of VFIO
devices is done in QEMU.

Migration of VFIO devices currently consists of a single stop-and-copy phase.
During the stop-and-copy phase the guest is stopped and the entire VFIO device
data is transferred to the destination.

The pre-copy phase of migration is currently not supported for VFIO devices.
Support for VFIO pre-copy will be added later on.

Note that currently VFIO migration is supported only for a single device. This
is due to VFIO migration's lack of P2P support. However, P2P support is planned
to be added later on.

A detailed description of the UAPI for VFIO device migration can be found in
the comment for the ``vfio_device_mig_state`` structure in the header file
linux-headers/linux/vfio.h.
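
For orientation, an abridged sketch of those definitions is shown below; the
header file remains the authoritative reference::

   /* Abridged from linux-headers/linux/vfio.h; see the header for the full
    * definitions and the detailed state machine description. */

   enum vfio_device_mig_state {
           VFIO_DEVICE_STATE_ERROR = 0,
           VFIO_DEVICE_STATE_STOP = 1,
           VFIO_DEVICE_STATE_RUNNING = 2,
           VFIO_DEVICE_STATE_STOP_COPY = 3,
           VFIO_DEVICE_STATE_RESUMING = 4,
           VFIO_DEVICE_STATE_RUNNING_P2P = 5,
   };

   /* Payload of the VFIO_DEVICE_FEATURE_MIG_DEVICE_STATE feature, accessed
    * through the VFIO_DEVICE_FEATURE ioctl to get/set the device state. */
   struct vfio_device_feature_mig_state {
           __u32 device_state;     /* enum vfio_device_mig_state */
           __s32 data_fd;          /* fd for reading/writing migration data */
   };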

VFIO implements the device hooks for the iterative approach as follows
(a registration sketch follows the list):

* A ``save_setup`` function that sets up migration on the source.

* A ``load_setup`` function that sets the VFIO device on the destination in
  _RESUMING state.

* A ``state_pending_exact`` function that reads pending_bytes from the vendor
  driver, which indicates the amount of data that the vendor driver has yet to
  save for the VFIO device.

* A ``save_state`` function to save the device config space if it is present.

* A ``save_live_complete_precopy`` function that sets the VFIO device in
  _STOP_COPY state and iteratively copies the data for the VFIO device until
  the vendor driver indicates that no data remains.

* A ``load_state`` function that loads the config section and the data
  sections that are generated by the save functions above.

* ``cleanup`` functions for both save and load that perform any
  migration-related cleanup.
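
The sketch below illustrates how such hooks might be wired up and registered
with QEMU's migration core. It is hypothetical: the ``my_*`` callbacks are
placeholders, and the exact callback prototypes are defined in
``include/migration/register.h`` and have changed across QEMU versions::

   #include "qemu/osdep.h"
   #include "migration/register.h"

   /* Placeholder callbacks standing in for the real VFIO implementations. */
   static const SaveVMHandlers my_savevm_handlers = {
       .save_setup                 = my_save_setup,
       .load_setup                 = my_load_setup,
       .state_pending_exact        = my_state_pending_exact,
       .save_state                 = my_save_state,
       .save_live_complete_precopy = my_save_live_complete_precopy,
       .load_state                 = my_load_state,
       .save_cleanup               = my_save_cleanup,
       .load_cleanup               = my_load_cleanup,
   };

   static void my_migration_register(void *opaque)
   {
       /* Hook the handlers into the migration stream for this device. */
       register_savevm_live("my-device", 0 /* instance_id */,
                            1 /* version_id */, &my_savevm_handlers, opaque);
   }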


The VFIO migration code uses a VM state change handler to change the VFIO
device state when the VM state changes from running to not-running, and
vice versa.
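
A hypothetical sketch of that handler (the callback body is a placeholder;
the registration helper is declared in ``include/sysemu/runstate.h``)::

   #include "qemu/osdep.h"
   #include "sysemu/runstate.h"

   static void my_vm_state_change(void *opaque, bool running, RunState state)
   {
       /*
        * Transition the VFIO device to its _RUNNING state when the VM starts
        * running, and to its _STOP state when the VM stops.
        */
   }

   static void my_register_vm_state_handler(void *opaque)
   {
       qemu_add_vm_change_state_handler(my_vm_state_change, opaque);
   }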

Similarly, a migration state change handler is used to trigger a transition of
the VFIO device state when certain changes of the migration state occur. For
example, the VFIO device state is transitioned back to _RUNNING if the
migration failed or was canceled.
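
A hypothetical sketch of such a notifier, assuming the ``Notifier``-based
registration helper (the registration API and header layout have changed
across QEMU versions)::

   #include "qemu/osdep.h"
   #include "migration/misc.h"
   #include "migration/migration.h"

   static void my_migration_state_notifier(Notifier *notifier, void *data)
   {
       MigrationState *s = data;

       switch (s->state) {
       case MIGRATION_STATUS_CANCELLING:
       case MIGRATION_STATUS_CANCELLED:
       case MIGRATION_STATUS_FAILED:
           /* Transition the VFIO device back to the _RUNNING state. */
           break;
       default:
           break;
       }
   }

   static Notifier my_migration_state = {
       .notify = my_migration_state_notifier,
   };

   static void my_register_migration_notifier(void)
   {
       add_migration_state_change_notifier(&my_migration_state);
   }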

System memory dirty pages tracking
----------------------------------

The ``log_global_start`` and ``log_global_stop`` memory listener callbacks
inform the VFIO IOMMU module to start and stop dirty page tracking. The
``log_sync`` memory listener callback marks as dirty those system memory pages
that are used for DMA by the VFIO device. The dirty pages bitmap is queried
per container. All pages pinned by the vendor driver through external APIs
have to be marked as dirty during migration. When there are CPU writes, CPU
dirty page tracking can identify dirtied pages, but any page pinned by the
vendor driver can also be written by the device. There is currently no device
or IOMMU support for dirty page tracking in hardware.
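
A hypothetical sketch of such a listener (the callback bodies are placeholders;
the real implementation lives in ``hw/vfio/common.c``, and the callback
prototypes follow ``include/exec/memory.h`` of the QEMU version at hand)::

   #include "qemu/osdep.h"
   #include "exec/memory.h"
   #include "exec/address-spaces.h"

   static void my_log_global_start(MemoryListener *listener)
   {
       /* Ask the VFIO IOMMU backend to start dirty page tracking. */
   }

   static void my_log_global_stop(MemoryListener *listener)
   {
       /* Ask the VFIO IOMMU backend to stop dirty page tracking. */
   }

   static void my_log_sync(MemoryListener *listener,
                           MemoryRegionSection *section)
   {
       /*
        * Query the per-container dirty bitmap for this section and mark the
        * corresponding system memory pages as dirty.
        */
   }

   static MemoryListener my_dirty_tracking_listener = {
       .log_global_start = my_log_global_start,
       .log_global_stop  = my_log_global_stop,
       .log_sync         = my_log_sync,
   };

   static void my_register_listener(void)
   {
       memory_listener_register(&my_dirty_tracking_listener,
                                &address_space_memory);
   }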

By default, dirty pages are tracked during both the pre-copy and the
stop-and-copy phases, so a page pinned by the vendor driver will be copied to
the destination in both phases. Copying dirty pages in the pre-copy phase
helps QEMU predict whether it can meet its downtime tolerance: if QEMU keeps
finding dirty pages continuously during pre-copy, it can expect to keep
finding dirty pages during stop-and-copy as well and estimate the downtime
accordingly.

QEMU also provides a per-device opt-out option ``pre-copy-dirty-page-tracking``
which disables querying the dirty bitmap during the pre-copy phase. If it is
set to off, all dirty pages will be copied to the destination in the
stop-and-copy phase only.
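
For example, the option can be set per device on the QEMU command line; the
PCI address below is a placeholder, and depending on the QEMU version the
property may be exposed with an experimental ``x-`` prefix::

   -device vfio-pci,host=0000:03:00.0,pre-copy-dirty-page-tracking=off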

System memory dirty pages tracking when vIOMMU is enabled
---------------------------------------------------------

With vIOMMU, an IO virtual address range can get unmapped during the pre-copy
phase of migration. In that case, the unmap ioctl returns any dirty pages in
that range and QEMU reports the corresponding guest physical pages as dirty.
During the stop-and-copy phase, an IOMMU notifier is used to get a callback
for the mapped ranges, and the dirty pages bitmap is then fetched from the
VFIO IOMMU module for those ranges.
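
For reference, an abridged sketch of the unmap UAPI involved in the pre-copy
case (see ``linux-headers/linux/vfio.h`` for the authoritative definition)::

   /* Abridged from linux-headers/linux/vfio.h. */
   struct vfio_iommu_type1_dma_unmap {
           __u32   argsz;
           __u32   flags;
   #define VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP (1 << 0)
           __u64   iova;    /* IO virtual address */
           __u64   size;    /* Size of mapping (bytes) */
           __u8    data[];  /* Holds a struct vfio_bitmap when the
                               GET_DIRTY_BITMAP flag is set */
   };

   /*
    * With VFIO_DMA_UNMAP_FLAG_GET_DIRTY_BITMAP set, the VFIO_IOMMU_UNMAP_DMA
    * ioctl unmaps the IOVA range and reports which pages in it were dirtied
    * in a single call.
    */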

Flow of state changes during Live migration
===========================================

Below is the flow of state changes during live migration.
The values in brackets represent the VM state, the migration state, and
the VFIO device state, respectively.

Live migration save path
------------------------

::

                           QEMU normal running state
                           (RUNNING, _NONE, _RUNNING)
                                       |
                     migrate_init spawns migration_thread
              Migration thread then calls each device's .save_setup()
                           (RUNNING, _SETUP, _RUNNING)
                                       |
                          (RUNNING, _ACTIVE, _RUNNING)
          If device is active, get pending_bytes by .state_pending_exact()
         If total pending_bytes >= threshold_size, call .save_live_iterate()
        Iterate till total pending bytes converge and are less than threshold
                                       |
    On migration completion, vCPU stops and calls .save_live_complete_precopy for
    each active device. The VFIO device is then transitioned into _STOP_COPY state
                       (FINISH_MIGRATE, _DEVICE, _STOP_COPY)
                                       |
           For the VFIO device, iterate in .save_live_complete_precopy until
                                 pending data is 0
                          (FINISH_MIGRATE, _DEVICE, _STOP)
                                       |
                        (FINISH_MIGRATE, _COMPLETED, _STOP)
               Migration thread schedules cleanup bottom half and exits

Live migration resume path
--------------------------

::

               Incoming migration calls .load_setup for each device
                           (RESTORE_VM, _ACTIVE, _STOP)
                                       |
        For each device, .load_state is called for that device section data
                         (RESTORE_VM, _ACTIVE, _RESUMING)
                                       |
      At the end, .load_cleanup is called for each device and vCPUs are started
                           (RUNNING, _NONE, _RUNNING)

Postcopy
========

Postcopy migration is currently not supported for VFIO devices.