Merge tag 'gpu-pull-request' of https://gitlab.com/marcandre.lureau/qemu into staging
virtio-gpu rutabaga support
# -----BEGIN PGP SIGNATURE-----
#
# iQJQBAABCAA6FiEEh6m9kz+HxgbSdvYt2ujhCXWWnOUFAmUtP5YcHG1hcmNhbmRy
# ZS5sdXJlYXVAcmVkaGF0LmNvbQAKCRDa6OEJdZac5X9CD/4s1n/GZyDr9bh04V03
# otAqtq2CSyuUOviqBrqxYgraCosUD1AuX8WkDy5cCPtnKC4FxRjgVlm9s7K/yxOW
# xZ78e4oVgB1F3voOq6LgtKK6BRG/BPqNzq9kuGcayCHQbSxg7zZVwa702Y18r2ZD
# pjOhbZCrJTSfASL7C3e/rm7798Wk/hzSrClGR56fbRAVgQ6Lww2L97/g0nHyDsWK
# DrCBrdqFtKjpLeUHmcqqS4AwdpG2SyCgqE7RehH/wOhvGTxh/JQvHbLGWK2mDC3j
# Qvs8mClC5bUlyNQuUz7lZtXYpzCW6VGMWlz8bIu+ncgSt6RK1TRbdEfDJPGoS4w9
# ZCGgcTxTG/6BEO76J/VpydfTWDo1FwQCQ0Vv7EussGoRTLrFC3ZRFgDWpqCw85yi
# AjPtc0C49FHBZhK0l1CoJGV4gGTDtD9jTYN0ffsd+aQesOjcsgivAWBaCOOQWUc8
# KOv9sr4kLLxcnuCnP7p/PuVRQD4eg0TmpdS8bXfnCzLSH8fCm+n76LuJEpGxEBey
# 3KPJPj/1BNBgVgew+znSLD/EYM6YhdK2gF5SNrYsdR6UcFdrPED/xmdhzFBeVym/
# xbBWqicDw4HLn5YrJ4tzqXje5XUz5pmJoT5zrRMXTHiu4pjBkEXO/lOdAoFwSy8M
# WNOtmSyB69uCrbyLw6xE2/YX8Q==
# =5a/Z
# -----END PGP SIGNATURE-----
# gpg: Signature made Mon 16 Oct 2023 09:50:14 EDT
# gpg: using RSA key 87A9BD933F87C606D276F62DDAE8E10975969CE5
# gpg: issuer "marcandre.lureau@redhat.com"
# gpg: Good signature from "Marc-André Lureau <marcandre.lureau@redhat.com>" [full]
# gpg: aka "Marc-André Lureau <marcandre.lureau@gmail.com>" [full]
# Primary key fingerprint: 87A9 BD93 3F87 C606 D276 F62D DAE8 E109 7596 9CE5
* tag 'gpu-pull-request' of https://gitlab.com/marcandre.lureau/qemu:
docs/system: add basic virtio-gpu documentation
gfxstream + rutabaga: enable rutabaga
gfxstream + rutabaga: meson support
gfxstream + rutabaga: add initial support for gfxstream
gfxstream + rutabaga prep: added needed definitions, fields, and options
virtio-gpu: blob prep
virtio-gpu: hostmem
virtio-gpu: CONTEXT_INIT feature
virtio: Add shared memory capability
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
diff --git a/docs/system/device-emulation.rst b/docs/system/device-emulation.rst
index 4491c4c..1167f3a 100644
--- a/docs/system/device-emulation.rst
+++ b/docs/system/device-emulation.rst
@@ -91,6 +91,7 @@
devices/nvme.rst
devices/usb.rst
devices/vhost-user.rst
+ devices/virtio-gpu.rst
devices/virtio-pmem.rst
devices/vhost-user-rng.rst
devices/canokey.rst
diff --git a/docs/system/devices/virtio-gpu.rst b/docs/system/devices/virtio-gpu.rst
new file mode 100644
index 0000000..cb73dd7
--- /dev/null
+++ b/docs/system/devices/virtio-gpu.rst
@@ -0,0 +1,112 @@
+..
+ SPDX-License-Identifier: GPL-2.0-or-later
+
+virtio-gpu
+==========
+
+This document explains the setup and usage of the virtio-gpu device.
+The virtio-gpu device paravirtualizes the GPU and display controller.
+
+Linux kernel support
+--------------------
+
+virtio-gpu requires a guest Linux kernel built with the
+``CONFIG_DRM_VIRTIO_GPU`` option.
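+
+One way to check from inside the guest (the kernel config file location
+varies by distribution) is:
+
+.. parsed-literal::
+    grep CONFIG_DRM_VIRTIO_GPU /boot/config-"$(uname -r)"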
+
+QEMU virtio-gpu variants
+------------------------
+
+QEMU virtio-gpu device variants come in the following form:
+
+ * ``virtio-vga[-BACKEND]``
+ * ``virtio-gpu[-BACKEND][-INTERFACE]``
+ * ``vhost-user-vga``
+ * ``vhost-user-pci``
+
+**Backends:** QEMU provides a 2D virtio-gpu backend, and two accelerated
+backends: virglrenderer ('gl' device label) and rutabaga_gfx ('rutabaga'
+device label). There is a vhost-user backend that runs the graphics stack
+in a separate process for improved isolation.
+
+**Interfaces:** QEMU further categorizes virtio-gpu device variants based
+on the interface exposed to the guest. The interfaces can be classified
+into VGA and non-VGA variants. The VGA ones are prefixed with virtio-vga
+or vhost-user-vga while the non-VGA ones are prefixed with virtio-gpu or
+vhost-user-gpu.
+
+The VGA ones always use the PCI interface, but for the non-VGA ones, the
+user can further pick between MMIO or PCI. For MMIO, the user can suffix
+the device name with -device, though vhost-user-gpu does not support MMIO.
+For PCI, the user can suffix it with -pci. Without these suffixes, the
+platform default will be chosen.
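+
+As an illustration, plausible device names following this scheme (check
+``-device help`` output for what your build actually provides) include:
+
+.. parsed-literal::
+    -device virtio-gpu-gl-pci
+    -device virtio-gpu-device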
+
+virtio-gpu 2d
+-------------
+
+The default 2D backend only performs 2D operations. The guest needs to
+employ a software renderer for 3D graphics.
+
+Typically, the software renderer is provided by `Mesa`_ or `SwiftShader`_.
+Mesa's implementations (LLVMpipe, Lavapipe and virgl below) work out of the box
+on typical modern Linux distributions.
+
+.. parsed-literal::
+ -device virtio-gpu
+
+.. _Mesa: https://www.mesa3d.org/
+.. _SwiftShader: https://github.com/google/swiftshader
+
+virtio-gpu virglrenderer
+------------------------
+
+When using virgl accelerated graphics mode in the guest, OpenGL API calls
+are translated into an intermediate representation (see `Gallium3D`_). The
+intermediate representation is communicated to the host and the
+`virglrenderer`_ library on the host translates the intermediate
+representation back to OpenGL API calls.
+
+.. parsed-literal::
+ -device virtio-gpu-gl
+
+.. _Gallium3D: https://www.freedesktop.org/wiki/Software/gallium/
+.. _virglrenderer: https://gitlab.freedesktop.org/virgl/virglrenderer/
+
+virtio-gpu rutabaga
+-------------------
+
+virtio-gpu can also leverage rutabaga_gfx to provide `gfxstream`_
+rendering and `Wayland display passthrough`_. With the gfxstream rendering
+mode, GLES and Vulkan calls are forwarded to the host with minimal
+modification.
+
+The crosvm book provides directions on how to build a `gfxstream-enabled
+rutabaga`_ and launch a `guest Wayland proxy`_.
+
+This device requires host blob support (``hostmem`` field below). The
+``hostmem`` field specifies the size of the virtio-gpu host memory window,
+typically between 256M and 8G.
+
+At least one virtio-gpu capability set ("capset") must be specified when
+starting the device. The capsets currently supported are ``gfxstream-vulkan``
+and ``cross-domain`` for Linux guests. For Android guests, the experimental
+``x-gfxstream-gles`` and ``x-gfxstream-composer`` capsets are also supported.
+
+The device will try to auto-detect the Wayland socket path if the
+``cross-domain`` capset name is set. The user may optionally specify
+``wayland-socket-path`` for non-standard paths.
+
+The ``wsi`` option can be set to ``surfaceless`` or ``headless``.
+Surfaceless doesn't create a native window surface, but does copy from the
+render target to the Pixman buffer if a virtio-gpu 2D hypercall is issued.
+Headless is like surfaceless, but doesn't copy to the Pixman buffer.
+Surfaceless is the default if ``wsi`` is not specified.
+
+.. parsed-literal::
+ -device virtio-gpu-rutabaga,gfxstream-vulkan=on,cross-domain=on,
+ hostmem=8G,wayland-socket-path=/tmp/nonstandard/mock_wayland.sock,
+ wsi=headless
+
+.. _gfxstream: https://android.googlesource.com/platform/hardware/google/gfxstream/
+.. _Wayland display passthrough: https://www.youtube.com/watch?v=OZJiHMtIQ2M
+.. _gfxstream-enabled rutabaga: https://crosvm.dev/book/appendix/rutabaga_gfx.html
+.. _guest Wayland proxy: https://crosvm.dev/book/devices/wayland.html
diff --git a/hw/display/meson.build b/hw/display/meson.build
index 05619c6..2b64fd9 100644
--- a/hw/display/meson.build
+++ b/hw/display/meson.build
@@ -80,6 +80,13 @@
if_true: [files('virtio-gpu-gl.c', 'virtio-gpu-virgl.c'), pixman, virgl])
hw_display_modules += {'virtio-gpu-gl': virtio_gpu_gl_ss}
endif
+
+ if rutabaga.found()
+ virtio_gpu_rutabaga_ss = ss.source_set()
+ virtio_gpu_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', rutabaga],
+ if_true: [files('virtio-gpu-rutabaga.c'), pixman])
+ hw_display_modules += {'virtio-gpu-rutabaga': virtio_gpu_rutabaga_ss}
+ endif
endif
if config_all_devices.has_key('CONFIG_VIRTIO_PCI')
@@ -96,6 +103,12 @@
if_true: [files('virtio-gpu-pci-gl.c'), pixman])
hw_display_modules += {'virtio-gpu-pci-gl': virtio_gpu_pci_gl_ss}
endif
+ if rutabaga.found()
+ virtio_gpu_pci_rutabaga_ss = ss.source_set()
+ virtio_gpu_pci_rutabaga_ss.add(when: ['CONFIG_VIRTIO_GPU', 'CONFIG_VIRTIO_PCI', rutabaga],
+ if_true: [files('virtio-gpu-pci-rutabaga.c'), pixman])
+ hw_display_modules += {'virtio-gpu-pci-rutabaga': virtio_gpu_pci_rutabaga_ss}
+ endif
endif
if config_all_devices.has_key('CONFIG_VIRTIO_VGA')
@@ -114,6 +127,15 @@
virtio_vga_gl_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
if_false: files('acpi-vga-stub.c'))
hw_display_modules += {'virtio-vga-gl': virtio_vga_gl_ss}
+
+ if rutabaga.found()
+ virtio_vga_rutabaga_ss = ss.source_set()
+ virtio_vga_rutabaga_ss.add(when: ['CONFIG_VIRTIO_VGA', rutabaga],
+ if_true: [files('virtio-vga-rutabaga.c'), pixman])
+ virtio_vga_rutabaga_ss.add(when: 'CONFIG_ACPI', if_true: files('acpi-vga.c'),
+ if_false: files('acpi-vga-stub.c'))
+ hw_display_modules += {'virtio-vga-rutabaga': virtio_vga_rutabaga_ss}
+ endif
endif
system_ss.add(when: 'CONFIG_OMAP', if_true: files('omap_lcdc.c'))
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index ca1fb7b..50c5373 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -223,7 +223,8 @@
{
VirtIOGPUBase *g = VIRTIO_GPU_BASE(vdev);
- if (virtio_gpu_virgl_enabled(g->conf)) {
+ if (virtio_gpu_virgl_enabled(g->conf) ||
+ virtio_gpu_rutabaga_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_VIRGL);
}
if (virtio_gpu_edid_enabled(g->conf)) {
@@ -232,6 +233,9 @@
if (virtio_gpu_blob_enabled(g->conf)) {
features |= (1 << VIRTIO_GPU_F_RESOURCE_BLOB);
}
+ if (virtio_gpu_context_init_enabled(g->conf)) {
+ features |= (1 << VIRTIO_GPU_F_CONTEXT_INIT);
+ }
return features;
}
diff --git a/hw/display/virtio-gpu-pci-rutabaga.c b/hw/display/virtio-gpu-pci-rutabaga.c
new file mode 100644
index 0000000..c96729e
--- /dev/null
+++ b/hw/display/virtio-gpu-pci-rutabaga.c
@@ -0,0 +1,47 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-bus.h"
+#include "hw/virtio/virtio-gpu-pci.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_GPU_RUTABAGA_PCI "virtio-gpu-rutabaga-pci"
+OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPURutabagaPCI, VIRTIO_GPU_RUTABAGA_PCI)
+
+struct VirtIOGPURutabagaPCI {
+ VirtIOGPUPCIBase parent_obj;
+
+ VirtIOGPURutabaga vdev;
+};
+
+static void virtio_gpu_rutabaga_initfn(Object *obj)
+{
+ VirtIOGPURutabagaPCI *dev = VIRTIO_GPU_RUTABAGA_PCI(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VIRTIO_GPU_RUTABAGA);
+ VIRTIO_GPU_PCI_BASE(obj)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static const TypeInfo virtio_gpu_rutabaga_pci_info[] = {
+ {
+ .name = TYPE_VIRTIO_GPU_RUTABAGA_PCI,
+ .parent = TYPE_VIRTIO_GPU_PCI_BASE,
+ .instance_size = sizeof(VirtIOGPURutabagaPCI),
+ .instance_init = virtio_gpu_rutabaga_initfn,
+ .interfaces = (InterfaceInfo[]) {
+ { INTERFACE_CONVENTIONAL_PCI_DEVICE },
+ }
+ },
+};
+
+DEFINE_TYPES(virtio_gpu_rutabaga_pci_info)
+
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA_PCI);
+module_kconfig(VIRTIO_PCI);
+module_dep("hw-display-virtio-gpu-pci");
diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index 93f214f..da6a99f 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -33,6 +33,20 @@
DeviceState *vdev = DEVICE(g);
int i;
+ if (virtio_gpu_hostmem_enabled(g->conf)) {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
virtio_pci_force_virtio_1(vpci_dev);
if (!qdev_realize(vdev, BUS(&vpci_dev->bus), errp)) {
return;
diff --git a/hw/display/virtio-gpu-rutabaga.c b/hw/display/virtio-gpu-rutabaga.c
new file mode 100644
index 0000000..9e67f9b
--- /dev/null
+++ b/hw/display/virtio-gpu-rutabaga.c
@@ -0,0 +1,1120 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "qapi/error.h"
+#include "qemu/error-report.h"
+#include "qemu/iov.h"
+#include "trace.h"
+#include "hw/virtio/virtio.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/virtio/virtio-gpu-pixman.h"
+#include "hw/virtio/virtio-iommu.h"
+
+#include <glib/gmem.h>
+#include <rutabaga_gfx/rutabaga_gfx_ffi.h>
+
+#define CHECK(condition, cmd) \
+ do { \
+ if (!(condition)) { \
+ error_report("CHECK failed in %s() %s:" "%d", __func__, \
+ __FILE__, __LINE__); \
+ (cmd)->error = VIRTIO_GPU_RESP_ERR_UNSPEC; \
+ return; \
+ } \
+ } while (0)
+
+struct rutabaga_aio_data {
+ struct VirtIOGPURutabaga *vr;
+ struct rutabaga_fence fence;
+};
+
+static void
+virtio_gpu_rutabaga_update_cursor(VirtIOGPU *g, struct virtio_gpu_scanout *s,
+ uint32_t resource_id)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_transfer transfer = { 0 };
+ struct iovec transfer_iovec;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ res = virtio_gpu_find_resource(g, resource_id);
+ if (!res) {
+ return;
+ }
+
+ if (res->width != s->current_cursor->width ||
+ res->height != s->current_cursor->height) {
+ return;
+ }
+
+ transfer.x = 0;
+ transfer.y = 0;
+ transfer.z = 0;
+ transfer.w = res->width;
+ transfer.h = res->height;
+ transfer.d = 1;
+
+ transfer_iovec.iov_base = s->current_cursor->data;
+ transfer_iovec.iov_len = res->width * res->height * 4;
+
+ rutabaga_resource_transfer_read(vr->rutabaga, 0,
+ resource_id, &transfer,
+ &transfer_iovec);
+}
+
+static void
+virtio_gpu_rutabaga_gl_flushed(VirtIOGPUBase *b)
+{
+ VirtIOGPU *g = VIRTIO_GPU(b);
+ virtio_gpu_process_cmdq(g);
+}
+
+static void
+rutabaga_cmd_create_resource_2d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_create_3d rc_3d = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_create_2d c2d;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(c2d);
+ trace_virtio_gpu_cmd_res_create_2d(c2d.resource_id, c2d.format,
+ c2d.width, c2d.height);
+
+ rc_3d.target = 2;
+ rc_3d.format = c2d.format;
+ rc_3d.bind = (1 << 1);
+ rc_3d.width = c2d.width;
+ rc_3d.height = c2d.height;
+ rc_3d.depth = 1;
+ rc_3d.array_size = 1;
+ rc_3d.last_level = 0;
+ rc_3d.nr_samples = 0;
+ rc_3d.flags = VIRTIO_GPU_RESOURCE_FLAG_Y_0_TOP;
+
+ result = rutabaga_resource_create_3d(vr->rutabaga, c2d.resource_id, &rc_3d);
+ CHECK(!result, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+ res->width = c2d.width;
+ res->height = c2d.height;
+ res->format = c2d.format;
+ res->resource_id = c2d.resource_id;
+
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_create_resource_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_create_3d rc_3d = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_create_3d c3d;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(c3d);
+
+ trace_virtio_gpu_cmd_res_create_3d(c3d.resource_id, c3d.format,
+ c3d.width, c3d.height, c3d.depth);
+
+ rc_3d.target = c3d.target;
+ rc_3d.format = c3d.format;
+ rc_3d.bind = c3d.bind;
+ rc_3d.width = c3d.width;
+ rc_3d.height = c3d.height;
+ rc_3d.depth = c3d.depth;
+ rc_3d.array_size = c3d.array_size;
+ rc_3d.last_level = c3d.last_level;
+ rc_3d.nr_samples = c3d.nr_samples;
+ rc_3d.flags = c3d.flags;
+
+ result = rutabaga_resource_create_3d(vr->rutabaga, c3d.resource_id, &rc_3d);
+ CHECK(!result, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+ res->width = c3d.width;
+ res->height = c3d.height;
+ res->format = c3d.format;
+ res->resource_id = c3d.resource_id;
+
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+}
+
+static void
+rutabaga_cmd_resource_unref(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_unref unref;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(unref);
+
+ trace_virtio_gpu_cmd_res_unref(unref.resource_id);
+
+ res = virtio_gpu_find_resource(g, unref.resource_id);
+ CHECK(res, cmd);
+
+ result = rutabaga_resource_unref(vr->rutabaga, unref.resource_id);
+ CHECK(!result, cmd);
+
+ if (res->image) {
+ pixman_image_unref(res->image);
+ }
+
+ QTAILQ_REMOVE(&g->reslist, res, next);
+ g_free(res);
+}
+
+static void
+rutabaga_cmd_context_create(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_create cc;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cc);
+ trace_virtio_gpu_cmd_ctx_create(cc.hdr.ctx_id,
+ cc.debug_name);
+
+ result = rutabaga_context_create(vr->rutabaga, cc.hdr.ctx_id,
+ cc.context_init, cc.debug_name, cc.nlen);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_context_destroy(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_destroy cd;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cd);
+ trace_virtio_gpu_cmd_ctx_destroy(cd.hdr.ctx_id);
+
+ result = rutabaga_context_destroy(vr->rutabaga, cd.hdr.ctx_id);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_resource_flush(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result, i;
+ struct virtio_gpu_scanout *scanout = NULL;
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_transfer transfer = { 0 };
+ struct iovec transfer_iovec;
+ struct virtio_gpu_resource_flush rf;
+ bool found = false;
+
+ VirtIOGPUBase *vb = VIRTIO_GPU_BASE(g);
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ if (vr->headless) {
+ return;
+ }
+
+ VIRTIO_GPU_FILL_CMD(rf);
+ trace_virtio_gpu_cmd_res_flush(rf.resource_id,
+ rf.r.width, rf.r.height, rf.r.x, rf.r.y);
+
+ res = virtio_gpu_find_resource(g, rf.resource_id);
+ CHECK(res, cmd);
+
+ for (i = 0; i < vb->conf.max_outputs; i++) {
+ scanout = &vb->scanout[i];
+ if (i == res->scanout_bitmask) {
+ found = true;
+ break;
+ }
+ }
+
+ if (!found) {
+ return;
+ }
+
+ transfer.x = 0;
+ transfer.y = 0;
+ transfer.z = 0;
+ transfer.w = res->width;
+ transfer.h = res->height;
+ transfer.d = 1;
+
+ transfer_iovec.iov_base = pixman_image_get_data(res->image);
+ transfer_iovec.iov_len = res->width * res->height * 4;
+
+ result = rutabaga_resource_transfer_read(vr->rutabaga, 0,
+ rf.resource_id, &transfer,
+ &transfer_iovec);
+ CHECK(!result, cmd);
+ dpy_gfx_update_full(scanout->con);
+}
+
+static void
+rutabaga_cmd_set_scanout(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_scanout *scanout = NULL;
+ struct virtio_gpu_set_scanout ss;
+
+ VirtIOGPUBase *vb = VIRTIO_GPU_BASE(g);
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ if (vr->headless) {
+ return;
+ }
+
+ VIRTIO_GPU_FILL_CMD(ss);
+ trace_virtio_gpu_cmd_set_scanout(ss.scanout_id, ss.resource_id,
+ ss.r.width, ss.r.height, ss.r.x, ss.r.y);
+
+ CHECK(ss.scanout_id < VIRTIO_GPU_MAX_SCANOUTS, cmd);
+ scanout = &vb->scanout[ss.scanout_id];
+
+ if (ss.resource_id == 0) {
+ dpy_gfx_replace_surface(scanout->con, NULL);
+ dpy_gl_scanout_disable(scanout->con);
+ return;
+ }
+
+ res = virtio_gpu_find_resource(g, ss.resource_id);
+ CHECK(res, cmd);
+
+ if (!res->image) {
+ pixman_format_code_t pformat;
+ pformat = virtio_gpu_get_pixman_format(res->format);
+ CHECK(pformat, cmd);
+
+ res->image = pixman_image_create_bits(pformat,
+ res->width,
+ res->height,
+ NULL, 0);
+ CHECK(res->image, cmd);
+ pixman_image_ref(res->image);
+ }
+
+ vb->enable = 1;
+
+ /* realloc the surface ptr */
+ scanout->ds = qemu_create_displaysurface_pixman(res->image);
+ dpy_gfx_replace_surface(scanout->con, NULL);
+ dpy_gfx_replace_surface(scanout->con, scanout->ds);
+ res->scanout_bitmask = ss.scanout_id;
+}
+
+static void
+rutabaga_cmd_submit_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_cmd_submit cs;
+ struct rutabaga_command rutabaga_cmd = { 0 };
+ g_autofree uint8_t *buf = NULL;
+ size_t s;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cs);
+ trace_virtio_gpu_cmd_ctx_submit(cs.hdr.ctx_id, cs.size);
+
+ buf = g_new0(uint8_t, cs.size);
+ s = iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num,
+ sizeof(cs), buf, cs.size);
+ CHECK(s == cs.size, cmd);
+
+ rutabaga_cmd.ctx_id = cs.hdr.ctx_id;
+ rutabaga_cmd.cmd = buf;
+ rutabaga_cmd.cmd_size = cs.size;
+
+ result = rutabaga_submit_command(vr->rutabaga, &rutabaga_cmd);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_2d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_to_host_2d t2d;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t2d);
+ trace_virtio_gpu_cmd_res_xfer_toh_2d(t2d.resource_id);
+
+ transfer.x = t2d.r.x;
+ transfer.y = t2d.r.y;
+ transfer.z = 0;
+ transfer.w = t2d.r.width;
+ transfer.h = t2d.r.height;
+ transfer.d = 1;
+
+ result = rutabaga_resource_transfer_write(vr->rutabaga, 0, t2d.resource_id,
+ &transfer);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_to_host_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_host_3d t3d;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t3d);
+ trace_virtio_gpu_cmd_res_xfer_toh_3d(t3d.resource_id);
+
+ transfer.x = t3d.box.x;
+ transfer.y = t3d.box.y;
+ transfer.z = t3d.box.z;
+ transfer.w = t3d.box.w;
+ transfer.h = t3d.box.h;
+ transfer.d = t3d.box.d;
+ transfer.level = t3d.level;
+ transfer.stride = t3d.stride;
+ transfer.layer_stride = t3d.layer_stride;
+ transfer.offset = t3d.offset;
+
+ result = rutabaga_resource_transfer_write(vr->rutabaga, t3d.hdr.ctx_id,
+ t3d.resource_id, &transfer);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_transfer_from_host_3d(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct rutabaga_transfer transfer = { 0 };
+ struct virtio_gpu_transfer_host_3d t3d;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(t3d);
+ trace_virtio_gpu_cmd_res_xfer_fromh_3d(t3d.resource_id);
+
+ transfer.x = t3d.box.x;
+ transfer.y = t3d.box.y;
+ transfer.z = t3d.box.z;
+ transfer.w = t3d.box.w;
+ transfer.h = t3d.box.h;
+ transfer.d = t3d.box.d;
+ transfer.level = t3d.level;
+ transfer.stride = t3d.stride;
+ transfer.layer_stride = t3d.layer_stride;
+ transfer.offset = t3d.offset;
+
+ result = rutabaga_resource_transfer_read(vr->rutabaga, t3d.hdr.ctx_id,
+ t3d.resource_id, &transfer, NULL);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_attach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct rutabaga_iovecs vecs = { 0 };
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_attach_backing att_rb;
+ int ret;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(att_rb);
+ trace_virtio_gpu_cmd_res_back_attach(att_rb.resource_id);
+
+ res = virtio_gpu_find_resource(g, att_rb.resource_id);
+ CHECK(res, cmd);
+ CHECK(!res->iov, cmd);
+
+ ret = virtio_gpu_create_mapping_iov(g, att_rb.nr_entries, sizeof(att_rb),
+ cmd, NULL, &res->iov, &res->iov_cnt);
+ CHECK(!ret, cmd);
+
+ vecs.iovecs = res->iov;
+ vecs.num_iovecs = res->iov_cnt;
+
+ ret = rutabaga_resource_attach_backing(vr->rutabaga, att_rb.resource_id,
+ &vecs);
+ if (ret != 0) {
+ virtio_gpu_cleanup_mapping(g, res);
+ }
+
+ CHECK(!ret, cmd);
+}
+
+static void
+rutabaga_cmd_detach_backing(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_detach_backing detach_rb;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(detach_rb);
+ trace_virtio_gpu_cmd_res_back_detach(detach_rb.resource_id);
+
+ res = virtio_gpu_find_resource(g, detach_rb.resource_id);
+ CHECK(res, cmd);
+
+ rutabaga_resource_detach_backing(vr->rutabaga,
+ detach_rb.resource_id);
+
+ virtio_gpu_cleanup_mapping(g, res);
+}
+
+static void
+rutabaga_cmd_ctx_attach_resource(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_resource att_res;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(att_res);
+ trace_virtio_gpu_cmd_ctx_res_attach(att_res.hdr.ctx_id,
+ att_res.resource_id);
+
+ result = rutabaga_context_attach_resource(vr->rutabaga, att_res.hdr.ctx_id,
+ att_res.resource_id);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_ctx_detach_resource(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_ctx_resource det_res;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(det_res);
+ trace_virtio_gpu_cmd_ctx_res_detach(det_res.hdr.ctx_id,
+ det_res.resource_id);
+
+ result = rutabaga_context_detach_resource(vr->rutabaga, det_res.hdr.ctx_id,
+ det_res.resource_id);
+ CHECK(!result, cmd);
+}
+
+static void
+rutabaga_cmd_get_capset_info(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_get_capset_info info;
+ struct virtio_gpu_resp_capset_info resp;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(info);
+
+ result = rutabaga_get_capset_info(vr->rutabaga, info.capset_index,
+ &resp.capset_id, &resp.capset_max_version,
+ &resp.capset_max_size);
+ CHECK(!result, cmd);
+
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_CAPSET_INFO;
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_get_capset(VirtIOGPU *g, struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ struct virtio_gpu_get_capset gc;
+ struct virtio_gpu_resp_capset *resp;
+ uint32_t capset_size, capset_version;
+ uint32_t current_id, i;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(gc);
+ for (i = 0; i < vr->num_capsets; i++) {
+ result = rutabaga_get_capset_info(vr->rutabaga, i,
+                                          &current_id, &capset_version,
+ &capset_size);
+ CHECK(!result, cmd);
+
+ if (current_id == gc.capset_id) {
+ break;
+ }
+ }
+
+ CHECK(i < vr->num_capsets, cmd);
+
+ resp = g_malloc0(sizeof(*resp) + capset_size);
+ resp->hdr.type = VIRTIO_GPU_RESP_OK_CAPSET;
+ rutabaga_get_capset(vr->rutabaga, gc.capset_id, gc.capset_version,
+ resp->capset_data, capset_size);
+
+ virtio_gpu_ctrl_response(g, cmd, &resp->hdr, sizeof(*resp) + capset_size);
+ g_free(resp);
+}
+
+static void
+rutabaga_cmd_resource_create_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int result;
+ struct rutabaga_iovecs vecs = { 0 };
+ g_autofree struct virtio_gpu_simple_resource *res = NULL;
+ struct virtio_gpu_resource_create_blob cblob;
+ struct rutabaga_create_blob rc_blob = { 0 };
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cblob);
+ trace_virtio_gpu_cmd_res_create_blob(cblob.resource_id, cblob.size);
+
+ CHECK(cblob.resource_id != 0, cmd);
+
+ res = g_new0(struct virtio_gpu_simple_resource, 1);
+
+ res->resource_id = cblob.resource_id;
+ res->blob_size = cblob.size;
+
+ if (cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+ result = virtio_gpu_create_mapping_iov(g, cblob.nr_entries,
+ sizeof(cblob), cmd, &res->addrs,
+ &res->iov, &res->iov_cnt);
+ CHECK(!result, cmd);
+ }
+
+ rc_blob.blob_id = cblob.blob_id;
+ rc_blob.blob_mem = cblob.blob_mem;
+ rc_blob.blob_flags = cblob.blob_flags;
+ rc_blob.size = cblob.size;
+
+ vecs.iovecs = res->iov;
+ vecs.num_iovecs = res->iov_cnt;
+
+ result = rutabaga_resource_create_blob(vr->rutabaga, cblob.hdr.ctx_id,
+ cblob.resource_id, &rc_blob, &vecs,
+ NULL);
+
+ if (result && cblob.blob_mem != VIRTIO_GPU_BLOB_MEM_HOST3D) {
+ virtio_gpu_cleanup_mapping(g, res);
+ }
+
+ CHECK(!result, cmd);
+
+ QTAILQ_INSERT_HEAD(&g->reslist, res, next);
+ res = NULL;
+}
+
+static void
+rutabaga_cmd_resource_map_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ uint32_t map_info = 0;
+ uint32_t slot = 0;
+ struct virtio_gpu_simple_resource *res;
+ struct rutabaga_mapping mapping = { 0 };
+ struct virtio_gpu_resource_map_blob mblob;
+ struct virtio_gpu_resp_map_info resp = { 0 };
+
+ VirtIOGPUBase *vb = VIRTIO_GPU_BASE(g);
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(mblob);
+
+ CHECK(mblob.resource_id != 0, cmd);
+
+ res = virtio_gpu_find_resource(g, mblob.resource_id);
+ CHECK(res, cmd);
+
+ result = rutabaga_resource_map_info(vr->rutabaga, mblob.resource_id,
+ &map_info);
+ CHECK(!result, cmd);
+
+ /*
+ * RUTABAGA_MAP_ACCESS_* flags are not part of the virtio-gpu spec, but do
+ * exist to potentially allow the hypervisor to restrict write access to
+ * memory. QEMU does not need to use this functionality at the moment.
+ */
+ resp.map_info = map_info & RUTABAGA_MAP_CACHE_MASK;
+
+ result = rutabaga_resource_map(vr->rutabaga, mblob.resource_id, &mapping);
+ CHECK(!result, cmd);
+
+ /*
+ * There is small risk of the MemoryRegion dereferencing the pointer after
+ * rutabaga unmaps it. Please see discussion here:
+ *
+ * https://lists.gnu.org/archive/html/qemu-devel/2023-09/msg05141.html
+ *
+ * It is highly unlikely to happen in practice and doesn't affect known
+ * use cases. However, it should be fixed and is noted here for posterity.
+ */
+ for (slot = 0; slot < MAX_SLOTS; slot++) {
+ if (vr->memory_regions[slot].used) {
+ continue;
+ }
+
+ MemoryRegion *mr = &(vr->memory_regions[slot].mr);
+ memory_region_init_ram_ptr(mr, OBJECT(vr), "blob", mapping.size,
+ mapping.ptr);
+ memory_region_add_subregion(&vb->hostmem, mblob.offset, mr);
+ vr->memory_regions[slot].resource_id = mblob.resource_id;
+ vr->memory_regions[slot].used = 1;
+ break;
+ }
+
+ if (slot >= MAX_SLOTS) {
+ result = rutabaga_resource_unmap(vr->rutabaga, mblob.resource_id);
+ CHECK(!result, cmd);
+ }
+
+ CHECK(slot < MAX_SLOTS, cmd);
+
+ resp.hdr.type = VIRTIO_GPU_RESP_OK_MAP_INFO;
+ virtio_gpu_ctrl_response(g, cmd, &resp.hdr, sizeof(resp));
+}
+
+static void
+rutabaga_cmd_resource_unmap_blob(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ int32_t result;
+ uint32_t slot = 0;
+ struct virtio_gpu_simple_resource *res;
+ struct virtio_gpu_resource_unmap_blob ublob;
+
+ VirtIOGPUBase *vb = VIRTIO_GPU_BASE(g);
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(ublob);
+
+ CHECK(ublob.resource_id != 0, cmd);
+
+ res = virtio_gpu_find_resource(g, ublob.resource_id);
+ CHECK(res, cmd);
+
+ for (slot = 0; slot < MAX_SLOTS; slot++) {
+ if (vr->memory_regions[slot].resource_id != ublob.resource_id) {
+ continue;
+ }
+
+ MemoryRegion *mr = &(vr->memory_regions[slot].mr);
+ memory_region_del_subregion(&vb->hostmem, mr);
+
+ vr->memory_regions[slot].resource_id = 0;
+ vr->memory_regions[slot].used = 0;
+ break;
+ }
+
+ CHECK(slot < MAX_SLOTS, cmd);
+ result = rutabaga_resource_unmap(vr->rutabaga, res->resource_id);
+ CHECK(!result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_process_cmd(VirtIOGPU *g,
+ struct virtio_gpu_ctrl_command *cmd)
+{
+ struct rutabaga_fence fence = { 0 };
+ int32_t result;
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ VIRTIO_GPU_FILL_CMD(cmd->cmd_hdr);
+
+ switch (cmd->cmd_hdr.type) {
+ case VIRTIO_GPU_CMD_CTX_CREATE:
+ rutabaga_cmd_context_create(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_DESTROY:
+ rutabaga_cmd_context_destroy(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_2D:
+ rutabaga_cmd_create_resource_2d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_3D:
+ rutabaga_cmd_create_resource_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_SUBMIT_3D:
+ rutabaga_cmd_submit_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_2D:
+ rutabaga_cmd_transfer_to_host_2d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_TO_HOST_3D:
+ rutabaga_cmd_transfer_to_host_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_TRANSFER_FROM_HOST_3D:
+ rutabaga_cmd_transfer_from_host_3d(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_ATTACH_BACKING:
+ rutabaga_cmd_attach_backing(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_DETACH_BACKING:
+ rutabaga_cmd_detach_backing(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_SET_SCANOUT:
+ rutabaga_cmd_set_scanout(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_FLUSH:
+ rutabaga_cmd_resource_flush(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNREF:
+ rutabaga_cmd_resource_unref(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_ATTACH_RESOURCE:
+ rutabaga_cmd_ctx_attach_resource(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_CTX_DETACH_RESOURCE:
+ rutabaga_cmd_ctx_detach_resource(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_CAPSET_INFO:
+ rutabaga_cmd_get_capset_info(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_CAPSET:
+ rutabaga_cmd_get_capset(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_DISPLAY_INFO:
+ virtio_gpu_get_display_info(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_GET_EDID:
+ virtio_gpu_get_edid(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_CREATE_BLOB:
+ rutabaga_cmd_resource_create_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_MAP_BLOB:
+ rutabaga_cmd_resource_map_blob(g, cmd);
+ break;
+ case VIRTIO_GPU_CMD_RESOURCE_UNMAP_BLOB:
+ rutabaga_cmd_resource_unmap_blob(g, cmd);
+ break;
+ default:
+ cmd->error = VIRTIO_GPU_RESP_ERR_UNSPEC;
+ break;
+ }
+
+ if (cmd->finished) {
+ return;
+ }
+ if (cmd->error) {
+ error_report("%s: ctrl 0x%x, error 0x%x", __func__,
+ cmd->cmd_hdr.type, cmd->error);
+ virtio_gpu_ctrl_response_nodata(g, cmd, cmd->error);
+ return;
+ }
+ if (!(cmd->cmd_hdr.flags & VIRTIO_GPU_FLAG_FENCE)) {
+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+ return;
+ }
+
+ fence.flags = cmd->cmd_hdr.flags;
+ fence.ctx_id = cmd->cmd_hdr.ctx_id;
+ fence.fence_id = cmd->cmd_hdr.fence_id;
+ fence.ring_idx = cmd->cmd_hdr.ring_idx;
+
+ trace_virtio_gpu_fence_ctrl(cmd->cmd_hdr.fence_id, cmd->cmd_hdr.type);
+
+ result = rutabaga_create_fence(vr->rutabaga, &fence);
+ CHECK(!result, cmd);
+}
+
+static void
+virtio_gpu_rutabaga_aio_cb(void *opaque)
+{
+ struct rutabaga_aio_data *data = opaque;
+ VirtIOGPU *g = VIRTIO_GPU(data->vr);
+ struct rutabaga_fence fence_data = data->fence;
+ struct virtio_gpu_ctrl_command *cmd, *tmp;
+
+ uint32_t signaled_ctx_specific = fence_data.flags &
+ RUTABAGA_FLAG_INFO_RING_IDX;
+
+ QTAILQ_FOREACH_SAFE(cmd, &g->fenceq, next, tmp) {
+ /*
+ * Match fences and commands on the same timeline: fences are
+ * either global or context specific (per ring).
+ */
+ uint32_t target_ctx_specific = cmd->cmd_hdr.flags &
+ RUTABAGA_FLAG_INFO_RING_IDX;
+
+ if (signaled_ctx_specific != target_ctx_specific) {
+ continue;
+ }
+
+ if (signaled_ctx_specific &&
+ (cmd->cmd_hdr.ring_idx != fence_data.ring_idx)) {
+ continue;
+ }
+
+ if (cmd->cmd_hdr.fence_id > fence_data.fence_id) {
+ continue;
+ }
+
+ trace_virtio_gpu_fence_resp(cmd->cmd_hdr.fence_id);
+ virtio_gpu_ctrl_response_nodata(g, cmd, VIRTIO_GPU_RESP_OK_NODATA);
+ QTAILQ_REMOVE(&g->fenceq, cmd, next);
+ g_free(cmd);
+ }
+
+ g_free(data);
+}
+
+static void
+virtio_gpu_rutabaga_fence_cb(uint64_t user_data,
+ const struct rutabaga_fence *fence)
+{
+ struct rutabaga_aio_data *data;
+ VirtIOGPU *g = (VirtIOGPU *)user_data;
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ /*
+ * Both gfxstream and cross-domain (and even newer versions of
+ * virglrenderer: see VIRGL_RENDERER_ASYNC_FENCE_CB) like to signal fence
+ * completion on threads ("callback threads") that are different from the
+ * thread that processes the command queue ("main thread").
+ *
+ * crosvm and other virtio-gpu 1.1 implementations enable callback threads
+ * via locking. However, on QEMU a deadlock is observed if
+ * virtio_gpu_ctrl_response_nodata(..) [used in the fence callback] is used
+ * from a thread that is not the main thread.
+ *
+ * The reason is that QEMU's internal locking is designed to work with
+ * QEMU threads (see rcu_register_thread()) and not generic C/C++/Rust
+ * threads. For now, we can work around this by scheduling the return of
+ * the fence descriptors on the main thread.
+ */
+
+ data = g_new0(struct rutabaga_aio_data, 1);
+ data->vr = vr;
+ data->fence = *fence;
+ aio_bh_schedule_oneshot(qemu_get_aio_context(),
+ virtio_gpu_rutabaga_aio_cb,
+ data);
+}
+
+static void
+virtio_gpu_rutabaga_debug_cb(uint64_t user_data,
+ const struct rutabaga_debug *debug)
+{
+ switch (debug->debug_type) {
+ case RUTABAGA_DEBUG_ERROR:
+ error_report("%s", debug->message);
+ break;
+ case RUTABAGA_DEBUG_WARN:
+ warn_report("%s", debug->message);
+ break;
+ case RUTABAGA_DEBUG_INFO:
+ info_report("%s", debug->message);
+ break;
+ default:
+ error_report("unknown debug type: %u", debug->debug_type);
+ }
+}
+
+static bool virtio_gpu_rutabaga_init(VirtIOGPU *g, Error **errp)
+{
+ int result;
+ struct rutabaga_builder builder = { 0 };
+ struct rutabaga_channel channel = { 0 };
+ struct rutabaga_channels channels = { 0 };
+
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+ vr->rutabaga = NULL;
+
+ builder.wsi = RUTABAGA_WSI_SURFACELESS;
+ /*
+ * Currently, if WSI is specified, the only valid strings are "surfaceless"
+ * or "headless". Surfaceless doesn't create a native window surface, but
+ * does copy from the render target to the Pixman buffer if a virtio-gpu
+ * 2D hypercall is issued. Surfaceless is the default.
+ *
+ * Headless is like surfaceless, but doesn't copy to the Pixman buffer. The
+ * use case is automated testing environments where there is no need to view
+ * results.
+ *
+ * In the future, more performant virtio-gpu 2D UI integration may be added.
+ */
+ if (vr->wsi) {
+ if (g_str_equal(vr->wsi, "surfaceless")) {
+ vr->headless = false;
+ } else if (g_str_equal(vr->wsi, "headless")) {
+ vr->headless = true;
+ } else {
+ error_setg(errp, "invalid wsi option selected");
+ return false;
+ }
+ }
+
+ builder.fence_cb = virtio_gpu_rutabaga_fence_cb;
+ builder.debug_cb = virtio_gpu_rutabaga_debug_cb;
+ builder.capset_mask = vr->capset_mask;
+ builder.user_data = (uint64_t)g;
+
+ /*
+ * If the user doesn't specify the wayland socket path, we try to infer
+ * the socket via a process similar to the one used by libwayland.
+ * libwayland does the following:
+ *
+ * 1) If $WAYLAND_DISPLAY is set, attempt to connect to
+ * $XDG_RUNTIME_DIR/$WAYLAND_DISPLAY
+ * 2) Otherwise, attempt to connect to $XDG_RUNTIME_DIR/wayland-0
+ * 3) Otherwise, don't pass a wayland socket to rutabaga. If a guest
+ * wayland proxy is launched, it will fail to work.
+ */
+ channel.channel_type = RUTABAGA_CHANNEL_TYPE_WAYLAND;
+ g_autofree gchar *path = NULL;
+ if (!vr->wayland_socket_path) {
+ const gchar *runtime_dir = g_get_user_runtime_dir();
+ const gchar *display = g_getenv("WAYLAND_DISPLAY");
+ if (!display) {
+ display = "wayland-0";
+ }
+
+ if (runtime_dir) {
+ path = g_build_filename(runtime_dir, display, NULL);
+ channel.channel_name = path;
+ }
+ } else {
+ channel.channel_name = vr->wayland_socket_path;
+ }
+
+ if ((builder.capset_mask & (1 << RUTABAGA_CAPSET_CROSS_DOMAIN))) {
+ if (channel.channel_name) {
+ channels.channels = &channel;
+ channels.num_channels = 1;
+ builder.channels = &channels;
+ }
+ }
+
+ result = rutabaga_init(&builder, &vr->rutabaga);
+ if (result) {
+ error_setg_errno(errp, -result, "Failed to init rutabaga");
+ return false;
+ }
+
+ return true;
+}
+
+static int virtio_gpu_rutabaga_get_num_capsets(VirtIOGPU *g)
+{
+ int result;
+ uint32_t num_capsets;
+ VirtIOGPURutabaga *vr = VIRTIO_GPU_RUTABAGA(g);
+
+ result = rutabaga_get_num_capsets(vr->rutabaga, &num_capsets);
+ if (result) {
+ error_report("Failed to get capsets");
+ return 0;
+ }
+ vr->num_capsets = num_capsets;
+ return num_capsets;
+}
+
+static void virtio_gpu_rutabaga_handle_ctrl(VirtIODevice *vdev, VirtQueue *vq)
+{
+ VirtIOGPU *g = VIRTIO_GPU(vdev);
+ struct virtio_gpu_ctrl_command *cmd;
+
+ if (!virtio_queue_ready(vq)) {
+ return;
+ }
+
+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+ while (cmd) {
+ cmd->vq = vq;
+ cmd->error = 0;
+ cmd->finished = false;
+ QTAILQ_INSERT_TAIL(&g->cmdq, cmd, next);
+ cmd = virtqueue_pop(vq, sizeof(struct virtio_gpu_ctrl_command));
+ }
+
+ virtio_gpu_process_cmdq(g);
+}
+
+static void virtio_gpu_rutabaga_realize(DeviceState *qdev, Error **errp)
+{
+ int num_capsets;
+ VirtIOGPUBase *bdev = VIRTIO_GPU_BASE(qdev);
+ VirtIOGPU *gpudev = VIRTIO_GPU(qdev);
+
+#if HOST_BIG_ENDIAN
+ error_setg(errp, "rutabaga is not supported on big-endian platforms");
+ return;
+#endif
+
+ if (!virtio_gpu_rutabaga_init(gpudev, errp)) {
+ return;
+ }
+
+ num_capsets = virtio_gpu_rutabaga_get_num_capsets(gpudev);
+ if (!num_capsets) {
+ return;
+ }
+
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED);
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED);
+ bdev->conf.flags |= (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED);
+
+ bdev->virtio_config.num_capsets = num_capsets;
+ virtio_gpu_device_realize(qdev, errp);
+}
+
+static Property virtio_gpu_rutabaga_properties[] = {
+ DEFINE_PROP_BIT64("gfxstream-vulkan", VirtIOGPURutabaga, capset_mask,
+ RUTABAGA_CAPSET_GFXSTREAM_VULKAN, false),
+ DEFINE_PROP_BIT64("cross-domain", VirtIOGPURutabaga, capset_mask,
+ RUTABAGA_CAPSET_CROSS_DOMAIN, false),
+ DEFINE_PROP_BIT64("x-gfxstream-gles", VirtIOGPURutabaga, capset_mask,
+ RUTABAGA_CAPSET_GFXSTREAM_GLES, false),
+ DEFINE_PROP_BIT64("x-gfxstream-composer", VirtIOGPURutabaga, capset_mask,
+ RUTABAGA_CAPSET_GFXSTREAM_COMPOSER, false),
+ DEFINE_PROP_STRING("wayland-socket-path", VirtIOGPURutabaga,
+ wayland_socket_path),
+ DEFINE_PROP_STRING("wsi", VirtIOGPURutabaga, wsi),
+ DEFINE_PROP_END_OF_LIST(),
+};
+
+static void virtio_gpu_rutabaga_class_init(ObjectClass *klass, void *data)
+{
+ DeviceClass *dc = DEVICE_CLASS(klass);
+ VirtioDeviceClass *vdc = VIRTIO_DEVICE_CLASS(klass);
+ VirtIOGPUBaseClass *vbc = VIRTIO_GPU_BASE_CLASS(klass);
+ VirtIOGPUClass *vgc = VIRTIO_GPU_CLASS(klass);
+
+ vbc->gl_flushed = virtio_gpu_rutabaga_gl_flushed;
+ vgc->handle_ctrl = virtio_gpu_rutabaga_handle_ctrl;
+ vgc->process_cmd = virtio_gpu_rutabaga_process_cmd;
+ vgc->update_cursor_data = virtio_gpu_rutabaga_update_cursor;
+
+ vdc->realize = virtio_gpu_rutabaga_realize;
+ device_class_set_props(dc, virtio_gpu_rutabaga_properties);
+}
+
+static const TypeInfo virtio_gpu_rutabaga_info[] = {
+ {
+ .name = TYPE_VIRTIO_GPU_RUTABAGA,
+ .parent = TYPE_VIRTIO_GPU,
+ .instance_size = sizeof(VirtIOGPURutabaga),
+ .class_init = virtio_gpu_rutabaga_class_init,
+ },
+};
+
+DEFINE_TYPES(virtio_gpu_rutabaga_info)
+
+module_obj(TYPE_VIRTIO_GPU_RUTABAGA);
+module_kconfig(VIRTIO_GPU);
+module_dep("hw-display-virtio-gpu");
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 93857ad..6efd15b 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -33,15 +33,11 @@
#define VIRTIO_GPU_VM_VERSION 1
-static struct virtio_gpu_simple_resource*
-virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
static struct virtio_gpu_simple_resource *
virtio_gpu_find_check_resource(VirtIOGPU *g, uint32_t resource_id,
bool require_backing,
const char *caller, uint32_t *error);
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
- struct virtio_gpu_simple_resource *res);
static void virtio_gpu_reset_bh(void *opaque);
void virtio_gpu_update_cursor_data(VirtIOGPU *g,
@@ -116,7 +112,7 @@
cursor->resource_id ? 1 : 0);
}
-static struct virtio_gpu_simple_resource *
+struct virtio_gpu_simple_resource *
virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id)
{
struct virtio_gpu_simple_resource *res;
@@ -904,8 +900,8 @@
g_free(iov);
}
-static void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
- struct virtio_gpu_simple_resource *res)
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res)
{
virtio_gpu_cleanup_mapping_iov(g, res->iov, res->iov_cnt);
res->iov = NULL;
@@ -1367,8 +1363,9 @@
VirtIOGPU *g = VIRTIO_GPU(qdev);
if (virtio_gpu_blob_enabled(g->parent_obj.conf)) {
- if (!virtio_gpu_have_udmabuf()) {
- error_setg(errp, "cannot enable blob resources without udmabuf");
+ if (!virtio_gpu_rutabaga_enabled(g->parent_obj.conf) &&
+ !virtio_gpu_have_udmabuf()) {
+ error_setg(errp, "need rutabaga or udmabuf for blob resources");
return;
}
@@ -1511,6 +1508,7 @@
256 * MiB),
DEFINE_PROP_BIT("blob", VirtIOGPU, parent_obj.conf.flags,
VIRTIO_GPU_FLAG_BLOB_ENABLED, false),
+ DEFINE_PROP_SIZE("hostmem", VirtIOGPU, parent_obj.conf.hostmem, 0),
DEFINE_PROP_END_OF_LIST(),
};
diff --git a/hw/display/virtio-vga-rutabaga.c b/hw/display/virtio-vga-rutabaga.c
new file mode 100644
index 0000000..a7bef6d
--- /dev/null
+++ b/hw/display/virtio-vga-rutabaga.c
@@ -0,0 +1,50 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#include "qemu/osdep.h"
+#include "hw/pci/pci.h"
+#include "hw/qdev-properties.h"
+#include "hw/virtio/virtio-gpu.h"
+#include "hw/display/vga.h"
+#include "qapi/error.h"
+#include "qemu/module.h"
+#include "virtio-vga.h"
+#include "qom/object.h"
+
+#define TYPE_VIRTIO_VGA_RUTABAGA "virtio-vga-rutabaga"
+
+OBJECT_DECLARE_SIMPLE_TYPE(VirtIOVGARutabaga, VIRTIO_VGA_RUTABAGA)
+
+struct VirtIOVGARutabaga {
+ VirtIOVGABase parent_obj;
+
+ VirtIOGPURutabaga vdev;
+};
+
+static void virtio_vga_rutabaga_inst_initfn(Object *obj)
+{
+ VirtIOVGARutabaga *dev = VIRTIO_VGA_RUTABAGA(obj);
+
+ virtio_instance_init_common(obj, &dev->vdev, sizeof(dev->vdev),
+ TYPE_VIRTIO_GPU_RUTABAGA);
+ VIRTIO_VGA_BASE(dev)->vgpu = VIRTIO_GPU_BASE(&dev->vdev);
+}
+
+static VirtioPCIDeviceTypeInfo virtio_vga_rutabaga_info = {
+ .generic_name = TYPE_VIRTIO_VGA_RUTABAGA,
+ .parent = TYPE_VIRTIO_VGA_BASE,
+ .instance_size = sizeof(VirtIOVGARutabaga),
+ .instance_init = virtio_vga_rutabaga_inst_initfn,
+};
+module_obj(TYPE_VIRTIO_VGA_RUTABAGA);
+module_kconfig(VIRTIO_VGA);
+
+static void virtio_vga_register_types(void)
+{
+ if (have_vga) {
+ virtio_pci_types_register(&virtio_vga_rutabaga_info);
+ }
+}
+
+type_init(virtio_vga_register_types)
+
+module_dep("hw-display-virtio-vga");
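For reference, the devices added by this series can be instantiated roughly as follows (a sketch; which properties make sense depends on the build configuration and the guest stack, and `hostmem` is only needed for blob resource mapping):

```shell
qemu-system-x86_64 \
  -device virtio-vga-rutabaga,gfxstream-vulkan=on,cross-domain=on,hostmem=8G,wsi=headless
```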
diff --git a/hw/display/virtio-vga.c b/hw/display/virtio-vga.c
index e6fb0aa..c8552ff 100644
--- a/hw/display/virtio-vga.c
+++ b/hw/display/virtio-vga.c
@@ -115,17 +115,32 @@
pci_register_bar(&vpci_dev->pci_dev, 0,
PCI_BASE_ADDRESS_MEM_PREFETCH, &vga->vram);
- /*
- * Configure virtio bar and regions
- *
- * We use bar #2 for the mmio regions, to be compatible with stdvga.
- * virtio regions are moved to the end of bar #2, to make room for
- * the stdvga mmio registers at the start of bar #2.
- */
- vpci_dev->modern_mem_bar_idx = 2;
- vpci_dev->msix_bar_idx = 4;
vpci_dev->modern_io_bar_idx = 5;
+ if (!virtio_gpu_hostmem_enabled(g->conf)) {
+ /*
+ * Configure virtio bar and regions
+ *
+ * We use bar #2 for the mmio regions, to be compatible with stdvga.
+ * virtio regions are moved to the end of bar #2, to make room for
+ * the stdvga mmio registers at the start of bar #2.
+ */
+ vpci_dev->modern_mem_bar_idx = 2;
+ vpci_dev->msix_bar_idx = 4;
+ } else {
+ vpci_dev->msix_bar_idx = 1;
+ vpci_dev->modern_mem_bar_idx = 2;
+ memory_region_init(&g->hostmem, OBJECT(g), "virtio-gpu-hostmem",
+ g->conf.hostmem);
+ pci_register_bar(&vpci_dev->pci_dev, 4,
+ PCI_BASE_ADDRESS_SPACE_MEMORY |
+ PCI_BASE_ADDRESS_MEM_PREFETCH |
+ PCI_BASE_ADDRESS_MEM_TYPE_64,
+ &g->hostmem);
+ virtio_pci_add_shm_cap(vpci_dev, 4, 0, g->conf.hostmem,
+ VIRTIO_GPU_SHM_ID_HOST_VISIBLE);
+ }
+
if (!(vpci_dev->flags & VIRTIO_PCI_FLAG_PAGE_PER_VQ)) {
/*
* with page-per-vq=off there is no padding space we can use
diff --git a/hw/virtio/virtio-pci.c b/hw/virtio/virtio-pci.c
index abebd00..af1f4bc 100644
--- a/hw/virtio/virtio-pci.c
+++ b/hw/virtio/virtio-pci.c
@@ -1435,6 +1435,24 @@
return offset;
}
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy,
+ uint8_t bar, uint64_t offset, uint64_t length,
+ uint8_t id)
+{
+ struct virtio_pci_cap64 cap = {
+ .cap.cap_len = sizeof cap,
+ .cap.cfg_type = VIRTIO_PCI_CAP_SHARED_MEMORY_CFG,
+ };
+
+ cap.cap.bar = bar;
+ cap.cap.length = cpu_to_le32(length);
+ cap.length_hi = cpu_to_le32(length >> 32);
+ cap.cap.offset = cpu_to_le32(offset);
+ cap.offset_hi = cpu_to_le32(offset >> 32);
+ cap.cap.id = id;
+ return virtio_pci_add_mem_cap(proxy, &cap.cap);
+}
+
static uint64_t virtio_pci_common_read(void *opaque, hwaddr addr,
unsigned size)
{
diff --git a/include/hw/virtio/virtio-gpu-bswap.h b/include/hw/virtio/virtio-gpu-bswap.h
index 637a058..dd1975e 100644
--- a/include/hw/virtio/virtio-gpu-bswap.h
+++ b/include/hw/virtio/virtio-gpu-bswap.h
@@ -71,6 +71,21 @@
}
static inline void
+virtio_gpu_map_blob_bswap(struct virtio_gpu_resource_map_blob *mblob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&mblob->hdr);
+ le32_to_cpus(&mblob->resource_id);
+ le64_to_cpus(&mblob->offset);
+}
+
+static inline void
+virtio_gpu_unmap_blob_bswap(struct virtio_gpu_resource_unmap_blob *ublob)
+{
+ virtio_gpu_ctrl_hdr_bswap(&ublob->hdr);
+ le32_to_cpus(&ublob->resource_id);
+}
+
+static inline void
virtio_gpu_scanout_blob_bswap(struct virtio_gpu_set_scanout_blob *ssb)
{
virtio_gpu_bswap_32(ssb, sizeof(*ssb) - sizeof(ssb->offsets[3]));
diff --git a/include/hw/virtio/virtio-gpu.h b/include/hw/virtio/virtio-gpu.h
index 4739fa4..584ba2e 100644
--- a/include/hw/virtio/virtio-gpu.h
+++ b/include/hw/virtio/virtio-gpu.h
@@ -38,6 +38,9 @@
#define TYPE_VHOST_USER_GPU "vhost-user-gpu"
OBJECT_DECLARE_SIMPLE_TYPE(VhostUserGPU, VHOST_USER_GPU)
+#define TYPE_VIRTIO_GPU_RUTABAGA "virtio-gpu-rutabaga-device"
+OBJECT_DECLARE_SIMPLE_TYPE(VirtIOGPURutabaga, VIRTIO_GPU_RUTABAGA)
+
struct virtio_gpu_simple_resource {
uint32_t resource_id;
uint32_t width;
@@ -93,6 +96,8 @@
VIRTIO_GPU_FLAG_EDID_ENABLED,
VIRTIO_GPU_FLAG_DMABUF_ENABLED,
VIRTIO_GPU_FLAG_BLOB_ENABLED,
+ VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED,
+ VIRTIO_GPU_FLAG_RUTABAGA_ENABLED,
};
#define virtio_gpu_virgl_enabled(_cfg) \
@@ -105,12 +110,19 @@
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_DMABUF_ENABLED))
#define virtio_gpu_blob_enabled(_cfg) \
(_cfg.flags & (1 << VIRTIO_GPU_FLAG_BLOB_ENABLED))
+#define virtio_gpu_context_init_enabled(_cfg) \
+ (_cfg.flags & (1 << VIRTIO_GPU_FLAG_CONTEXT_INIT_ENABLED))
+#define virtio_gpu_rutabaga_enabled(_cfg) \
+ (_cfg.flags & (1 << VIRTIO_GPU_FLAG_RUTABAGA_ENABLED))
+#define virtio_gpu_hostmem_enabled(_cfg) \
+ (_cfg.hostmem > 0)
struct virtio_gpu_base_conf {
uint32_t max_outputs;
uint32_t flags;
uint32_t xres;
uint32_t yres;
+ uint64_t hostmem;
};
struct virtio_gpu_ctrl_command {
@@ -134,6 +146,8 @@
int renderer_blocked;
int enable;
+ MemoryRegion hostmem;
+
struct virtio_gpu_scanout scanout[VIRTIO_GPU_MAX_SCANOUTS];
int enabled_output_bitmask;
@@ -224,6 +238,27 @@
bool backend_blocked;
};
+#define MAX_SLOTS 4096
+
+struct MemoryRegionInfo {
+ int used;
+ MemoryRegion mr;
+ uint32_t resource_id;
+};
+
+struct rutabaga;
+
+struct VirtIOGPURutabaga {
+ VirtIOGPU parent_obj;
+ struct MemoryRegionInfo memory_regions[MAX_SLOTS];
+ uint64_t capset_mask;
+ char *wayland_socket_path;
+ char *wsi;
+ bool headless;
+ uint32_t num_capsets;
+ struct rutabaga *rutabaga;
+};
+
#define VIRTIO_GPU_FILL_CMD(out) do { \
size_t virtiogpufillcmd_s_ = \
iov_to_buf(cmd->elem.out_sg, cmd->elem.out_num, 0, \
@@ -249,6 +284,9 @@
void virtio_gpu_base_generate_edid(VirtIOGPUBase *g, int scanout,
struct virtio_gpu_resp_edid *edid);
/* virtio-gpu.c */
+struct virtio_gpu_simple_resource *
+virtio_gpu_find_resource(VirtIOGPU *g, uint32_t resource_id);
+
void virtio_gpu_ctrl_response(VirtIOGPU *g,
struct virtio_gpu_ctrl_command *cmd,
struct virtio_gpu_ctrl_hdr *resp,
@@ -267,6 +305,8 @@
uint32_t *niov);
void virtio_gpu_cleanup_mapping_iov(VirtIOGPU *g,
struct iovec *iov, uint32_t count);
+void virtio_gpu_cleanup_mapping(VirtIOGPU *g,
+ struct virtio_gpu_simple_resource *res);
void virtio_gpu_process_cmdq(VirtIOGPU *g);
void virtio_gpu_device_realize(DeviceState *qdev, Error **errp);
void virtio_gpu_reset(VirtIODevice *vdev);
diff --git a/include/hw/virtio/virtio-pci.h b/include/hw/virtio/virtio-pci.h
index ab2051b..5a3f182 100644
--- a/include/hw/virtio/virtio-pci.h
+++ b/include/hw/virtio/virtio-pci.h
@@ -264,4 +264,8 @@
void virtio_pci_set_guest_notifier_fd_handler(VirtIODevice *vdev, VirtQueue *vq,
int n, bool assign,
bool with_irqfd);
+
+int virtio_pci_add_shm_cap(VirtIOPCIProxy *proxy, uint8_t bar, uint64_t offset,
+ uint64_t length, uint8_t id);
+
#endif
diff --git a/meson.build b/meson.build
index bd65a11..e0d1f84 100644
--- a/meson.build
+++ b/meson.build
@@ -1046,6 +1046,12 @@
dependencies: virgl))
endif
endif
+rutabaga = not_found
+if not get_option('rutabaga_gfx').auto() or have_system or have_vhost_user_gpu
+ rutabaga = dependency('rutabaga_gfx_ffi',
+ method: 'pkg-config',
+ required: get_option('rutabaga_gfx'))
+endif
blkio = not_found
if not get_option('blkio').auto() or have_block
blkio = dependency('blkio',
@@ -4277,6 +4283,7 @@
summary_info += {'PAM': pam}
summary_info += {'iconv support': iconv}
summary_info += {'virgl support': virgl}
+summary_info += {'rutabaga support': rutabaga}
summary_info += {'blkio support': blkio}
summary_info += {'curl support': curl}
summary_info += {'Multipath support': mpathpersist}
diff --git a/meson_options.txt b/meson_options.txt
index 6a17b90..e49309d 100644
--- a/meson_options.txt
+++ b/meson_options.txt
@@ -230,6 +230,8 @@
description: 'vmnet.framework network backend support')
option('virglrenderer', type : 'feature', value : 'auto',
description: 'virgl rendering support')
+option('rutabaga_gfx', type : 'feature', value : 'auto',
+ description: 'rutabaga_gfx support')
option('png', type : 'feature', value : 'auto',
description: 'PNG support with libpng')
option('vnc', type : 'feature', value : 'auto',
diff --git a/scripts/meson-buildoptions.sh b/scripts/meson-buildoptions.sh
index 2a74b02..a28ccbc 100644
--- a/scripts/meson-buildoptions.sh
+++ b/scripts/meson-buildoptions.sh
@@ -156,6 +156,7 @@
printf "%s\n" ' rbd Ceph block device driver'
printf "%s\n" ' rdma Enable RDMA-based migration'
printf "%s\n" ' replication replication support'
+ printf "%s\n" ' rutabaga-gfx rutabaga_gfx support'
printf "%s\n" ' sdl SDL user interface'
printf "%s\n" ' sdl-image SDL Image support for icons'
printf "%s\n" ' seccomp seccomp support'
@@ -425,6 +426,8 @@
--disable-replication) printf "%s" -Dreplication=disabled ;;
--enable-rng-none) printf "%s" -Drng_none=true ;;
--disable-rng-none) printf "%s" -Drng_none=false ;;
+ --enable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=enabled ;;
+ --disable-rutabaga-gfx) printf "%s" -Drutabaga_gfx=disabled ;;
--enable-safe-stack) printf "%s" -Dsafe_stack=true ;;
--disable-safe-stack) printf "%s" -Dsafe_stack=false ;;
--enable-sanitizers) printf "%s" -Dsanitizers=true ;;
diff --git a/system/qdev-monitor.c b/system/qdev-monitor.c
index 74f4e41..1b8005a 100644
--- a/system/qdev-monitor.c
+++ b/system/qdev-monitor.c
@@ -86,6 +86,9 @@
{ "virtio-gpu-pci", "virtio-gpu", QEMU_ARCH_VIRTIO_PCI },
{ "virtio-gpu-gl-device", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_MMIO },
{ "virtio-gpu-gl-pci", "virtio-gpu-gl", QEMU_ARCH_VIRTIO_PCI },
+ { "virtio-gpu-rutabaga-device", "virtio-gpu-rutabaga",
+ QEMU_ARCH_VIRTIO_MMIO },
+ { "virtio-gpu-rutabaga-pci", "virtio-gpu-rutabaga", QEMU_ARCH_VIRTIO_PCI },
{ "virtio-input-host-device", "virtio-input-host", QEMU_ARCH_VIRTIO_MMIO },
{ "virtio-input-host-ccw", "virtio-input-host", QEMU_ARCH_VIRTIO_CCW },
{ "virtio-input-host-pci", "virtio-input-host", QEMU_ARCH_VIRTIO_PCI },
diff --git a/system/vl.c b/system/vl.c
index ba83040..3100ac0 100644
--- a/system/vl.c
+++ b/system/vl.c
@@ -216,6 +216,7 @@
{ .driver = "ati-vga", .flag = &default_vga },
{ .driver = "vhost-user-vga", .flag = &default_vga },
{ .driver = "virtio-vga-gl", .flag = &default_vga },
+ { .driver = "virtio-vga-rutabaga", .flag = &default_vga },
};
static QemuOptsList qemu_rtc_opts = {