doc: update HLD Virtio Devices

Transcode, edit, and upload HLD 0.7 section 6.5 (Supported Virtio
Devices), merging with existing reviewed content.

Tracked-on: #1732

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2018-11-06 11:33:24 -08:00 committed by David Kinder
parent 366042cac2
commit dfcc06df30
13 changed files with 263 additions and 28 deletions


@@ -265,35 +265,138 @@ virtqueue through the user-level vring service API helpers.
Kernel-Land Virtio Framework
============================
ACRN supports two kernel-land virtio frameworks: VBS-K, designed from
scratch for ACRN, and Vhost, which is compatible with Linux Vhost.

VBS-K framework
---------------

The architecture of ACRN VBS-K is shown in
:numref:`kernel-virtio-framework` below.

Generally, VBS-K provides acceleration for performance-critical devices
emulated by VBS-U modules by handling the "data plane" of the devices
directly in the kernel. When VBS-K is enabled for certain devices, the
kernel-land vring service API helpers, instead of the user-land
helpers, are used to access the virtqueues shared by the FE driver.
Compared to VBS-U, this eliminates the overhead of copying data back
and forth between user land and kernel land within the Service OS, but
pays for it with the extra implementation complexity of the BE drivers.
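
The sketch below illustrates, in rough shape only, what handling the
data plane in the kernel means. Every name in it (``struct kvq_ops``,
``vbs_k_handle_kick``, and so on) is invented for illustration and is
not the actual ACRN kernel interface.

.. code-block:: c

   /* Illustrative sketch only: the rough shape of a kernel-land "data
    * plane" kick handler.  The types and helpers below are invented for
    * this example and are not the real ACRN VBS-K interface. */
   #include <stddef.h>

   struct vq_chain {
           void  *buf;       /* guest buffer already mapped for the kernel */
           size_t len;
           int    head;      /* head descriptor index of this chain */
   };

   struct kvq_ops {          /* stand-ins for kernel-land vring helpers */
           int  (*getchain)(void *vq, struct vq_chain *c);       /* pop avail ring */
           void (*relchain)(void *vq, int head, size_t written); /* push used ring */
           void (*notify)(void *vq);                             /* interrupt the FE */
   };

   /* Runs in the SOS kernel on an FE kick: requests are consumed and
    * completed without copying data out to a user-land BE first. */
   void vbs_k_handle_kick(const struct kvq_ops *ops, void *vq)
   {
           struct vq_chain c;

           while (ops->getchain(vq, &c) > 0) {
                   /* process c.buf against the real device here */
                   ops->relchain(vq, c.head, c.len);
           }
           ops->notify(vq);
   }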
Except for the differences mentioned above, VBS-K still relies on VBS-U
for feature negotiation between the FE and BE drivers. This means the
"control plane" of the virtio device remains in VBS-U. When feature
negotiation is done, which is detected by the FE driver setting an
indicative flag, the VBS-K module is initialized by VBS-U. Afterwards,
all request handling is offloaded to VBS-K in the kernel.
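
As a rough sketch of this hand-off, the fragment below shows the check
a VBS-U module might make before starting its kernel counterpart. The
ioctl request (``VBS_K_IOCTL_START``) and the device file descriptor
are hypothetical; only ``VIRTIO_CONFIG_S_DRIVER_OK`` is a standard
virtio status bit.

.. code-block:: c

   /* Hypothetical sketch of the VBS-U to VBS-K hand-off; the ioctl name
    * and the vbs_k_fd descriptor are invented.  VIRTIO_CONFIG_S_DRIVER_OK
    * is the standard virtio status bit written by the FE driver. */
   #include <sys/ioctl.h>

   #define VIRTIO_CONFIG_S_DRIVER_OK  4U               /* virtio spec value */
   #define VBS_K_IOCTL_START          _IO('v', 0x01)   /* hypothetical */

   int maybe_start_vbs_k(int vbs_k_fd, unsigned int device_status)
   {
           /* Feature negotiation not finished yet: the control plane
            * (and everything else) stays in user-land VBS-U. */
           if (!(device_status & VIRTIO_CONFIG_S_DRIVER_OK))
                   return 0;

           /* Negotiation done: initialize the kernel module so request
            * handling is offloaded to VBS-K from now on. */
           return ioctl(vbs_k_fd, VBS_K_IOCTL_START);
   }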
Finally, the FE driver is not aware of how the BE driver is
implemented, whether in VBS-U or VBS-K. This saves engineering effort
on FE driver development.
.. figure:: images/virtio-hld-image54.png
   :align: center
   :name: kernel-virtio-framework

   ACRN Kernel-Land Virtio Framework
Vhost framework
---------------
Vhost is similar to VBS-K. It is a common solution that has been
upstreamed into the Linux kernel, with several kernel mediators based
on it.
Architecture
~~~~~~~~~~~~
Vhost/virtio is a semi-virtualized device abstraction interface
specification that has been widely applied in various virtualization
solutions. Vhost is a specific kind of virtio where the data plane is
put into the host kernel space to reduce context switching while
processing I/O requests. It is usually called "virtio" when used as a
frontend driver in a guest operating system, or "vhost" when used as a
backend driver in a host. Compared with a pure virtio solution on a
host, vhost uses the same frontend driver as the virtio solution but
can achieve better performance. :numref:`vhost-arch` shows the vhost
architecture on ACRN.
.. figure:: images/virtio-hld-image71.png
   :align: center
   :name: vhost-arch

   Vhost Architecture on ACRN
Compared with a user-space virtio solution, vhost moves the data plane
from user space into kernel space. The general vhost data plane
workflow can be described as follows (a minimal sketch of the eventfd
setup follows the list):

1. The vhost proxy creates two eventfds per virtqueue: one for kick
   (an ioeventfd) and one for call (an irqfd).
2. The vhost proxy registers the two eventfds to VHM through the VHM
   character device:

   a) The ioeventfd is bound to a PIO/MMIO range. If it is a PIO, it is
      registered with (fd, port, len, value). If it is an MMIO, it is
      registered with (fd, addr, len).
   b) The irqfd is registered with an MSI vector.

3. The vhost proxy passes the two fds to the vhost kernel driver
   through ioctls on the vhost device.
4. The vhost kernel driver starts polling the kick fd and wakes up when
   the guest kicks a virtqueue, which results in an event_signal on the
   kick fd by the VHM ioeventfd.
5. The vhost device in the kernel signals on the irqfd to notify the
   guest.
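
A minimal sketch of step 1, using the standard ``eventfd(2)`` system
call, is shown below. The later registration steps are only indicated
as comments because the exact VHM and vhost ioctl request codes are
device specific and not reproduced here.

.. code-block:: c

   /* Minimal sketch of step 1 of the workflow above: the vhost proxy
    * creating the kick (ioeventfd) and call (irqfd) eventfds.  The
    * registration ioctls to the VHM character device and to the vhost
    * device are omitted. */
   #include <sys/eventfd.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
           int kick_fd = eventfd(0, EFD_NONBLOCK);  /* guest -> host notification */
           int call_fd = eventfd(0, EFD_NONBLOCK);  /* host -> guest interrupt */

           if (kick_fd < 0 || call_fd < 0) {
                   perror("eventfd");
                   return 1;
           }

           /* 2. register kick_fd/call_fd with VHM (ioeventfd/irqfd)      */
           /* 3. hand both fds to the vhost kernel driver via ioctl       */
           /* 4. the vhost kernel thread then polls kick_fd               */
           /* 5. the vhost device writes 1 to call_fd to notify the guest */

           printf("kick=%d call=%d\n", kick_fd, call_fd);
           close(kick_fd);
           close(call_fd);
           return 0;
   }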
Ioeventfd implementation
~~~~~~~~~~~~~~~~~~~~~~~~
The ioeventfd module is implemented in VHM. It enhances a registered
eventfd to listen for I/O requests (PIO/MMIO) from the VHM ioreq module
and signals the eventfd when needed. :numref:`ioeventfd-workflow` shows
the general workflow of ioeventfd.
.. figure:: images/virtio-hld-image58.png
   :align: center
   :name: ioeventfd-workflow

   ioeventfd general workflow
The workflow can be summarized as follows (a sketch of the lookup in
steps 6 and 7 follows the list):

1. The vhost device initializes. The vhost proxy creates two eventfds,
   one for ioeventfd and one for irqfd.
2. The vhost proxy passes the ioeventfd to the vhost kernel driver.
3. The vhost proxy passes the ioeventfd to the VHM driver.
4. The UOS FE driver triggers an ioreq, which is forwarded to the SOS
   by the hypervisor.
5. The ioreq is dispatched by the VHM driver to the related VHM client.
6. The ioeventfd VHM client traverses the io_range list and finds the
   corresponding eventfd.
7. The ioeventfd VHM client signals that eventfd.
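
The sketch below illustrates steps 6 and 7 with a simplified
``io_range`` list. The structure layout is invented for the example,
but the final step is exactly how an eventfd is signalled: a counter
value is written to it.

.. code-block:: c

   /* Simplified sketch of steps 6-7: match a trapped MMIO access
    * against registered ranges and signal the matching eventfd.  The
    * io_range layout is illustrative; the real VHM structures differ. */
   #include <stdint.h>
   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   struct io_range {
           uint64_t         addr;
           uint32_t         len;
           int              fd;              /* the registered ioeventfd */
           struct io_range *next;
   };

   static void ioeventfd_dispatch(struct io_range *head, uint64_t addr, uint32_t len)
   {
           uint64_t one = 1;

           for (struct io_range *r = head; r; r = r->next)
                   if (addr >= r->addr && addr + len <= r->addr + r->len) {
                           write(r->fd, &one, sizeof(one));   /* signal the eventfd */
                           return;
                   }
   }

   int main(void)
   {
           struct io_range vq_notify = {
                   .addr = 0xd0002000, .len = 4,              /* example MMIO range */
                   .fd = eventfd(0, 0), .next = NULL,
           };
           uint64_t cnt = 0;

           ioeventfd_dispatch(&vq_notify, 0xd0002000, 4);     /* pretend MMIO write */
           read(vq_notify.fd, &cnt, sizeof(cnt));             /* vhost would poll this */
           printf("kick count: %llu\n", (unsigned long long)cnt);
           return 0;
   }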
Irqfd implementation
~~~~~~~~~~~~~~~~~~~~
The irqfd module is implemented in VHM. It enhances a registered
eventfd to inject an interrupt into a guest OS when the eventfd is
signalled. :numref:`irqfd-workflow` shows the general flow for irqfd.
.. figure:: images/virtio-hld-image60.png
   :align: center
   :name: irqfd-workflow

   irqfd general flow
The workflow can be summarized as follows (a sketch of steps 4 through
6 follows the list):

1. The vhost device initializes. The vhost proxy creates two eventfds,
   one for ioeventfd and one for irqfd.
2. The vhost proxy passes the irqfd to the vhost kernel driver.
3. The vhost proxy passes the irqfd to the VHM driver.
4. The vhost device driver signals the irq eventfd once the related
   native transfer is completed.
5. The irqfd logic traverses the irqfd list to retrieve the related irq
   information.
6. The irqfd logic injects an interrupt through the VHM interrupt API.
7. The interrupt is delivered to the UOS FE driver through the
   hypervisor.
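
Steps 4 through 6 can be pictured with the fragment below. The
``irqfd_entry`` layout and the MSI address/data values are illustrative
only; the actual injection goes through the VHM interrupt API, which is
not reproduced here.

.. code-block:: c

   /* Sketch of steps 4-6: the vhost device driver signals the irqfd
    * when a transfer completes; the irqfd logic consumes the signal,
    * looks up the MSI data for that fd, and injects the interrupt
    * (injection itself is only indicated in a comment). */
   #include <stdint.h>
   #include <stdio.h>
   #include <sys/eventfd.h>
   #include <unistd.h>

   struct irqfd_entry {
           int      fd;
           uint64_t msi_addr;
           uint32_t msi_data;
   };

   int main(void)
   {
           struct irqfd_entry e = {
                   .fd = eventfd(0, 0),
                   .msi_addr = 0xfee00000, .msi_data = 0x4021,  /* example MSI */
           };
           uint64_t cnt, one = 1;

           /* vhost device driver: native transfer completed, signal irqfd */
           write(e.fd, &one, sizeof(one));

           /* irqfd logic: consume the signal, then inject the MSI through
            * the VHM interrupt API (omitted) so the UOS FE driver sees it */
           read(e.fd, &cnt, sizeof(cnt));
           printf("inject MSI addr=0x%llx data=0x%x (x%llu)\n",
                  (unsigned long long)e.msi_addr, (unsigned)e.msi_data,
                  (unsigned long long)cnt);
           return 0;
   }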
Virtio APIs
***********
@@ -664,5 +767,6 @@ supported in ACRN.
virtio-blk
virtio-net
virtio-input
virtio-console
virtio-rnd


@@ -30,12 +30,29 @@ The virtio-console architecture diagram in ACRN is shown below.
Virtio-console architecture diagram
Virtio-console is implemented as a virtio legacy device in the ACRN
device model (DM), and is registered as a PCI virtio device to the
guest OS. No changes are required in the frontend Linux virtio-console
except that the guest (UOS) kernel should be built with
``CONFIG_VIRTIO_CONSOLE=y``.
The virtio-console FE driver registers an HVC console to the kernel if
the port is configured as a console. Otherwise it registers a char
device named ``/dev/vportXpY`` to the kernel, which can be read and
written from user space. There are two virtqueues for a port: one for
transmitting and the other for receiving. The FE driver places empty
buffers onto the receiving virtqueue for incoming data, and enqueues
outgoing characters onto the transmitting virtqueue.
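
As a usage illustration, a UOS user-space program can exchange data
with a non-console port directly through that char device. The node
name ``/dev/vport0p1`` below is only an example; the actual name
depends on the device and port index.

.. code-block:: c

   /* Minimal sketch: exchange data with a non-console virtio-console
    * port from UOS user space. */
   #include <fcntl.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
           char buf[256];
           ssize_t n;
           int fd = open("/dev/vport0p1", O_RDWR);   /* example port node */

           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           write(fd, "hello from UOS\n", 15);   /* ends up at the BE backend */
           n = read(fd, buf, sizeof(buf));      /* blocks until the BE injects data */
           if (n > 0)
                   fwrite(buf, 1, (size_t)n, stdout);
           close(fd);
           return 0;
   }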
The virtio-console BE driver copies data from the FE's transmitting
virtqueue when it receives a kick on that virtqueue (implemented as a
vmexit). The BE driver then writes the data to its backend, which can
be a PTY, TTY, STDIO, or a regular file. The BE driver uses mevent to
poll for available data on the backend file descriptor. When new data
is available, the BE driver reads it into the FE's receiving virtqueue,
followed by an interrupt injection.
The feature bits currently supported by the BE device are:
.. list-table:: Feature bits supported by BE drivers
:widths: 30 50
@@ -181,4 +198,3 @@ The File backend only supports console output to a file (no input).
#. Add the console parameter to the guest OS kernel command line::

      console=hvc0


@@ -0,0 +1,99 @@
.. _virtio-input:
Virtio-input
############
The virtio input device can be used to create virtual human interface
devices such as keyboards, mice, and tablets. It basically sends Linux
input layer events over virtio.
The ACRN Virtio-input architecture is shown below.
.. figure:: images/virtio-hld-image53.png
   :align: center

   Virtio-input Architecture on ACRN
Virtio-input is implemented as a virtio modern device in the ACRN
device model. It is registered as a PCI virtio device to the guest OS.
No changes are required in the frontend Linux virtio-input except that
the guest kernel must be built with ``CONFIG_VIRTIO_INPUT=y``.
Two virtqueues are used to transfer input_event structures between the
FE and BE. One is for input_events from the BE to the FE, as generated
by input hardware devices in the SOS. The other is for status changes
from the FE to the BE, which are finally sent to the input hardware
device in the SOS.
At the probe stage of the FE virtio-input driver, a buffer (able to
accommodate 64 input events) is allocated together with the driver
data. Sixty-four descriptors are added to the event virtqueue, each
pointing to one entry in the buffer. A kick on the event virtqueue is
then performed.
The virtio-input BE driver in the device model uses mevent to poll for
input events from an input device through the evdev char device. When
an input event is available, the BE driver reads it from the char
device and caches it in an internal buffer until an EV_SYN input event
with SYN_REPORT is received. The BE driver then copies all the cached
input events to the event virtqueue, one by one, followed by a
notification to the FE driver, implemented as an interrupt injection to
the UOS.
For input events regarding a status change, the FE driver allocates a
buffer for the input event and adds it to the status virtqueue,
followed by a kick. The BE driver reads the input event from the status
virtqueue and writes it to the evdev char device.
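
The following is a simplified, self-contained version of that polling
loop: it reads ``struct input_event`` records from an evdev node and
batches them until ``EV_SYN``/``SYN_REPORT``. The device node is an
example, and the real BE pushes the batch to the event virtqueue
instead of printing.

.. code-block:: c

   /* Simplified sketch of the BE caching behavior: batch evdev events
    * until an EV_SYN/SYN_REPORT marker is seen. */
   #include <fcntl.h>
   #include <linux/input.h>
   #include <stdio.h>
   #include <unistd.h>

   int main(void)
   {
           struct input_event ev, cache[64];
           int fd = open("/dev/input/event2", O_RDONLY);   /* example node */
           int n = 0;

           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           while (read(fd, &ev, sizeof(ev)) == sizeof(ev)) {
                   if (n < 64)
                           cache[n++] = ev;          /* cache until SYN_REPORT */
                   if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
                           printf("flush %d events to the event virtqueue\n", n);
                           n = 0;   /* the real BE would add them to the vq here */
                   }
           }
           close(fd);
           return 0;
   }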
The data transferred between FE and BE is organized as struct
input_event:
.. code-block:: c

   struct input_event {
           struct timeval time;
           __u16 type;
           __u16 code;
           __s32 value;
   };
A structure virtio_input_config is defined and used as the
device-specific configuration registers. To query a specific piece of
configuration information, the FE driver sets "select" and "subsel"
accordingly. The information size is returned in "size" and the
information data is returned in union "u":
.. code-block:: c

   struct virtio_input_config {
           uint8_t select;
           uint8_t subsel;
           uint8_t size;
           uint8_t reserved[5];
           union {
                   char string[128];
                   uint8_t bitmap[128];
                   struct virtio_input_absinfo abs;
                   struct virtio_input_devids ids;
           } u;
   };
Reads and writes to these registers result in a vmexit, and the
cfgread/cfgwrite callbacks in struct virtio_ops are eventually called
in the device model. The virtio-input BE in the device model issues
ioctls to the evdev char device, according to the "select" and "subsel"
registers, to get the corresponding device capability information from
the kernel and returns this information to the guest OS.
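
For illustration, the standard evdev ioctls that such a query can map
to look like the sketch below. ``EVIOCGNAME`` and ``EVIOCGBIT`` are
real evdev ioctls; the mapping to select/subsel values shown in the
comments is simplified.

.. code-block:: c

   /* Sketch of a BE querying evdev capabilities that back the
    * virtio_input_config registers. */
   #include <fcntl.h>
   #include <linux/input.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   int main(void)
   {
           char name[128] = "";
           unsigned char evbits[(EV_MAX + 7) / 8] = { 0 };
           int fd = open("/dev/input/event2", O_RDONLY);   /* example node */

           if (fd < 0) {
                   perror("open");
                   return 1;
           }
           /* e.g. a name query (select = ID_NAME) -> device name string */
           ioctl(fd, EVIOCGNAME(sizeof(name)), name);
           /* e.g. an event-bits query (select = EV_BITS) -> supported event types */
           ioctl(fd, EVIOCGBIT(0, sizeof(evbits)), evbits);

           printf("name: %s, EV bitmap bytes: %zu\n", name, sizeof(evbits));
           close(fd);
           return 0;
   }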
All the device-specific configuration is obtained by the FE driver at
the probe stage. Based on this information, the virtio-input FE driver
registers an input device with the input subsystem.
The general command syntax is::

   -s n,virtio-input,/dev/input/eventX[,serial]

- ``/dev/input/eventX`` specifies the evdev char device node in the
  SOS.
- ``serial`` is an optional string. When specified, it is used as the
  "uniq" identifier of the guest virtio-input device. An example
  invocation follows this list.
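
For example, to expose the SOS evdev node ``/dev/input/event2`` in
virtual slot 5 (both the slot number and the event node here are
illustrative, not required values)::

   -s 5,virtio-input,/dev/input/event2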


@@ -3,14 +3,30 @@
Virtio-rnd
##########
The virtio-rnd entropy device supplies high-quality randomness (a
hardware random source) for guest (UOS) use. The virtio device ID of
the virtio-rnd device is 4, and it supports one virtqueue of 64 entries
(configurable in the source code); it has no feature bits defined. The
virtual random device is based on the virtio user-mode framework and
simulates a PCI device according to the virtio specification.

:numref:`virtio-rnd-arch` shows the random device virtualization
architecture in ACRN. Virtio-rnd is implemented as a virtio legacy
device in the ACRN device model (DM), and is registered as a PCI virtio
device to the guest OS (UOS).

When the FE driver requires some random bytes, the BE device places
bytes of random data onto the virtqueue.

Tools such as ``od`` can be used to read randomness from
``/dev/random`` in the UOS. This device file is bound to the frontend
virtio-rng driver (the guest kernel must be built with
``CONFIG_HW_RANDOM_VIRTIO=y``). The backend virtio-rnd reads the
hardware randomness from ``/dev/random`` in the SOS and sends it to the
frontend.
.. figure:: images/virtio-hld-image61.png
   :align: center
   :name: virtio-rnd-arch

   Virtio-rnd Architecture on ACRN
To launch the virtio-rnd device, use the following virtio command::

   -s <slot>,virtio-rnd
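
Once the UOS is running, reading a few bytes back confirms the path end
to end. For example (the output will differ on every run)::

   od -An -tx1 -N 16 /dev/random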