diff --git a/doc/developer-guides/ACPI-virt-hld.rst b/doc/developer-guides/hld/acpi-virt.rst similarity index 99% rename from doc/developer-guides/ACPI-virt-hld.rst rename to doc/developer-guides/hld/acpi-virt.rst index c34faa2f0..a154f1429 100644 --- a/doc/developer-guides/ACPI-virt-hld.rst +++ b/doc/developer-guides/hld/acpi-virt.rst @@ -1,4 +1,4 @@ -.. _ACPI-virt-HLD: +.. _acpi-virt-HLD: ACPI Virtualization high-level design ##################################### diff --git a/doc/developer-guides/APL_GVT-g-hld.rst b/doc/developer-guides/hld/hld-APL_GVT-g.rst similarity index 100% rename from doc/developer-guides/APL_GVT-g-hld.rst rename to doc/developer-guides/hld/hld-APL_GVT-g.rst diff --git a/doc/developer-guides/hld/hld-devicemodel.rst b/doc/developer-guides/hld/hld-devicemodel.rst new file mode 100644 index 000000000..cd697f1c7 --- /dev/null +++ b/doc/developer-guides/hld/hld-devicemodel.rst @@ -0,0 +1,10 @@ +.. _hld-devicemodel: + +Device Model high-level design +############################## + + +.. toctree:: + :maxdepth: 1 + + ACPI virtualization diff --git a/doc/developer-guides/hld/hld-emulated-devices.rst b/doc/developer-guides/hld/hld-emulated-devices.rst new file mode 100644 index 000000000..ef9a34de0 --- /dev/null +++ b/doc/developer-guides/hld/hld-emulated-devices.rst @@ -0,0 +1,11 @@ +.. _hld-emulated-devices: + +Emulated Devices high-level design +################################## + +.. toctree:: + :maxdepth: 1 + + GVT-g GPU Virtualization + UART virtualization + Watchdog virtualization diff --git a/doc/developer-guides/hld/hld-hypervisor.rst b/doc/developer-guides/hld/hld-hypervisor.rst new file mode 100644 index 000000000..c18e1c15b --- /dev/null +++ b/doc/developer-guides/hld/hld-hypervisor.rst @@ -0,0 +1,11 @@ +.. _hld-hypervisor: + +Hypervisor high-level design +############################ + + +..
toctree:: + :maxdepth: 1 + + Memory management + Interrupt management diff --git a/doc/developer-guides/hld/hld-overview.rst b/doc/developer-guides/hld/hld-overview.rst new file mode 100644 index 000000000..9544f1bd3 --- /dev/null +++ b/doc/developer-guides/hld/hld-overview.rst @@ -0,0 +1,4 @@ +.. _hld-overview: + +Overview +######## diff --git a/doc/developer-guides/hld/hld-power-management.rst b/doc/developer-guides/hld/hld-power-management.rst new file mode 100644 index 000000000..61c719c52 --- /dev/null +++ b/doc/developer-guides/hld/hld-power-management.rst @@ -0,0 +1,4 @@ +.. _hld-power-management: + +Power Management high-level design +################################## diff --git a/doc/developer-guides/security-hld.rst b/doc/developer-guides/hld/hld-security.rst similarity index 99% rename from doc/developer-guides/security-hld.rst rename to doc/developer-guides/hld/hld-security.rst index fcc853a93..5b6fd1544 100644 --- a/doc/developer-guides/security-hld.rst +++ b/doc/developer-guides/hld/hld-security.rst @@ -1,4 +1,4 @@ -.. _security-hld: +.. _hld-security: Security high-level design ########################## diff --git a/doc/developer-guides/hld/hld-trace-log.rst b/doc/developer-guides/hld/hld-trace-log.rst new file mode 100644 index 000000000..42d85620f --- /dev/null +++ b/doc/developer-guides/hld/hld-trace-log.rst @@ -0,0 +1,4 @@ +.. _hld-trace-log: + +Tracing and Logging high-level design +##################################### diff --git a/doc/developer-guides/virtio-hld.rst b/doc/developer-guides/hld/hld-virtio-devices.rst similarity index 75% rename from doc/developer-guides/virtio-hld.rst rename to doc/developer-guides/hld/hld-virtio-devices.rst index 6d7b846b6..5bf97766f 100644 --- a/doc/developer-guides/virtio-hld.rst +++ b/doc/developer-guides/hld/hld-virtio-devices.rst @@ -1,642 +1,499 @@ -.. 
_virtio-hld: - -Virtio high-level design -######################## - -The ACRN Hypervisor follows the `Virtual I/O Device (virtio) -specification -`_ to -realize I/O virtualization for many performance-critical devices -supported in the ACRN project. Adopting the virtio specification lets us -reuse many frontend virtio drivers already available in a Linux-based -User OS, drastically reducing potential development effort for frontend -virtio drivers. To further reduce the development effort of backend -virtio drivers, the hypervisor provides the virtio backend service -(VBS) APIs, that make it very straightforward to implement a virtio -device in the hypervisor. - -The virtio APIs can be divided into 3 groups: DM APIs, virtio backend -service (VBS) APIs, and virtqueue (VQ) APIs, as shown in -:numref:`be-interface`. - -.. figure:: images/virtio-hld-image0.png - :width: 900px - :align: center - :name: be-interface - - ACRN Virtio Backend Service Interface - -- **DM APIs** are exported by the DM, and are mainly used during the - device initialization phase and runtime. The DM APIs also include - PCIe emulation APIs because each virtio device is a PCIe device in - the SOS and UOS. -- **VBS APIs** are mainly exported by the VBS and related modules. - Generally they are callbacks to be - registered into the DM. -- **VQ APIs** are used by a virtio backend device to access and parse - information from the shared memory between the frontend and backend - device drivers. - -Virtio Device -************* - -Virtio framework is the para-virtualization specification that ACRN -follows to implement I/O virtualization of performance-critical -devices such as audio, eAVB/TSN, IPU, and CSMU devices. This section gives -an overview about virtio history, motivation, and advantages, and then -highlights virtio key concepts. Second, this section will describe -ACRN's virtio architectures, and elaborates on ACRN virtio APIs. 
Finally -this section will introduce all the virtio devices currently supported -by ACRN. - -Introduction -============ - -Virtio is an abstraction layer over devices in a para-virtualized -hypervisor. Virtio was developed by Rusty Russell when he worked at IBM -research to support his lguest hypervisor in 2007, and it quickly became -the de-facto standard for KVM's para-virtualized I/O devices. - -Virtio is very popular for virtual I/O devices because is provides a -straightforward, efficient, standard, and extensible mechanism, and -eliminates the need for boutique, per-environment, or per-OS mechanisms. -For example, rather than having a variety of device emulation -mechanisms, virtio provides a common frontend driver framework that -standardizes device interfaces, and increases code reuse across -different virtualization platforms. - -Given the advantages of virtio, ACRN also follows the virtio -specification. - -Key Concepts -============ - -To better understand virtio, especially its usage in ACRN, we'll -highlight several key virtio concepts important to ACRN: - - -Frontend virtio driver (FE) - Virtio adopts a frontend-backend architecture that enables a simple but - flexible framework for both frontend and backend virtio drivers. The FE - driver merely needs to offer services configure the interface, pass messages, - produce requests, and kick backend virtio driver. As a result, the FE - driver is easy to implement and the performance overhead of emulating - a device is eliminated. - -Backend virtio driver (BE) - Similar to FE driver, the BE driver, running either in user-land or - kernel-land of the host OS, consumes requests from the FE driver and sends them - to the host native device driver. Once the requests are done by the host - native device driver, the BE driver notifies the FE driver that the - request is complete. 
- - Note: to distinguish BE driver from host native device driver, the host - native device driver is called "native driver" in this document. - -Straightforward: virtio devices as standard devices on existing buses - Instead of creating new device buses from scratch, virtio devices are - built on existing buses. This gives a straightforward way for both FE - and BE drivers to interact with each other. For example, FE driver could - read/write registers of the device, and the virtual device could - interrupt FE driver, on behalf of the BE driver, in case something of - interest is happening. - - Currently virtio supports PCI/PCIe bus and MMIO bus. In ACRN, only - PCI/PCIe bus is supported, and all the virtio devices share the same - vendor ID 0x1AF4. - - Note: For MMIO, the "bus" is a little bit an overstatement since - basically it is a few descriptors describing the devices. - -Efficient: batching operation is encouraged - Batching operation and deferred notification are important to achieve - high-performance I/O, since notification between FE and BE driver - usually involves an expensive exit of the guest. Therefore batching - operating and notification suppression are highly encouraged if - possible. This will give an efficient implementation for - performance-critical devices. - -Standard: virtqueue - All virtio devices share a standard ring buffer and descriptor - mechanism, called a virtqueue, shown in :numref:`virtqueue`. A virtqueue is a - queue of scatter-gather buffers. There are three important methods on - virtqueues: - - - **add_buf** is for adding a request/response buffer in a virtqueue, - - **get_buf** is for getting a response/request in a virtqueue, and - - **kick** is for notifying the other side for a virtqueue to consume buffers. - - The virtqueues are created in guest physical memory by the FE drivers. - BE drivers only need to parse the virtqueue structures to obtain - the requests and process them. 
How a virtqueue is organized is - specific to the Guest OS. In the Linux implementation of virtio, the - virtqueue is implemented as a ring buffer structure called vring. - - In ACRN, the virtqueue APIs can be leveraged directly so that users - don't need to worry about the details of the virtqueue. (Refer to guest - OS for more details about the virtqueue implementation.) - -.. figure:: images/virtio-hld-image2.png - :width: 900px - :align: center - :name: virtqueue - - Virtqueue - -Extensible: feature bits - A simple extensible feature negotiation mechanism exists for each - virtual device and its driver. Each virtual device could claim its - device specific features while the corresponding driver could respond to - the device with the subset of features the driver understands. The - feature mechanism enables forward and backward compatibility for the - virtual device and driver. - -Virtio Device Modes - The virtio specification defines three modes of virtio devices: - a legacy mode device, a transitional mode device, and a modern mode - device. A legacy mode device is compliant to virtio specification - version 0.95, a transitional mode device is compliant to both - 0.95 and 1.0 spec versions, and a modern mode - device is only compatible to the version 1.0 specification. - - In ACRN, all the virtio devices are transitional devices, meaning that - they should be compatible with both 0.95 and 1.0 versions of virtio - specification. - -Virtio Device Discovery - Virtio devices are commonly implemented as PCI/PCIe devices. A - virtio device using virtio over PCI/PCIe bus must expose an interface to - the Guest OS that meets the PCI/PCIe specifications. - - Conventionally, any PCI device with Vendor ID 0x1AF4, - PCI_VENDOR_ID_REDHAT_QUMRANET, and Device ID 0x1000 through 0x107F - inclusive is a virtio device. 
Among the Device IDs, the - legacy/transitional mode virtio devices occupy the first 64 IDs ranging - from 0x1000 to 0x103F, while the range 0x1040-0x107F belongs to - virtio modern devices. In addition, the Subsystem Vendor ID should - reflect the PCI/PCIe vendor ID of the environment, and the Subsystem - Device ID indicates which virtio device is supported by the device. - -Virtio Frameworks -================= - -This section describes the overall architecture of virtio, and then -introduce ACRN specific implementations of the virtio framework. - -Architecture ------------- - -Virtio adopts a frontend-backend -architecture, as shown in :numref:`virtio-arch`. Basically the FE and BE driver -communicate with each other through shared memory, via the -virtqueues. The FE driver talks to the BE driver in the same way it -would talk to a real PCIe device. The BE driver handles requests -from the FE driver, and notifies the FE driver if the request has been -processed. - -.. figure:: images/virtio-hld-image1.png - :width: 900px - :align: center - :name: virtio-arch - - Virtio Architecture - -In addition to virtio's frontend-backend architecture, both FE and BE -drivers follow a layered architecture, as shown in -:numref:`virtio-fe-be`. Each -side has three layers: transports, core models, and device types. -All virtio devices share the same virtio infrastructure, including -virtqueues, feature mechanisms, configuration space, and buses. - -.. figure:: images/virtio-hld-image4.png - :width: 900px - :align: center - :name: virtio-fe-be - - Virtio Frontend/Backend Layered Architecture - -Virtio Framework Considerations -------------------------------- - -How to realize the virtio framework is specific to a -hypervisor implementation. In ACRN, the virtio framework implementations -can be classified into two types, virtio backend service in user-land -(VBS-U) and virtio backend service in kernel-land (VBS-K), according to -where the virtio backend service (VBS) is located. 
Although different in BE -drivers, both VBS-U and VBS-K share the same FE drivers. The reason -behind the two virtio implementations is to meet the requirement of -supporting a large amount of diverse I/O devices in ACRN project. - -When developing a virtio BE device driver, the device owner should choose -carefully between the VBS-U and VBS-K. Generally VBS-U targets -non-performance-critical devices, but enables easy development and -debugging. VBS-K targets performance critical devices. - -The next two sections introduce ACRN's two implementations of the virtio -framework. - -User-Land Virtio Framework --------------------------- - -The architecture of ACRN user-land virtio framework (VBS-U) is shown in -:numref:`virtio-userland`. - -The FE driver talks to the BE driver as if it were talking with a PCIe -device. This means for "control plane", the FE driver could poke device -registers through PIO or MMIO, and the device will interrupt the FE -driver when something happens. For "data plane", the communication -between the FE and BE driver is through shared memory, in the form of -virtqueues. - -On the service OS side where the BE driver is located, there are several -key components in ACRN, including device model (DM), virtio and HV -service module (VHM), VBS-U, and user-level vring service API helpers. - -DM bridges the FE driver and BE driver since each VBS-U module emulates -a PCIe virtio device. VHM bridges DM and the hypervisor by providing -remote memory map APIs and notification APIs. VBS-U accesses the -virtqueue through the user-level vring service API helpers. - -.. figure:: images/virtio-hld-image3.png - :width: 900px - :align: center - :name: virtio-userland - - ACRN User-Land Virtio Framework - -Kernel-Land Virtio Framework ----------------------------- - -The architecture of ACRN kernel-land virtio framework (VBS-K) is shown -in :numref:`virtio-kernelland`. 
- -VBS-K provides acceleration for performance critical devices emulated by -VBS-U modules by handling the "data plane" of the devices directly in -the kernel. When VBS-K is enabled for certain device, the kernel-land -vring service API helpers are used to access the virtqueues shared by -the FE driver. Compared to VBS-U, this eliminates the overhead of -copying data back-and-forth between user-land and kernel-land within the -service OS, but pays with the extra implementation complexity of the BE -drivers. - -Except for the differences mentioned above, VBS-K still relies on VBS-U -for feature negotiations between FE and BE drivers. This means the -"control plane" of the virtio device still remains in VBS-U. When -feature negotiation is done, which is determined by FE driver setting up -an indicative flag, VBS-K module will be initialized by VBS-U, after -which all request handling will be offloaded to the VBS-K in kernel. - -The FE driver is not aware of how the BE driver is implemented, either -in the VBS-U or VBS-K model. This saves engineering effort regarding FE -driver development. - -.. figure:: images/virtio-hld-image6.png - :width: 900px - :align: center - :name: virtio-kernelland - - ACRN Kernel-Land Virtio Framework - -Virtio APIs -=========== - -This section provides details on the ACRN virtio APIs. As outlined previously, -the ACRN virtio APIs can be divided into three groups: DM_APIs, -VBS_APIs, and VQ_APIs. The following sections will elaborate on -these APIs. - -VBS-U Key Data Structures -------------------------- - -The key data structures for VBS-U are listed as following, and their -relationships are shown in :numref:`VBS-U-data`. - -``struct pci_virtio_blk`` - An example virtio device, such as virtio-blk. -``struct virtio_common`` - A common component to any virtio device. -``struct virtio_ops`` - Virtio specific operation functions for this type of virtio device. 
-``struct pci_vdev`` - Instance of a virtual PCIe device, and any virtio - device is a virtual PCIe device. -``struct pci_vdev_ops`` - PCIe device's operation functions for this type - of device. -``struct vqueue_info`` - Instance of a virtqueue. - -.. figure:: images/virtio-hld-image5.png - :width: 900px - :align: center - :name: VBS-U-data - - VBS-U Key Data Structures - -Each virtio device is a PCIe device. In addition, each virtio device -could have none or multiple virtqueues, depending on the device type. -The ``struct virtio_common`` is a key data structure to be manipulated by -DM, and DM finds other key data structures through it. The ``struct -virtio_ops`` abstracts a series of virtio callbacks to be provided by -device owner. - -VBS-K Key Data Structures -------------------------- - -The key data structures for VBS-K are listed as follows, and their -relationships are shown in :numref:`VBS-K-data`. - -``struct vbs_k_rng`` - In-kernel VBS-K component handling data plane of a - VBS-U virtio device, for example virtio random_num_generator. -``struct vbs_k_dev`` - In-kernel VBS-K component common to all VBS-K. -``struct vbs_k_vq`` - In-kernel VBS-K component to be working with kernel - vring service API helpers. -``struct vbs_k_dev_inf`` - Virtio device information to be synchronized - from VBS-U to VBS-K kernel module. -``struct vbs_k_vq_info`` - A single virtqueue information to be - synchronized from VBS-U to VBS-K kernel module. -``struct vbs_k_vqs_info`` - Virtqueue(s) information, of a virtio device, - to be synchronized from VBS-U to VBS-K kernel module. - -.. figure:: images/virtio-hld-image8.png - :width: 900px - :align: center - :name: VBS-K-data - - VBS-K Key Data Structures - -In VBS-K, the struct vbs_k_xxx represents the in-kernel component -handling a virtio device's data plane. It presents a char device for VBS-U -to open and register device status after feature negotiation with the FE -driver. 
- -The device status includes negotiated features, number of virtqueues, -interrupt information, and more. All these status will be synchronized -from VBS-U to VBS-K. In VBS-U, the ``struct vbs_k_dev_info`` and ``struct -vbs_k_vqs_info`` will collect all the information and notify VBS-K through -ioctls. In VBS-K, the ``struct vbs_k_dev`` and ``struct vbs_k_vq``, which are -common to all VBS-K modules, are the counterparts to preserve the -related information. The related information is necessary to kernel-land -vring service API helpers. - -DM APIs -======= - -The DM APIs are exported by DM, and they should be used when realizing -BE device drivers on ACRN. - -[API Material from doxygen comments] - -VBS APIs -======== - -The VBS APIs are exported by VBS related modules, including VBS, DM, and -SOS kernel modules. They can be classified into VBS-U and VBS-K APIs -listed as follows. - -VBS-U APIs ----------- - -These APIs provided by VBS-U are callbacks to be registered to DM, and -the virtio framework within DM will invoke them appropriately. - -[API Material from doxygen comments] - -VBS-K APIs ----------- - -The VBS-K APIs are exported by VBS-K related modules. Users could use -the following APIs to implement their VBS-K modules. - -APIs provided by DM -~~~~~~~~~~~~~~~~~~~ - -[API Material from doxygen comments] - -APIs provided by VBS-K modules in service OS -~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ - -VQ APIs -------- - -The virtqueue APIs, or VQ APIs, are used by a BE device driver to -access the virtqueues shared by the FE driver. The VQ APIs abstract the -details of virtqueues so that users don't need to worry about the data -structures within the virtqueues. In addition, the VQ APIs are designed -to be identical between VBS-U and VBS-K, so that users don't need to -learn different APIs when implementing BE drivers based on VBS-U and -VBS-K. 
- -[API Material from doxygen comments] - -Below is an example showing a typical logic of how a BE driver handles -requests from a FE driver. - -.. code-block:: c - - static void BE_callback(struct pci_virtio_xxx *pv, struct vqueue_info *vq ) { - while (vq_has_descs(vq)) { - vq_getchain(vq, &idx, &iov, 1, NULL); - /* handle requests in iov */ - request_handle_proc(); - /* Release this chain and handle more */ - vq_relchain(vq, idx, len); - } - /* Generate interrupt if appropriate. 1 means ring empty \*/ - vq_endchains(vq, 1); - } - -Current Virtio Devices -====================== - -This section introduces the status of the current virtio devices -supported in ACRN. All the BE virtio drivers are implemented using the -ACRN virtio APIs, and the FE drivers are reusing the standard Linux FE -virtio drivers. For the devices with FE drivers available in the Linux -kernel, they should use standard virtio Vendor ID/Device ID and -Subsystem Vendor ID/Subsystem Device ID. For other devices within ACRN, -their temporary IDs are listed in the following table. - -.. 
table:: Virtio Devices without existing FE drivers in Linux - :align: center - :name: virtio-device-table - - +--------------+-------------+-------------+-------------+-------------+ - | virtio | Vendor ID | Device ID | Subvendor | Subdevice | - | device | | | ID | ID | - +--------------+-------------+-------------+-------------+-------------+ - | RPMB | 0x8086 | 0x8601 | 0x8086 | 0xFFFF | - +--------------+-------------+-------------+-------------+-------------+ - | HECI | 0x8086 | 0x8602 | 0x8086 | 0xFFFE | - +--------------+-------------+-------------+-------------+-------------+ - | audio | 0x8086 | 0x8603 | 0x8086 | 0xFFFD | - +--------------+-------------+-------------+-------------+-------------+ - | IPU | 0x8086 | 0x8604 | 0x8086 | 0xFFFC | - +--------------+-------------+-------------+-------------+-------------+ - | TSN/AVB | 0x8086 | 0x8605 | 0x8086 | 0xFFFB | - +--------------+-------------+-------------+-------------+-------------+ - | hyper_dmabuf | 0x8086 | 0x8606 | 0x8086 | 0xFFFA | - +--------------+-------------+-------------+-------------+-------------+ - | HDCP | 0x8086 | 0x8607 | 0x8086 | 0xFFF9 | - +--------------+-------------+-------------+-------------+-------------+ - | COREU | 0x8086 | 0x8608 | 0x8086 | 0xFFF8 | - +--------------+-------------+-------------+-------------+-------------+ - -Virtio-rnd -========== - -The virtio-rnd entropy device supplies high-quality randomness for guest -use. The virtio device ID of the virtio-rnd device is 4, and it supports -one virtqueue, the size of which is 64, configurable in the source code. -It has no feature bits defined. - -When the FE driver requires some random bytes, the BE device will place -bytes of random data onto the virtqueue. - -To launch the virtio-rnd device, use the following virtio command:: - - -s ,virtio-rnd - -To verify the correctness in user OS, use the following -command:: - - od /dev/random - -Virtio-blk -========== - -The virtio-blk device is a simple virtual block device. 
The FE driver -places read, write, and other requests onto the virtqueue, so that the -BE driver can process them accordingly. - -The virtio device ID of the virtio-blk is 2, and it supports one -virtqueue, the size of which is 64, configurable in the source code. The -feature bits supported by the BE device are shown as follows: - -VTBLK_F_SEG_MAX(bit 2) - Maximum number of segments in a request is in seg_max. -VTBLK_F_BLK_SIZE(bit 6) - block size of disk is in blk_size. -VTBLK_F_FLUSH(bit 9) - cache flush command support. -VTBLK_F_TOPOLOGY(bit 10) - device exports information on optimal I/O alignment. - -To use the virtio-blk device, use the following virtio command:: - - -s ,virtio-blk,[,options] - - options: - - writethru: write operation is reported completed only when the - data has been written to physical storage. - - writeback: write operation is reported completed when data is - placed in page cache. Needs to be flushed to the physical storage. - - ro: open file with readonly mode. - - sectorsize: - 1> sectorsize=/ - 2> sectorsize= - default values for sector size and physical sector size are 512 - - range: - range=/ - -Successful booting of the User OS verifies the correctness of the -device. - -Virtio-net -========== - -The virtio-net device is a virtual Ethernet device. The virtio device ID -of the virtio-net is 1, and ACRN's virtio-net device supports twp -virtqueues, one for transmitting packets and the other for receiving -packets. The FE driver places empty buffers onto one virtqueue for -receiving packets, and enqueue outgoing packets onto another virtqueue -for transmission. Currently the size of each virtqueue is 1000, -configurable in the source code. - -To access the external network from user OS, as shown in -:numref:`virtio-network`, a L2 virtual switch should be created in the -service OS, and the BE driver is bonded to a tap/tun device linking -under the L2 virtual switch. - -.. 
figure:: images/virtio-hld-image7.png - :width: 900px - :align: center - :name: virtio-network - - Virtio-net Accessing External Network - -Currently the feature bits supported by the BE device are shown as -follows: - -VIRTIO_NET_F_MAC(bit 5) - device has given MAC address. -VIRTIO_NET_F_MRG_RXBUF(bit 15) - BE driver can merge receive buffers. -VIRTIO_NET_F_STATUS(bit 16) - configuration status field is available. -VIRTIO_F_NOTIFY_ON_EMPTY(bit 24) - device will issue an interrupt if - it runs out of available descriptors on a virtqueue. - -To enable the virtio-net device, use the following virtio command:: - - -s ,virtio-net, - -To verify the correctness of the device, access the external -network within the user OS. - -Virtio-console -============== - -The virtio-console device is a simple device for data input and output. -The virtio device ID of the virtio-console device is 3. A device could -have one or up to 16 ports in ACRN. Each port has a pair of input and -output virtqueues. A device has a pair of control virtqueues, which are -used to communicate information between the FE and BE drivers. Currently -the size of each virtqueue is 64, configurable in the source code. - -Similar to virtio-net device, two virtqueues specific to a port are -transmitting virtqueue and receiving virtqueue. The FE driver places -empty buffers onto the receiving virtqueue for incoming data, and -enqueues outgoing characters onto transmitting virtqueue. - -Currently the feature bits supported by the BE device are shown as -follows: - -VTCON_F_SIZE(bit 0) - configuration columns and rows are valid. -VTCON_F_MULTIPORT(bit 1) - device supports multiple ports, and control - virtqueues will be used. -VTCON_F_EMERG_WRITE(bit 2) - device supports emergency write. - -To use the virtio-console device, use the following virtio command:: - - -s ,virtio-console,[@]:[=portpath] - -.. 
note:: - - Here are some notes about the virtio-console device: - - - ``@`` : marks the port as a console port, otherwise it is a normal - virtio serial port - - stdio/tty/pty: tty capable, which means :kbd:`TAB` and :kbd:`BACKSPACE` - are supported as in regular terminals - - When tty are used, please make sure the redirected tty is sleep, e.g. by - "sleep 2d" command, and will not read input from stdin before it is used - by virtio-console to redirect guest output; - - Claiming multiple virtio serial ports as consoles are supported, however - the guest Linux will only use one of them, through "console=hvcN" kernel - parameters, as the hvc. +.. _hld-virtio-devices: +.. _virtio-hld: + +Virtio devices high-level design +################################ + +The ACRN Hypervisor follows the `Virtual I/O Device (virtio) +specification +`_ to +realize I/O virtualization for many performance-critical devices +supported in the ACRN project. Adopting the virtio specification lets us +reuse many frontend virtio drivers already available in a Linux-based +User OS, drastically reducing potential development effort for frontend +virtio drivers. To further reduce the development effort of backend +virtio drivers, the hypervisor provides the virtio backend service +(VBS) APIs that make it very straightforward to implement a virtio +device in the hypervisor. + +The virtio APIs can be divided into 3 groups: DM APIs, virtio backend +service (VBS) APIs, and virtqueue (VQ) APIs, as shown in +:numref:`be-interface`. + +.. figure:: images/virtio-hld-image0.png + :width: 900px + :align: center + :name: be-interface + + ACRN Virtio Backend Service Interface + +- **DM APIs** are exported by the DM, and are mainly used during the + device initialization phase and runtime. The DM APIs also include + PCIe emulation APIs because each virtio device is a PCIe device in + the SOS and UOS. +- **VBS APIs** are mainly exported by the VBS and related modules.
+ Generally they are callbacks to be + registered into the DM. +- **VQ APIs** are used by a virtio backend device to access and parse + information from the shared memory between the frontend and backend + device drivers. + +Virtio framework is the para-virtualization specification that ACRN +follows to implement I/O virtualization of performance-critical +devices such as audio, eAVB/TSN, IPU, and CSMU devices. This section gives +an overview of virtio history, motivation, and advantages, and then +highlights key virtio concepts. Second, this section describes +ACRN's virtio architectures and elaborates on the ACRN virtio APIs. Finally, +this section introduces all the virtio devices currently supported +by ACRN. + +Virtio introduction +******************* + +Virtio is an abstraction layer over devices in a para-virtualized +hypervisor. Virtio was developed by Rusty Russell when he worked at IBM +research to support his lguest hypervisor in 2007, and it quickly became +the de-facto standard for KVM's para-virtualized I/O devices. + +Virtio is very popular for virtual I/O devices because it provides a +straightforward, efficient, standard, and extensible mechanism, and +eliminates the need for boutique, per-environment, or per-OS mechanisms. +For example, rather than having a variety of device emulation +mechanisms, virtio provides a common frontend driver framework that +standardizes device interfaces, and increases code reuse across +different virtualization platforms. + +Given the advantages of virtio, ACRN also follows the virtio +specification. + +Key Concepts +************ + +To better understand virtio, especially its usage in ACRN, we'll +highlight several key virtio concepts important to ACRN: + + +Frontend virtio driver (FE) + Virtio adopts a frontend-backend architecture that enables a simple but + flexible framework for both frontend and backend virtio drivers.
The FE + driver merely needs to offer services to configure the interface, pass messages, + produce requests, and kick the backend virtio driver. As a result, the FE + driver is easy to implement and the performance overhead of emulating + a device is eliminated. + +Backend virtio driver (BE) + Similar to the FE driver, the BE driver, running either in user-land or + kernel-land of the host OS, consumes requests from the FE driver and sends them + to the host native device driver. Once the requests are done by the host + native device driver, the BE driver notifies the FE driver that the + request is complete. + + Note: to distinguish the BE driver from the host native device driver, the host + native device driver is called "native driver" in this document. + +Straightforward: virtio devices as standard devices on existing buses + Instead of creating new device buses from scratch, virtio devices are + built on existing buses. This gives a straightforward way for both FE + and BE drivers to interact with each other. For example, the FE driver could + read/write registers of the device, and the virtual device could + interrupt the FE driver, on behalf of the BE driver, in case something of + interest is happening. + + Currently virtio supports the PCI/PCIe bus and the MMIO bus. In ACRN, only + the PCI/PCIe bus is supported, and all the virtio devices share the same + vendor ID 0x1AF4. + + Note: For MMIO, the "bus" is a bit of an overstatement since + basically it is just a few descriptors describing the devices. + +Efficient: batching operation is encouraged + Batching operation and deferred notification are important to achieve + high-performance I/O, since notification between the FE and BE drivers + usually involves an expensive exit of the guest. Therefore batching + operation and notification suppression are highly encouraged if + possible. This gives an efficient implementation for + performance-critical devices.
+
+Standard: virtqueue
+ All virtio devices share a standard ring buffer and descriptor
+ mechanism, called a virtqueue, shown in :numref:`virtqueue`. A virtqueue is a
+ queue of scatter-gather buffers. There are three important methods on
+ virtqueues:
+
+ - **add_buf** is for adding a request/response buffer in a virtqueue,
+ - **get_buf** is for getting a response/request in a virtqueue, and
+ - **kick** is for notifying the other side to consume buffers from a virtqueue.
+
+ The virtqueues are created in guest physical memory by the FE drivers.
+ BE drivers only need to parse the virtqueue structures to obtain
+ the requests and process them. How a virtqueue is organized is
+ specific to the Guest OS. In the Linux implementation of virtio, the
+ virtqueue is implemented as a ring buffer structure called vring.
+
+ In ACRN, the virtqueue APIs can be leveraged directly so that users
+ don't need to worry about the details of the virtqueue. (Refer to the guest
+ OS implementation for more details about the virtqueue.)
+
+.. figure:: images/virtio-hld-image2.png
+ :width: 900px
+ :align: center
+ :name: virtqueue
+
+ Virtqueue
+
+Extensible: feature bits
+ A simple extensible feature negotiation mechanism exists for each
+ virtual device and its driver. Each virtual device could claim its
+ device-specific features while the corresponding driver could respond to
+ the device with the subset of features the driver understands. The
+ feature mechanism enables forward and backward compatibility for the
+ virtual device and driver.
+
+Virtio Device Modes
+ The virtio specification defines three modes of virtio devices:
+ a legacy mode device, a transitional mode device, and a modern mode
+ device. A legacy mode device is compliant with virtio specification
+ version 0.95, a transitional mode device is compliant with both
+ the 0.95 and 1.0 spec versions, and a modern mode
+ device is compliant with only the version 1.0 specification.
+
+ In ACRN, all the virtio devices are transitional devices, meaning that
+ they should be compatible with both the 0.95 and 1.0 versions of the
+ virtio specification.
+
+Virtio Device Discovery
+ Virtio devices are commonly implemented as PCI/PCIe devices. A
+ virtio device using virtio over the PCI/PCIe bus must expose an interface to
+ the Guest OS that meets the PCI/PCIe specifications.
+
+ Conventionally, any PCI device with Vendor ID 0x1AF4,
+ PCI_VENDOR_ID_REDHAT_QUMRANET, and Device ID 0x1000 through 0x107F
+ inclusive is a virtio device. Among the Device IDs, the
+ legacy/transitional mode virtio devices occupy the first 64 IDs ranging
+ from 0x1000 to 0x103F, while the range 0x1040-0x107F belongs to
+ virtio modern devices. In addition, the Subsystem Vendor ID should
+ reflect the PCI/PCIe vendor ID of the environment, and the Subsystem
+ Device ID indicates which virtio device is supported by the device.
+
+Virtio Frameworks
+*****************
+
+This section describes the overall architecture of virtio, and then
+introduces the ACRN-specific implementations of the virtio framework.
+
+Architecture
+============
+
+Virtio adopts a frontend-backend
+architecture, as shown in :numref:`virtio-arch`. Basically the FE and BE drivers
+communicate with each other through shared memory, via the
+virtqueues. The FE driver talks to the BE driver in the same way it
+would talk to a real PCIe device. The BE driver handles requests
+from the FE driver, and notifies the FE driver when a request has been
+processed.
+
+.. figure:: images/virtio-hld-image1.png
+ :width: 900px
+ :align: center
+ :name: virtio-arch
+
+ Virtio Architecture
+
+In addition to virtio's frontend-backend architecture, both FE and BE
+drivers follow a layered architecture, as shown in
+:numref:`virtio-fe-be`. Each
+side has three layers: transports, core models, and device types.
+All virtio devices share the same virtio infrastructure, including
+virtqueues, feature mechanisms, configuration space, and buses.
+
+.. figure:: images/virtio-hld-image4.png
+ :width: 900px
+ :align: center
+ :name: virtio-fe-be
+
+ Virtio Frontend/Backend Layered Architecture
+
+Virtio Framework Considerations
+===============================
+
+How to realize the virtio framework is specific to each
+hypervisor implementation. In ACRN, the virtio framework implementations
+can be classified into two types, virtio backend service in user-land
+(VBS-U) and virtio backend service in kernel-land (VBS-K), according to
+where the virtio backend service (VBS) is located. Although different in BE
+drivers, VBS-U and VBS-K share the same FE drivers. The reason
+for the two virtio implementations is to meet the requirement of
+supporting a large number of diverse I/O devices in the ACRN project.
+
+When developing a virtio BE device driver, the device owner should choose
+carefully between VBS-U and VBS-K. Generally VBS-U targets
+non-performance-critical devices, but enables easy development and
+debugging. VBS-K targets performance-critical devices.
+
+The next two sections introduce ACRN's two implementations of the virtio
+framework.
+
+User-Land Virtio Framework
+==========================
+
+The architecture of the ACRN user-land virtio framework (VBS-U) is shown in
+:numref:`virtio-userland`.
+
+The FE driver talks to the BE driver as if it were talking with a PCIe
+device. This means for the "control plane", the FE driver could poke device
+registers through PIO or MMIO, and the device will interrupt the FE
+driver when something happens. For the "data plane", the communication
+between the FE and BE drivers is through shared memory, in the form of
+virtqueues.
+
+On the service OS side where the BE driver is located, there are several
+key components in ACRN, including the device model (DM), virtio and HV
+service module (VHM), VBS-U, and user-level vring service API helpers.
+
+The DM bridges the FE driver and BE driver since each VBS-U module emulates
+a PCIe virtio device.
The VHM bridges the DM and the hypervisor by providing
+remote memory map APIs and notification APIs. VBS-U accesses the
+virtqueues through the user-level vring service API helpers.
+
+.. figure:: images/virtio-hld-image3.png
+ :width: 900px
+ :align: center
+ :name: virtio-userland
+
+ ACRN User-Land Virtio Framework
+
+Kernel-Land Virtio Framework
+============================
+
+The architecture of the ACRN kernel-land virtio framework (VBS-K) is shown
+in :numref:`virtio-kernelland`.
+
+VBS-K provides acceleration for performance-critical devices emulated by
+VBS-U modules by handling the "data plane" of the devices directly in
+the kernel. When VBS-K is enabled for a certain device, the kernel-land
+vring service API helpers are used to access the virtqueues shared by
+the FE driver. Compared to VBS-U, this eliminates the overhead of
+copying data back and forth between user-land and kernel-land within the
+service OS, at the cost of extra implementation complexity in the BE
+drivers.
+
+Except for the differences mentioned above, VBS-K still relies on VBS-U
+for feature negotiation between the FE and BE drivers. This means the
+"control plane" of the virtio device still remains in VBS-U. When
+feature negotiation is done, which is determined by the FE driver setting
+an indicative flag, the VBS-K module is initialized by VBS-U, after
+which all request handling is offloaded to VBS-K in the kernel.
+
+The FE driver is not aware of how the BE driver is implemented, whether
+in the VBS-U or VBS-K model. This saves engineering effort in FE
+driver development.
+
+.. figure:: images/virtio-hld-image6.png
+ :width: 900px
+ :align: center
+ :name: virtio-kernelland
+
+ ACRN Kernel-Land Virtio Framework
+
+Virtio APIs
+***********
+
+This section provides details on the ACRN virtio APIs. As outlined previously,
+the ACRN virtio APIs can be divided into three groups: DM_APIs,
+VBS_APIs, and VQ_APIs. The following sections elaborate on
+these APIs.
+
+VBS-U Key Data Structures
+=========================
+
+The key data structures for VBS-U are listed below, and their
+relationships are shown in :numref:`VBS-U-data`.
+
+``struct pci_virtio_blk``
+ An example virtio device, such as virtio-blk.
+``struct virtio_common``
+ A component common to any virtio device.
+``struct virtio_ops``
+ Virtio-specific operation functions for this type of virtio device.
+``struct pci_vdev``
+ Instance of a virtual PCIe device; any virtio
+ device is a virtual PCIe device.
+``struct pci_vdev_ops``
+ PCIe device's operation functions for this type
+ of device.
+``struct vqueue_info``
+ Instance of a virtqueue.
+
+.. figure:: images/virtio-hld-image5.png
+ :width: 900px
+ :align: center
+ :name: VBS-U-data
+
+ VBS-U Key Data Structures
+
+Each virtio device is a PCIe device. In addition, each virtio device
+can have zero or more virtqueues, depending on the device type.
+The ``struct virtio_common`` is a key data structure manipulated by the
+DM, through which the DM finds the other key data structures. The ``struct
+virtio_ops`` abstracts a series of virtio callbacks to be provided by the
+device owner.
+
+VBS-K Key Data Structures
+=========================
+
+The key data structures for VBS-K are listed below, and their
+relationships are shown in :numref:`VBS-K-data`.
+
+``struct vbs_k_rng``
+ In-kernel VBS-K component handling the data plane of a
+ VBS-U virtio device, for example virtio random_num_generator.
+``struct vbs_k_dev``
+ In-kernel VBS-K component common to all VBS-K modules.
+``struct vbs_k_vq``
+ In-kernel VBS-K component that works with the kernel
+ vring service API helpers.
+``struct vbs_k_dev_inf``
+ Virtio device information to be synchronized
+ from VBS-U to the VBS-K kernel module.
+``struct vbs_k_vq_info``
+ Information about a single virtqueue, to be
+ synchronized from VBS-U to the VBS-K kernel module.
+``struct vbs_k_vqs_info``
+ Information about all the virtqueues of a virtio device,
+ to be synchronized from VBS-U to the VBS-K kernel module.
+
+.. figure:: images/virtio-hld-image8.png
+ :width: 900px
+ :align: center
+ :name: VBS-K-data
+
+ VBS-K Key Data Structures
+
+In VBS-K, the ``struct vbs_k_xxx`` represents the in-kernel component
+handling a virtio device's data plane. It presents a char device for VBS-U
+to open and register device status after feature negotiation with the FE
+driver.
+
+The device status includes negotiated features, number of virtqueues,
+interrupt information, and more. All this status information is synchronized
+from VBS-U to VBS-K. In VBS-U, the ``struct vbs_k_dev_info`` and ``struct
+vbs_k_vqs_info`` collect all the information and notify VBS-K through
+ioctls. In VBS-K, the ``struct vbs_k_dev`` and ``struct vbs_k_vq``, which are
+common to all VBS-K modules, are the counterparts that preserve this
+information, which is required by the kernel-land vring service API
+helpers.
+
+DM APIs
+=======
+
+The DM APIs are exported by the DM, and they should be used when realizing
+BE device drivers on ACRN.
+
+[API Material from doxygen comments]
+
+VBS APIs
+========
+
+The VBS APIs are exported by VBS-related modules, including VBS, DM, and
+SOS kernel modules. They can be classified into the VBS-U and VBS-K APIs
+listed below.
+
+VBS-U APIs
+----------
+
+These APIs provided by VBS-U are callbacks to be registered to the DM, and
+the virtio framework within the DM will invoke them appropriately.
+
+[API Material from doxygen comments]
+
+VBS-K APIs
+----------
+
+The VBS-K APIs are exported by VBS-K related modules. Users could use
+the following APIs to implement their VBS-K modules.
+
+APIs provided by DM
+~~~~~~~~~~~~~~~~~~~
+
+[API Material from doxygen comments]
+
+APIs provided by VBS-K modules in service OS
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+[API Material from doxygen comments]
+
+VQ APIs
+=======
+
+The virtqueue APIs, or VQ APIs, are used by a BE device driver to
+access the virtqueues shared by the FE driver. The VQ APIs abstract the
+details of virtqueues so that users don't need to worry about the data
+structures within the virtqueues. In addition, the VQ APIs are designed
+to be identical between VBS-U and VBS-K, so that users don't need to
+learn different APIs when implementing BE drivers based on VBS-U and
+VBS-K.
+
+[API Material from doxygen comments]
+
+Below is an example showing the typical logic of how a BE driver handles
+requests from an FE driver.
+
+.. code-block:: c
+
+   static void BE_callback(struct pci_virtio_xxx *pv, struct vqueue_info *vq)
+   {
+       while (vq_has_descs(vq)) {
+           vq_getchain(vq, &idx, &iov, 1, NULL);
+           /* handle requests in iov */
+           request_handle_proc();
+           /* Release this chain and handle more */
+           vq_relchain(vq, idx, len);
+       }
+       /* Generate interrupt if appropriate. 1 means ring empty */
+       vq_endchains(vq, 1);
+   }
+
+Supported Virtio Devices
+************************
+
+All the BE virtio drivers are implemented using the
+ACRN virtio APIs, and the FE drivers reuse the standard Linux FE
+virtio drivers. Devices with FE drivers available in the Linux
+kernel should use the standard virtio Vendor ID/Device ID and
+Subsystem Vendor ID/Subsystem Device ID. For other devices within ACRN,
+their temporary IDs are listed in the following table.
+
+.. 
table:: Virtio Devices without existing FE drivers in Linux + :align: center + :name: virtio-device-table + + +--------------+-------------+-------------+-------------+-------------+ + | virtio | Vendor ID | Device ID | Subvendor | Subdevice | + | device | | | ID | ID | + +--------------+-------------+-------------+-------------+-------------+ + | RPMB | 0x8086 | 0x8601 | 0x8086 | 0xFFFF | + +--------------+-------------+-------------+-------------+-------------+ + | HECI | 0x8086 | 0x8602 | 0x8086 | 0xFFFE | + +--------------+-------------+-------------+-------------+-------------+ + | audio | 0x8086 | 0x8603 | 0x8086 | 0xFFFD | + +--------------+-------------+-------------+-------------+-------------+ + | IPU | 0x8086 | 0x8604 | 0x8086 | 0xFFFC | + +--------------+-------------+-------------+-------------+-------------+ + | TSN/AVB | 0x8086 | 0x8605 | 0x8086 | 0xFFFB | + +--------------+-------------+-------------+-------------+-------------+ + | hyper_dmabuf | 0x8086 | 0x8606 | 0x8086 | 0xFFFA | + +--------------+-------------+-------------+-------------+-------------+ + | HDCP | 0x8086 | 0x8607 | 0x8086 | 0xFFF9 | + +--------------+-------------+-------------+-------------+-------------+ + | COREU | 0x8086 | 0x8608 | 0x8086 | 0xFFF8 | + +--------------+-------------+-------------+-------------+-------------+ + +The following sections introduce the status of virtio devices currently +supported in ACRN. + +.. toctree:: + :maxdepth: 1 + + virtio-blk + virtio-net + virtio-console + virtio-rnd diff --git a/doc/developer-guides/hld/hld-vm-management.rst b/doc/developer-guides/hld/hld-vm-management.rst new file mode 100644 index 000000000..9512315f7 --- /dev/null +++ b/doc/developer-guides/hld/hld-vm-management.rst @@ -0,0 +1,4 @@ +.. 
_hld-vm-management: + +VM Management high-level design +############################### diff --git a/doc/developer-guides/hld/hld-vsbl.rst b/doc/developer-guides/hld/hld-vsbl.rst new file mode 100644 index 000000000..cd77c99fb --- /dev/null +++ b/doc/developer-guides/hld/hld-vsbl.rst @@ -0,0 +1,4 @@ +.. _hld-vsbl: + +Virtual Slim-Bootloader high-level design +######################################### diff --git a/doc/developer-guides/images/APL_GVT-g-DM.png b/doc/developer-guides/hld/images/APL_GVT-g-DM.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-DM.png rename to doc/developer-guides/hld/images/APL_GVT-g-DM.png diff --git a/doc/developer-guides/images/APL_GVT-g-access-patterns.png b/doc/developer-guides/hld/images/APL_GVT-g-access-patterns.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-access-patterns.png rename to doc/developer-guides/hld/images/APL_GVT-g-access-patterns.png diff --git a/doc/developer-guides/images/APL_GVT-g-api-forwarding.png b/doc/developer-guides/hld/images/APL_GVT-g-api-forwarding.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-api-forwarding.png rename to doc/developer-guides/hld/images/APL_GVT-g-api-forwarding.png diff --git a/doc/developer-guides/images/APL_GVT-g-arch.png b/doc/developer-guides/hld/images/APL_GVT-g-arch.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-arch.png rename to doc/developer-guides/hld/images/APL_GVT-g-arch.png diff --git a/doc/developer-guides/images/APL_GVT-g-direct-display.png b/doc/developer-guides/hld/images/APL_GVT-g-direct-display.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-direct-display.png rename to doc/developer-guides/hld/images/APL_GVT-g-direct-display.png diff --git a/doc/developer-guides/images/APL_GVT-g-display-virt.png b/doc/developer-guides/hld/images/APL_GVT-g-display-virt.png similarity index 100% rename from 
doc/developer-guides/images/APL_GVT-g-display-virt.png rename to doc/developer-guides/hld/images/APL_GVT-g-display-virt.png diff --git a/doc/developer-guides/images/APL_GVT-g-full-pic.png b/doc/developer-guides/hld/images/APL_GVT-g-full-pic.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-full-pic.png rename to doc/developer-guides/hld/images/APL_GVT-g-full-pic.png diff --git a/doc/developer-guides/images/APL_GVT-g-graphics-arch.png b/doc/developer-guides/hld/images/APL_GVT-g-graphics-arch.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-graphics-arch.png rename to doc/developer-guides/hld/images/APL_GVT-g-graphics-arch.png diff --git a/doc/developer-guides/images/APL_GVT-g-hyper-dma.png b/doc/developer-guides/hld/images/APL_GVT-g-hyper-dma.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-hyper-dma.png rename to doc/developer-guides/hld/images/APL_GVT-g-hyper-dma.png diff --git a/doc/developer-guides/images/APL_GVT-g-indirect-display.png b/doc/developer-guides/hld/images/APL_GVT-g-indirect-display.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-indirect-display.png rename to doc/developer-guides/hld/images/APL_GVT-g-indirect-display.png diff --git a/doc/developer-guides/images/APL_GVT-g-interrupt-virt.png b/doc/developer-guides/hld/images/APL_GVT-g-interrupt-virt.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-interrupt-virt.png rename to doc/developer-guides/hld/images/APL_GVT-g-interrupt-virt.png diff --git a/doc/developer-guides/images/APL_GVT-g-ive-use-case.png b/doc/developer-guides/hld/images/APL_GVT-g-ive-use-case.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-ive-use-case.png rename to doc/developer-guides/hld/images/APL_GVT-g-ive-use-case.png diff --git a/doc/developer-guides/images/APL_GVT-g-mediated-pass-through.png b/doc/developer-guides/hld/images/APL_GVT-g-mediated-pass-through.png 
similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-mediated-pass-through.png rename to doc/developer-guides/hld/images/APL_GVT-g-mediated-pass-through.png diff --git a/doc/developer-guides/images/APL_GVT-g-mem-part.png b/doc/developer-guides/hld/images/APL_GVT-g-mem-part.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-mem-part.png rename to doc/developer-guides/hld/images/APL_GVT-g-mem-part.png diff --git a/doc/developer-guides/images/APL_GVT-g-pass-through.png b/doc/developer-guides/hld/images/APL_GVT-g-pass-through.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-pass-through.png rename to doc/developer-guides/hld/images/APL_GVT-g-pass-through.png diff --git a/doc/developer-guides/images/APL_GVT-g-per-vm-shadow.png b/doc/developer-guides/hld/images/APL_GVT-g-per-vm-shadow.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-per-vm-shadow.png rename to doc/developer-guides/hld/images/APL_GVT-g-per-vm-shadow.png diff --git a/doc/developer-guides/images/APL_GVT-g-perf-critical.png b/doc/developer-guides/hld/images/APL_GVT-g-perf-critical.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-perf-critical.png rename to doc/developer-guides/hld/images/APL_GVT-g-perf-critical.png diff --git a/doc/developer-guides/images/APL_GVT-g-plane-based.png b/doc/developer-guides/hld/images/APL_GVT-g-plane-based.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-plane-based.png rename to doc/developer-guides/hld/images/APL_GVT-g-plane-based.png diff --git a/doc/developer-guides/images/APL_GVT-g-scheduling-policy.png b/doc/developer-guides/hld/images/APL_GVT-g-scheduling-policy.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-scheduling-policy.png rename to doc/developer-guides/hld/images/APL_GVT-g-scheduling-policy.png diff --git a/doc/developer-guides/images/APL_GVT-g-scheduling.png 
b/doc/developer-guides/hld/images/APL_GVT-g-scheduling.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-scheduling.png rename to doc/developer-guides/hld/images/APL_GVT-g-scheduling.png diff --git a/doc/developer-guides/images/APL_GVT-g-shared-shadow.png b/doc/developer-guides/hld/images/APL_GVT-g-shared-shadow.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-shared-shadow.png rename to doc/developer-guides/hld/images/APL_GVT-g-shared-shadow.png diff --git a/doc/developer-guides/images/APL_GVT-g-workload.png b/doc/developer-guides/hld/images/APL_GVT-g-workload.png similarity index 100% rename from doc/developer-guides/images/APL_GVT-g-workload.png rename to doc/developer-guides/hld/images/APL_GVT-g-workload.png diff --git a/doc/developer-guides/images/acpi-image1.png b/doc/developer-guides/hld/images/acpi-image1.png similarity index 100% rename from doc/developer-guides/images/acpi-image1.png rename to doc/developer-guides/hld/images/acpi-image1.png diff --git a/doc/developer-guides/images/acpi-image2.png b/doc/developer-guides/hld/images/acpi-image2.png similarity index 100% rename from doc/developer-guides/images/acpi-image2.png rename to doc/developer-guides/hld/images/acpi-image2.png diff --git a/doc/developer-guides/images/acpi-image3.png b/doc/developer-guides/hld/images/acpi-image3.png similarity index 100% rename from doc/developer-guides/images/acpi-image3.png rename to doc/developer-guides/hld/images/acpi-image3.png diff --git a/doc/developer-guides/images/acpi-image5.png b/doc/developer-guides/hld/images/acpi-image5.png similarity index 100% rename from doc/developer-guides/images/acpi-image5.png rename to doc/developer-guides/hld/images/acpi-image5.png diff --git a/doc/developer-guides/images/interrupt-image2.png b/doc/developer-guides/hld/images/interrupt-image2.png similarity index 100% rename from doc/developer-guides/images/interrupt-image2.png rename to 
doc/developer-guides/hld/images/interrupt-image2.png diff --git a/doc/developer-guides/images/interrupt-image3.png b/doc/developer-guides/hld/images/interrupt-image3.png similarity index 100% rename from doc/developer-guides/images/interrupt-image3.png rename to doc/developer-guides/hld/images/interrupt-image3.png diff --git a/doc/developer-guides/images/interrupt-image4.png b/doc/developer-guides/hld/images/interrupt-image4.png similarity index 100% rename from doc/developer-guides/images/interrupt-image4.png rename to doc/developer-guides/hld/images/interrupt-image4.png diff --git a/doc/developer-guides/images/interrupt-image5.png b/doc/developer-guides/hld/images/interrupt-image5.png similarity index 100% rename from doc/developer-guides/images/interrupt-image5.png rename to doc/developer-guides/hld/images/interrupt-image5.png diff --git a/doc/developer-guides/images/interrupt-image6.png b/doc/developer-guides/hld/images/interrupt-image6.png similarity index 100% rename from doc/developer-guides/images/interrupt-image6.png rename to doc/developer-guides/hld/images/interrupt-image6.png diff --git a/doc/developer-guides/images/interrupt-image7.png b/doc/developer-guides/hld/images/interrupt-image7.png similarity index 100% rename from doc/developer-guides/images/interrupt-image7.png rename to doc/developer-guides/hld/images/interrupt-image7.png diff --git a/doc/developer-guides/images/mem-image1.png b/doc/developer-guides/hld/images/mem-image1.png similarity index 100% rename from doc/developer-guides/images/mem-image1.png rename to doc/developer-guides/hld/images/mem-image1.png diff --git a/doc/developer-guides/images/mem-image2.png b/doc/developer-guides/hld/images/mem-image2.png similarity index 100% rename from doc/developer-guides/images/mem-image2.png rename to doc/developer-guides/hld/images/mem-image2.png diff --git a/doc/developer-guides/images/mem-image3.png b/doc/developer-guides/hld/images/mem-image3.png similarity index 100% rename from 
doc/developer-guides/images/mem-image3.png rename to doc/developer-guides/hld/images/mem-image3.png diff --git a/doc/developer-guides/images/mem-image4.png b/doc/developer-guides/hld/images/mem-image4.png similarity index 100% rename from doc/developer-guides/images/mem-image4.png rename to doc/developer-guides/hld/images/mem-image4.png diff --git a/doc/developer-guides/images/mem-image5.png b/doc/developer-guides/hld/images/mem-image5.png similarity index 100% rename from doc/developer-guides/images/mem-image5.png rename to doc/developer-guides/hld/images/mem-image5.png diff --git a/doc/developer-guides/images/mem-image6.png b/doc/developer-guides/hld/images/mem-image6.png similarity index 100% rename from doc/developer-guides/images/mem-image6.png rename to doc/developer-guides/hld/images/mem-image6.png diff --git a/doc/developer-guides/images/mem-image7.png b/doc/developer-guides/hld/images/mem-image7.png similarity index 100% rename from doc/developer-guides/images/mem-image7.png rename to doc/developer-guides/hld/images/mem-image7.png diff --git a/doc/developer-guides/images/network-virt-arch.png b/doc/developer-guides/hld/images/network-virt-arch.png similarity index 100% rename from doc/developer-guides/images/network-virt-arch.png rename to doc/developer-guides/hld/images/network-virt-arch.png diff --git a/doc/developer-guides/images/network-virt-sos-infrastruct.png b/doc/developer-guides/hld/images/network-virt-sos-infrastruct.png similarity index 100% rename from doc/developer-guides/images/network-virt-sos-infrastruct.png rename to doc/developer-guides/hld/images/network-virt-sos-infrastruct.png diff --git a/doc/developer-guides/images/security-image1.png b/doc/developer-guides/hld/images/security-image1.png similarity index 100% rename from doc/developer-guides/images/security-image1.png rename to doc/developer-guides/hld/images/security-image1.png diff --git a/doc/developer-guides/images/security-image10.png 
b/doc/developer-guides/hld/images/security-image10.png similarity index 100% rename from doc/developer-guides/images/security-image10.png rename to doc/developer-guides/hld/images/security-image10.png diff --git a/doc/developer-guides/images/security-image11.png b/doc/developer-guides/hld/images/security-image11.png similarity index 100% rename from doc/developer-guides/images/security-image11.png rename to doc/developer-guides/hld/images/security-image11.png diff --git a/doc/developer-guides/images/security-image12.png b/doc/developer-guides/hld/images/security-image12.png similarity index 100% rename from doc/developer-guides/images/security-image12.png rename to doc/developer-guides/hld/images/security-image12.png diff --git a/doc/developer-guides/images/security-image13.png b/doc/developer-guides/hld/images/security-image13.png similarity index 100% rename from doc/developer-guides/images/security-image13.png rename to doc/developer-guides/hld/images/security-image13.png diff --git a/doc/developer-guides/images/security-image14.png b/doc/developer-guides/hld/images/security-image14.png similarity index 100% rename from doc/developer-guides/images/security-image14.png rename to doc/developer-guides/hld/images/security-image14.png diff --git a/doc/developer-guides/images/security-image2.png b/doc/developer-guides/hld/images/security-image2.png similarity index 100% rename from doc/developer-guides/images/security-image2.png rename to doc/developer-guides/hld/images/security-image2.png diff --git a/doc/developer-guides/images/security-image3.png b/doc/developer-guides/hld/images/security-image3.png similarity index 100% rename from doc/developer-guides/images/security-image3.png rename to doc/developer-guides/hld/images/security-image3.png diff --git a/doc/developer-guides/images/security-image4.png b/doc/developer-guides/hld/images/security-image4.png similarity index 100% rename from doc/developer-guides/images/security-image4.png rename to 
doc/developer-guides/hld/images/security-image4.png diff --git a/doc/developer-guides/images/security-image5.png b/doc/developer-guides/hld/images/security-image5.png similarity index 100% rename from doc/developer-guides/images/security-image5.png rename to doc/developer-guides/hld/images/security-image5.png diff --git a/doc/developer-guides/images/security-image6.png b/doc/developer-guides/hld/images/security-image6.png similarity index 100% rename from doc/developer-guides/images/security-image6.png rename to doc/developer-guides/hld/images/security-image6.png diff --git a/doc/developer-guides/images/security-image7.png b/doc/developer-guides/hld/images/security-image7.png similarity index 100% rename from doc/developer-guides/images/security-image7.png rename to doc/developer-guides/hld/images/security-image7.png diff --git a/doc/developer-guides/images/security-image8.png b/doc/developer-guides/hld/images/security-image8.png similarity index 100% rename from doc/developer-guides/images/security-image8.png rename to doc/developer-guides/hld/images/security-image8.png diff --git a/doc/developer-guides/images/security-image9.png b/doc/developer-guides/hld/images/security-image9.png similarity index 100% rename from doc/developer-guides/images/security-image9.png rename to doc/developer-guides/hld/images/security-image9.png diff --git a/doc/developer-guides/images/uart-image1.png b/doc/developer-guides/hld/images/uart-image1.png similarity index 100% rename from doc/developer-guides/images/uart-image1.png rename to doc/developer-guides/hld/images/uart-image1.png diff --git a/doc/developer-guides/hld/images/virtio-blk-image01.png b/doc/developer-guides/hld/images/virtio-blk-image01.png new file mode 100644 index 000000000..b83054707 Binary files /dev/null and b/doc/developer-guides/hld/images/virtio-blk-image01.png differ diff --git a/doc/developer-guides/hld/images/virtio-blk-image02.png b/doc/developer-guides/hld/images/virtio-blk-image02.png new file mode 100644 
index 000000000..d01581bde Binary files /dev/null and b/doc/developer-guides/hld/images/virtio-blk-image02.png differ diff --git a/doc/developer-guides/images/virtio-console-arch.png b/doc/developer-guides/hld/images/virtio-console-arch.png similarity index 100% rename from doc/developer-guides/images/virtio-console-arch.png rename to doc/developer-guides/hld/images/virtio-console-arch.png diff --git a/doc/developer-guides/images/virtio-hld-image0.png b/doc/developer-guides/hld/images/virtio-hld-image0.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image0.png rename to doc/developer-guides/hld/images/virtio-hld-image0.png diff --git a/doc/developer-guides/images/virtio-hld-image1.png b/doc/developer-guides/hld/images/virtio-hld-image1.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image1.png rename to doc/developer-guides/hld/images/virtio-hld-image1.png diff --git a/doc/developer-guides/images/virtio-hld-image2.png b/doc/developer-guides/hld/images/virtio-hld-image2.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image2.png rename to doc/developer-guides/hld/images/virtio-hld-image2.png diff --git a/doc/developer-guides/images/virtio-hld-image3.png b/doc/developer-guides/hld/images/virtio-hld-image3.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image3.png rename to doc/developer-guides/hld/images/virtio-hld-image3.png diff --git a/doc/developer-guides/images/virtio-hld-image4.png b/doc/developer-guides/hld/images/virtio-hld-image4.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image4.png rename to doc/developer-guides/hld/images/virtio-hld-image4.png diff --git a/doc/developer-guides/images/virtio-hld-image5.png b/doc/developer-guides/hld/images/virtio-hld-image5.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image5.png rename to doc/developer-guides/hld/images/virtio-hld-image5.png diff 
--git a/doc/developer-guides/images/virtio-hld-image6.png b/doc/developer-guides/hld/images/virtio-hld-image6.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image6.png rename to doc/developer-guides/hld/images/virtio-hld-image6.png diff --git a/doc/developer-guides/images/virtio-hld-image7.png b/doc/developer-guides/hld/images/virtio-hld-image7.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image7.png rename to doc/developer-guides/hld/images/virtio-hld-image7.png diff --git a/doc/developer-guides/images/virtio-hld-image8.png b/doc/developer-guides/hld/images/virtio-hld-image8.png similarity index 100% rename from doc/developer-guides/images/virtio-hld-image8.png rename to doc/developer-guides/hld/images/virtio-hld-image8.png diff --git a/doc/developer-guides/images/watchdog-image1.png b/doc/developer-guides/hld/images/watchdog-image1.png similarity index 100% rename from doc/developer-guides/images/watchdog-image1.png rename to doc/developer-guides/hld/images/watchdog-image1.png diff --git a/doc/developer-guides/images/watchdog-image2.png b/doc/developer-guides/hld/images/watchdog-image2.png similarity index 100% rename from doc/developer-guides/images/watchdog-image2.png rename to doc/developer-guides/hld/images/watchdog-image2.png diff --git a/doc/developer-guides/hld/index.rst b/doc/developer-guides/hld/index.rst new file mode 100644 index 000000000..a979156a8 --- /dev/null +++ b/doc/developer-guides/hld/index.rst @@ -0,0 +1,28 @@ +.. _hld: + +High-Level Design Guides +######################## + +The ACRN Hypervisor acts as a host with full control of the processor(s) +and the hardware (physical memory, interrupt management, and I/O). It +provides the User OS with an abstraction of a virtual platform, allowing +the guest to behave as if it were executing directly on a logical +processor. 
+ +These chapters describe the ACRN architecture, high-level design, +background, and motivation for specific areas within the ACRN hypervisor +system. + +.. toctree:: + :maxdepth: 1 + + Overview + Hypervisor + Device Model + Emulated Devices + Virtio Devices + VM Management + Power Management + Tracing and Logging + Virtual Bootloader + Security diff --git a/doc/developer-guides/interrupt-hld.rst b/doc/developer-guides/hld/interrupt-hld.rst similarity index 100% rename from doc/developer-guides/interrupt-hld.rst rename to doc/developer-guides/hld/interrupt-hld.rst diff --git a/doc/developer-guides/memmgt-hld.rst b/doc/developer-guides/hld/memmgt-hld.rst similarity index 100% rename from doc/developer-guides/memmgt-hld.rst rename to doc/developer-guides/hld/memmgt-hld.rst diff --git a/doc/developer-guides/uart-virtualization.rst b/doc/developer-guides/hld/uart-virt-hld.rst similarity index 100% rename from doc/developer-guides/uart-virtualization.rst rename to doc/developer-guides/hld/uart-virt-hld.rst diff --git a/doc/developer-guides/hld/virtio-blk.rst b/doc/developer-guides/hld/virtio-blk.rst new file mode 100644 index 000000000..167dedd2c --- /dev/null +++ b/doc/developer-guides/hld/virtio-blk.rst @@ -0,0 +1,107 @@ +.. _virtio-blk: + +Virtio-blk +########## + +The virtio-blk device is a simple virtual block device. The FE driver +(in the UOS space) places read, write, and other requests onto the +virtqueue, so that the BE driver (in the SOS space) can process them +accordingly. Communication between the FE and BE is based on the virtio +kick and notify mechanism. + +The virtio device ID of the virtio-blk is ``2``, and it supports one +virtqueue, the size of which is 64, configurable in the source code. + +.. 
figure:: images/virtio-blk-image01.png + :align: center + :width: 900px + :name: virtio-blk-arch + + Virtio-blk architecture + +The feature bits supported by the BE device are as follows: + +``VIRTIO_BLK_F_SEG_MAX`` + Maximum number of segments in a request is in seg_max. +``VIRTIO_BLK_F_BLK_SIZE`` + Block size of disk is in blk_size. +``VIRTIO_BLK_F_TOPOLOGY`` + Device exports information on optimal I/O alignment. +``VIRTIO_RING_F_INDIRECT_DESC`` + Support for indirect descriptors. +``VIRTIO_BLK_F_FLUSH`` + Cache flush command support. +``VIRTIO_BLK_F_CONFIG_WCE`` + Device can toggle its cache between writeback and writethrough modes. + + +Virtio-blk-BE design +******************** + +.. figure:: images/virtio-blk-image02.png + :align: center + :width: 900px + :name: virtio-blk-be + +The virtio-blk BE device is implemented as a legacy virtio device. Its +backend media can be a file or a partition. The virtio-blk device +supports writeback and writethrough cache modes. In writeback mode, +virtio-blk has good write and read performance. To be safe, +writethrough is set as the default mode, as it ensures that every write +operation queued to the virtio-blk FE driver layer is submitted to +hardware storage. + +During initialization, virtio-blk will allocate 64 ioreq buffers in a +shared ring used to store the I/O requests. The freeq, busyq, and pendq +shown in :numref:`virtio-blk-be` are used to manage requests. Each +virtio-blk device starts 8 worker threads to process requests +asynchronously. + + +Usage +***** + +The device model configuration command syntax for virtio-blk is:: + + -s <slot>,virtio-blk,<filepath>[,options] + +- ``filepath`` is the path of a file or disk partition +- ``options`` include: + + - ``writethru``: a write operation is reported as completed only when the + data has been written to physical storage. + - ``writeback``: a write operation is reported as completed when the data is + placed in the page cache; it still needs to be flushed to the physical storage. 
- ``ro``: open the file in read-only mode. + - ``sectorsize``: configured as either + ``sectorsize=<sector size>/<physical sector size>`` or + ``sectorsize=<sector size>``. + The default values for the sector size and physical sector size are 512. + - ``range``: configured as ``range=<start lba in file>/<sub file size>``, + meaning the virtio-blk will only access part of the file, from + ``<start lba in file>`` to ``<start lba in file> + <sub file size>``. + +A simple example for virtio-blk: + +1. Prepare a file in the SOS folder:: + + dd if=/dev/zero of=test.img bs=1M count=1024 + mkfs.ext4 test.img + +#. Add virtio-blk to the DM cmdline; the slot number must not duplicate + that of another device:: + + -s 9,virtio-blk,/root/test.img + +#. Launch the UOS; you can find ``/dev/vdx`` in the UOS. + + The ``x`` in ``/dev/vdx`` is related to the slot number used. If + you start the DM with two virtio-blks, and the slot numbers are 9 and 10, + then the device with slot 9 will be recognized as ``/dev/vda``, and + the device with slot 10 will be ``/dev/vdb``. + +#. Mount ``/dev/vdx`` to a folder in the UOS, and then you can access it. + + +Successful booting of the User OS verifies the correctness of the +device. diff --git a/doc/developer-guides/virtio-console.rst b/doc/developer-guides/hld/virtio-console.rst similarity index 98% rename from doc/developer-guides/virtio-console.rst rename to doc/developer-guides/hld/virtio-console.rst index 14f6a9e2a..d84332c32 100644 --- a/doc/developer-guides/virtio-console.rst +++ b/doc/developer-guides/hld/virtio-console.rst @@ -1,7 +1,7 @@ -.. virtio-console: +.. _virtio-console: -Virtio-Console High-Level design -################################ +Virtio-console +############## The Virtio-console is a simple device for data input and output. The console's virtio device ID is ``3`` and can have from 1 to 16 ports. @@ -181,3 +181,4 @@ The File backend only supports console output to a file (no input). #. 
Add the console parameter to the guest OS kernel command line:: console=hvc0 + diff --git a/doc/developer-guides/network-virt-hld.rst b/doc/developer-guides/hld/virtio-net.rst similarity index 89% rename from doc/developer-guides/network-virt-hld.rst rename to doc/developer-guides/hld/virtio-net.rst index 8ec6cde14..b1d5ad013 100644 --- a/doc/developer-guides/network-virt-hld.rst +++ b/doc/developer-guides/hld/virtio-net.rst @@ -1,556 +1,525 @@ -.. net-virt-hld: - -Network Virtualization -###################### - -Introduction -************ - -Virtio-net is the para-virtualization solution used in ACRN for -networking. The ACRN device model emulates virtual NICs for UOS and the -frontend virtio network driver, simulating the virtual NIC and following -the virtio specification. (Refer to :ref:`introduction` and -:ref:`virtio-hld` background introductions to ACRN and Virtio.) - -Supported Features Notes -************************ - -Here are some notes about Virtio-net support in ACRN: - -- Legacy devices are supported, modern devices are not supported -- Two virtqueues are used in virtio-net: RX queue and TX queue -- Indirect descriptor is supported -- TAP backend is supported -- Control queue is not supported -- NIC multiple queues are not supported - -Network Virtualization Architecture -=================================== - -ACRN's network virtualization architecture is shown below in -:numref:`net-virt-arch`, and illustrates the many necessary network -virtualization components that must cooperate for the UOS to send and -receive data from the outside world. - -.. figure:: images/network-virt-arch.png - :align: center - :width: 900px - :name: net-virt-arch - - Network Virtualization Architecture - -(The green components are parts of the ACRN solution, while the gray -components are parts of the Linux kernel.) - -Let's explore these components further. 
- -SOS/UOS Network Stack: - This is the standard Linux TCP/IP stack, currently the most - feature-rich TCP/IP implementation. - -virtio-net Frontend Driver: - This is the standard driver in the Linux Kernel for virtual Ethernet - devices. This driver matches devices with PCI vendor ID 0x1AF4 and PCI - Device ID 0x1000 (for legacy devices in our case) or 0x1041 (for modern - devices). The virtual NIC supports two virtqueues, one for transmitting - packets and the other for receiving packets. The frontend driver places - empty buffers into one virtqueue for receiving packets, and enqueues - outgoing packets into another virtqueue for transmission. The size of - each virtqueue is 1024, configurable in the virtio-net backend driver. - -ACRN Hypervisor: - The ACRN hypervisor is a type 1 hypervisor, running directly on the - bare-metal hardware, and suitable for a variety of IoT and embedded - device solutions. It fetches and analyzes the guest instructions, puts - the decoded information into the shared page as an IOREQ, and notifies - or interrupts the VHM module in the SOS for processing. - -VHM Module: - The Virtio and Hypervisor Service Module (VHM) is a kernel module in the - Service OS (SOS) acting as a middle layer to support the device model - and hypervisor. The VHM forwards a IOREQ to the virtio-net backend - driver for processing. - -ACRN Device Model and virtio-net Backend Driver: - The ACRN Device Model (DM) gets an IOREQ from a shared page and calls - the virtio-net backend driver to process the request. The backend driver - receives the data in a shared virtqueue and sends it to the TAP device. - -Bridge and Tap Device: - Bridge and Tap are standard virtual network infrastructures. They play - an important role in communication among the SOS, the UOS, and the - outside world. - -IGB Driver: - IGB is the physical Network Interface Card (NIC) Linux kernel driver - responsible for sending data to and receiving data from the physical - NIC. 
- -The virtual network card (NIC) is implemented as a virtio legacy device -in the ACRN device model (DM). It is registered as a PCI virtio device -to the guest OS (UOS) and uses the standard virtio-net in the Linux kernel as -its driver (the guest kernel should be built with -``CONFIG_VIRTIO_NET=y``). - -The virtio-net backend in DM forwards the data received from the -frontend to the TAP device, then from the TAP device to the bridge, and -finally from the bridge to the physical NIC driver, and vice versa for -returning data from the NIC to the frontend. - -ACRN Virtio-Network Calling Stack -********************************* - -Various components of ACRN network virtualization are shown in the -architecture diagram shows in :numref:`net-virt-arch`. In this section, -we will use UOS data transmission (TX) and reception (RX) examples to -explain step-by-step how these components work together to implement -ACRN network virtualization. - -Initialization in Device Model -============================== - -virtio_net_init ---------------- - -- Present frontend a virtual PCI based NIC -- Setup control plan callbacks -- Setup data plan callbacks, including TX, RX -- Setup tap backend - -Initialization in virtio-net Frontend Driver -============================================ - -virtio_pci_probe ----------------- - -- Construct virtio device using virtual pci device and register it to - virtio bus - -virtio_dev_probe --> virtnet_probe --> init_vqs ------------------------------------------------ - -- Register network driver -- Setup shared virtqueues - -ACRN UOS TX FLOW -================ - -The following shows the ACRN UOS network TX flow, using TCP as an -example, showing the flow through each layer: - -UOS TCP Layer -------------- - -.. code-block:: c - - tcp_sendmsg --> - tcp_sendmsg_locked --> - tcp_push_one --> - tcp_write_xmit --> - tcp_transmit_skb --> - -UOS IP Layer ------------- - -.. 
code-block:: c - - ip_queue_xmit --> - ip_local_out --> - __ip_local_out --> - dst_output --> - ip_output --> - ip_finish_output --> - ip_finish_output2 --> - neigh_output --> - neigh_resolve_output --> - -UOS MAC Layer -------------- - -.. code-block:: c - - dev_queue_xmit --> - __dev_queue_xmit --> - dev_hard_start_xmit --> - xmit_one --> - netdev_start_xmit --> - __netdev_start_xmit --> - - -UOS MAC Layer virtio-net Frontend Driver ----------------------------------------- - -.. code-block:: c - - start_xmit --> // virtual NIC driver xmit in virtio_net - xmit_skb --> - virtqueue_add_outbuf --> // add out buffer to shared virtqueue - virtqueue_add --> - - virtqueue_kick --> // notify the backend - virtqueue_notify --> - vp_notify --> - iowrite16 --> // trap here, HV will first get notified - -ACRN Hypervisor ---------------- - -.. code-block:: c - - vmexit_handler --> // vmexit because VMX_EXIT_REASON_IO_INSTRUCTION - pio_instr_vmexit_handler --> - emulate_io --> // ioreq cant be processed in HV, forward it to VHM - acrn_insert_request_wait --> - fire_vhm_interrupt --> // interrupt SOS, VHM will get notified - -VHM Module ----------- - -.. code-block:: c - - vhm_intr_handler --> // VHM interrupt handler - tasklet_schedule --> - io_req_tasklet --> - acrn_ioreq_distribute_request --> // ioreq can't be processed in VHM, forward it to device DM - acrn_ioreq_notify_client --> - wake_up_interruptible --> // wake up DM to handle ioreq - -ACRN Device Model / virtio-net Backend Driver ---------------------------------------------- - -.. 
code-block:: c - - handle_vmexit --> - vmexit_inout --> - emulate_inout --> - pci_emul_io_handler --> - virtio_pci_write --> - virtio_pci_legacy_write --> - virtio_net_ping_txq --> // start TX thread to process, notify thread return - virtio_net_tx_thread --> // this is TX thread - virtio_net_proctx --> // call corresponding backend (tap) to process - virtio_net_tap_tx --> - writev --> // write data to tap device - -SOS TAP Device Forwarding -------------------------- - -.. code-block:: c - - do_writev --> - vfs_writev --> - do_iter_write --> - do_iter_readv_writev --> - call_write_iter --> - tun_chr_write_iter --> - tun_get_user --> - netif_receive_skb --> - netif_receive_skb_internal --> - __netif_receive_skb --> - __netif_receive_skb_core --> - - -SOS Bridge Forwarding ---------------------- - -.. code-block:: c - - br_handle_frame --> - br_handle_frame_finish --> - br_forward --> - __br_forward --> - br_forward_finish --> - br_dev_queue_push_xmit --> - -SOS MAC Layer -------------- - -.. code-block:: c - - dev_queue_xmit --> - __dev_queue_xmit --> - dev_hard_start_xmit --> - xmit_one --> - netdev_start_xmit --> - __netdev_start_xmit --> - - -SOS MAC Layer IGB Driver ------------------------- - -.. code-block:: c - - igb_xmit_frame --> // IGB physical NIC driver xmit function - -ACRN UOS RX FLOW -================ - -The following shows the ACRN UOS network RX flow, using TCP as an example. -Let's start by receiving a device interrupt. (Note that the hypervisor -will first get notified when receiving an interrupt even in passthrough -cases.) - -Hypervisor Interrupt Dispatch ------------------------------ - -.. 
code-block:: c - - vmexit_handler --> // vmexit because VMX_EXIT_REASON_EXTERNAL_INTERRUPT - external_interrupt_vmexit_handler --> - dispatch_interrupt --> - common_handler_edge --> - ptdev_interrupt_handler --> - ptdev_enqueue_softirq --> // Interrupt will be delivered in bottom-half softirq - - -Hypervisor Interrupt Injection ------------------------------- - -.. code-block:: c - - do_softirq --> - ptdev_softirq --> - vlapic_intr_msi --> // insert the interrupt into SOS - - start_vcpu --> // VM Entry here, will process the pending interrupts - -SOS MAC Layer IGB Driver ------------------------- - -.. code-block:: c - - do_IRQ --> - ... - igb_msix_ring --> - igbpoll --> - napi_gro_receive --> - napi_skb_finish --> - netif_receive_skb_internal --> - __netif_receive_skb --> - __netif_receive_skb_core -- - -SOS Bridge Forwarding ---------------------- - -.. code-block:: c - - br_handle_frame --> - br_handle_frame_finish --> - br_forward --> - __br_forward --> - br_forward_finish --> - br_dev_queue_push_xmit --> - -SOS MAC Layer -------------- - -.. code-block:: c - - dev_queue_xmit --> - __dev_queue_xmit --> - dev_hard_start_xmit --> - xmit_one --> - netdev_start_xmit --> - __netdev_start_xmit --> - -SOS MAC Layer TAP Driver ------------------------- - -.. code-block:: c - - tun_net_xmit --> // Notify and wake up reader process - -ACRN Device Model / virtio-net Backend Driver ---------------------------------------------- - -.. code-block:: c - - virtio_net_rx_callback --> // the tap fd get notified and this function invoked - virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the UOS - vq_endchains --> - vq_interrupt --> - pci_generate_msi --> - -VHM Module ----------- - -.. code-block:: c - - vhm_dev_ioctl --> // process the IOCTL and call hypercall to inject interrupt - hcall_inject_msi --> - -ACRN Hypervisor ---------------- - -.. 
code-block:: c - - vmexit_handler --> // vmexit because VMX_EXIT_REASON_VMCALL - vmcall_vmexit_handler --> - hcall_inject_msi --> // insert interrupt into UOS - vlapic_intr_msi --> - -UOS MAC Layer virtio_net Frontend Driver ----------------------------------------- - -.. code-block:: c - - vring_interrupt --> // virtio-net frontend driver interrupt handler - skb_recv_done --> //registed by virtnet_probe-->init_vqs-->virtnet_find_vqs - virtqueue_napi_schedule --> - __napi_schedule --> - virtnet_poll --> - virtnet_receive --> - receive_buf --> - -UOS MAC Layer -------------- - -.. code-block:: c - - napi_gro_receive --> - napi_skb_finish --> - netif_receive_skb_internal --> - __netif_receive_skb --> - __netif_receive_skb_core --> - -UOS IP Layer ------------- - -.. code-block:: c - - ip_rcv --> - ip_rcv_finish --> - dst_input --> - ip_local_deliver --> - ip_local_deliver_finish --> - - -UOS TCP Layer -------------- - -.. code-block:: c - - tcp_v4_rcv --> - tcp_v4_do_rcv --> - tcp_rcv_established --> - tcp_data_queue --> - tcp_queue_rcv --> - __skb_queue_tail --> - - sk->sk_data_ready --> // application will get notified - -How to Use -========== - -The network infrastructure shown in :numref:`net-virt-infra` needs to be -prepared in the SOS before we start. We need to create a bridge and at -least one tap device (two tap devices are needed to create a dual -virtual NIC) and attach a physical NIC and tap device to the bridge. - -.. figure:: images/network-virt-sos-infrastruct.png - :align: center - :width: 900px - :name: net-virt-infra - - Network Infrastructure in SOS - -You can use Linux commands (e.g. ip, brctl) to create this network. In -our case, we use systemd to automatically create the network by default. 
-You can check the files with prefix 50- in the SOS -``/usr/lib/systemd/network/``: - -- `50-acrn.netdev `__ -- `50-acrn.network `__ -- `50-acrn_tap0.netdev `__ -- `50-eth.network `__ - -When the SOS is started, run ``ifconfig`` to show the devices created by -this systemd configuration: - -.. code-block:: none - - acrn-br0 Link encap:Ethernet HWaddr B2:50:41:FE:F7:A3 - inet addr:10.239.154.43 Bcast:10.239.154.255 Mask:255.255.255.0 - inet6 addr: fe80::b050:41ff:fefe:f7a3/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:226932 errors:0 dropped:21383 overruns:0 frame:0 - TX packets:14816 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:100457754 (95.8 Mb) TX bytes:83481244 (79.6 Mb) - - acrn_tap0 Link encap:Ethernet HWaddr F6:A7:7E:52:50:C6 - UP BROADCAST MULTICAST MTU:1500 Metric:1 - RX packets:0 errors:0 dropped:0 overruns:0 frame:0 - TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) - - enp3s0 Link encap:Ethernet HWaddr 98:4F:EE:14:5B:74 - inet6 addr: fe80::9a4f:eeff:fe14:5b74/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:279174 errors:0 dropped:0 overruns:0 frame:0 - TX packets:69923 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:107312294 (102.3 Mb) TX bytes:87117507 (83.0 Mb) - Memory:82200000-8227ffff - - lo Link encap:Local Loopback - inet addr:127.0.0.1 Mask:255.0.0.0 - inet6 addr: ::1/128 Scope:Host - UP LOOPBACK RUNNING MTU:65536 Metric:1 - RX packets:16 errors:0 dropped:0 overruns:0 frame:0 - TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:1216 (1.1 Kb) TX bytes:1216 (1.1 Kb) - -Run ``brctl show`` to see the bridge ``acrn-br0`` and attached devices: - -.. 
code-block:: none - - bridge name bridge id STP enabled interfaces - - acrn-br0 8000.b25041fef7a3 no acrn_tap0 - enp3s0 - -Add a pci slot to the device model acrn-dm command line (mac address is -optional): - -.. code-block:: none - - -s 4,virtio-net,,[mac=] - -When the UOS is lauched, run ``ifconfig`` to check the network. enp0s4r -is the virtual NIC created by acrn-dm: - -.. code-block:: none - - enp0s4 Link encap:Ethernet HWaddr 00:16:3E:39:0F:CD - inet addr:10.239.154.186 Bcast:10.239.154.255 Mask:255.255.255.0 - inet6 addr: fe80::216:3eff:fe39:fcd/64 Scope:Link - UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 - RX packets:140 errors:0 dropped:8 overruns:0 frame:0 - TX packets:46 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:110727 (108.1 Kb) TX bytes:4474 (4.3 Kb) - - lo Link encap:Local Loopback - inet addr:127.0.0.1 Mask:255.0.0.0 - inet6 addr: ::1/128 Scope:Host - UP LOOPBACK RUNNING MTU:65536 Metric:1 - RX packets:0 errors:0 dropped:0 overruns:0 frame:0 - TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 - collisions:0 txqueuelen:1000 - RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) - -Performance Estimation -====================== - -We've introduced the network virtualization solution in ACRN, from the -top level architecture to the detailed TX and RX flow. Currently, the -control plane and data plane are all processed in ACRN device model, -which may bring some overhead. But this is not a bottleneck for 1000Mbit -NICs or below. Network bandwidth for virtualization can be very close to -the native bandwidgh. For high speed NIC (e.g. 10Gb or above), it is -necessary to separate the data plane from the control plane. We can use -vhost for acceleration. For most IoT scenarios, processing in user space -is simple and reasonable. +.. _virtio-net: + +Virtio-net +########## + +Virtio-net is the para-virtualization solution used in ACRN for +networking. 
The ACRN device model emulates a virtual NIC for the UOS, and the +frontend virtio network driver in the guest drives this virtual NIC, +following the virtio specification. (Refer to :ref:`introduction` and +:ref:`virtio-hld` for background introductions to ACRN and virtio.) + +Here are some notes about Virtio-net support in ACRN: + +- Legacy devices are supported; modern devices are not supported +- Two virtqueues are used in virtio-net: the RX queue and the TX queue +- Indirect descriptors are supported +- The TAP backend is supported +- The control queue is not supported +- Multiple NIC queues are not supported + +Network Virtualization Architecture +*********************************** + +ACRN's network virtualization architecture is shown in +:numref:`net-virt-arch` below, which illustrates the many network +virtualization components that must cooperate for the UOS to send and +receive data to and from the outside world. + +.. figure:: images/network-virt-arch.png + :align: center + :width: 900px + :name: net-virt-arch + + Network Virtualization Architecture + +(The green components are parts of the ACRN solution, while the gray +components are parts of the Linux kernel.) + +Let's explore these components further. + +SOS/UOS Network Stack: + This is the standard Linux TCP/IP stack, currently the most + feature-rich TCP/IP implementation. + +virtio-net Frontend Driver: + This is the standard driver in the Linux kernel for virtual Ethernet + devices. This driver matches devices with PCI vendor ID 0x1AF4 and PCI + Device ID 0x1000 (for legacy devices in our case) or 0x1041 (for modern + devices). The virtual NIC supports two virtqueues, one for transmitting + packets and the other for receiving packets. The frontend driver places + empty buffers into one virtqueue for receiving packets, and enqueues + outgoing packets into another virtqueue for transmission. The size of + each virtqueue is 1024, configurable in the virtio-net backend driver. 
+ +ACRN Hypervisor: + The ACRN hypervisor is a type 1 hypervisor, running directly on the + bare-metal hardware, and suitable for a variety of IoT and embedded + device solutions. It fetches and analyzes the guest instructions, puts + the decoded information into the shared page as an IOREQ, and notifies + or interrupts the VHM module in the SOS for processing. + +VHM Module: + The Virtio and Hypervisor Service Module (VHM) is a kernel module in the + Service OS (SOS) acting as a middle layer to support the device model + and hypervisor. The VHM forwards an IOREQ to the virtio-net backend + driver for processing. + +ACRN Device Model and virtio-net Backend Driver: + The ACRN Device Model (DM) gets an IOREQ from a shared page and calls + the virtio-net backend driver to process the request. The backend driver + receives the data in a shared virtqueue and sends it to the TAP device. + +Bridge and Tap Device: + Bridge and Tap are standard virtual network infrastructures. They play + an important role in communication among the SOS, the UOS, and the + outside world. + +IGB Driver: + IGB is the physical Network Interface Card (NIC) Linux kernel driver + responsible for sending data to and receiving data from the physical + NIC. + +The virtual network card (NIC) is implemented as a virtio legacy device +in the ACRN device model (DM). It is registered as a PCI virtio device +to the guest OS (UOS) and uses the standard virtio-net driver in the Linux kernel +(the guest kernel should be built with +``CONFIG_VIRTIO_NET=y``). + +The virtio-net backend in DM forwards the data received from the +frontend to the TAP device, then from the TAP device to the bridge, and +finally from the bridge to the physical NIC driver, and vice versa for +returning data from the NIC to the frontend. 
+ +ACRN Virtio-Network Calling Stack +********************************* + +Various components of ACRN network virtualization are shown in the +architecture diagram in :numref:`net-virt-arch`. In this section, +we will use UOS data transmission (TX) and reception (RX) examples to +explain step-by-step how these components work together to implement +ACRN network virtualization. + +Initialization in Device Model +============================== + +**virtio_net_init** + +- Present the frontend with a virtual PCI-based NIC +- Set up control plane callbacks +- Set up data plane callbacks, including TX and RX +- Set up the tap backend + +Initialization in virtio-net Frontend Driver +============================================ + +**virtio_pci_probe** + +- Construct the virtio device using the virtual PCI device and register it + on the virtio bus + +**virtio_dev_probe --> virtnet_probe --> init_vqs** + +- Register the network driver +- Set up shared virtqueues + +ACRN UOS TX FLOW +================ + +The following shows the ACRN UOS network TX flow, using TCP as an +example, showing the flow through each layer: + +**UOS TCP Layer** + +.. code-block:: c + + tcp_sendmsg --> + tcp_sendmsg_locked --> + tcp_push_one --> + tcp_write_xmit --> + tcp_transmit_skb --> + +**UOS IP Layer** + +.. code-block:: c + + ip_queue_xmit --> + ip_local_out --> + __ip_local_out --> + dst_output --> + ip_output --> + ip_finish_output --> + ip_finish_output2 --> + neigh_output --> + neigh_resolve_output --> + +**UOS MAC Layer** + +.. code-block:: c + + dev_queue_xmit --> + __dev_queue_xmit --> + dev_hard_start_xmit --> + xmit_one --> + netdev_start_xmit --> + __netdev_start_xmit --> + + +**UOS MAC Layer virtio-net Frontend Driver** + +.. 
code-block:: c + + start_xmit --> // virtual NIC driver xmit in virtio_net + xmit_skb --> + virtqueue_add_outbuf --> // add out buffer to shared virtqueue + virtqueue_add --> + + virtqueue_kick --> // notify the backend + virtqueue_notify --> + vp_notify --> + iowrite16 --> // trap here, HV will first get notified + +**ACRN Hypervisor** + +.. code-block:: c + + vmexit_handler --> // vmexit because VMX_EXIT_REASON_IO_INSTRUCTION + pio_instr_vmexit_handler --> + emulate_io --> // ioreq can't be processed in HV, forward it to VHM + acrn_insert_request_wait --> + fire_vhm_interrupt --> // interrupt SOS, VHM will get notified + +**VHM Module** + +.. code-block:: c + + vhm_intr_handler --> // VHM interrupt handler + tasklet_schedule --> + io_req_tasklet --> + acrn_ioreq_distribute_request --> // ioreq can't be processed in VHM, forward it to device DM + acrn_ioreq_notify_client --> + wake_up_interruptible --> // wake up DM to handle ioreq + +**ACRN Device Model / virtio-net Backend Driver** + +.. code-block:: c + + handle_vmexit --> + vmexit_inout --> + emulate_inout --> + pci_emul_io_handler --> + virtio_pci_write --> + virtio_pci_legacy_write --> + virtio_net_ping_txq --> // start TX thread to process, notify thread return + virtio_net_tx_thread --> // this is TX thread + virtio_net_proctx --> // call corresponding backend (tap) to process + virtio_net_tap_tx --> + writev --> // write data to tap device + +**SOS TAP Device Forwarding** + +.. code-block:: c + + do_writev --> + vfs_writev --> + do_iter_write --> + do_iter_readv_writev --> + call_write_iter --> + tun_chr_write_iter --> + tun_get_user --> + netif_receive_skb --> + netif_receive_skb_internal --> + __netif_receive_skb --> + __netif_receive_skb_core --> + + +**SOS Bridge Forwarding** + +.. code-block:: c + + br_handle_frame --> + br_handle_frame_finish --> + br_forward --> + __br_forward --> + br_forward_finish --> + br_dev_queue_push_xmit --> + +**SOS MAC Layer** + +.. 
code-block:: c + + dev_queue_xmit --> + __dev_queue_xmit --> + dev_hard_start_xmit --> + xmit_one --> + netdev_start_xmit --> + __netdev_start_xmit --> + + +**SOS MAC Layer IGB Driver** + +.. code-block:: c + + igb_xmit_frame --> // IGB physical NIC driver xmit function + +ACRN UOS RX FLOW +================ + +The following shows the ACRN UOS network RX flow, using TCP as an example. +Let's start by receiving a device interrupt. (Note that the hypervisor +will first get notified when receiving an interrupt even in passthrough +cases.) + +**Hypervisor Interrupt Dispatch** + +.. code-block:: c + + vmexit_handler --> // vmexit because VMX_EXIT_REASON_EXTERNAL_INTERRUPT + external_interrupt_vmexit_handler --> + dispatch_interrupt --> + common_handler_edge --> + ptdev_interrupt_handler --> + ptdev_enqueue_softirq --> // Interrupt will be delivered in bottom-half softirq + + +**Hypervisor Interrupt Injection** + +.. code-block:: c + + do_softirq --> + ptdev_softirq --> + vlapic_intr_msi --> // insert the interrupt into SOS + + start_vcpu --> // VM Entry here, will process the pending interrupts + +**SOS MAC Layer IGB Driver** + +.. code-block:: c + + do_IRQ --> + ... + igb_msix_ring --> + igb_poll --> + napi_gro_receive --> + napi_skb_finish --> + netif_receive_skb_internal --> + __netif_receive_skb --> + __netif_receive_skb_core --> + +**SOS Bridge Forwarding** + +.. code-block:: c + + br_handle_frame --> + br_handle_frame_finish --> + br_forward --> + __br_forward --> + br_forward_finish --> + br_dev_queue_push_xmit --> + +**SOS MAC Layer** + +.. code-block:: c + + dev_queue_xmit --> + __dev_queue_xmit --> + dev_hard_start_xmit --> + xmit_one --> + netdev_start_xmit --> + __netdev_start_xmit --> + +**SOS MAC Layer TAP Driver** + +.. code-block:: c + + tun_net_xmit --> // Notify and wake up reader process + +**ACRN Device Model / virtio-net Backend Driver** + +.. 
code-block:: c + + virtio_net_rx_callback --> // the tap fd gets notified and this function is invoked + virtio_net_tap_rx --> // read data from tap, prepare virtqueue, insert interrupt into the UOS + vq_endchains --> + vq_interrupt --> + pci_generate_msi --> + +**VHM Module** + +.. code-block:: c + + vhm_dev_ioctl --> // process the IOCTL and call hypercall to inject interrupt + hcall_inject_msi --> + +**ACRN Hypervisor** + +.. code-block:: c + + vmexit_handler --> // vmexit because VMX_EXIT_REASON_VMCALL + vmcall_vmexit_handler --> + hcall_inject_msi --> // insert interrupt into UOS + vlapic_intr_msi --> + +**UOS MAC Layer virtio_net Frontend Driver** + +.. code-block:: c + + vring_interrupt --> // virtio-net frontend driver interrupt handler + skb_recv_done --> // registered by virtnet_probe-->init_vqs-->virtnet_find_vqs + virtqueue_napi_schedule --> + __napi_schedule --> + virtnet_poll --> + virtnet_receive --> + receive_buf --> + +**UOS MAC Layer** + +.. code-block:: c + + napi_gro_receive --> + napi_skb_finish --> + netif_receive_skb_internal --> + __netif_receive_skb --> + __netif_receive_skb_core --> + +**UOS IP Layer** + +.. code-block:: c + + ip_rcv --> + ip_rcv_finish --> + dst_input --> + ip_local_deliver --> + ip_local_deliver_finish --> + + +**UOS TCP Layer** + +.. code-block:: c + + tcp_v4_rcv --> + tcp_v4_do_rcv --> + tcp_rcv_established --> + tcp_data_queue --> + tcp_queue_rcv --> + __skb_queue_tail --> + + sk->sk_data_ready --> // application will get notified + +How to Use +========== + +The network infrastructure shown in :numref:`net-virt-infra` needs to be +prepared in the SOS before we start. We need to create a bridge and at +least one tap device (two tap devices are needed to create a dual +virtual NIC) and attach a physical NIC and tap device to the bridge. + +.. figure:: images/network-virt-sos-infrastruct.png + :align: center + :width: 900px + :name: net-virt-infra + + Network Infrastructure in SOS + +You can use Linux commands (e.g. 
``ip``, ``brctl``) to create this network. In +our case, we use systemd to automatically create the network by default. +You can check the files prefixed with ``50-`` in the SOS directory +``/usr/lib/systemd/network/``: + +- `50-acrn.netdev `__ +- `50-acrn.network `__ +- `50-acrn_tap0.netdev `__ +- `50-eth.network `__ + +When the SOS is started, run ``ifconfig`` to show the devices created by +this systemd configuration: + +.. code-block:: none + + acrn-br0 Link encap:Ethernet HWaddr B2:50:41:FE:F7:A3 + inet addr:10.239.154.43 Bcast:10.239.154.255 Mask:255.255.255.0 + inet6 addr: fe80::b050:41ff:fefe:f7a3/64 Scope:Link + UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 + RX packets:226932 errors:0 dropped:21383 overruns:0 frame:0 + TX packets:14816 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:100457754 (95.8 Mb) TX bytes:83481244 (79.6 Mb) + + acrn_tap0 Link encap:Ethernet HWaddr F6:A7:7E:52:50:C6 + UP BROADCAST MULTICAST MTU:1500 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) + + enp3s0 Link encap:Ethernet HWaddr 98:4F:EE:14:5B:74 + inet6 addr: fe80::9a4f:eeff:fe14:5b74/64 Scope:Link + UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 + RX packets:279174 errors:0 dropped:0 overruns:0 frame:0 + TX packets:69923 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:107312294 (102.3 Mb) TX bytes:87117507 (83.0 Mb) + Memory:82200000-8227ffff + + lo Link encap:Local Loopback + inet addr:127.0.0.1 Mask:255.0.0.0 + inet6 addr: ::1/128 Scope:Host + UP LOOPBACK RUNNING MTU:65536 Metric:1 + RX packets:16 errors:0 dropped:0 overruns:0 frame:0 + TX packets:16 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:1216 (1.1 Kb) TX bytes:1216 (1.1 Kb) + +Run ``brctl show`` to see the bridge ``acrn-br0`` and attached devices: + +.. 
code-block:: none + + bridge name bridge id STP enabled interfaces + + acrn-br0 8000.b25041fef7a3 no acrn_tap0 + enp3s0 + +Add a PCI slot to the device model acrn-dm command line (the MAC address is +optional): + +.. code-block:: none + + -s 4,virtio-net,,[mac=] + +When the UOS is launched, run ``ifconfig`` to check the network. ``enp0s4`` +is the virtual NIC created by acrn-dm: + +.. code-block:: none + + enp0s4 Link encap:Ethernet HWaddr 00:16:3E:39:0F:CD + inet addr:10.239.154.186 Bcast:10.239.154.255 Mask:255.255.255.0 + inet6 addr: fe80::216:3eff:fe39:fcd/64 Scope:Link + UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 + RX packets:140 errors:0 dropped:8 overruns:0 frame:0 + TX packets:46 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:110727 (108.1 Kb) TX bytes:4474 (4.3 Kb) + + lo Link encap:Local Loopback + inet addr:127.0.0.1 Mask:255.0.0.0 + inet6 addr: ::1/128 Scope:Host + UP LOOPBACK RUNNING MTU:65536 Metric:1 + RX packets:0 errors:0 dropped:0 overruns:0 frame:0 + TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 + collisions:0 txqueuelen:1000 + RX bytes:0 (0.0 b) TX bytes:0 (0.0 b) + +Performance Estimation +====================== + +We've introduced the network virtualization solution in ACRN, from the +top-level architecture to the detailed TX and RX flows. Currently, the +control plane and data plane are both processed in the ACRN device model, +which may introduce some overhead. However, this is not a bottleneck for 1000Mbit +NICs or below, and network bandwidth under virtualization can be very close to +native bandwidth. For high-speed NICs (e.g. 10Gb or above), it is +necessary to separate the data plane from the control plane; vhost can be used +for this acceleration. For most IoT scenarios, processing in user space +is simple and reasonable. 
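
As a complement to the systemd-based setup described in the How to Use section, the same bridge/tap infrastructure can also be created by hand with the ``ip`` tool. The sketch below is illustrative only: it assumes the interface names from the example output above (``acrn-br0``, ``acrn_tap0``, ``enp3s0``), requires root privileges, and assumes a DHCP server is reachable on the physical network; your physical NIC name will likely differ.

```shell
# Create the bridge that connects the physical NIC and the tap device
ip link add name acrn-br0 type bridge

# Create the tap device that the acrn-dm virtio-net backend will open
ip tuntap add dev acrn_tap0 mode tap

# Attach both the tap device and the physical NIC to the bridge
ip link set acrn_tap0 master acrn-br0
ip link set enp3s0 master acrn-br0

# Bring everything up; the IP address lives on the bridge, not on the NIC
ip link set dev acrn_tap0 up
ip link set dev enp3s0 up
ip link set dev acrn-br0 up

# Obtain an address for the bridge (assumes a DHCP client is installed)
dhclient acrn-br0
```

The tap device created here is the one named in the acrn-dm ``-s ,virtio-net,...`` slot option; to tear the setup down, delete ``acrn_tap0`` and ``acrn-br0`` with ``ip link del``.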
+ + diff --git a/doc/developer-guides/hld/virtio-rnd.rst b/doc/developer-guides/hld/virtio-rnd.rst new file mode 100644 index 000000000..dea36d7f8 --- /dev/null +++ b/doc/developer-guides/hld/virtio-rnd.rst @@ -0,0 +1,21 @@ +.. _virtio-rnd: + +Virtio-rnd +########## + +The virtio-rnd entropy device supplies high-quality randomness for guest +use. The virtio device ID of the virtio-rnd device is 4, and it supports +one virtqueue, the size of which is 64, configurable in the source code. +It has no feature bits defined. + +When the FE driver requires some random bytes, the BE device will place +bytes of random data onto the virtqueue. + +To launch the virtio-rnd device, add the following acrn-dm command-line +option:: + + -s ,virtio-rnd + +To verify that it works in the User OS, use the following +command:: + + od /dev/random diff --git a/doc/developer-guides/watchdog-hld.rst b/doc/developer-guides/hld/watchdog-hld.rst similarity index 100% rename from doc/developer-guides/watchdog-hld.rst rename to doc/developer-guides/hld/watchdog-hld.rst diff --git a/doc/developer-guides/index.rst b/doc/developer-guides/index.rst index 4a509194b..e014e8379 100644 --- a/doc/developer-guides/index.rst +++ b/doc/developer-guides/index.rst @@ -6,30 +6,13 @@ Developer Guides .. toctree:: :maxdepth: 1 - primer.rst - ../api/index.rst - ../reference/kconfig/index.rst + hld/index + primer + GVT-g-porting + trusty + ../api/index + ../reference/kconfig/index -High-Level Design Guides -************************ - -These documents describe the high-level design, background, and motivation for -specific areas within the ACRN hypervisor system. - -.. toctree:: - :maxdepth: 1 - - ACPI-virt-hld.rst - APL_GVT-g-hld.rst - GVT-g-porting.rst - interrupt-hld.rst - memmgt-hld.rst - network-virt-hld.rst - security-hld.rst - uart-virtualization.rst - virtio-hld.rst - virtio-console.rst - watchdog-hld.rst Contributing to the project *************************** @@ -41,6 +24,6 @@ project. .. 
toctree:: :maxdepth: 1 - contribute_guidelines.rst - doc_guidelines.rst - graphviz.rst + contribute_guidelines + doc_guidelines + graphviz diff --git a/doc/developer-guides/trusty.rst b/doc/developer-guides/trusty.rst index 44a0d4696..e9784aafd 100644 --- a/doc/developer-guides/trusty.rst +++ b/doc/developer-guides/trusty.rst @@ -1,9 +1,7 @@ -:orphan: - .. _trusty_tee: -Trusty TEE on ACRN -################## +Trusty TEE +########## Introduction ************