diff --git a/doc/developer-guides/hld/hld-overview.rst b/doc/developer-guides/hld/hld-overview.rst index be2a1ad8d..42469ad27 100644 --- a/doc/developer-guides/hld/hld-overview.rst +++ b/doc/developer-guides/hld/hld-overview.rst @@ -175,6 +175,9 @@ ACRN adopts various approaches for emulating devices for the User VM: resources (mostly data-plane related) are passed-through to the User VMs and others (mostly control-plane related) are emulated. + +.. _ACRN-io-mediator: + I/O Emulation ------------- @@ -193,6 +196,7 @@ I/O read from the User VM. I/O (PIO/MMIO) Emulation Path :numref:`overview-io-emu-path` shows an example I/O emulation flow path. + When a guest executes an I/O instruction (port I/O or MMIO), a VM exit happens. The HV takes control and executes the request based on the VM exit reason ``VMX_EXIT_REASON_IO_INSTRUCTION`` for port I/O access, for @@ -224,8 +228,9 @@ HSM/hypercall. The HV then stores the result to the guest register context, advances the guest IP to indicate the completion of instruction execution, and resumes the guest. -MMIO access path is similar except for a VM exit reason of *EPT -violation*. +MMIO access path is similar except for a VM exit reason of *EPT violation*. +MMIO access is usually trapped through a ``VMX_EXIT_REASON_EPT_VIOLATION`` in +the hypervisor. DMA Emulation ------------- diff --git a/doc/introduction/images/ACRN-Hybrid-RT.png b/doc/introduction/images/ACRN-Hybrid-RT.png deleted file mode 100644 index 7fde71b13..000000000 Binary files a/doc/introduction/images/ACRN-Hybrid-RT.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-Industry.png b/doc/introduction/images/ACRN-Industry.png deleted file mode 100644 index a77640d14..000000000 Binary files a/doc/introduction/images/ACRN-Industry.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-Logical-Partition.png b/doc/introduction/images/ACRN-Logical-Partition.png deleted file mode 100644 index e638f4158..000000000 Binary files a/doc/introduction/images/ACRN-Logical-Partition.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-V2-SDC-scenario.png b/doc/introduction/images/ACRN-V2-SDC-scenario.png deleted file mode 100644 index ba8c39216..000000000 Binary files a/doc/introduction/images/ACRN-V2-SDC-scenario.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-V2-industrial-scenario.png b/doc/introduction/images/ACRN-V2-industrial-scenario.png deleted file mode 100644 index a77640d14..000000000 Binary files a/doc/introduction/images/ACRN-V2-industrial-scenario.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-hybrid-rt-example.png b/doc/introduction/images/ACRN-hybrid-rt-example.png new file mode 100644 index 000000000..30b679d04 Binary files /dev/null and b/doc/introduction/images/ACRN-hybrid-rt-example.png differ diff --git a/doc/introduction/images/ACRN-industry-example.png b/doc/introduction/images/ACRN-industry-example.png new file mode 100644 index 000000000..d027739e6 Binary files /dev/null and b/doc/introduction/images/ACRN-industry-example.png differ diff --git a/doc/introduction/images/ACRN-partitioned-example.png b/doc/introduction/images/ACRN-partitioned-example.png new file mode 100644 index 000000000..805022d22 Binary files /dev/null and b/doc/introduction/images/ACRN-partitioned-example.png differ diff --git a/doc/introduction/images/VMX-brief.png b/doc/introduction/images/VMX-brief.png deleted file mode 100644 index 878f7f63b..000000000 Binary files a/doc/introduction/images/VMX-brief.png and /dev/null 
differ diff --git a/doc/introduction/images/architecture.png b/doc/introduction/images/architecture.png deleted file mode 100644 index f728af177..000000000 Binary files a/doc/introduction/images/architecture.png and /dev/null differ diff --git a/doc/introduction/images/boot-flow-2.dot b/doc/introduction/images/boot-flow-2.dot index 819d38d07..6effe74b6 100644 --- a/doc/introduction/images/boot-flow-2.dot +++ b/doc/introduction/images/boot-flow-2.dot @@ -1,7 +1,7 @@ digraph G { rankdir=LR; bgcolor="transparent"; - UEFI -> "GRUB" -> "acrn.32.out" -> "Pre-launched\nVM Kernel" - "acrn.32.out" -> "Service VM\nKernel" -> "ACRN\nDevice Model" -> + UEFI -> "GRUB" -> "acrn.bin" -> "Pre-launched\nVM Kernel" + "acrn.bin" -> "Service VM\nKernel" -> "ACRN\nDevice Model" -> "Virtual\nBootloader"; } diff --git a/doc/introduction/images/boot-flow.dot b/doc/introduction/images/boot-flow.dot deleted file mode 100644 index d65c82a50..000000000 --- a/doc/introduction/images/boot-flow.dot +++ /dev/null @@ -1,6 +0,0 @@ -digraph G { - rankdir=LR; - bgcolor="transparent"; - UEFI -> "acrn.efi" -> "OS\nBootloader" -> - "SOS\nKernel" -> "ACRN\nDevice Model" -> "Virtual\nBootloader"; -} diff --git a/doc/introduction/images/device-model.png b/doc/introduction/images/device-model.png deleted file mode 100644 index 62f5b0762..000000000 Binary files a/doc/introduction/images/device-model.png and /dev/null differ diff --git a/doc/introduction/images/io-emulation-path.png b/doc/introduction/images/io-emulation-path.png deleted file mode 100644 index 412e1caed..000000000 Binary files a/doc/introduction/images/io-emulation-path.png and /dev/null differ diff --git a/doc/introduction/images/virtio-architecture.png b/doc/introduction/images/virtio-architecture.png deleted file mode 100644 index 04f19ff1b..000000000 Binary files a/doc/introduction/images/virtio-architecture.png and /dev/null differ diff --git a/doc/introduction/images/virtio-framework-kernel.png b/doc/introduction/images/virtio-framework-kernel.png deleted file mode 100644 index baeb587fd..000000000 Binary files a/doc/introduction/images/virtio-framework-kernel.png and /dev/null differ diff --git a/doc/introduction/images/virtio-framework-userland.png b/doc/introduction/images/virtio-framework-userland.png deleted file mode 100644 index bcfb9e28f..000000000 Binary files a/doc/introduction/images/virtio-framework-userland.png and /dev/null differ diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst index 4b8c4c9f6..4bd36e8a5 100644 --- a/doc/introduction/index.rst +++ b/doc/introduction/index.rst @@ -3,383 +3,403 @@ What Is ACRN ############ -Introduction to Project ACRN -**************************** - -ACRN |trade| is a flexible, lightweight reference hypervisor, built with -real-time and safety-criticality in mind, and optimized to streamline -embedded development through an open source platform. ACRN defines a -device hypervisor reference stack and an architecture for running -multiple software subsystems, managed securely, on a consolidated system -using a virtual machine manager (VMM). It also defines a reference -framework implementation for virtual device emulation, called the "ACRN -Device Model". - -The ACRN Hypervisor is a Type 1 reference hypervisor stack, running -directly on the bare-metal hardware, and is suitable for a variety of -IoT and embedded device solutions. The ACRN hypervisor addresses the gap -that currently exists between datacenter hypervisors, and hard -partitioning hypervisors. 
The ACRN hypervisor architecture partitions -the system into different functional domains, with carefully selected -user VM sharing optimizations for IoT and embedded devices. - -ACRN High-Level Architecture -**************************** - -The ACRN architecture has evolved since its initial v0.1 release in -July 2018. Beginning with the v1.1 release, the ACRN architecture has -flexibility to support *logical partitioning*, *sharing*, and a *hybrid* -mode. As shown in :numref:`V2-hl-arch`, hardware resources can be -partitioned into two parts: - -.. figure:: images/ACRN-V2-high-level-arch.png - :width: 700px - :align: center - :name: V2-hl-arch - - ACRN high-level architecture - -Shown on the left of :numref:`V2-hl-arch`, resources are partitioned and -used by a pre-launched user virtual machine (VM). Pre-launched here -means that it is launched by the hypervisor directly, even before the -service VM is launched. The pre-launched VM runs independently of other -virtual machines and owns dedicated hardware resources, such as a CPU -core, memory, and I/O devices. Other virtual machines may not even be -aware of the pre-launched VM's existence. Because of this, it can be -used as a safety OS virtual machine. Platform hardware failure -detection code runs inside this pre-launched VM and will take emergency -actions when system critical failures occur. - -Shown on the right of :numref:`V2-hl-arch`, the remaining hardware -resources are shared among the service VM and user VMs. The service VM -is similar to Xen's Dom0, and a user VM is similar to Xen's DomU. The -service VM is the first VM launched by ACRN, if there is no pre-launched -VM. The service VM can access hardware resources directly by running -native drivers and it provides device sharing services to the user VMs -through the Device Model. Currently, the service VM is based on Linux, -but it can also use other operating systems as long as the ACRN Device -Model is ported into it. A user VM can be Ubuntu*, Android*, -Windows* or VxWorks*. There is one special user VM, called a -post-launched real-time VM (RTVM), designed to run a hard real-time OS, -such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM -can be used for soft programmable logic controller (PLC), inter-process -communication (IPC), or Robotics applications. - -.. _usage-scenarios: - -Usage Scenarios -*************** - -ACRN can be used for heterogeneous workload consolidation in -resource-constrained embedded platform, targeting for functional safety, -or hard real-time support. It can take multiple separate systems and -enable a workload consolidation solution operating on a single compute -platform to run both safety-critical applications and non-safety -applications, together with security functions that safeguard the -system. - -There are a number of predefined scenarios included in ACRN's source code. They -all build upon the three fundamental modes of operation that have been explained -above, i.e. the *logical partitioning*, *sharing*, and *hybrid* modes. They -further specify the number of VMs that can be run, their attributes and the -resources they have access to, either shared with other VMs or exclusively. - -The predefined scenarios are in the :acrn_file:`misc/config_tools/data` folder -in the source code. - -The :ref:`acrn_configuration_tool` tutorial explains how to use the ACRN -configuration toolset to create your own scenario or modify an existing one. 
- -Industrial Workload Consolidation -================================= - -.. figure:: images/ACRN-V2-industrial-scenario.png - :width: 600px - :align: center - :name: V2-industrial-scenario - - ACRN Industrial Workload Consolidation scenario - -Supporting Workload consolidation for industrial applications is even -more challenging. The ACRN hypervisor needs to run different workloads with no -interference, increase security functions that safeguard the system, run hard -real-time sensitive workloads together with general computing workloads, and -conduct data analytics for timely actions and predictive maintenance. - -Virtualization is especially important in industrial environments -because of device and application longevity. Virtualization enables -factories to modernize their control system hardware by using VMs to run -older control systems and operating systems far beyond their intended -retirement dates. - -As shown in :numref:`V2-industrial-scenario`, the Service VM can start a number -of post-launched User VMs and can provide device sharing capabilities to these. -In total, up to 7 post-launched User VMs can be started: - -- 5 regular User VMs, -- One `Kata Containers `_ User VM (see - :ref:`run-kata-containers` for more details), and -- One real-time VM (RTVM). - -In this example, one post-launched User VM provides Human Machine Interface -(HMI) capability, another provides Artificial Intelligence (AI) capability, some -compute function is run the Kata Container and the RTVM runs the soft -Programmable Logic Controller (PLC) that requires hard real-time -characteristics. - -:numref:`V2-industrial-scenario` shows ACRN's block diagram for an -Industrial usage scenario: - -- ACRN boots from the SoC platform, and supports firmware such as the - UEFI BIOS. -- The ACRN hypervisor can create VMs that run different OSes: - - - a Service VM such as Ubuntu*, - - a Human Machine Interface (HMI) application OS such as Windows*, - - an Artificial Intelligence (AI) application on Linux*, - - a Kata Container application, and - - a real-time control OS such as Zephyr*, VxWorks* or RT-Linux*. - -- The Service VM, provides device sharing functionalities, such as - disk and network mediation, to other virtual machines. - It can also run an orchestration agent allowing User VM orchestration - with tools such as Kubernetes*. -- The HMI Application OS can be Windows* or Linux*. Windows is dominant - in Industrial HMI environments. -- ACRN can support a soft real-time OS such as preempt-rt Linux for - soft-PLC control, or a hard real-time OS that offers less jitter. - -Automotive Application Scenarios -================================ - -As shown in :numref:`V2-SDC-scenario`, the ACRN hypervisor can be used -for building Automotive Software Defined Cockpit (SDC) and in-vehicle -experience (IVE) solutions. - -.. figure:: images/ACRN-V2-SDC-scenario.png - :width: 600px - :align: center - :name: V2-SDC-scenario - - ACRN Automotive SDC scenario - -As a reference implementation, ACRN provides the basis for embedded -hypervisor vendors to build solutions with a reference I/O mediation -solution. In this scenario, an automotive SDC system consists of the -instrument cluster (IC) system running in the Service VM and the in-vehicle -infotainment (IVI) system is running the post-launched User VM. Additionally, -one could modify the SDC scenario to add more post-launched User VMs that can -host rear seat entertainment (RSE) systems (not shown on the picture). 
- -An **instrument cluster (IC)** system is used to show the driver operational -information about the vehicle, such as: - -- the speed, fuel level, trip mileage, and other driving information of - the car; -- projecting heads-up images on the windshield, with alerts for low - fuel or tire pressure; -- showing rear-view and surround-view cameras for parking assistance. - -An **in-vehicle infotainment (IVI)** system's capabilities can include: - -- navigation systems, radios, and other entertainment systems; -- connection to mobile devices for phone calls, music, and applications - via voice recognition; -- control interaction by gesture recognition or touch. - -A **rear seat entertainment (RSE)** system could run: - -- entertainment system; -- virtual office; -- connection to the front-seat IVI system and mobile devices (cloud - connectivity); -- connection to mobile devices for phone calls, music, and applications - via voice recognition; -- control interaction by gesture recognition or touch. - -The ACRN hypervisor can support both Linux* VM and Android* VM as User -VMs managed by the ACRN hypervisor. Developers and OEMs can use this -reference stack to run their own VMs, together with IC, IVI, and RSE -VMs. The Service VM runs in the background and the User VMs run as -Post-Launched VMs. - -A block diagram of ACRN's SDC usage scenario is shown in -:numref:`V2-SDC-scenario` above. - -- The ACRN hypervisor sits right on top of the bootloader for fast booting - capabilities. -- Resources are partitioned to ensure safety-critical and - non-safety-critical domains are able to coexist on one platform. -- Rich I/O mediators allow sharing of various I/O devices across VMs, - delivering a comprehensive user experience. -- Multiple operating systems are supported by one SoC through efficient - virtualization. - -Best Known Configurations -************************* - -The ACRN GitHub codebase defines five best known configurations (BKC) -targeting SDC and Industry usage scenarios. Developers can start with -one of these predefined configurations and customize it to their own -application scenario needs. - -.. list-table:: Scenario-based Best Known Configurations - :header-rows: 1 - - * - Predefined BKC - - Usage Scenario - - VM0 - - VM1 - - VM2 - - VM3 - - * - Software Defined Cockpit - - SDC - - Service VM - - Post-launched VM - - One Kata Containers VM - - - - * - Industry Usage Config - - Industry - - Service VM - - Up to 5 Post-launched VMs - - One Kata Containers VM - - Post-launched RTVM (Soft or Hard real-time) - - * - Hybrid Usage Config - - Hybrid - - Pre-launched VM (Safety VM) - - Service VM - - Post-launched VM - - - - * - Hybrid real-time Usage Config - - Hybrid RT - - Pre-launched VM (real-time VM) - - Service VM - - Post-launched VM - - - - * - Logical Partition - - Logical Partition - - Pre-launched VM (Safety VM) - - Pre-launched VM (QM Linux VM) - - - - - -Here are block diagrams for each of these four scenarios. - -SDC Scenario -============ - -In this SDC scenario, an instrument cluster (IC) system runs with the -Service VM and an in-vehicle infotainment (IVI) system runs in a user -VM. - -.. figure:: images/ACRN-V2-SDC-scenario.png - :width: 600px - :align: center - :name: ACRN-SDC - - SDC scenario with two VMs - -Industry Scenario -================= - -In this Industry scenario, the Service VM provides device sharing capability for -a Windows-based HMI User VM. One post-launched User VM can run a Kata Container -application. 
Another User VM supports either hard or soft real-time OS -applications. Up to five additional post-launched User VMs support functions -such as human/machine interface (HMI), artificial intelligence (AI), computer -vision, etc. - -.. figure:: images/ACRN-Industry.png - :width: 600px - :align: center - :name: Industry - - Industry scenario - -Hybrid Scenario -=============== - -In this Hybrid scenario, a pre-launched Safety/RTVM is started by the -hypervisor. The Service VM runs a post-launched User VM that runs non-safety or -non-real-time tasks. - -.. figure:: images/ACRN-Hybrid.png - :width: 600px - :align: center - :name: ACRN-Hybrid - - Hybrid scenario - -Hybrid Real-Time (RT) Scenario -============================== - -In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the -hypervisor. The Service VM runs a post-launched User VM that runs non-safety or -non-real-time tasks. - -.. figure:: images/ACRN-Hybrid-RT.png - :width: 600px - :align: center - :name: ACRN-Hybrid-RT - - Hybrid RT scenario - -Logical Partition Scenario -========================== - -This scenario is a simplified configuration for VM logical -partitioning: both User VMs are independent and isolated, they do not share -resources, and both are automatically launched at boot time by the hypervisor. -The User VMs can be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs. - -.. figure:: images/ACRN-Logical-Partition.png - :width: 600px - :align: center - :name: logical-partition - - Logical Partitioning scenario - +Introduction +************ + +IoT and Edge system developers face mounting demands on the systems they build, as connected +devices are increasingly expected to support a range of hardware resources, +operating systems, and software tools and applications. Virtualization is key to +meeting these broad needs. Most existing hypervisor and Virtual Machine Manager +solutions don't offer the right size, boot speed, real-time support, and +flexibility for IoT and Edge systems. Data center hypervisor code is too big, doesn't +offer safety or hard real-time capabilities, and requires too much performance +overhead for embedded development. The ACRN hypervisor was built to fill this +need. + +ACRN is a type 1 reference hypervisor stack that runs on bare-metal hardware, +with fast booting, and is configurable for a variety of IoT, Edge, and embedded device +solutions. It provides a flexible, lightweight hypervisor, built with real-time +and safety-criticality in mind, optimized to streamline embedded development +through an open-source, scalable reference platform. It has an architecture that +can run multiple OSs and VMs, managed securely, on a consolidated system by +means of efficient virtualization. Resource partitioning ensures +co-existing heterogeneous workloads on one system hardware platform do not +interfere with each other. + +ACRN defines a reference framework implementation for virtual device emulation, +called the ACRN Device Model or DM, with rich I/O mediators. It also supports +non-emulated device passthrough access to satisfy time-sensitive requirements +and low-latency access needs of real-time applications. To keep the hypervisor +code base as small and efficient as possible, the bulk of the Device Model +implementation resides in the Service VM to provide sharing and other +capabilities. 
+
+ACRN is built to virtualize embedded IoT and Edge development functions
+(camera, audio, graphics, storage, networking, and more), so it's ideal
+for a broad range of IoT and Edge uses, including industrial, automotive,
+and retail applications.
 
 Licensing
 *********
 
 .. _BSD-3-Clause: https://opensource.org/licenses/BSD-3-Clause
 
-Both the ACRN hypervisor and ACRN Device model software are provided
+The ACRN hypervisor and ACRN Device Model software are provided
 under the permissive `BSD-3-Clause`_ license, which allows
 *"redistribution and use in source and binary forms, with or without
 modification"* together with the intact copyright notice and disclaimers
 noted in the license.
 
-ACRN Device Model, Service VM, and User VM
-******************************************
+Key Capabilities
+****************
 
-To keep the hypervisor code base as small and efficient as possible, the
-bulk of the device model implementation resides in the Service VM to
-provide sharing and other capabilities. The details of which devices are
-shared and the mechanism used for their sharing is described in
-`pass-through`_ section below.
+ACRN has these key capabilities and benefits:
+
+* **Small Footprint**: The hypervisor is optimized for resource-constrained devices
+  with significantly fewer lines of code (about 40K) than datacenter-centric
+  hypervisors (over 150K).
+* **Built with Real-time in Mind**: Low latency, fast boot times, and responsive
+  hardware device communication supporting near bare-metal performance. Both
+  soft and hard real-time VM needs are supported, including no VM exits during
+  runtime operations, LAPIC and PCI passthrough, static CPU assignment, and
+  more.
+* **Built for Embedded IoT and Edge Virtualization**: ACRN supports virtualization
+  beyond the basics and includes CPU, I/O, and networking virtualization of
+  embedded IoT and Edge device functions and a rich set of I/O mediators to
+  share devices across multiple VMs. The Service VM communicates directly with
+  the system hardware and devices, ensuring low-latency access. The hypervisor
+  is booted directly by the bootloader for fast and secure booting.
+* **Built with Safety-Critical Virtualization in Mind**: Safety-critical workloads
+  can be isolated from the rest of the VMs and have priority to meet their
+  design needs. Partitioning of resources supports safety-critical and
+  non-safety-critical domains coexisting on one SoC using Intel VT-backed
+  isolation.
+* **Adaptable and Flexible**: ACRN has multi-OS support with efficient
+  virtualization for VM OSs including Linux, Android, Zephyr, and Windows, as
+  needed for a variety of application use cases. ACRN scenario configurations
+  support shared, partitioned, and hybrid VM models.
+* **Truly Open Source**: With its permissive BSD licensing and reference
+  implementation, ACRN offers scalable support with significant up-front R&D
+  cost savings, code transparency, and collaborative software development with
+  industry leaders.
+
+Background
+**********
+
+The ACRN architecture has evolved since its initial v0.1 release in July 2018.
+Beginning with the v1.1 release, the ACRN architecture has the flexibility to
+support VMs with shared HW resources, partitioned HW resources, and a hybrid
+VM model that simultaneously supports shared and partitioned resources.
+It enables a workload consolidation solution, taking multiple separate systems
+and running them on a single compute platform as heterogeneous workloads, with
+hard and soft real-time support.
+
+Workload management and orchestration are also enabled with ACRN, allowing
+open-source orchestrators such as OpenStack to manage ACRN VMs. ACRN supports
+secure container runtimes such as Kata Containers orchestrated via Docker or
+Kubernetes.
+
+
+High-Level Architecture
+***********************
+
+ACRN is a Type 1 hypervisor, meaning it runs directly on bare-metal
+hardware. It implements a hybrid Virtual Machine Manager (VMM) architecture,
+using a privileged Service VM that manages the I/O devices and provides I/O
+mediation. Multiple User VMs are supported, with each of them potentially
+running a different OS. By running systems in separate VMs, you can isolate VMs
+and their applications, reducing potential attack surfaces and minimizing
+interference, but potentially introducing additional latency for applications.
+
+ACRN relies on Intel Virtualization Technology (Intel VT) and runs in Virtual
+Machine Extension (VMX) root operation, also called host mode or VMM mode. All
+the User VMs and the Service VM run in VMX non-root operation, or guest mode.
 The Service VM runs with the system's highest virtual machine priority to meet
 the time-sensitive requirements of devices and to maintain system quality of
 service (QoS). Service VM tasks run with mixed priority. Upon a callback
 servicing a particular User VM request, the corresponding software (or
 mediator) in the Service VM inherits the User VM priority.
-There may also be additional low-priority background tasks within the
-Service OS.
-In the automotive example we described above, the User VM is the central
-hub of vehicle control and in-vehicle entertainment. It provides support
-for radio and entertainment options, control of the vehicle climate
-control, and vehicle navigation displays. It also provides connectivity
-options for using USB, Bluetooth, and Wi-Fi for third-party device
-interaction with the vehicle, such as Android Auto\* or Apple CarPlay*,
-and many other features.
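+
+The hypervisor's reliance on Intel VT means it can only run on processors that
+advertise the VMX feature. As a minimal illustration (this sketch is not part
+of the ACRN sources), software can check for VT-x (VMX) support with the CPUID
+instruction:
+
+.. code-block:: c
+
+   /* Minimal sketch: detect the Intel VT-x (VMX) capability that ACRN
+    * relies on. CPUID leaf 1 reports VMX support in ECX bit 5. */
+   #include <stdio.h>
+   #include <cpuid.h>   /* GCC/Clang helper for the CPUID instruction */
+
+   int main(void)
+   {
+           unsigned int eax, ebx, ecx, edx;
+
+           if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
+                   return 1;   /* CPUID leaf 1 not available */
+
+           /* CPUID.01H:ECX.VMX[bit 5] indicates VMX (VT-x) support. */
+           printf("VMX supported: %s\n", (ecx & (1u << 5)) ? "yes" : "no");
+           return 0;
+   }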
+
+As mentioned earlier, hardware resources used by VMs can be configured into
+two parts, as shown in this hybrid VM sample configuration:
+
+.. figure:: images/ACRN-V2-high-level-arch.png
+   :width: 700px
+   :align: center
+   :name: V2-hl-arch
+
+   ACRN High-Level Architecture Hybrid Example
+
+Shown on the left of :numref:`V2-hl-arch`, resources are partitioned and
+dedicated to a User VM that is launched by the hypervisor before the Service VM
+is started. This pre-launched VM runs independently of other virtual machines
+and owns dedicated hardware resources, such as a CPU core, memory, and I/O
+devices. Other VMs may not even be aware of the pre-launched VM's existence.
+Because of this, it can be used as a Safety VM that runs hardware failure
+detection code and can take emergency actions when system critical failures
+occur. Failures in other VMs or rebooting the Service VM will not directly
+impact execution of this pre-launched Safety VM.
+
+Shown on the right of :numref:`V2-hl-arch`, the remaining hardware resources are
+shared among the Service VM and User VMs. The Service VM is launched by the
+hypervisor after any pre-launched VMs are launched. The Service VM can access
+the remaining hardware resources directly by running native drivers and
+provides device sharing services to the User VMs through the Device Model. These
+post-launched User VMs can run one of many OSs including Ubuntu, Android,
+Windows, or a real-time OS such as Zephyr, VxWorks, or Xenomai. Because of its
+real-time capability, a real-time VM (RTVM) can be used for software
+programmable logic controller (PLC), inter-process communication (IPC), or
+robotics applications. These shared User VMs could be impacted by a failure in
+the Service VM since they may rely on its mediation services for device access.
+
+The Service VM owns most of the devices, including the platform devices, and
+provides I/O mediation. The notable exceptions are the devices assigned to the
+pre-launched User VM. Some PCIe devices may be passed through to the
+post-launched User OSes via the VM configuration.
+
+The ACRN hypervisor also runs the ACRN VM manager to collect running
+information of the User VMs, and to control them, such as starting, stopping,
+and pausing a VM, and pausing or resuming a virtual CPU.
+
+See the :ref:`hld-overview` developer reference material for more in-depth
+information.
+
+ACRN Device Model Architecture
+******************************
+
+Because devices may need to be shared between VMs, device emulation is
+used to give VM applications (and their OSs) access to these shared devices.
+Traditionally there are three architectural approaches to device
+emulation:
+
+* **Device emulation within the hypervisor**: a common method implemented within
+  the VMware workstation product (an operating system-based hypervisor). In
+  this method, the hypervisor includes emulations of common devices that the
+  various guest operating systems can share, including virtual disks, virtual
+  network adapters, and other necessary platform elements.
+
+* **User space device emulation**: rather than embedding device emulation
+  within the hypervisor, it is implemented in a separate user space application.
+  QEMU, for example, provides this kind of device emulation, which is also used
+  by other hypervisors. This model is advantageous because the device emulation
+  is independent of the hypervisor and can therefore be reused with other
+  hypervisors. It also permits arbitrary device emulation without having to
+  burden the hypervisor (which operates in a privileged state) with this
+  functionality.
+
+* **Paravirtualized (PV) drivers**: a hypervisor-based device emulation model
+  introduced by the `XEN Project`_. In this model, the hypervisor includes the
+  physical device drivers, and each guest operating system includes a
+  hypervisor-aware driver that works in concert with the hypervisor drivers.
+
+.. _XEN Project:
+   https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum
+
+There's a price to pay for sharing devices. Whether device emulation is
+performed in the hypervisor, or in user space within an independent VM, overhead
+exists. This overhead is worthwhile as long as the devices need to be shared by
+multiple guest operating systems. If sharing is not necessary, then there are
+more efficient methods for accessing devices, for example, "passthrough."
+
+Emulation, paravirtualization, and passthrough are all used in the ACRN
+project. ACRN defines a device emulation model where the Service VM owns all
+devices not previously partitioned to pre-launched User VMs, and emulates these
+devices for the User VM via the ACRN Device Model. The ACRN Device Model thereby
+acts as a placeholder for the User VM. It allocates memory for the User VM OS,
+configures and initializes the devices used by the User VM, loads the virtual
+firmware, initializes the virtual CPU state, and invokes the ACRN hypervisor
+service to execute the guest instructions. The ACRN Device Model is an
+application running in the Service VM that emulates devices based on command
+line configuration.
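+
+The flow just described can be summarized in code form. The following is a
+simplified, illustrative sketch only; the types and ``dm_*`` helper names are
+hypothetical stand-ins, not the real acrn-dm API:
+
+.. code-block:: c
+
+   struct vm;                            /* opaque VM handle                */
+
+   struct vm_cfg {
+           const char *name;             /* VM name                         */
+           unsigned long mem_size;       /* guest memory size               */
+           const char *devices;          /* device list from command line   */
+           const char *firmware;         /* virtual firmware image          */
+   };
+
+   struct vm *dm_create_vm(const char *name);
+   void dm_alloc_guest_memory(struct vm *vm, unsigned long size);
+   void dm_init_emulated_devices(struct vm *vm, const char *devices);
+   void dm_load_virtual_firmware(struct vm *vm, const char *image);
+   void dm_init_vcpu_state(struct vm *vm);
+   int dm_run_vm_loop(struct vm *vm);    /* HV service executes guest code  */
+
+   /* Mirror of the steps in the text: allocate memory, set up emulated
+    * devices, load virtual firmware, initialize vCPU state, run the guest. */
+   int dm_start_user_vm(const struct vm_cfg *cfg)
+   {
+           struct vm *vm = dm_create_vm(cfg->name);
+
+           dm_alloc_guest_memory(vm, cfg->mem_size);
+           dm_init_emulated_devices(vm, cfg->devices);
+           dm_load_virtual_firmware(vm, cfg->firmware);
+           dm_init_vcpu_state(vm);
+
+           return dm_run_vm_loop(vm);
+   }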
+
+See the :ref:`hld-devicemodel` developer reference for more information.
+
+Device Passthrough
+******************
+
+At the highest level, device passthrough is about providing isolation
+of a device to a given guest operating system so that the device can be
+used exclusively by that User VM.
+
+.. figure:: images/device-passthrough.png
+   :align: center
+   :name: device-passthrough
+
+   Device Passthrough
+
+Near-native performance can be achieved by using device passthrough. This is
+ideal for networking applications (or those with high disk I/O needs) that have
+not adopted virtualization because of contention and performance degradation
+through the hypervisor (using a driver in the hypervisor or through the
+hypervisor to a user space emulation). Assigning devices to specific User VMs is
+also useful when those devices inherently wouldn't be shared. For example, if a
+system includes multiple video adapters, those adapters could be passed through
+to unique User VM domains.
+
+Finally, there may be specialized PCI devices that only one User VM uses,
+so they should be passed through to the User VM. Individual USB ports could be
+isolated to a given domain too, or a serial port (which is itself not shareable)
+could be isolated to a particular User VM. The ACRN hypervisor supports USB
+controller passthrough only; it does not support passthrough for a legacy
+serial port (for example, ``0x3f8``).
+
+Hardware Support for Device Passthrough
+=======================================
+
+Intel's processor architectures provide support for device passthrough with
+Intel Virtualization Technology for Directed I/O (VT-d). VT-d maps User VM
+physical addresses to machine physical addresses, so devices can use User VM
+physical addresses directly. When this mapping occurs, the hardware takes care
+of access (and protection), and the User VM OS can use the device as if it
+were running in a non-virtualized system. In addition to mapping User VM memory
+to machine physical memory, isolation prevents this device from accessing
+memory belonging to other VMs or the hypervisor.
+
+Another innovation that helps interrupts scale to large numbers of VMs is called
+Message Signaled Interrupts (MSI). Rather than relying on physical interrupt
+pins to be associated with a User VM, MSI transforms interrupts into messages
+that are more easily virtualized, scaling to thousands of individual interrupts.
+MSI has been available since PCI version 2.2 and is also available in PCI
+Express (PCIe). MSI is ideal for I/O virtualization, as it allows isolation of
+interrupt sources (as opposed to physical pins that must be multiplexed or
+routed through software).
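+
+To make the MSI mechanism more concrete, the sketch below (illustrative only,
+not ACRN code) walks a PCI function's capability list using the legacy
+``0xCF8``/``0xCFC`` configuration mechanism, looking for the MSI capability
+(ID ``0x05``); running it requires I/O port privileges, e.g., ``iopl(3)`` on
+Linux:
+
+.. code-block:: c
+
+   #include <sys/io.h>   /* Linux x86 port I/O: outl()/inl() */
+
+   static unsigned int pci_cfg_read32(unsigned int bus, unsigned int dev,
+                                      unsigned int fn, unsigned int reg)
+   {
+           /* Legacy PCI config access: address to 0xCF8, data from 0xCFC. */
+           outl(0x80000000u | (bus << 16) | (dev << 11) | (fn << 8) |
+                (reg & 0xFCu), 0xCF8);
+           return inl(0xCFC);
+   }
+
+   /* Returns the config-space offset of the MSI capability, or 0. */
+   static unsigned int pci_find_msi_cap(unsigned int bus, unsigned int dev,
+                                        unsigned int fn)
+   {
+           /* The capability list pointer lives at offset 0x34. */
+           unsigned int pos = pci_cfg_read32(bus, dev, fn, 0x34) & 0xFFu;
+
+           while (pos != 0) {
+                   unsigned int hdr = pci_cfg_read32(bus, dev, fn, pos);
+
+                   if ((hdr & 0xFFu) == 0x05u)  /* capability ID 0x05 = MSI */
+                           return pos;
+                   pos = (hdr >> 8) & 0xFFu;    /* next capability pointer  */
+           }
+           return 0;
+   }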
+
+Hypervisor Support for Device Passthrough
+=========================================
+
+By using the latest virtualization-enhanced processor architectures, hypervisors
+and virtualization solutions can support device passthrough (using VT-d),
+including Xen, KVM, and the ACRN hypervisor. In most cases, the User VM OS
+must be compiled to support passthrough by using kernel build-time options.
+
+.. _static-configuration-scenarios:
+
+Static Configuration Based on Scenarios
+***************************************
+
+Scenarios are a way to describe the system configuration settings of the ACRN
+hypervisor, the VMs, and the resources they have access to, tailored to your
+specific application's needs such as compute, memory, storage, graphics,
+networking, and other devices. Scenario configurations are stored in an XML
+file and edited using the ACRN configurator.
+
+Following a general embedded-system programming model, the ACRN hypervisor is
+designed to be statically customized at build time per hardware and scenario,
+rather than providing one binary for all scenarios. Dynamic configuration
+parsing is not used in the ACRN hypervisor for these reasons:
+
+* **Reduce complexity**. ACRN is a lightweight reference hypervisor, built for
+  embedded IoT and Edge. As new platforms for embedded systems are rapidly
+  introduced, support for one binary could require more and more complexity in
+  the hypervisor, which is something we strive to avoid.
+* **Maintain small footprint**. Implementing dynamic parsing introduces hundreds
+  or thousands of lines of code. Avoiding dynamic parsing helps keep the
+  hypervisor's Lines of Code (LOC) in a desirable range (less than 40K).
+* **Improve boot time**. Dynamic parsing at runtime increases the boot time.
+  Using a static build-time configuration and not dynamic parsing helps improve
+  the boot time of the hypervisor.
+
+The scenario XML file, together with a target board XML file, is used to build
+the ACRN hypervisor image tailored to your hardware and application needs. The
+ACRN project provides a board inspector tool to automatically create the board
+XML file by inspecting the target hardware. ACRN also provides a
+:ref:`configurator tool <acrn_configuration_tool>` to create and edit a
+tailored scenario XML file based on predefined sample scenario configurations.
+
+.. _usage-scenarios:
+
+Predefined Sample Scenarios
+***************************
+
+Project ACRN provides some predefined sample scenarios to illustrate how you
+can define your own configuration scenarios.
+
+* **Industry** is a traditional computing, memory, and device resource sharing
+  model among VMs. The ACRN hypervisor launches the Service VM. The Service VM
+  then launches any post-launched User VMs and provides device and resource
+  sharing mediation through the Device Model. The Service VM runs the native
+  device drivers to access the hardware and provides I/O mediation to the User
+  VMs.
+
+  .. figure:: images/ACRN-industry-example.png
+     :width: 700px
+     :align: center
+     :name: arch-shared-example
+
+     ACRN High-Level Architecture Industry (Shared) Example
+
+  Virtualization is especially important in industrial environments because of
+  device and application longevity. Virtualization enables factories to
+  modernize their control system hardware by using VMs to run older control
+  systems and operating systems far beyond their intended retirement dates.
+
+  The ACRN hypervisor needs to run different workloads with little-to-no
+  interference, increase security functions that safeguard the system, run hard
+  real-time sensitive workloads together with general computing workloads, and
+  conduct data analytics for timely actions and predictive maintenance.
+
+  In this example, one post-launched User VM provides Human Machine Interface
+  (HMI) capability, another provides Artificial Intelligence (AI) capability,
+  some compute function is run in a Kata Container, and the RTVM runs the soft
+  Programmable Logic Controller (PLC) that requires hard real-time
+  characteristics.
+
+  - The Service VM provides device sharing functionalities, such as disk and
+    network mediation, to other virtual machines. It can also run an
+    orchestration agent allowing User VM orchestration with tools such as
+    Kubernetes.
+  - The HMI Application OS can be Windows* or Linux*. Windows is dominant in
+    Industrial HMI environments.
+  - ACRN can support a soft real-time OS such as preempt-rt Linux for soft-PLC
+    control, or a hard real-time OS that offers less jitter.
+
+* **Partitioned** is a VM resource partitioning model used when a User VM
+  requires independence and isolation from other VMs. A partitioned VM's
+  resources are statically configured and are not shared with other VMs.
+  Partitioned User VMs can be Real-Time VMs, Safety VMs, or standard VMs and
+  are launched at boot time by the hypervisor. There is no need for the Service
+  VM or Device Model since all partitioned VMs run native device drivers and
+  directly access their configured resources.
+
+  .. figure:: images/ACRN-partitioned-example.png
+     :width: 700px
+     :align: center
+     :name: arch-partitioned-example
+
+     ACRN High-Level Architecture Partitioned Example
+
+  This scenario is a simplified configuration showing VM partitioning: both
+  User VMs are independent and isolated, they do not share resources, and both
+  are automatically launched at boot time by the hypervisor. The User VMs can
+  be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs.
+
+* **Hybrid** is a scenario that simultaneously supports both sharing and
+  partitioning on the consolidated system. The pre-launched (partitioned) User
+  VMs, with their statically configured and unshared resources, are started by
+  the hypervisor. The hypervisor then launches the Service VM. The
+  post-launched (shared) User VMs are started by the Device Model in the
+  Service VM and share the remaining resources.
+
+  .. figure:: images/ACRN-hybrid-rt-example.png
+     :width: 700px
+     :align: center
+     :name: arch-hybrid-rt-example
+
+     ACRN High-Level Architecture Hybrid-RT Example
+
+  In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
+  hypervisor. The Service VM runs a post-launched User VM that runs non-safety
+  or non-real-time tasks.
+
+You can find the predefined scenario XML files in the
+:acrn_file:`misc/config_tools/data` folder in the hypervisor source code. The
+:ref:`acrn_configuration_tool` tutorial explains how to use the ACRN
+configurator to create your own scenario, or to view and modify an existing one.
 
 Boot Sequence
 *************
@@ -411,448 +431,36 @@ The Boot process proceeds as follows:
    the ACRN Device Model and Virtual bootloader through ``dm-verity``.
 #. The virtual bootloader starts the User-side verified boot process.
 
-In this boot mode, the boot options of pre-launched VM and service VM are defined
+In this boot mode, the boot options of a pre-launched VM and the Service VM are defined
 in the ``bootargs`` variable of struct ``vm_configs[vm id].os_config`` in the
 source code ``configs/scenarios/$(SCENARIO)/vm_configurations.c`` (which resides
 under the hypervisor build directory) by default.
-Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
+These boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
 details. The boot options of a post-launched VM are not covered by hypervisor
-source code or a GRUB menu; they are defined in a guest image file or specified by
+source code or a GRUB menu; they are defined in the User VM's OS image file or specified by
 launch scripts.
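+
+For illustration only, here is a sketch of what a generated ``bootargs`` entry
+in ``vm_configurations.c`` may look like. The stand-in type definitions are
+included just to make the excerpt self-contained; the real structures, macros,
+and values are generated by the configurator for your scenario:
+
+.. code-block:: c
+
+   /* Stand-in definitions; the real ones live in the hypervisor's
+    * vm_config headers and are scenario-generated. */
+   enum { PRE_LAUNCHED_VM, SERVICE_VM, POST_LAUNCHED_VM };
+
+   struct acrn_vm_os_config {
+           const char *name;
+           const char *bootargs;   /* kernel command line for this VM */
+   };
+
+   struct acrn_vm_config {
+           int load_order;
+           struct acrn_vm_os_config os_config;
+   };
+
+   struct acrn_vm_config vm_configs[] = {
+           {       /* VM0: pre-launched VM, booted directly by the hypervisor */
+                   .load_order = PRE_LAUNCHED_VM,
+                   .os_config = {
+                           .name = "PRE_LAUNCHED_VM0",
+                           .bootargs = "rw rootwait root=/dev/sda2 console=ttyS0",
+                   },
+           },
+           {       /* VM1: Service VM */
+                   .load_order = SERVICE_VM,
+                   .os_config = {
+                           .name = "SERVICE_VM",
+                           .bootargs = "rw rootwait root=/dev/sda3 console=tty0",
+                   },
+           },
+   };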
 
-.. note::
+`Slim Bootloader`_ is an alternative boot firmware that can be used to
+boot ACRN. The `Boot ACRN Hypervisor
+<https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
+provides more information on how to use SBL with ACRN.
 
-   `Slim Bootloader`_ is an alternative boot firmware that can be used to
-   boot ACRN. The `Boot ACRN Hypervisor
-   <https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
-   provides more information on how to use SBL with ACRN.
+Learn More
+**********
 
+The ACRN documentation offers more details about topics introduced here,
+including the ACRN hypervisor architecture, Device Model, Service VM, and more.
 
-ACRN Hypervisor Architecture
-****************************
+These documents provide introductory information about development with ACRN:
 
-ACRN hypervisor is a Type 1 hypervisor, running directly on bare-metal
-hardware. It implements a hybrid VMM architecture, using a privileged
-service VM, running the Service VM that manages the I/O devices and
-provides I/O mediation. Multiple User VMs are supported, with each of
-them running different OSs.
+* :ref:`overview_dev`
+* :ref:`gsg`
+* :ref:`acrn_configuration_tool`
 
-Running systems in separate VMs provides isolation between other VMs and
-their applications, reducing potential attack surfaces and minimizing
-safety interference. However, running the systems in separate VMs may
-introduce additional latency for applications.
+These documents provide more details and in-depth discussions of the ACRN
+hypervisor architecture and high-level design, and a collection of advanced
+guides and tutorials:
 
-:numref:`V2-hl-arch` shows the ACRN hypervisor architecture, with
-all types of Virtual Machines (VMs) represented:
+* :ref:`hld`
+* :ref:`develop_acrn`
 
-
The User kernel runs in ring 0 of guest mode, and -user land applications run in ring 3 of User mode (ring 1 & 2 are -usually not used by commercial OSes). - -.. figure:: images/VMX-brief.png - :align: center - :name: VMX-brief - - VMX Brief - -As shown in :numref:`VMX-brief`, VMM mode and guest mode are switched -through VM Exit and VM Entry. When the bootloader hands off control to -the ACRN hypervisor, the processor hasn't enabled VMX operation yet. The -ACRN hypervisor needs to enable VMX operation through a VMXON instruction -first. Initially, the processor stays in VMM mode when the VMX operation -is enabled. It enters guest mode through a VM resume instruction (or -first-time VM launch), and returns to VMM mode through a VM exit event. VM -exit occurs in response to certain instructions and events. - -The behavior of processor execution in guest mode is controlled by a -virtual machine control structure (VMCS). VMCS contains the guest state -(loaded at VM Entry, and saved at VM Exit), the host state, (loaded at -the time of VM exit), and the guest execution controls. ACRN hypervisor -creates a VMCS data structure for each virtual CPU, and uses the VMCS to -configure the behavior of the processor running in guest mode. - -When the execution of the guest hits a sensitive instruction, a VM exit -event may happen as defined in the VMCS configuration. Control goes back -to the ACRN hypervisor when the VM exit happens. The ACRN hypervisor -emulates the guest instruction (if the exit was due to privilege issue) -and resumes the guest to its next instruction, or fixes the VM exit -reason (for example if a guest memory page is not mapped yet) and resume -the guest to re-execute the instruction. - -Note that the address space used in VMM mode is different from that in -guest mode. The guest mode and VMM mode use different memory-mapping -tables, and therefore the ACRN hypervisor is protected from guest -access. The ACRN hypervisor uses EPT to map the guest address, using the -guest page table to map from guest linear address to guest physical -address, and using the EPT table to map from guest physical address to -machine physical address or host physical address (HPA). - -ACRN Device Model Architecture -****************************** - -Because devices may need to be shared between VMs, device emulation is -used to give VM applications (and OSes) access to these shared devices. -Traditionally there are three architectural approaches to device -emulation: - -* The first architecture is **device emulation within the hypervisor**, which - is a common method implemented within the VMware\* workstation product - (an operating system-based hypervisor). In this method, the hypervisor - includes emulations of common devices that the various guest operating - systems can share, including virtual disks, virtual network adapters, - and other necessary platform elements. - -* The second architecture is called **user space device emulation**. As the - name implies, rather than the device emulation being embedded within - the hypervisor, it is instead implemented in a separate user space - application. QEMU, for example, provides this kind of device emulation - also used by many independent hypervisors. This model is - advantageous, because the device emulation is independent of the - hypervisor and can therefore be shared for other hypervisors. It also - permits arbitrary device emulation without having to burden the - hypervisor (which operates in a privileged state) with this - functionality. 
- -* The third variation on hypervisor-based device emulation is - **paravirtualized (PV) drivers**. In this model introduced by the `XEN - Project`_, the hypervisor includes the physical drivers, and each guest - operating system includes a hypervisor-aware driver that works in - concert with the hypervisor drivers. - -.. _XEN Project: - https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum - -In the device emulation models discussed above, there's a price to pay -for sharing devices. Whether device emulation is performed in the -hypervisor, or in user space within an independent VM, overhead exists. -This overhead is worthwhile as long as the devices need to be shared by -multiple guest operating systems. If sharing is not necessary, then -there are more efficient methods for accessing devices, for example -"passthrough". - -ACRN device model is a placeholder of the User VM. It allocates memory for -the User OS, configures and initializes the devices used by the User VM, -loads the virtual firmware, initializes the virtual CPU state, and -invokes the ACRN hypervisor service to execute the guest instructions. -ACRN Device model is an application running in the Service VM that -emulates devices based on command line configuration, as shown in -the architecture diagram :numref:`device-model` below: - -.. figure:: images/device-model.png - :align: center - :name: device-model - - ACRN Device Model - -ACRN Device model incorporates these three aspects: - -**Device Emulation**: - ACRN Device model provides device emulation routines that register - their I/O handlers to the I/O dispatcher. When there is an I/O request - from the User VM device, the I/O dispatcher sends this request to the - corresponding device emulation routine. - -**I/O Path**: - see `ACRN-io-mediator`_ below - -**HSM**: - The Hypervisor Service Module is a kernel module in the - Service VM acting as a middle layer to support the device model. The HSM - client handling flow is described below: - - #. ACRN hypervisor IOREQ is forwarded to the HSM by an upcall - notification to the Service VM. - #. HSM will mark the IOREQ as "in process" so that the same IOREQ will - not pick up again. The IOREQ will be sent to the client for handling. - Meanwhile, the HSM is ready for another IOREQ. - #. IOREQ clients are either a Service VM Userland application or a Service VM - Kernel space module. Once the IOREQ is processed and completed, the - Client will issue an IOCTL call to the HSM to notify an IOREQ state - change. The HSM then checks and hypercalls to ACRN hypervisor - notifying it that the IOREQ has completed. - -.. note:: - * Userland: dm as ACRN Device Model. - * Kernel space: VBS-K, MPT Service, HSM itself - -.. _pass-through: - -Device Passthrough -****************** - -At the highest level, device passthrough is about providing isolation -of a device to a given guest operating system so that the device can be -used exclusively by that guest. - -.. figure:: images/device-passthrough.png - :align: center - :name: device-passthrough - - Device Passthrough - -Near-native performance can be achieved by using device passthrough. -This is ideal for networking applications (or those with high disk I/O -needs) that have not adopted virtualization because of contention and -performance degradation through the hypervisor (using a driver in the -hypervisor or through the hypervisor to a user space emulation). -Assigning devices to specific guests is also useful when those devices -inherently wouldn't be shared. 
For example, if a system includes -multiple video adapters, those adapters could be passed through to -unique guest domains. - -Finally, there may be specialized PCI devices that only one guest domain -uses, so they should be passed through to the guest. Individual USB -ports could be isolated to a given domain too, or a serial port (which -is itself not shareable) could be isolated to a particular guest. In -ACRN hypervisor, we support USB controller passthrough only, and we -don't support passthrough for a legacy serial port, (for example -0x3f8). - - -Hardware Support for Device Passthrough -======================================= - -Intel's current processor architectures provides support for device -passthrough with VT-d. VT-d maps guest physical address to machine -physical address, so device can use guest physical address directly. -When this mapping occurs, the hardware takes care of access (and -protection), and the guest operating system can use the device as if it -were a non-virtualized system. In addition to mapping guest to physical -memory, isolation prevents this device from accessing memory belonging -to other guests or the hypervisor. - -Another innovation that helps interrupts scale to large numbers of VMs -is called Message Signaled Interrupts (MSI). Rather than relying on -physical interrupt pins to be associated with a guest, MSI transforms -interrupts into messages that are more easily virtualized (scaling to -thousands of individual interrupts). MSI has been available since PCI -version 2.2 but is also available in PCI Express (PCIe), where it allows -fabrics to scale to many devices. MSI is ideal for I/O virtualization, -as it allows isolation of interrupt sources (as opposed to physical pins -that must be multiplexed or routed through software). - -Hypervisor Support for Device Passthrough -========================================= - -By using the latest virtualization-enhanced processor architectures, -hypervisors and virtualization solutions can support device -passthrough (using VT-d), including Xen, KVM, and ACRN hypervisor. -In most cases, the guest operating system (User -OS) must be compiled to support passthrough, by using -kernel build-time options. Hiding the devices from the host VM may also -be required (as is done with Xen using pciback). Some restrictions apply -in PCI, for example, PCI devices behind a PCIe-to-PCI bridge must be -assigned to the same guest OS. PCIe does not have this restriction. - -.. _ACRN-io-mediator: - -ACRN I/O Mediator -***************** - -:numref:`io-emulation-path` shows the flow of an example I/O emulation path. - -.. figure:: images/io-emulation-path.png - :align: center - :name: io-emulation-path - - I/O Emulation Path - -Following along with the numbered items in :numref:`io-emulation-path`: - -1. When a guest executes an I/O instruction (PIO or MMIO), a VM exit happens. - ACRN hypervisor takes control, and analyzes the VM - exit reason, which is a VMX_EXIT_REASON_IO_INSTRUCTION for PIO access. -2. ACRN hypervisor fetches and analyzes the guest instruction, and - notices it is a PIO instruction (``in AL, 20h`` in this example), and put - the decoded information (including the PIO address, size of access, - read/write, and target register) into the shared page, and - notify/interrupt the Service VM to process. -3. The hypervisor service module (HSM) in Service VM receives the - interrupt, and queries the IO request ring to get the PIO instruction - details. -4. 
It checks to see if any kernel device claims - ownership of the IO port: if a kernel module claimed it, the kernel - module is activated to execute its processing APIs. Otherwise, the HSM - module leaves the IO request in the shared page and wakes up the - device model thread to process. -5. The ACRN device model follows the same mechanism as the HSM. The I/O - processing thread of device model queries the IO request ring to get the - PIO instruction details and checks to see if any (guest) device emulation - module claims ownership of the IO port: if a module claimed it, - the module is invoked to execute its processing APIs. -6. After the ACRN device module completes the emulation (port IO 20h access - in this example), (say uDev1 here), uDev1 puts the result into the - shared page (in register AL in this example). -7. ACRN device model then returns control to ACRN hypervisor to indicate the - completion of an IO instruction emulation, typically through HSM/hypercall. -8. The ACRN hypervisor then knows IO emulation is complete, and copies - the result to the guest register context. -9. The ACRN hypervisor finally advances the guest IP to - indicate completion of instruction execution, and resumes the guest. - -The MMIO path is very similar, except the VM exit reason is different. MMIO -access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in -the hypervisor. - -Virtio Framework Architecture -***************************** - -.. _Virtio spec: - http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html - -Virtio is an abstraction for a set of common emulated devices in any -type of hypervisor. In the ACRN reference stack, our -implementation is compatible with `Virtio spec`_ 0.9 and 1.0. By -following this spec, virtual environments and guests -should have a straightforward, efficient, standard and extensible -mechanism for virtual devices, rather than boutique per-environment or -per-OS mechanisms. - -Virtio provides a common frontend driver framework that not only -standardizes device interfaces, but also increases code reuse across -different virtualization platforms. - -.. figure:: images/virtio-architecture.png - :width: 500px - :align: center - :name: virtio-architecture - - Virtio Architecture - -To better understand Virtio, especially its usage in -the ACRN project, several key concepts of Virtio are highlighted -here: - -**Front-End Virtio driver** (a.k.a. frontend driver, or FE driver in this document) - Virtio adopts a frontend-backend architecture, which enables a simple - but flexible framework for both frontend and backend Virtio driver. The - FE driver provides APIs to configure the interface, pass messages, produce - requests, and notify backend Virtio driver. As a result, the FE driver - is easy to implement and the performance overhead of emulating device is - eliminated. - -**Back-End Virtio driver** (a.k.a. backend driver, or BE driver in this document) - Similar to FE driver, the BE driver, runs either in user-land or - kernel-land of host OS. The BE driver consumes requests from FE driver - and send them to the host's native device driver. Once the requests are - done by the host native device driver, the BE driver notifies the FE - driver about the completeness of the requests. - -**Straightforward**: Virtio devices as standard devices on existing Buses - Instead of creating new device buses from scratch, Virtio devices are - built on existing buses. This gives a straightforward way for both FE - and BE drivers to interact with each other. 
For example, FE driver could - read/write registers of the device, and the virtual device could - interrupt FE driver, on behalf of the BE driver, in case of something is - happening. Currently, Virtio supports PCI/PCIe bus and MMIO bus. In - ACRN project, only PCI/PCIe bus is supported, and all the Virtio devices - share the same vendor ID 0x1AF4. - -**Efficient**: batching operation is encouraged - Batching operation and deferred notification are important to achieve - high-performance I/O, since notification between FE and BE driver - usually involves an expensive exit of the guest. Therefore, batching - operating and notification suppression are highly encouraged if - possible. This will give an efficient implementation for performance - critical devices. - -**Standard: virtqueue** - All the Virtio devices share a standard ring buffer and descriptor - mechanism, called a virtqueue, shown in Figure 6. A virtqueue - is a queue of scatter-gather buffers. There are three important - methods on virtqueues: - - * ``add_buf`` is for adding a request/response buffer in a virtqueue - * ``get_buf`` is for getting a response/request in a virtqueue, and - * ``kick`` is for notifying the other side for a virtqueue to - consume buffers. - - The virtqueues are created in guest physical memory by the FE drivers. - The BE drivers only need to parse the virtqueue structures to obtain - the requests and get the requests done. Virtqueue organization is - specific to the User OS. In the implementation of Virtio in Linux, the - virtqueue is implemented as a ring buffer structure called - ``vring``. - - In ACRN, the virtqueue APIs can be leveraged - directly so users don't need to worry about the details of the - virtqueue. Refer to the User VM for - more details about the virtqueue implementations. - -**Extensible: feature bits** - A simple extensible feature negotiation mechanism exists for each virtual - device and its driver. Each virtual device could claim its - device-specific features while the corresponding driver could respond to - the device with the subset of features the driver understands. The - feature mechanism enables forward and backward compatibility for the - virtual device and driver. - -In the ACRN reference stack, we implement user-land and kernel -space as shown in :numref:`virtio-framework-userland`: - -.. figure:: images/virtio-framework-userland.png - :width: 600px - :align: center - :name: virtio-framework-userland - - Virtio Framework - User Land - -In the Virtio user-land framework, the implementation is compatible with -Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model, -and communicates with the Device Model through the PCIe interface: PIO/MMIO -or MSI/MSI-X. VBS-U accesses Virtio APIs through the user space ``vring`` service -API helpers. User space ``vring`` service API helpers access shared ring -through a remote memory map (mmap). HSM maps User VM memory with the help of -ACRN Hypervisor. - -.. figure:: images/virtio-framework-kernel.png - :width: 600px - :align: center - :name: virtio-framework-kernel - - Virtio Framework - Kernel Space - -VBS-U offloads data plane processing to VBS-K. VBS-U initializes VBS-K -at the right timings, for example. The FE driver sets -VIRTIO_CONFIG_S_DRIVER_OK to avoid unnecessary device configuration -changes while running. VBS-K can access shared rings through the VBS-K -virtqueue APIs. VBS-K virtqueue APIs are similar to VBS-U virtqueue -APIs. 
VBS-K registers as a HSM client to handle a continuous range of -registers. - -There may be one or more HSM-clients for each VBS-K, and there can be a -single HSM-client for all VBS-Ks as well. VBS-K notifies FE through HSM -interrupt APIs.