doc: Minor style cleanup

- Remove "currently"
- Capitalize titles and Device Model

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Reyes, Amy 2022-03-18 15:24:35 -07:00 committed by David Kinder
parent 21aeb4f422
commit f5b021b1b5
47 changed files with 226 additions and 218 deletions


@ -821,8 +821,8 @@ C-FN-14: All defined functions shall be used
All defined functions shall be used, either called explicitly or indirectly All defined functions shall be used, either called explicitly or indirectly
via the address. Otherwise, the function shall be removed. The following case via the address. Otherwise, the function shall be removed. The following case
is an exception: Some extra functions may be kept in order to provide a more is an exception: Some extra functions may be kept in order to provide a more
complete library of APIs. These functions may be implemented but not used complete library of APIs. These functions may be implemented but not used.
currently. These functions will come in handy in the future. In this case, These functions will come in handy in the future. In this case,
these functions may remain. these functions may remain.
Compliant example:: Compliant example::
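The guide's own compliant example is not reproduced in this excerpt. Purely as an illustration of the rule and its exception — not the guide's example — a sketch might look like this, where ``checksum_verify`` is implemented but not yet called and is kept only to round out the API:

.. code-block:: c

   #include <stdint.h>

   static uint32_t checksum_add(uint32_t sum, uint8_t byte)
   {
       return sum + byte;    /* used: called by checksum_calc() below */
   }

   uint32_t checksum_calc(const uint8_t *buf, uint32_t len)
   {
       uint32_t sum = 0U;

       for (uint32_t i = 0U; i < len; i++) {
           sum = checksum_add(sum, buf[i]);
       }
       return sum;
   }

   /* Exception case: implemented but not yet called anywhere; kept so the
    * checksum API is complete (illustrative only). */
   uint32_t checksum_verify(const uint8_t *buf, uint32_t len, uint32_t expected)
   {
       return (checksum_calc(buf, len) == expected) ? 1U : 0U;
   }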


@ -126,7 +126,7 @@ To clone the ACRN hypervisor repository (including the ``hypervisor``,
$ git clone https://github.com/projectacrn/acrn-hypervisor $ git clone https://github.com/projectacrn/acrn-hypervisor
In addition to the ACRN hypervisor and device model itself, you'll also find In addition to the ACRN hypervisor and Device Model itself, you'll also find
the sources for technical documentation available from the the sources for technical documentation available from the
`ACRN documentation site`_. All of these are available for developers to `ACRN documentation site`_. All of these are available for developers to
contribute to and enhance. contribute to and enhance.


@ -3,25 +3,33 @@
AT Keyboard Controller Emulation AT Keyboard Controller Emulation
################################ ################################
This document describes the AT keyboard controller emulation implementation in the ACRN device model. The Atkbdc device emulates a PS2 keyboard and mouse. This document describes the AT keyboard controller emulation implementation in the ACRN Device Model. The Atkbdc device emulates a PS2 keyboard and mouse.
Overview Overview
******** ********
The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mice to a PC-compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. PS2 keyboard/mouse emulation is based on ACPI Emulation. We can add ACPI description of PS2 keyboard/mouse into virtual DSDT table to emulate keyboard/mouse in the User VM. The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and
mice to a PC-compatible computer system. Its name comes from the IBM Personal
System/2 series of personal computers, with which it was introduced in 1987. PS2
keyboard/mouse emulation is based on ACPI emulation. We can add an ACPI
description of the PS2 keyboard/mouse to the virtual DSDT table to emulate the
keyboard/mouse in the User VM.
.. figure:: images/atkbdc-virt-hld.png .. figure:: images/atkbdc-virt-hld.png
:align: center :align: center
:name: atkbdc-virt-arch :name: atkbdc-virt-arch
AT keyboard controller emulation architecture AT Keyboard Controller Emulation Architecture
PS2 Keyboard Emulation PS2 Keyboard Emulation
********************** **********************
ACRN supports the AT keyboard controller for PS2 keyboard that can be accessed through I/O ports (0x60 and 0x64). 0x60 is used to access AT keyboard controller data register; 0x64 is used to access AT keyboard controller address register. ACRN supports an AT keyboard controller for PS2 keyboard that can be accessed
through I/O ports (0x60 and 0x64). 0x60 is used to access the AT keyboard
controller data register; 0x64 is used to access the AT keyboard controller
address register.
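For reference, the access pattern the emulation has to satisfy is the classic one: a guest driver polls the status port at 0x64 until the output-buffer-full bit is set, then reads the scancode from 0x60. A minimal sketch of that pattern (generic AT-controller behavior, not code from this document; ``pio_read8`` is a hypothetical port-I/O helper wrapping ``inb``):

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical port-I/O helper; on x86 this would wrap inb. */
   extern uint8_t pio_read8(uint16_t port);

   #define KBD_DATA_PORT   0x60U   /* data register           */
   #define KBD_STATUS_PORT 0x64U   /* status/command register */
   #define KBD_STAT_OBF    0x01U   /* output buffer full      */

   /* Poll until the controller reports data, then read one scancode. */
   static uint8_t kbd_read_scancode(void)
   {
       while ((pio_read8(KBD_STATUS_PORT) & KBD_STAT_OBF) == 0U) {
           /* busy-wait; real code would bound or yield here */
       }
       return pio_read8(KBD_DATA_PORT);
   }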
The PS2 keyboard ACPI description as below:: PS2 keyboard ACPI description::
Device (KBD) Device (KBD)
{ {
@ -48,10 +56,12 @@ The PS2 keyboard ACPI description as below::
PS2 Mouse Emulation PS2 Mouse Emulation
******************* *******************
ACRN supports AT keyboard controller for PS2 mouse that can be accessed through I/O ports (0x60 and 0x64). ACRN supports an AT keyboard controller for PS2 mouse that can be accessed
0x60 is used to access AT keyboard controller data register; 0x64 is used to access AT keyboard controller address register. through I/O ports (0x60 and 0x64). 0x60 is used to access the AT keyboard
controller data register; 0x64 is used to access the AT keyboard controller
address register.
The PS2 mouse ACPI description as below:: PS2 mouse ACPI description::
Device (MOU) Device (MOU)
{ {


@ -993,7 +993,7 @@ An alternative ACPI resource abstraction option is for the Service VM to
own all devices and emulate a set of virtual devices for the User VM own all devices and emulate a set of virtual devices for the User VM
(POST_LAUNCHED_VM). (POST_LAUNCHED_VM).
This is the most popular ACPI resource model for virtualization, This is the most popular ACPI resource model for virtualization,
as shown in the picture below. ACRN currently as shown in the picture below. ACRN
uses device emulation plus some device passthrough for the User VM. uses device emulation plus some device passthrough for the User VM.
.. figure:: images/dm-image52.png .. figure:: images/dm-image52.png


@ -235,9 +235,9 @@ the hypervisor.
DMA Emulation DMA Emulation
------------- -------------
Currently the only fully virtualized devices to the User VM are USB xHCI, UART, The only fully virtualized devices to the User VM are USB xHCI, UART,
and Automotive I/O controller. None of these require emulating and Automotive I/O controller. None of these require emulating
DMA transactions. ACRN does not currently support virtual DMA. DMA transactions. ACRN does not support virtual DMA.
Hypervisor Hypervisor
********** **********
@ -371,8 +371,7 @@ Refer to :ref:`hld-trace-log` for more details.
User VM User VM
******* *******
Currently, ACRN can boot Linux and Android guest OSes. For an Android guest OS, ACRN can boot Linux and Android guest OSes. For an Android guest OS, ACRN
ACRN
provides a VM environment with two worlds: normal world and trusty provides a VM environment with two worlds: normal world and trusty
world. The Android OS runs in the normal world. The trusty OS and world. The Android OS runs in the normal world. The trusty OS and
security sensitive applications run in the trusty world. The trusty security sensitive applications run in the trusty world. The trusty
@ -384,7 +383,7 @@ Guest Physical Memory Layout - User VM E820
DM creates an E820 table for a User VM based on these simple rules: DM creates an E820 table for a User VM based on these simple rules:
- If requested VM memory size < low memory limitation (currently 2 GB, - If requested VM memory size < low memory limitation (2 GB,
defined in DM), then low memory range = [0, requested VM memory defined in DM), then low memory range = [0, requested VM memory
size] size]
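Only the first rule is visible in this excerpt. A minimal sketch of that rule follows; the 2 GB constant and the fallback branch for larger requests are assumptions for illustration, not taken from the document:

.. code-block:: c

   #include <stdint.h>

   #define LOW_MEM_LIMIT (2UL * 1024UL * 1024UL * 1024UL)  /* assumed 2 GB limit */

   /* Size of the low-memory range [0, lowmem) for a User VM. */
   static uint64_t e820_lowmem_size(uint64_t requested_size)
   {
       if (requested_size < LOW_MEM_LIMIT) {
           /* rule shown above: low memory = [0, requested VM memory size] */
           return requested_size;
       }
       /* larger requests are covered by rules not shown in this excerpt */
       return LOW_MEM_LIMIT;
   }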


@ -64,9 +64,9 @@ Px/Cx data for User VM P/C-state management:
:align: center :align: center
:name: vACPItable :name: vACPItable
System block for building vACPI table with Px/Cx data System Block for Building vACPI Table with Px/Cx Data
Some ioctl APIs are defined for the Device model to query Px/Cx data from Some ioctl APIs are defined for the Device Model to query Px/Cx data from
the Service VM HSM. The Hypervisor needs to provide hypercall APIs to transit the Service VM HSM. The Hypervisor needs to provide hypercall APIs to transit
Px/Cx data from the CPU state table to the Service VM HSM. Px/Cx data from the CPU state table to the Service VM HSM.
@ -75,11 +75,11 @@ The build flow is:
1) Use an offline tool (e.g. **iasl**) to parse the Px/Cx data and hard-code to 1) Use an offline tool (e.g. **iasl**) to parse the Px/Cx data and hard-code to
a CPU state table in the Hypervisor. The Hypervisor loads the data after a CPU state table in the Hypervisor. The Hypervisor loads the data after
the system boots. the system boots.
2) Before User VM launching, the Device model queries the Px/Cx data from the Service 2) Before User VM launching, the Device Model queries the Px/Cx data from the Service
VM HSM via ioctl interface. VM HSM via ioctl interface.
3) HSM transmits the query request to the Hypervisor by hypercall. 3) HSM transmits the query request to the Hypervisor by hypercall.
4) The Hypervisor returns the Px/Cx data. 4) The Hypervisor returns the Px/Cx data.
5) The Device model builds the virtual ACPI table with these Px/Cx data 5) The Device Model builds the virtual ACPI table with these Px/Cx data
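As a rough userspace illustration of step 2 above — the Device Model asking the Service VM HSM for P-state data over an ioctl — consider the sketch below. The device path, ioctl number, and payload layout are placeholders, not the real HSM interface:

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   /* Placeholder ioctl number and payload; the real HSM definitions differ. */
   #define HSM_GET_CPU_PM_STATE 0x1234
   struct px_data_example {
       unsigned long core_frequency;   /* MHz */
       unsigned long power;            /* mW  */
       unsigned long control;          /* value written to request this P-state */
   };

   int main(void)
   {
       struct px_data_example px = {0};
       int fd = open("/dev/acrn_hsm", O_RDWR);   /* assumed HSM node name */

       if (fd < 0) {
           perror("open");
           return 1;
       }
       if (ioctl(fd, HSM_GET_CPU_PM_STATE, &px) < 0)
           perror("ioctl");
       else
           printf("P-state: %lu MHz, ctrl 0x%lx\n", px.core_frequency, px.control);
       close(fd);
       return 0;
   }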
Intercept Policy Intercept Policy
================ ================
@ -124,7 +124,7 @@ could customize it according to their hardware/software requirements.
:align: center :align: center
:name: systempmdiag :name: systempmdiag
ACRN System S3/S5 diagram ACRN System S3/S5 Diagram
System Low Power State Entry Process System Low Power State Entry Process
@ -156,7 +156,7 @@ with typical ISD configuration(S3 follows very similar process)
:align: center :align: center
:name: pmworkflow :name: pmworkflow
ACRN system S5 entry workflow ACRN System S5 Entry Workflow
For system power state entry: For system power state entry:


@ -57,7 +57,7 @@ SoC in-vehicle platform.
:align: center :align: center
:name: security-vehicle :name: security-vehicle
SDC and IVE system In-Vehicle SDC and IVE System In-Vehicle
In this system, the ACRN hypervisor is running at the most privileged In this system, the ACRN hypervisor is running at the most privileged
@ -125,7 +125,7 @@ launched.
Note that measured boot (as described well in this `boot security Note that measured boot (as described well in this `boot security
technologies document technologies document
<https://firmwaresecurity.com/2015/07/29/survey-of-boot-security-technologies/>`_) <https://firmwaresecurity.com/2015/07/29/survey-of-boot-security-technologies/>`_)
is not currently supported for ACRN and its guest VMs. is not supported for ACRN and its guest VMs.
Boot Flow Boot Flow
--------- ---------
@ -137,7 +137,7 @@ As shown in :numref:`security-bootflow-sbl`, the Converged Security Engine
Firmware (CSE FW) behaves as the root of trust in this platform boot Firmware (CSE FW) behaves as the root of trust in this platform boot
flow. It authenticates and starts the BIOS (SBL), whereupon the SBL is flow. It authenticates and starts the BIOS (SBL), whereupon the SBL is
responsible for authenticating and verifying the ACRN hypervisor image. responsible for authenticating and verifying the ACRN hypervisor image.
Currently the Service VM kernel is built together with the ACRN hypervisor as The Service VM kernel is built together with the ACRN hypervisor as
one image bundle, so this whole image signature is verified by SBL one image bundle, so this whole image signature is verified by SBL
before launching. before launching.
@ -316,7 +316,7 @@ The ACRN hypervisor has ultimate access control of all the platform
memory spaces (see :ref:`memmgt-hld`). Note that on the APL platform, memory spaces (see :ref:`memmgt-hld`). Note that on the APL platform,
`SGX <https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html>`_ and `TME `SGX <https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html>`_ and `TME
<https://itpeernetwork.intel.com/memory-encryption/>`_ <https://itpeernetwork.intel.com/memory-encryption/>`_
are not currently supported. are not supported.
The hypervisor can read and write any physical memory space allocated The hypervisor can read and write any physical memory space allocated
to any guest VM, and can even fetch instructions and execute the code in to any guest VM, and can even fetch instructions and execute the code in
@ -969,7 +969,7 @@ Secure storage is one of the security services provided by the secure world
on the RPMB partition in eMMC (or UFS, and NVMe storage). Details of how on the RPMB partition in eMMC (or UFS, and NVMe storage). Details of how
RPMB works are out of scope for this document. RPMB works are out of scope for this document.
Since currently the eMMC in APL SoC platforms only has a single RPMB Since the eMMC in APL SoC platforms only has a single RPMB
partition for tamper-resistant and anti-replay secure storage, the partition for tamper-resistant and anti-replay secure storage, the
secure storage (RPMB) should be virtualized in order to support multiple secure storage (RPMB) should be virtualized in order to support multiple
guest User VMs. However, although future generations of flash storage guest User VMs. However, although future generations of flash storage


@ -44,7 +44,7 @@ devices such as audio, eAVB/TSN, IPU, and CSMU devices. This section gives
an overview about virtio history, motivation, and advantages, and then an overview about virtio history, motivation, and advantages, and then
highlights virtio key concepts. Second, this section will describe highlights virtio key concepts. Second, this section will describe
ACRN's virtio architectures and elaborate on ACRN virtio APIs. Finally ACRN's virtio architectures and elaborate on ACRN virtio APIs. Finally
this section will introduce all the virtio devices currently supported this section will introduce all the virtio devices supported
by ACRN. by ACRN.
Virtio Introduction Virtio Introduction
@ -99,7 +99,7 @@ Straightforward: virtio devices as standard devices on existing buses
interrupt the FE driver, on behalf of the BE driver, in case something of interrupt the FE driver, on behalf of the BE driver, in case something of
interest is happening. interest is happening.
Currently virtio supports PCI/PCIe bus and MMIO bus. In ACRN, only The virtio supports PCI/PCIe bus and MMIO bus. In ACRN, only
PCI/PCIe bus is supported, and all the virtio devices share the same PCI/PCIe bus is supported, and all the virtio devices share the same
vendor ID 0x1AF4. vendor ID 0x1AF4.
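That shared vendor ID is easy to observe from a Linux guest by reading the PCI vendor attribute in sysfs. A small sketch (the 00:03.0 address is only an example):

.. code-block:: c

   #include <stdio.h>
   #include <string.h>

   /* Return 1 if the PCI device at the given BDF advertises vendor 0x1af4. */
   static int is_virtio_device(const char *bdf)
   {
       char path[128], buf[16] = {0};
       FILE *f;

       snprintf(path, sizeof(path), "/sys/bus/pci/devices/0000:%s/vendor", bdf);
       f = fopen(path, "r");
       if (f == NULL)
           return 0;
       if (fgets(buf, sizeof(buf), f) == NULL)
           buf[0] = '\0';
       fclose(f);
       return strncmp(buf, "0x1af4", 6) == 0;
   }

   int main(void)
   {
       printf("00:03.0 is %sa virtio device\n",
              is_virtio_device("00:03.0") ? "" : "not ");
       return 0;
   }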
@ -308,7 +308,7 @@ general workflow of ioeventfd.
:align: center :align: center
:name: ioeventfd-workflow :name: ioeventfd-workflow
ioeventfd general workflow Ioeventfd General Workflow
The workflow can be summarized as: The workflow can be summarized as:
@ -334,7 +334,7 @@ signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.
:align: center :align: center
:name: irqfd-workflow :name: irqfd-workflow
irqfd general flow Irqfd General Flow
The workflow can be summarized as: The workflow can be summarized as:
@ -641,7 +641,7 @@ their temporary IDs are listed in the following table.
| GPIO | 0x8086 | 0x8609 | 0x8086 | 0xFFF7 | | GPIO | 0x8086 | 0x8609 | 0x8086 | 0xFFF7 |
+--------------+-------------+-------------+-------------+-------------+ +--------------+-------------+-------------+-------------+-------------+
The following sections introduce the status of virtio devices currently The following sections introduce the status of virtio devices
supported in ACRN. supported in ACRN.
.. toctree:: .. toctree::


@ -15,7 +15,7 @@ The hypervisor console is a text-based terminal accessible from UART.
:align: center :align: center
:name: console-processing :name: console-processing
Periodic console processing Periodic Console Processing
A periodic timer is set on initialization to trigger console processing every 40ms. A periodic timer is set on initialization to trigger console processing every 40ms.
Processing behavior depends on whether the vUART Processing behavior depends on whether the vUART
@ -43,7 +43,7 @@ the physical UART only when the vUART is deactivated. See
Virtual UART Virtual UART
************ ************
Currently UART 16550 is owned by the hypervisor itself and used for UART 16550 is owned by the hypervisor itself and used for
debugging purposes. Properties are configured by hypervisor command debugging purposes. Properties are configured by hypervisor command
line. Hypervisor emulates a UART device with 0x3F8 address to Service VM that line. Hypervisor emulates a UART device with 0x3F8 address to Service VM that
acts as the console of Service VM with these features: acts as the console of Service VM with these features:
@ -61,7 +61,7 @@ The following diagram shows the activation state transition of vUART.
.. figure:: images/console-image41.png .. figure:: images/console-image41.png
:align: center :align: center
Periodic console processing Periodic Console Processing
Specifically: Specifically:


@ -120,7 +120,7 @@ The physical CPU assignment is predefined by ``cpu_affinity`` in
``vm config``, while post-launched VMs could be launched on pCPUs that are ``vm config``, while post-launched VMs could be launched on pCPUs that are
a subset of it. a subset of it.
Currently, the ACRN hypervisor does not support virtual CPU migration to The ACRN hypervisor does not support virtual CPU migration to
different physical CPUs. No changes to the mapping of the virtual CPU to different physical CPUs. No changes to the mapping of the virtual CPU to
physical CPU can happen without first calling ``offline_vcpu``. physical CPU can happen without first calling ``offline_vcpu``.
@ -457,7 +457,7 @@ A bitmap in the vCPU structure lists the different requests:
ACRN provides the function *vcpu_make_request* to make different ACRN provides the function *vcpu_make_request* to make different
requests, set the bitmap of the corresponding request, and notify the target requests, set the bitmap of the corresponding request, and notify the target
vCPU through the IPI if necessary (when the target vCPU is not currently vCPU through the IPI if necessary (when the target vCPU is not
running). See :ref:`vcpu-request-interrupt-injection` for details. running). See :ref:`vcpu-request-interrupt-injection` for details.
.. code-block:: c .. code-block:: c
@ -471,7 +471,7 @@ running). See :ref:`vcpu-request-interrupt-injection` for details.
* if current hostcpu is not the target vcpu's hostcpu, we need * if current hostcpu is not the target vcpu's hostcpu, we need
* to invoke IPI to wake up target vcpu * to invoke IPI to wake up target vcpu
* *
* TODO: Here we just compare with cpuid, since cpuid currently is * TODO: Here we just compare with cpuid, since cpuid is
* global under pCPU / vCPU 1:1 mapping. If later we enabled vcpu * global under pCPU / vCPU 1:1 mapping. If later we enabled vcpu
* scheduling, we need change here to determine it target vcpu is * scheduling, we need change here to determine it target vcpu is
* VMX non-root or root mode * VMX non-root or root mode
@ -821,16 +821,16 @@ This table describes details for CPUID emulation:
- EBX, ECX, EDX: reserved to 0 - EBX, ECX, EDX: reserved to 0
* - 0AH * - 0AH
- - PMU currently disabled - - PMU disabled
* - 0FH, 10H * - 0FH, 10H
- - Intel RDT currently disabled - - Intel RDT disabled
* - 12H * - 12H
- - Fill according to SGX virtualization - - Fill according to SGX virtualization
* - 14H * - 14H
- - Intel Processor Trace currently disabled - - Intel Processor Trace disabled
* - Others * - Others
- - Get from per-vm CPUID entries cache - - Get from per-vm CPUID entries cache
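A condensed sketch of the filtering this table describes — zeroing the leaves for the disabled PMU, RDT, and Processor Trace features and falling back to a cached entry otherwise. The function and helper names are invented for illustration and are not ACRN's:

.. code-block:: c

   #include <stdint.h>

   struct cpuid_regs {
       uint32_t eax, ebx, ecx, edx;
   };

   /* Illustrative stand-in for the per-VM CPUID entries cache lookup. */
   extern void lookup_cached_cpuid(uint32_t leaf, struct cpuid_regs *regs);

   static void emulate_cpuid(uint32_t leaf, struct cpuid_regs *regs)
   {
       switch (leaf) {
       case 0x0AU:          /* PMU disabled             */
       case 0x0FU:          /* Intel RDT disabled       */
       case 0x10U:
       case 0x14U:          /* Processor Trace disabled */
           regs->eax = regs->ebx = regs->ecx = regs->edx = 0U;
           break;
       default:
           /* "Others - Get from per-vm CPUID entries cache" */
           lookup_cached_cpuid(leaf, regs);
           break;
       }
   }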
@ -969,7 +969,7 @@ ACRN emulates ``mov to cr0``, ``mov to cr4``, ``mov to cr8``, and ``mov
from cr8`` through *cr_access_vmexit_handler* based on from cr8`` through *cr_access_vmexit_handler* based on
*VMX_EXIT_REASON_CR_ACCESS*. *VMX_EXIT_REASON_CR_ACCESS*.
.. note:: Currently ``mov to cr8`` and ``mov from cr8`` are actually .. note:: ``mov to cr8`` and ``mov from cr8`` are
not valid as ``CR8-load/store exiting`` bits are set as 0 in not valid as ``CR8-load/store exiting`` bits are set as 0 in
*VMX_PROC_VM_EXEC_CONTROLS*. *VMX_PROC_VM_EXEC_CONTROLS*.
@ -1134,7 +1134,7 @@ MMIO (EPT) and APIC access emulation. When such a VM exit is triggered, the
hypervisor needs to decode the instruction from RIP then attempt the hypervisor needs to decode the instruction from RIP then attempt the
corresponding emulation based on its instruction and read/write direction. corresponding emulation based on its instruction and read/write direction.
ACRN currently supports emulating instructions for ``mov``, ``movx``, ACRN supports emulating instructions for ``mov``, ``movx``,
``movs``, ``stos``, ``test``, ``and``, ``or``, ``cmp``, ``sub``, and ``movs``, ``stos``, ``test``, ``and``, ``or``, ``cmp``, ``sub``, and
``bittest`` without support for lock prefix. Real mode emulation is not ``bittest`` without support for lock prefix. Real mode emulation is not
supported. supported.


@ -21,7 +21,7 @@ discussed here.
-------- --------
In the ACRN project, device emulation means emulating all existing In the ACRN project, device emulation means emulating all existing
hardware resources through a software component device model running in hardware resources through the Device Model, a software component running in
the Service VM. Device emulation must maintain the same SW the Service VM. Device emulation must maintain the same SW
interface as a native device, providing transparency to the VM software interface as a native device, providing transparency to the VM software
stack. Passthrough implemented in the hypervisor assigns a physical device stack. Passthrough implemented in the hypervisor assigns a physical device
@ -38,7 +38,7 @@ can't support device sharing.
:align: center :align: center
:name: emu-passthru-diff :name: emu-passthru-diff
Difference between emulation and passthrough Difference Between Emulation and Passthrough
Passthrough in the hypervisor provides the following functionalities to Passthrough in the hypervisor provides the following functionalities to
allow the VM to access PCI devices directly: allow the VM to access PCI devices directly:
@ -59,7 +59,7 @@ ACRN for a post-launched VM:
.. figure:: images/passthru-image22.png .. figure:: images/passthru-image22.png
:align: center :align: center
Passthrough devices initialization control flow Passthrough Devices Initialization Control Flow
Passthrough Device Status Passthrough Device Status
************************* *************************
@ -70,7 +70,7 @@ passthrough, as detailed here:
.. figure:: images/passthru-image77.png .. figure:: images/passthru-image77.png
:align: center :align: center
Passthrough device status Passthrough Device Status
Owner of Passthrough Devices Owner of Passthrough Devices
**************************** ****************************
@ -129,12 +129,12 @@ passthrough device to/from a post-launched VM is shown in the following figures:
.. figure:: images/passthru-image86.png .. figure:: images/passthru-image86.png
:align: center :align: center
ptdev assignment control flow Ptdev Assignment Control Flow
.. figure:: images/passthru-image42.png .. figure:: images/passthru-image42.png
:align: center :align: center
ptdev deassignment control flow Ptdev Deassignment Control Flow
.. _vtd-posted-interrupt: .. _vtd-posted-interrupt:
@ -199,7 +199,7 @@ Consider this scenario:
If an external interrupt from an assigned device destined to vCPU0 If an external interrupt from an assigned device destined to vCPU0
happens at this time, we do not want this interrupt to be incorrectly happens at this time, we do not want this interrupt to be incorrectly
consumed by vCPU1 currently running on pCPU0. This would happen if we consumed by vCPU1 running on pCPU0. This would happen if we
allocate the same Activation Notification Vector (ANV) to all vCPUs. allocate the same Activation Notification Vector (ANV) to all vCPUs.
To circumvent this issue, ACRN allocates unique ANVs for each vCPU that To circumvent this issue, ACRN allocates unique ANVs for each vCPU that
@ -301,7 +301,7 @@ virtual destination, etc. See the following figure for details:
.. figure:: images/passthru-image91.png .. figure:: images/passthru-image91.png
:align: center :align: center
Remapping of physical interrupts Remapping of Physical Interrupts
There are two different types of interrupt sources: IOAPIC and MSI. There are two different types of interrupt sources: IOAPIC and MSI.
The hypervisor will record different information for interrupt The hypervisor will record different information for interrupt
@ -315,7 +315,7 @@ done on-demand rather than on hypervisor initialization.
:align: center :align: center
:name: init-remapping :name: init-remapping
Initialization of remapping of virtual IOAPIC interrupts for Service VM Initialization of Remapping of Virtual IOAPIC Interrupts for Service VM
:numref:`init-remapping` above illustrates how remapping of (virtual) IOAPIC :numref:`init-remapping` above illustrates how remapping of (virtual) IOAPIC
interrupts are remapped for the Service VM. VM exit occurs whenever the Service interrupts are remapped for the Service VM. VM exit occurs whenever the Service
@ -330,7 +330,7 @@ Remapping of (virtual) MSI interrupts are set up in a similar sequence:
.. figure:: images/passthru-image98.png .. figure:: images/passthru-image98.png
:align: center :align: center
Initialization of remapping of virtual MSI for Service VM Initialization of Remapping of Virtual MSI for Service VM
This figure illustrates how mappings of MSI or MSI-X are set up for the This figure illustrates how mappings of MSI or MSI-X are set up for the
Service VM. The Service VM is responsible for issuing a hypercall to notify the Service VM. The Service VM is responsible for issuing a hypercall to notify the
@ -465,7 +465,7 @@ For a post-launched VM, you enable PTM by setting the
:width: 700 :width: 700
:name: ptm-flow :name: ptm-flow
PTM-enabling workflow in post-launched VM PTM-enabling Workflow in Post-launched VM
As shown in :numref:`ptm-flow`, PTM is enabled in the root port during the As shown in :numref:`ptm-flow`, PTM is enabled in the root port during the
hypervisor startup. The Device Model (DM) then checks whether the passthrough hypervisor startup. The Device Model (DM) then checks whether the passthrough
@ -483,7 +483,7 @@ passing through the device to the post-launched VM.
:width: 700 :width: 700
:name: ptm-vrp :name: ptm-vrp
PTM-enabled PCI device passthrough to post-launched VM PTM-enabled PCI Device Passthrough to Post-launched VM
:numref:`ptm-vrp` shows that, after enabling PTM, the passthrough device :numref:`ptm-vrp` shows that, after enabling PTM, the passthrough device
connects to the virtual root port instead of the virtual host bridge. connects to the virtual root port instead of the virtual host bridge.


@ -39,7 +39,7 @@ interrupt to the specific RT VM with passthrough LAPIC.
:width: 600px :width: 600px
:name: interrupt-sw-modules :name: interrupt-sw-modules
ACRN Interrupt SW Modules Overview ACRN Interrupt Software Modules Overview
The hypervisor implements the following functionalities for handling The hypervisor implements the following functionalities for handling
@ -146,7 +146,7 @@ Native PIC is not used in the system.
:align: center :align: center
:name: hv-pic-config :name: hv-pic-config
HV PIC/IOAPIC/LAPIC configuration Hypervisor PIC/IOAPIC/LAPIC Configuration
LAPIC Initialization LAPIC Initialization
==================== ====================
@ -224,8 +224,8 @@ The interrupt vectors are assigned as shown here:
- SPURIOUS_APIC_VECTOR - SPURIOUS_APIC_VECTOR
Interrupts from either IOAPIC or MSI can be delivered to a target CPU. Interrupts from either IOAPIC or MSI can be delivered to a target CPU.
By default they are configured as Lowest Priority (FLAT mode), i.e. they By default, they are configured as Lowest Priority (FLAT mode), meaning they
are delivered to a CPU core that is currently idle or executing lowest are delivered to a CPU core that is idle or executing the lowest
priority ISR. There is no guarantee a device's interrupt will be priority ISR. There is no guarantee a device's interrupt will be
delivered to a specific Guest's CPU. Timer interrupts are an exception - delivered to a specific Guest's CPU. Timer interrupts are an exception -
these are always delivered to the CPU which programs the LAPIC timer. these are always delivered to the CPU which programs the LAPIC timer.
@ -237,7 +237,7 @@ allocation for CPUs is shown here:
.. figure:: images/interrupt-image89.png .. figure:: images/interrupt-image89.png
:align: center :align: center
FLAT mode vector allocation FLAT Mode Vector Allocation
IRQ Descriptor Table IRQ Descriptor Table
==================== ====================
@ -290,7 +290,7 @@ Interrupt and IRQ processing flow diagrams are shown below:
:align: center :align: center
:name: phy-interrupt-processing :name: phy-interrupt-processing
Processing of physical interrupts Processing of Physical Interrupts
When a physical interrupt is raised and delivered to a physical CPU, the When a physical interrupt is raised and delivered to a physical CPU, the
CPU may be running under either VMX root mode or non-root mode. CPU may be running under either VMX root mode or non-root mode.
@ -341,7 +341,7 @@ conditions:
:align: center :align: center
:name: request-irq :name: request-irq
Request IRQ for different conditions Request IRQ for Different Conditions
.. _ipi-management: .. _ipi-management:


@ -5,7 +5,7 @@ I/O Emulation High-Level Design
As discussed in :ref:`intro-io-emulation`, there are multiple ways and As discussed in :ref:`intro-io-emulation`, there are multiple ways and
places to handle I/O emulation, including HV, Service VM Kernel HSM, and Service VM places to handle I/O emulation, including HV, Service VM Kernel HSM, and Service VM
user-land device model (acrn-dm). user-land Device Model (acrn-dm).
I/O emulation in the hypervisor provides these functionalities: I/O emulation in the hypervisor provides these functionalities:
@ -32,7 +32,7 @@ inside the hypervisor:
:align: center :align: center
:name: io-control-flow :name: io-control-flow
Control flow of I/O emulation in the hypervisor Control Flow of I/O Emulation in the Hypervisor
I/O emulation does not rely on any calibration data. I/O emulation does not rely on any calibration data.
@ -219,7 +219,7 @@ Post-Work
========= =========
After an I/O request is completed, some more work needs to be done for After an I/O request is completed, some more work needs to be done for
I/O reads to update guest registers accordingly. Currently the I/O reads to update guest registers accordingly. The
hypervisor re-enters the vCPU thread every time a vCPU is scheduled back hypervisor re-enters the vCPU thread every time a vCPU is scheduled back
in, rather than switching to where the vCPU is scheduled out. As a result, in, rather than switching to where the vCPU is scheduled out. As a result,
post-work is introduced for this purpose. post-work is introduced for this purpose.
@ -233,7 +233,7 @@ updating the vCPU guest state to reflect the effect of the I/O reads.
.. figure:: images/ioem-image100.png .. figure:: images/ioem-image100.png
:align: center :align: center
Workflow of MMIO I/O request completion Workflow of MMIO I/O Request Completion
The figure above illustrates the workflow to complete an I/O The figure above illustrates the workflow to complete an I/O
request for MMIO. Once the I/O request is completed, Service VM makes a request for MMIO. Once the I/O request is completed, Service VM makes a
@ -247,7 +247,7 @@ request slot to FREE, and continues execution of the vCPU.
:align: center :align: center
:name: port-io-completion :name: port-io-completion
Workflow of port I/O request completion Workflow of Port I/O Request Completion
Completion of a port I/O request (shown in :numref:`port-io-completion` Completion of a port I/O request (shown in :numref:`port-io-completion`
above) is above) is


@ -160,8 +160,8 @@ char devices and UART DM immediately.
is done except that the heartbeat and RTC are only used by the IOC is done except that the heartbeat and RTC are only used by the IOC
mediator and will not be transferred to IOC mediator and will not be transferred to IOC
firmware. firmware.
- Currently, IOC mediator only cares about lifecycle, signal, and raw data. - IOC mediator only cares about lifecycle, signal, and raw data.
Others, e.g., diagnosis, are not used by the IOC mediator. Others, such as diagnosis, are not used by the IOC mediator.
State Transfer State Transfer
-------------- --------------
@ -217,7 +217,7 @@ priority for the frame, then send data to the UART driver.
The difference between the native and virtualization architectures is The difference between the native and virtualization architectures is
that the IOC mediator needs to re-compute the checksum and reset that the IOC mediator needs to re-compute the checksum and reset
priority. Currently, priority is not supported by IOC firmware; the priority. Priority is not supported by IOC firmware; the
priority setting by the IOC mediator is based on the priority setting of priority setting by the IOC mediator is based on the priority setting of
the CBC driver. The Service VM and User VM use the same CBC driver. the CBC driver. The Service VM and User VM use the same CBC driver.
@ -388,8 +388,8 @@ table:
Wakeup Reason Wakeup Reason
+++++++++++++ +++++++++++++
The wakeup reasons command contains a bit mask of all reasons, which is The wakeup reasons command contains a bitmask of all reasons that are
currently keeping the SoC/IOC active. The SoC itself also has a wakeup keeping the SoC/IOC active. The SoC itself also has a wakeup
reason, which allows the SoC to keep the IOC active. The wakeup reasons reason, which allows the SoC to keep the IOC active. The wakeup reasons
should be sent every 1000 ms by the IOC. should be sent every 1000 ms by the IOC.
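To make the bitmask idea concrete, a tiny decoding sketch is shown below. The bit names and positions are invented for illustration; the real assignments are given in the wakeup-reason table of the IOC documentation:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Invented bit positions, for illustration only. */
   #define WK_RSN_RTC  (1U << 0)   /* RTC alarm                */
   #define WK_RSN_BTN  (1U << 1)   /* power/ignition button    */
   #define WK_RSN_SOC  (1U << 2)   /* SoC keeps the IOC active */

   static void print_wakeup_reason(uint32_t reason)
   {
       if (reason & WK_RSN_RTC)
           printf("active: RTC alarm\n");
       if (reason & WK_RSN_BTN)
           printf("active: button\n");
       if (reason & WK_RSN_SOC)
           printf("active: SoC itself\n");
       if (reason == 0U)
           printf("no active wakeup reason\n");
   }

   int main(void)
   {
       print_wakeup_reason(WK_RSN_SOC | WK_RSN_RTC);
       return 0;
   }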
@ -402,7 +402,7 @@ Wakeup reason frame definition is as below:
Wakeup Reason Frame Definition Wakeup Reason Frame Definition
Currently the wakeup reason bits are supported by sources shown here: The wakeup reason bits are supported by sources shown here:
.. list-table:: Wakeup Reason Bits .. list-table:: Wakeup Reason Bits
:header-rows: 1 :header-rows: 1
@ -563,8 +563,7 @@ IOC signal type definitions are as below.
shouldn't be forwarded to the native cbc signal channel. The Service VM shouldn't be forwarded to the native cbc signal channel. The Service VM
signal related services should do a real open/reset/close signal channel. signal related services should do a real open/reset/close signal channel.
- Every backend should maintain a passlist for different VMs. The - Every backend should maintain a passlist for different VMs. The
passlist can be stored in the Service VM file system (Read only) in the passlist is hard coded.
future, but currently it is hard coded.
IOC mediator has two passlist tables, one is used for rx IOC mediator has two passlist tables, one is used for rx
signals (SoC->IOC), and the other one is used for tx signals. The IOC signals (SoC->IOC), and the other one is used for tx signals. The IOC


@ -307,7 +307,7 @@ processor does not take interrupts when it is executing in VMX root
mode. ACRN configures the processor to take vmexit upon external mode. ACRN configures the processor to take vmexit upon external
interrupt if the processor is executing in VMX non-root mode. Upon an interrupt if the processor is executing in VMX non-root mode. Upon an
external interrupt, after sending EOI to the physical LAPIC, ACRN external interrupt, after sending EOI to the physical LAPIC, ACRN
injects the vector into the vLAPIC of the vCPU currently running on the injects the vector into the vLAPIC of the vCPU running on the
processor. Guests using a Linux kernel use vectors less than 0xECh processor. Guests using a Linux kernel use vectors less than 0xECh
for device interrupts. for device interrupts.


@ -187,7 +187,7 @@ are created by the Device Model (DM) in the Service VM. The main steps include:
Software configuration for Service VM (bzimage software load as example): Software configuration for Service VM (bzimage software load as example):
- **ACPI**: HV passes the entire ACPI table from the bootloader to the Service - **ACPI**: HV passes the entire ACPI table from the bootloader to the Service
VM directly. Legacy mode is currently supported as the ACPI table VM directly. Legacy mode is supported as the ACPI table
is loaded at F-Segment. is loaded at F-Segment.
- **E820**: HV passes the E820 table from the bootloader through the zero page - **E820**: HV passes the E820 table from the bootloader through the zero page
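Relating to the ACPI item above: because the table lives in legacy low memory, a guest can locate it the classic way, by scanning the F-segment for the RSDP signature. A minimal sketch, assuming the segment is already mapped at ``fseg``:

.. code-block:: c

   #include <stddef.h>
   #include <stdint.h>
   #include <string.h>

   /* Scan a mapped copy of the F-segment (0xF0000-0xFFFFF) for the ACPI
    * RSDP, whose 8-byte signature "RSD PTR " sits on a 16-byte boundary.
    * Returns the offset, or -1 if not found. */
   static long find_rsdp(const uint8_t *fseg, size_t len)
   {
       for (size_t off = 0; off + 8 <= len; off += 16) {
           if (memcmp(fseg + off, "RSD PTR ", 8) == 0)
               return (long)off;
       }
       return -1;
   }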


@ -17,7 +17,7 @@ corresponds to one cache way.
On current generation systems, normally L3 cache is shared by all CPU cores on the same socket and On current generation systems, normally L3 cache is shared by all CPU cores on the same socket and
L2 cache is generally just shared by the hyperthreads on a core. But when dealing with ACRN L2 cache is generally just shared by the hyperthreads on a core. But when dealing with ACRN
vCAT COS IDs assignment, it is currently assumed that all the L2/L3 caches (and therefore all COS IDs) vCAT COS IDs assignment, it is assumed that all the L2/L3 caches (and therefore all COS IDs)
are system-wide caches shared by all cores in the system, this is done for convenience and to simplify are system-wide caches shared by all cores in the system, this is done for convenience and to simplify
the vCAT configuration process. If vCAT is enabled for a VM (abbreviated as vCAT VM), there should not the vCAT configuration process. If vCAT is enabled for a VM (abbreviated as vCAT VM), there should not
be any COS ID overlap between a vCAT VM and any other VMs. e.g. the vCAT VM has exclusive use of the be any COS ID overlap between a vCAT VM and any other VMs. e.g. the vCAT VM has exclusive use of the


@ -15,7 +15,7 @@ Inter-VM Communication Overview
:align: center :align: center
:name: ivshmem-architecture-overview :name: ivshmem-architecture-overview
ACRN shared memory based inter-VM communication architecture ACRN Shared Memory Based Inter-VM Communication Architecture
ACRN can emulate the ``ivshmem`` device in two ways: ACRN can emulate the ``ivshmem`` device in two ways:
@ -117,7 +117,7 @@ MMIO Registers Definition
- 0x8 - 0x8
- RO - RO
- Inter-VM Position register is used to identify the VM ID. - Inter-VM Position register is used to identify the VM ID.
Currently its value is zero. Its value is zero.
* - IVSHMEM\_DOORBELL\_REG * - IVSHMEM\_DOORBELL\_REG
- 0xC - 0xC
- WO - WO


@ -24,7 +24,7 @@ Here is how ACRN supports MMIO device passthrough:
if not, use ``--mmiodev_pt MMIO_regions``. if not, use ``--mmiodev_pt MMIO_regions``.
.. note:: .. note::
Currently, the vTPM and PT TPM in the ACRN-DM have the same HID so we The vTPM and PT TPM in the ACRN-DM have the same HID so we
can't support them both at the same time. The VM will fail to boot if can't support them both at the same time. The VM will fail to boot if
both are used. both are used.


@ -15,18 +15,18 @@ System timer virtualization architecture
- In the User VM, vRTC, vHPET, and vPIT are used by the clock event module and the clock - In the User VM, vRTC, vHPET, and vPIT are used by the clock event module and the clock
source module in the kernel space. source module in the kernel space.
- In the Service VM, all vRTC, vHPET, and vPIT devices are created by the device - In the Service VM, the Device Model creates all vRTC, vHPET, and vPIT devices
model in the initialization phase and uses timer\_create and in the initialization phase. The Device Model uses timer\_create and
timerfd\_create interfaces to set up native timers for the trigger timeout timerfd\_create interfaces to set up native timers for the trigger timeout
mechanism. mechanism.
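A standalone sketch of the timerfd mechanism mentioned above — create a timer fd, arm it, and block until it expires. This is plain Linux API usage, not the Device Model's actual code; the 10 ms period is arbitrary:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>
   #include <sys/timerfd.h>
   #include <unistd.h>

   int main(void)
   {
       struct itimerspec its = {
           .it_value    = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
           .it_interval = { .tv_sec = 0, .tv_nsec = 10 * 1000 * 1000 },
       };
       uint64_t expirations;
       int fd = timerfd_create(CLOCK_MONOTONIC, 0);

       if (fd < 0 || timerfd_settime(fd, 0, &its, NULL) < 0) {
           perror("timerfd");
           return 1;
       }
       /* The read blocks until the timer fires, then reports how many
        * expirations occurred - the "trigger timeout" used for emulation. */
       if (read(fd, &expirations, sizeof(expirations)) == sizeof(expirations))
           printf("timer fired %llu time(s)\n", (unsigned long long)expirations);
       close(fd);
       return 0;
   }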
System Timer Initialization System Timer Initialization
=========================== ===========================
The device model initializes vRTC, vHEPT, and vPIT devices automatically when The Device Model initializes vRTC, vHEPT, and vPIT devices automatically when
the ACRN device model starts the booting initialization, and the initialization it starts the booting initialization. The initialization
flow goes from vrtc\_init to vpit\_init and ends with vhept\_init, see flow goes from vrtc\_init to vpit\_init and ends with vhept\_init. See
below code snippets.:: the code snippets below.::
static int static int
vm_init_vdevs(struct vmctx *ctx) vm_init_vdevs(struct vmctx *ctx)


@ -26,7 +26,7 @@ The ACRN DM architecture for UART virtualization is shown here:
:name: uart-arch :name: uart-arch
:width: 800px :width: 800px
Device Model's UART virtualization architecture Device Model's UART Virtualization Architecture
There are three objects used to emulate one UART device in DM: There are three objects used to emulate one UART device in DM:
UART registers, rxFIFO, and backend tty devices. UART registers, rxFIFO, and backend tty devices.
@ -39,19 +39,19 @@ handler of each register depends on the register's functionality.
A **FIFO** is implemented to emulate RX. Normally characters are read A **FIFO** is implemented to emulate RX. Normally characters are read
from the backend tty device when available, then put into the rxFIFO. from the backend tty device when available, then put into the rxFIFO.
When the Guest application tries to read from the UART, the access to When the Guest application tries to read from the UART, the access to
register ``com_data`` causes a ``vmexit``. Device model catches the register ``com_data`` causes a ``vmexit``. Device Model catches the
``vmexit`` and emulates the UART by returning one character from rxFIFO. ``vmexit`` and emulates the UART by returning one character from rxFIFO.
.. note:: When ``com_fcr`` is available, the Guest application can write .. note:: When ``com_fcr`` is available, the Guest application can write
``0`` to this register to disable rxFIFO. In this case the rxFIFO in ``0`` to this register to disable rxFIFO. In this case the rxFIFO in
device model degenerates to a buffer containing only one character. the Device Model degenerates to a buffer containing only one character.
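A tiny illustration of the rxFIFO idea — a fixed-size ring buffer that the backend fills and the register emulation drains one byte at a time. This is a generic sketch, not the DM's actual data structure:

.. code-block:: c

   #include <stdint.h>

   #define RXFIFO_SIZE 64U   /* effectively 1 when the guest disables the FIFO */

   struct rxfifo {
       uint8_t  buf[RXFIFO_SIZE];
       uint32_t head, tail, count;
   };

   /* Backend side: a character arrived from the tty; drop it if full. */
   static int rxfifo_put(struct rxfifo *f, uint8_t ch)
   {
       if (f->count == RXFIFO_SIZE)
           return -1;
       f->buf[f->tail] = ch;
       f->tail = (f->tail + 1U) % RXFIFO_SIZE;
       f->count++;
       return 0;
   }

   /* Register side: a guest read of com_data returns one character. */
   static int rxfifo_get(struct rxfifo *f, uint8_t *ch)
   {
       if (f->count == 0U)
           return -1;
       *ch = f->buf[f->head];
       f->head = (f->head + 1U) % RXFIFO_SIZE;
       f->count--;
       return 0;
   }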
When the Guest application tries to send a character to the UART, it When the Guest application tries to send a character to the UART, it
writes to the ``com_data`` register, which will cause a ``vmexit`` as writes to the ``com_data`` register, which will cause a ``vmexit`` as
well. Device model catches the ``vmexit`` and emulates the UART by well. Device Model catches the ``vmexit`` and emulates the UART by
redirecting the character to the **backend tty device**. redirecting the character to the **backend tty device**.
The UART device emulated by the ACRN device model is connected to the system by The UART device emulated by the ACRN Device Model is connected to the system by
the LPC bus. In the current implementation, two channel LPC UARTs are I/O mapped to the LPC bus. In the current implementation, two channel LPC UARTs are I/O mapped to
the traditional COM port addresses of 0x3F8 and 0x2F8. These are defined in the traditional COM port addresses of 0x3F8 and 0x2F8. These are defined in
global variable ``uart_lres``. global variable ``uart_lres``.
@ -90,11 +90,11 @@ In the case of UART emulation, the registered handlers are ``uart_read``
and ``uart_write``. and ``uart_write``.
A similar virtual UART device is implemented in the hypervisor. A similar virtual UART device is implemented in the hypervisor.
Currently UART16550 is owned by the hypervisor itself and is used for UART16550 is owned by the hypervisor itself and is used for
debugging purposes. (The UART properties are configured by parameters debugging purposes. (The UART properties are configured by parameters
to the hypervisor command line.) The hypervisor emulates a UART device to the hypervisor command line.) The hypervisor emulates a UART device
with 0x3F8 address to the Service VM and acts as the Service VM console. The general with 0x3F8 address to the Service VM and acts as the Service VM console. The
emulation is the same as used in the device model, with the following general emulation is the same as used in the Device Model, with the following
differences: differences:
- PIO region is directly registered to the vmexit handler dispatcher via - PIO region is directly registered to the vmexit handler dispatcher via


@ -17,7 +17,7 @@ virtqueue, the size of which is 64, configurable in the source code.
:width: 900px :width: 900px
:name: virtio-blk-arch :name: virtio-blk-arch
Virtio-blk architecture Virtio-blk Architecture
The feature bits supported by the BE device are shown as follows: The feature bits supported by the BE device are shown as follows:
@ -63,7 +63,7 @@ asynchronously.
Usage: Usage:
****** ******
The device model configuration command syntax for virtio-blk is:: The Device Model configuration command syntax for virtio-blk is::
-s <slot>,virtio-blk,<filepath>[,options] -s <slot>,virtio-blk,<filepath>[,options]


@ -7,7 +7,7 @@ The Virtio-console is a simple device for data input and output. The
console's virtio device ID is ``3`` and can have from 1 to 16 ports. console's virtio device ID is ``3`` and can have from 1 to 16 ports.
Each port has a pair of input and output virtqueues used to communicate Each port has a pair of input and output virtqueues used to communicate
information between the Front End (FE) and Back end (BE) drivers. information between the Front End (FE) and Back end (BE) drivers.
Currently the size of each virtqueue is 64 (configurable in the source The size of each virtqueue is 64 (configurable in the source
code). The FE driver will place empty buffers for incoming data onto code). The FE driver will place empty buffers for incoming data onto
the receiving virtqueue, and enqueue outgoing characters onto the the receiving virtqueue, and enqueue outgoing characters onto the
transmitting virtqueue. transmitting virtqueue.
@ -27,11 +27,11 @@ The virtio-console architecture diagram in ACRN is shown below.
:width: 700px :width: 700px
:name: virtio-console-arch :name: virtio-console-arch
Virtio-console architecture diagram Virtio-console Architecture Diagram
Virtio-console is implemented as a virtio legacy device in the ACRN Virtio-console is implemented as a virtio legacy device in the ACRN
device model (DM), and is registered as a PCI virtio device to the guest Device Model (DM), and is registered as a PCI virtio device to the guest
OS. No changes are required in the frontend Linux virtio-console except OS. No changes are required in the frontend Linux virtio-console except
that the guest (User VM) kernel should be built with that the guest (User VM) kernel should be built with
``CONFIG_VIRTIO_CONSOLE=y``. ``CONFIG_VIRTIO_CONSOLE=y``.
@ -52,7 +52,7 @@ mevent to poll the available data from the backend file descriptor. When
new data is available, the BE driver reads it to the receiving virtqueue new data is available, the BE driver reads it to the receiving virtqueue
of the FE, followed by an interrupt injection. of the FE, followed by an interrupt injection.
The feature bits currently supported by the BE device are: The feature bits supported by the BE device are:
.. list-table:: Feature bits supported by BE drivers .. list-table:: Feature bits supported by BE drivers
:widths: 30 50 :widths: 30 50
@ -66,10 +66,10 @@ The feature bits currently supported by the BE device are:
- device supports emergency write. - device supports emergency write.
Virtio-console supports redirecting guest output to various backend Virtio-console supports redirecting guest output to various backend
devices. Currently the following backend devices are supported in ACRN devices. The following backend devices are supported in the ACRN
device model: STDIO, TTY, PTY and regular file. Device Model: STDIO, TTY, PTY and regular file.
The device model configuration command syntax for virtio-console is:: The Device Model configuration command syntax for virtio-console is::
virtio-console,[@]stdio|tty|pty|file:portname[=portpath]\ virtio-console,[@]stdio|tty|pty|file:portname[=portpath]\
[,[@]stdio|tty|pty|file:portname[=portpath][:socket_type]] [,[@]stdio|tty|pty|file:portname[=portpath][:socket_type]]
@ -109,7 +109,7 @@ The following sections elaborate on each backend.
STDIO STDIO
===== =====
1. Add a PCI slot to the device model (``acrn-dm``) command line:: 1. Add a PCI slot to the Device Model (``acrn-dm``) command line::
-s n,virtio-console,@stdio:stdio_port -s n,virtio-console,@stdio:stdio_port
@ -120,7 +120,7 @@ STDIO
PTY PTY
=== ===
1. Add a PCI slot to the device model (``acrn-dm``) command line:: 1. Add a PCI slot to the Device Model (``acrn-dm``) command line::
-s n,virtio-console,@pty:pty_port -s n,virtio-console,@pty:pty_port
@ -185,7 +185,7 @@ TTY
and detach the TTY by pressing :kbd:`CTRL-A` :kbd:`d`. and detach the TTY by pressing :kbd:`CTRL-A` :kbd:`d`.
#. Add a PCI slot to the device model (``acrn-dm``) command line #. Add a PCI slot to the Device Model (``acrn-dm``) command line
(changing the ``dev/pts/X`` to match your use case):: (changing the ``dev/pts/X`` to match your use case)::
-s n,virtio-console,@tty:tty_port=/dev/pts/X -s n,virtio-console,@tty:tty_port=/dev/pts/X
@ -207,7 +207,7 @@ FILE
The File backend only supports console output to a file (no input). The File backend only supports console output to a file (no input).
1. Add a PCI slot to the device model (``acrn-dm``) command line, 1. Add a PCI slot to the Device Model (``acrn-dm``) command line,
adjusting the ``</path/to/file>`` to your use case:: adjusting the ``</path/to/file>`` to your use case::
-s n,virtio-console,@file:file_port=</path/to/file> -s n,virtio-console,@file:file_port=</path/to/file>
@ -219,30 +219,31 @@ The File backend only supports console output to a file (no input).
SOCKET SOCKET
====== ======
The virtio-console socket-type can be set as socket server or client. Device model will The virtio-console socket-type can be set as socket server or client. The Device
create a Unix domain socket if appointed the socket_type as server, then server VM or Model creates a Unix domain socket if appointed the socket_type as server. Then
another user VM can bind and listen for communication requirement. If appointed to the Service VM or another User VM can bind and listen for communication
client, make sure the socket server is ready prior to launch device model. requirements. If appointed to client, make sure the socket server is ready
before launching the Device Model.
1. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting 1. Add a PCI slot to the Device Model (``acrn-dm``) command line, adjusting
the ``</path/to/file.sock>`` to your use case in the VM1 configuration:: the ``</path/to/file.sock>`` to your use case in the VM1 configuration::
-s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:server -s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:server
#. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting #. Add a PCI slot to the Device Model (``acrn-dm``) command line, adjusting
the ``</path/to/file.sock>`` to your use case in the VM2 configuration:: the ``</path/to/file.sock>`` to your use case in the VM2 configuration::
-s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:client -s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:client
#. Log in to VM1, connect to the virtual port (vport1p0, 1 is decided #. Log in to VM1, connect to the virtual port (vport1p0, 1 is decided
by front-end driver): by the front-end driver):
.. code-block:: console .. code-block:: console
# minicom -D /dev/vport1p0 # minicom -D /dev/vport1p0
#. Log in to VM2, connect to the virtual port (vport3p0, 3 is decided #. Log in to VM2, connect to the virtual port (vport3p0, 3 is decided
by front-end driver): by the front-end driver):
.. code-block:: console .. code-block:: console


@ -6,7 +6,7 @@ Virtio-GPIO
Virtio-gpio provides a virtual general-purpose input/output (GPIO) controller Virtio-gpio provides a virtual general-purpose input/output (GPIO) controller
that can map native GPIOs to a User VM. The User VM can perform GPIO operations that can map native GPIOs to a User VM. The User VM can perform GPIO operations
through it, including set value, get value, set direction, get direction, and through it, including set value, get value, set direction, get direction, and
set configuration. Only Open Source and Open Drain types are currently set configuration. Only Open Source and Open Drain types are
supported. GPIOs are often used as IRQs, typically for wakeup events. supported. GPIOs are often used as IRQs, typically for wakeup events.
Virtio-gpio supports level and edge interrupt trigger modes. Virtio-gpio supports level and edge interrupt trigger modes.
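Inside the User VM the virtual controller appears as an ordinary gpiochip, so the usual Linux GPIO interfaces apply. A short sketch using the libgpiod v1 API (the chip name and line offset are examples):

.. code-block:: c

   #include <gpiod.h>
   #include <stdio.h>

   int main(void)
   {
       /* "gpiochip0" and line 3 are examples; the virtio-gpio chip shows
        * up like a native controller with its GPIO base starting at 0. */
       struct gpiod_chip *chip = gpiod_chip_open_by_name("gpiochip0");
       struct gpiod_line *line;

       if (chip == NULL) {
           perror("gpiod_chip_open_by_name");
           return 1;
       }
       line = gpiod_chip_get_line(chip, 3);
       if (line == NULL ||
           gpiod_line_request_output(line, "demo", 0) < 0 ||
           gpiod_line_set_value(line, 1) < 0) {
           perror("gpio");
           gpiod_chip_close(chip);
           return 1;
       }
       gpiod_chip_close(chip);
       return 0;
   }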
@ -40,7 +40,7 @@ GPIO Mapping
:align: center :align: center
:name: virtio-gpio-2 :name: virtio-gpio-2
GPIO mapping GPIO Mapping
- Each User VM has only one GPIO chip instance. The number of GPIOs is - Each User VM has only one GPIO chip instance. The number of GPIOs is
based on the acrn-dm command line. The GPIO base always starts from 0. based on the acrn-dm command line. The GPIO base always starts from 0.


@ -17,8 +17,8 @@ the client device driver in the guest OS does not need to change.
Virtio-i2c Architecture Virtio-i2c Architecture
Virtio-i2c is implemented as a virtio legacy device in the ACRN device Virtio-i2c is implemented as a virtio legacy device in the ACRN Device
model (DM) and is registered as a PCI virtio device to the guest OS. The Model (DM) and is registered as a PCI virtio device to the guest OS. The
Device ID of virtio-i2c is ``0x860A`` and the Sub Device ID is Device ID of virtio-i2c is ``0x860A`` and the Sub Device ID is
``0xFFF6``. ``0xFFF6``.
@ -63,8 +63,8 @@ notifies the frontend. The msg process flow is shown in
``node``: ``node``:
The ACPI node name supported in the current code. You can find the The ACPI node name supported in the current code. You can find the
supported name in the ``acpi_node_table[]`` from the source code. Currently, supported name in the ``acpi_node_table[]`` from the source code.
only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are Only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
platform-specific. platform-specific.
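Because the guest-side client driver is unchanged, the virtual adapter can be exercised with the ordinary Linux I2C dev interface. A minimal userspace sketch (the bus number, slave address, and register are examples only):

.. code-block:: c

   #include <fcntl.h>
   #include <linux/i2c-dev.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   int main(void)
   {
       unsigned char reg = 0x00, val = 0;
       int fd = open("/dev/i2c-1", O_RDWR);   /* example bus */

       if (fd < 0 || ioctl(fd, I2C_SLAVE, 0x50) < 0) {   /* example address */
           perror("i2c setup");
           return 1;
       }
       /* write the register index, then read one byte back */
       if (write(fd, &reg, 1) != 1 || read(fd, &val, 1) != 1) {
           perror("i2c transfer");
           close(fd);
           return 1;
       }
       printf("reg 0x%02x = 0x%02x\n", reg, val);
       close(fd);
       return 0;
   }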


@ -4,44 +4,44 @@ Virtio-Input
############ ############
The virtio input device can be used to create virtual human interface The virtio input device can be used to create virtual human interface
devices such as keyboards, mice, and tablets. It basically sends Linux devices such as keyboards, mice, and tablets. It sends Linux
input layer events over virtio. input layer events over virtio.
The ACRN Virtio-input architecture is shown below. The ACRN virtio-input architecture is shown below.
.. figure:: images/virtio-hld-image53.png .. figure:: images/virtio-hld-image53.png
:align: center :align: center
Virtio-input Architecture on ACRN Virtio-input Architecture on ACRN
Virtio-input is implemented as a virtio modern device in ACRN device Virtio-input is implemented as a virtio modern device in the ACRN Device
model. It is registered as a PCI virtio device to guest OS. No changes Model. It is registered as a PCI virtio device to the guest OS. No changes
are required in frontend Linux virtio-input except that guest kernel are required in frontend Linux virtio-input except that the guest kernel
must be built with ``CONFIG_VIRTIO_INPUT=y``. must be built with ``CONFIG_VIRTIO_INPUT=y``.
Two virtqueues are used to transfer input_event between FE and BE. One Two virtqueues are used to transfer input_event between FE and BE. One
is for the input_events from BE to FE, as generated by input hardware is for the input_events from BE to FE, as generated by input hardware
devices in Service VM. The other is for status changes from FE to BE, as devices in the Service VM. The other is for status changes from FE to BE, as
finally sent to input hardware device in Service VM. finally sent to input hardware devices in the Service VM.
At the probe stage of FE virtio-input driver, a buffer (used to At the probe stage of the FE virtio-input driver, a buffer (used to
accommodate 64 input events) is allocated together with the driver data. accommodate 64 input events) is allocated together with the driver data.
Sixty-four descriptors are added to the event virtqueue. One descriptor Sixty-four descriptors are added to the event virtqueue. One descriptor
points to one entry in the buffer. Then a kick on the event virtqueue is points to one entry in the buffer. Then a kick on the event virtqueue is
performed. performed.
Virtio-input BE driver in device model uses mevent to poll the The virtio-input BE driver in the Device Model uses mevent to poll the
availability of the input events from an input device thru evdev char availability of the input events from an input device through the evdev char
device. When an input event is available, BE driver reads it out from the device. When an input event is available, the BE driver reads it out from the
char device and caches it into an internal buffer until an EV_SYN input char device and caches it into an internal buffer until an EV_SYN input
event with SYN_REPORT is received. BE driver then copies all the cached event with SYN_REPORT is received. The BE driver then copies all the cached
input events to the event virtqueue, one by one. These events are added by input events to the event virtqueue, one by one. These events are added by
the FE driver following a notification to FE driver, implemented the FE driver following a notification to the FE driver, implemented
as an interrupt injection to User VM. as an interrupt injection to the User VM.
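The batching rule (cache evdev events until an ``EV_SYN``/``SYN_REPORT`` arrives, then push the whole burst at once) can be sketched in user-space C. ``push_to_event_vq()`` and ``notify_fe()`` are hypothetical stand-ins for the Device Model internals, not real acrn-dm functions:

.. code-block:: c

   #include <fcntl.h>
   #include <linux/input.h>
   #include <unistd.h>

   #define CACHE_SLOTS 64

   static struct input_event cache[CACHE_SLOTS];
   static int cached;

   /* Hypothetical DM helpers: copy events into the event virtqueue and
    * inject an interrupt into the User VM. */
   void push_to_event_vq(const struct input_event *ev, int n);
   void notify_fe(void);

   static void handle_evdev_readable(int evdev_fd)
   {
       struct input_event ev;

       while (read(evdev_fd, &ev, sizeof(ev)) == sizeof(ev)) {
           if (cached < CACHE_SLOTS)
               cache[cached++] = ev;               /* cache until the report is complete */

           if (ev.type == EV_SYN && ev.code == SYN_REPORT) {
               push_to_event_vq(cache, cached);    /* one event per descriptor */
               notify_fe();                        /* interrupt injection to the User VM */
               cached = 0;
           }
       }
   }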
For input events regarding status change, FE driver allocates a For input events regarding status change, the FE driver allocates a
buffer for an input event and adds it to the status virtqueue followed buffer for an input event and adds it to the status virtqueue followed
by a kick. BE driver reads the input event from the status virtqueue and by a kick. The BE driver reads the input event from the status virtqueue and
writes it to the evdev char device. writes it to the evdev char device.
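The status path is the mirror image and is small enough to show in a couple of lines; again, this is only a sketch of the idea, not the acrn-dm implementation:

.. code-block:: c

   #include <linux/input.h>
   #include <unistd.h>

   /* Forward a status change (for example, a keyboard LED update) that the
    * FE placed on the status virtqueue back to the host input device. */
   static void forward_status_event(int evdev_fd, const struct input_event *ev)
   {
       /* Writing an input_event to an evdev node programs the device state. */
       (void)write(evdev_fd, ev, sizeof(*ev));
   }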
The data transferred between FE and BE is organized as struct The data transferred between FE and BE is organized as struct
@@ -58,7 +58,7 @@ input_event:
A structure virtio_input_config is defined and used as the A structure virtio_input_config is defined and used as the
device-specific configuration registers. To query a specific piece of device-specific configuration registers. To query a specific piece of
configuration information FE driver sets "select" and "subsel" configuration information, the FE driver sets "select" and "subsel"
accordingly. Information size is returned in "size" and information data accordingly. Information size is returned in "size" and information data
is returned in union "u": is returned in union "u":
@@ -77,15 +77,15 @@ is returned in union "u":
} u; } u;
}; };
Read/Write to these registers results in a vmexit and cfgread/cfgwrite Read/Write to these registers results in a vmexit, and cfgread/cfgwrite
callbacks in struct virtio_ops are called finally in device model. callbacks in struct virtio_ops are called finally in the Device Model. The
Virtio-input BE in device model issues ioctl to evdev char device virtio-input BE in the Device Model issues ioctl to the evdev char device
according to the "select" and "subselect" registers to get the according to the "select" and "subselect" registers to get the corresponding
corresponding device capabilities information from kernel and return device capabilities information from the kernel. The virtio-input BE returns the
these information to guest OS. information to the guest OS.
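As a rough illustration of that translation, a backend can map the ``select``/``subsel`` pair onto standard evdev ioctls. The selector constants and ioctls below come from the virtio and evdev APIs; the function itself and its return convention are invented for the example:

.. code-block:: c

   #include <linux/input.h>
   #include <linux/virtio_input.h>
   #include <string.h>
   #include <sys/ioctl.h>

   /* Fill "u" for the given select/subsel pair and return the size field. */
   static int vi_query_config(int evdev_fd, unsigned char select,
                              unsigned char subsel, void *u, size_t usize)
   {
       int len = 0;

       switch (select) {
       case VIRTIO_INPUT_CFG_ID_NAME:
           /* Device name string. */
           len = ioctl(evdev_fd, EVIOCGNAME(usize), u);
           break;
       case VIRTIO_INPUT_CFG_EV_BITS:
           /* Bitmap of supported event codes for event type "subsel". */
           len = ioctl(evdev_fd, EVIOCGBIT(subsel, usize), u);
           break;
       default:
           memset(u, 0, usize);   /* unsupported query: size stays 0 */
           break;
       }

       return len < 0 ? 0 : len;
   }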
All the device-specific configurations are obtained by FE driver at The FE driver obtains all the device-specific configurations at the
probe stage. Based on these information virtio-input FE driver registers probe stage. Based on this information, the virtio-input FE driver registers
an input device to the input subsystem. an input device to the input subsystem.
The general command syntax is:: The general command syntax is::
@@ -93,7 +93,7 @@ The general command syntax is::
-s n,virtio-input,/dev/input/eventX[,serial] -s n,virtio-input,/dev/input/eventX[,serial]
- /dev/input/eventX is used to specify the evdev char device node in - /dev/input/eventX is used to specify the evdev char device node in
Service VM. the Service VM.
- "serial" is an optional string. When it is specified it will be used - "serial" is an optional string. When it is specified, it will be used
as the Uniq of guest virtio input device. as the Uniq of the guest virtio input device.
@@ -4,7 +4,7 @@ Virtio-Net
########## ##########
Virtio-net is the para-virtualization solution used in ACRN for Virtio-net is the para-virtualization solution used in ACRN for
networking. The ACRN device model emulates virtual NICs for User VM and the networking. The ACRN Device Model emulates virtual NICs for User VM and the
frontend virtio network driver, simulating the virtual NIC and following frontend virtio network driver, simulating the virtual NIC and following
the virtio specification. (Refer to :ref:`introduction` and the virtio specification. (Refer to :ref:`introduction` and
:ref:`virtio-hld` background introductions to ACRN and Virtio.) :ref:`virtio-hld` background introductions to ACRN and Virtio.)
@@ -39,7 +39,7 @@ components are parts of the Linux kernel.)
Let's explore these components further. Let's explore these components further.
Service VM/User VM Network Stack: Service VM/User VM Network Stack:
This is the standard Linux TCP/IP stack, currently the most This is the standard Linux TCP/IP stack and the most
feature-rich TCP/IP implementation. feature-rich TCP/IP implementation.
virtio-net Frontend Driver: virtio-net Frontend Driver:
@@ -61,7 +61,7 @@ ACRN Hypervisor:
HSM Module: HSM Module:
The Hypervisor Service Module (HSM) is a kernel module in the The Hypervisor Service Module (HSM) is a kernel module in the
Service VM acting as a middle layer to support the device model Service VM acting as a middle layer to support the Device Model
and hypervisor. The HSM forwards an IOREQ to the virtio-net backend and hypervisor. The HSM forwards an IOREQ to the virtio-net backend
driver for processing. driver for processing.
@@ -81,7 +81,7 @@ IGB Driver:
NIC. NIC.
The virtual network card (NIC) is implemented as a virtio legacy device The virtual network card (NIC) is implemented as a virtio legacy device
in the ACRN device model (DM). It is registered as a PCI virtio device in the ACRN Device Model (DM). It is registered as a PCI virtio device
to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as
its driver (the guest kernel should be built with its driver (the guest kernel should be built with
``CONFIG_VIRTIO_NET=y``). ``CONFIG_VIRTIO_NET=y``).
@@ -479,7 +479,7 @@ Run ``brctl show`` to see the bridge ``acrn-br0`` and attached devices:
acrn-br0 8000.b25041fef7a3 no tap0 acrn-br0 8000.b25041fef7a3 no tap0
enp3s0 enp3s0
Add a PCI slot to the device model acrn-dm command line (mac address is Add a PCI slot to the Device Model acrn-dm command line (mac address is
optional): optional):
.. code-block:: none .. code-block:: none
@@ -529,7 +529,7 @@ where ``eth0`` is the name of the physical network interface, and
sure the MacVTap interface name includes the keyword ``tap``.) sure the MacVTap interface name includes the keyword ``tap``.)
Once the MacVTap interface is created, the User VM can be launched by adding Once the MacVTap interface is created, the User VM can be launched by adding
a PCI slot to the device model acrn-dm as shown below. a PCI slot to the Device Model acrn-dm as shown below.
.. code-block:: none .. code-block:: none
@@ -539,11 +539,11 @@ Performance Estimation
====================== ======================
We've introduced the network virtualization solution in ACRN, from the We've introduced the network virtualization solution in ACRN, from the
top level architecture to the detailed TX and RX flow. Currently, the top level architecture to the detailed TX and RX flow. The
control plane and data plane are all processed in ACRN device model, control plane and data plane are all processed in ACRN Device Model,
which may bring some overhead. But this is not a bottleneck for 1000Mbit which may bring some overhead. But this is not a bottleneck for 1000Mbit
NICs or below. Network bandwidth for virtualization can be very close to NICs or below. Network bandwidth for virtualization can be very close to
the native bandwidth. For high speed NIC (e.g. 10Gb or above), it is the native bandwidth. For a high-speed NIC (for example, 10Gb or above), it is
necessary to separate the data plane from the control plane. We can use necessary to separate the data plane from the control plane. We can use
vhost for acceleration. For most IoT scenarios, processing in user space vhost for acceleration. For most IoT scenarios, processing in user space
is simple and reasonable. is simple and reasonable.
@@ -42,7 +42,7 @@ Check to see if the frontend virtio_rng driver is available in the User VM:
# cat /sys/class/misc/hw_random/rng_available # cat /sys/class/misc/hw_random/rng_available
virtio_rng.0 virtio_rng.0
Check to see if the frontend virtio_rng is currently connected to ``/dev/random``: Check to see if the frontend virtio_rng is connected to ``/dev/random``:
.. code-block:: console .. code-block:: console
@@ -7,7 +7,7 @@ Architecture
************ ************
A vUART is a virtual 16550 UART implemented in the hypervisor. It can work as a A vUART is a virtual 16550 UART implemented in the hypervisor. It can work as a
console or a communication port. Currently, the vUART is mapped to the console or a communication port. The vUART is mapped to the
traditional COM port address. A UART driver in the kernel can auto detect the traditional COM port address. A UART driver in the kernel can auto detect the
port base and IRQ. port base and IRQ.
@@ -15,10 +15,10 @@ port base and IRQ.
:align: center :align: center
:name: uart-arch-pic :name: uart-arch-pic
UART virtualization architecture UART Virtualization Architecture
Each vUART has two FIFOs: an 8192-byte TX FIFO and a 256-byte RX FIFO. Each vUART has two FIFOs: an 8192-byte TX FIFO and a 256-byte RX FIFO.
Currently, we only provide 4 ports for use. We only provide 4 ports for use.
- COM1 (port base: 0x3F8, irq: 4) - COM1 (port base: 0x3F8, irq: 4)
@@ -47,7 +47,7 @@ FIFOs is overwritten if it is not taken out in time.
:align: center :align: center
:name: console-uart-arch :name: console-uart-arch
console vUART architecture Console vUART Architecture
Communication vUART Communication vUART
******************* *******************
@@ -88,7 +88,7 @@ Operations in VM1
:align: center :align: center
:name: communication-uart-arch :name: communication-uart-arch
communication vUART architecture Communication vUART Architecture
Usage Usage
***** *****
@@ -4,7 +4,7 @@ Watchdog Virtualization in Device Model
####################################### #######################################
This document describes the watchdog virtualization implementation in This document describes the watchdog virtualization implementation in
ACRN device model. ACRN Device Model.
Overview Overview
******** ********
@@ -27,7 +27,7 @@ Model following the PCI device framework. The following
:width: 900px :width: 900px
:name: watchdog-device :name: watchdog-device
Watchdog device flow Watchdog Device Flow
The DM in the Service VM treats the watchdog as a passive device. The DM in the Service VM treats the watchdog as a passive device.
It receives read/write commands from the watchdog driver, does the It receives read/write commands from the watchdog driver, does the
@@ -56,7 +56,7 @@ from a User VM to the Service VM and return back:
:width: 900px :width: 900px
:name: watchdog-workflow :name: watchdog-workflow
Watchdog operation workflow Watchdog Operation Workflow
Implementation in ACRN and How to Use It Implementation in ACRN and How to Use It
**************************************** ****************************************
@@ -114,9 +114,7 @@ Affected Processors
=================== ===================
L1TF affects a range of Intel processors, but Intel Atom |reg| processors L1TF affects a range of Intel processors, but Intel Atom |reg| processors
(including Apollo Lake) are immune to it. Currently, ACRN hypervisor are immune to it.
supports only Apollo Lake. Support for other core-based platforms is
planned, so we still need a mitigation plan in ACRN.
Processors that have the RDCL_NO bit set to one (1) in the Processors that have the RDCL_NO bit set to one (1) in the
IA32_ARCH_CAPABILITIES MSR are not susceptible to the L1TF IA32_ARCH_CAPABILITIES MSR are not susceptible to the L1TF
@@ -165,7 +163,7 @@ EPT Sanitization
EPT is sanitized to avoid pointing to valid host memory in PTEs that have EPT is sanitized to avoid pointing to valid host memory in PTEs that have
the present bit cleared or reserved bits set. the present bit cleared or reserved bits set.
For non-present PTEs, ACRN currently sets PFN bits to ZERO, which means For non-present PTEs, ACRN sets PFN bits to ZERO, which means
that page ZERO might fall into risk if it contains security information. that page ZERO might fall into risk if it contains security information.
ACRN reserves page ZERO (0~4K) from page allocator; thus page ZERO won't ACRN reserves page ZERO (0~4K) from page allocator; thus page ZERO won't
be used by anybody for a valid purpose. This sanitization logic is always be used by anybody for a valid purpose. This sanitization logic is always
@@ -76,7 +76,7 @@ Glossary of Terms
Interrupt Service Routine: Also known as an interrupt handler, an ISR Interrupt Service Routine: Also known as an interrupt handler, an ISR
is a callback function whose execution is triggered by a hardware is a callback function whose execution is triggered by a hardware
interrupt (or software interrupt instructions) and is used to handle interrupt (or software interrupt instructions) and is used to handle
high-priority conditions that require interrupting the code currently high-priority conditions that require interrupting the code that is
executing on the processor. executing on the processor.
Passthrough Device Passthrough Device
@@ -21,12 +21,12 @@ define post-launched User VM settings. This document describes these option sett
Specify the User VM memory size in megabytes. Specify the User VM memory size in megabytes.
``vbootloader``: ``vbootloader``:
Virtual bootloader type; currently only supports OVMF. Virtual bootloader type; only supports OVMF.
``vuart0``: ``vuart0``:
Specify whether the device model emulates the vUART0 (vCOM1); refer to Specify whether the Device Model emulates the vUART0 (vCOM1); refer to
:ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is :ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is
emulated by the device model; if set to ``Disable``, the vUART0 is emulated by the Device Model; if set to ``Disable``, the vUART0 is
emulated by the hypervisor if it is configured in the scenario XML. emulated by the hypervisor if it is configured in the scenario XML.
``enable_ptm``: ``enable_ptm``:
@@ -57,7 +57,7 @@ define post-launched User VM settings. This document describes these option sett
:ref:`vuart_config` for details. :ref:`vuart_config` for details.
``passthrough_devices``: ``passthrough_devices``:
Select the passthrough device from the PCI device list. Currently we support: Select the passthrough device from the PCI device list. We support:
``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``, ``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``,
``cse``, ``wifi``, ``bluetooth``, ``sd_card``, ``cse``, ``wifi``, ``bluetooth``, ``sd_card``,
``ethernet``, ``sata``, and ``nvme``. ``ethernet``, ``sata``, and ``nvme``.
@@ -10,7 +10,7 @@ embedded development through an open source platform. Check out the
The project ACRN reference code can be found on GitHub in The project ACRN reference code can be found on GitHub in
https://github.com/projectacrn. It includes the ACRN hypervisor, the https://github.com/projectacrn. It includes the ACRN hypervisor, the
ACRN device model, and documentation. ACRN Device Model, and documentation.
.. rst-class:: rst-columns .. rst-class:: rst-columns
@@ -7,7 +7,7 @@ Introduction
************ ************
The goal of CPU Sharing is to fully utilize the physical CPU resource to The goal of CPU Sharing is to fully utilize the physical CPU resource to
support more virtual machines. Currently, ACRN only supports 1 to 1 support more virtual machines. ACRN only supports 1 to 1
mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs). mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs).
Because of the lack of CPU sharing ability, the number of VMs is Because of the lack of CPU sharing ability, the number of VMs is
limited. To support CPU Sharing, we have introduced a scheduling limited. To support CPU Sharing, we have introduced a scheduling
@@ -40,7 +40,7 @@ Scheduling initialization is invoked in the hardware management layer.
CPU Affinity CPU Affinity
************* *************
Currently, we do not support vCPU migration; the assignment of vCPU mapping to We do not support vCPU migration; the assignment of vCPU mapping to
pCPU is fixed at the time the VM is launched. The statically configured pCPU is fixed at the time the VM is launched. The statically configured
cpu_affinity in the VM configuration defines a superset of pCPUs that cpu_affinity in the VM configuration defines a superset of pCPUs that
the VM is allowed to run on. One bit in this bitmap indicates that one pCPU the VM is allowed to run on. One bit in this bitmap indicates that one pCPU
@@ -34,7 +34,7 @@ Ubuntu as the ACRN Service VM.
Supported Hardware Platform Supported Hardware Platform
*************************** ***************************
Currently, ACRN has enabled GVT-d on the following platforms: ACRN has enabled GVT-d on the following platforms:
* Kaby Lake * Kaby Lake
* Whiskey Lake * Whiskey Lake
@@ -20,7 +20,7 @@ and :ref:`vuart_config`).
:align: center :align: center
:name: Inter-VM vUART communication :name: Inter-VM vUART communication
Inter-VM vUART communication Inter-VM vUART Communication
- Pros: - Pros:
- POSIX APIs; development-friendly (easily used programmatically - POSIX APIs; development-friendly (easily used programmatically
@@ -45,7 +45,7 @@ background introductions of ACRN Virtio-Net Architecture and Design).
:align: center :align: center
:name: Inter-VM network communication :name: Inter-VM network communication
Inter-VM network communication Inter-VM Network Communication
- Pros: - Pros:
- Socket-based APIs; development-friendly (easily used programmatically - Socket-based APIs; development-friendly (easily used programmatically
@@ -61,7 +61,7 @@ Inter-VM shared memory communication (ivshmem)
********************************************** **********************************************
Inter-VM shared memory communication is based on a shared memory mechanism Inter-VM shared memory communication is based on a shared memory mechanism
to transfer data between VMs. The ACRN device model or hypervisor emulates to transfer data between VMs. The ACRN Device Model or hypervisor emulates
a virtual PCI device (called an ``ivshmem device``) to expose this shared memory's a virtual PCI device (called an ``ivshmem device``) to expose this shared memory's
base address and size. (Refer to :ref:`ivshmem-hld` and :ref:`enable_ivshmem` for the base address and size. (Refer to :ref:`ivshmem-hld` and :ref:`enable_ivshmem` for the
background introductions). background introductions).
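Because the shared memory is exposed through a PCI MMIO BAR (BAR2 on an ivshmem device), a guest user-space program can map and use it directly. A minimal sketch, assuming an example BDF of ``0000:00:06.0`` and a 2 MB region; both values depend on the actual VM configuration:

.. code-block:: c

   #include <fcntl.h>
   #include <string.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       /* BAR2 of the ivshmem device backs the shared memory region;
        * the BDF below is an example and will differ per VM configuration. */
       const char *bar = "/sys/bus/pci/devices/0000:00:06.0/resource2";
       size_t size = 2 * 1024 * 1024;   /* must match the configured region size */

       int fd = open(bar, O_RDWR | O_SYNC);
       if (fd < 0)
           return 1;

       void *shm = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
       if (shm == MAP_FAILED)
           return 1;

       strcpy((char *)shm, "hello from this VM");   /* visible to the peer VM */
       munmap(shm, size);
       close(fd);
       return 0;
   }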
@@ -72,7 +72,7 @@ background introductions).
:align: center :align: center
:name: Inter-VM shared memory communication :name: Inter-VM shared memory communication
Inter-VM shared memory communication Inter-VM Shared Memory Communication
- Pros: - Pros:
- Shared memory is exposed to VMs via PCI MMIO Bar and is mapped and accessed directly. - Shared memory is exposed to VMs via PCI MMIO Bar and is mapped and accessed directly.
@@ -323,7 +323,7 @@ after ivshmem device is initialized.
:align: center :align: center
:name: Inter-VM ivshmem data transfer state machine :name: Inter-VM ivshmem data transfer state machine
Inter-VM ivshmem data transfer state machine Inter-VM Ivshmem Data Transfer State Machine
:numref:`Inter-VM ivshmem handshake communication` shows the handshake communication between two machines: :numref:`Inter-VM ivshmem handshake communication` shows the handshake communication between two machines:
@@ -331,7 +331,7 @@ after ivshmem device is initialized.
:align: center :align: center
:name: Inter-VM ivshmem handshake communication :name: Inter-VM ivshmem handshake communication
Inter-VM ivshmem handshake communication Inter-VM Ivshmem Handshake Communication
Reference Sender and Receiver Sample Code Based Doorbell Mode Reference Sender and Receiver Sample Code Based Doorbell Mode
@@ -33,7 +33,7 @@ RTVM With HV Emulated Device
**************************** ****************************
ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
logging output or command send/receive. Currently, the vUART only works in polling mode, but logging output or command send/receive. The vUART only works in polling mode, but
may be extended to support interrupt mode in a future release. In the meantime, for better RT may be extended to support interrupt mode in a future release. In the meantime, for better RT
behavior, the RT application using the vUART shall reserve a margin of CPU cycles to accommodate behavior, the RT application using the vUART shall reserve a margin of CPU cycles to accommodate
the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles
@@ -261,7 +261,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
Now is a great time to take a snapshot of the container using ``lxc Now is a great time to take a snapshot of the container using ``lxc
snapshot``. If the OpenStack installation fails, manually rolling back snapshot``. If the OpenStack installation fails, manually rolling back
to the previous state can be difficult. Currently, no step exists to to the previous state can be difficult. No step exists to
reliably restart OpenStack after restarting the container. reliably restart OpenStack after restarting the container.
5. Install OpenStack:: 5. Install OpenStack::
@@ -40,7 +40,7 @@ No Enclave in a Hypervisor
-------------------------- --------------------------
ACRN does not support running an enclave in a hypervisor since the whole ACRN does not support running an enclave in a hypervisor since the whole
hypervisor is currently running in VMX root mode, ring 0, and an enclave must hypervisor is running in VMX root mode, ring 0, and an enclave must
run in ring 3. ACRN SGX virtualization provides the capability to run in ring 3. ACRN SGX virtualization provides the capability to
non-Service VMs. non-Service VMs.
@@ -124,7 +124,7 @@ CPUID Leaf 07H
* CPUID_07H.EBX[2] SGX: Supports Intel Software Guard Extensions if 1. If SGX * CPUID_07H.EBX[2] SGX: Supports Intel Software Guard Extensions if 1. If SGX
is supported in Guest, this bit will be set. is supported in Guest, this bit will be set.
* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1. Currently, * CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1.
ACRN does not support the SGX Launch Configuration. This bit will not be ACRN does not support the SGX Launch Configuration. This bit will not be
set. Thus, the Launch Enclave must be signed by the Intel SGX Launch Enclave set. Thus, the Launch Enclave must be signed by the Intel SGX Launch Enclave
Key. Key.
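A guest can verify what it is actually offered by querying CPUID leaf 07H from user space. The sketch below uses the compiler's ``cpuid.h`` helper and follows the Intel SDM bit positions (the SGX feature flag is bit 2 of the EBX output and SGX_LC is bit 30 of ECX for leaf 07H, sub-leaf 0); it is generic code, not part of ACRN:

.. code-block:: c

   #include <cpuid.h>
   #include <stdio.h>

   int main(void)
   {
       unsigned int eax, ebx, ecx, edx;

       /* CPUID leaf 07H, sub-leaf 0: structured extended feature flags. */
       if (!__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx))
           return 1;

       printf("SGX    : %s\n", (ebx & (1u << 2))  ? "yes" : "no");
       printf("SGX_LC : %s\n", (ecx & (1u << 30)) ? "yes" : "no");
       return 0;
   }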
@@ -172,7 +172,7 @@ The hypervisor will opt in to SGX for VM if SGX is enabled for VM.
IA32_SGXLEPUBKEYHASH[0-3] IA32_SGXLEPUBKEYHASH[0-3]
------------------------- -------------------------
This is read-only since SGX LC is currently not supported. This is read-only since SGX LC is not supported.
SGXOWNEREPOCH[0-1] SGXOWNEREPOCH[0-1]
------------------ ------------------
@@ -245,7 +245,8 @@ PAUSE Exiting
Future Development Future Development
****************** ******************
Following are some currently unplanned areas of interest for future
Following are some unplanned areas of interest for future
ACRN development around SGX virtualization. ACRN development around SGX virtualization.
Launch Configuration Support Launch Configuration Support
@@ -135,7 +135,7 @@ SR-IOV Passthrough VF Architecture in ACRN
:align: center :align: center
:name: SR-IOV-vf-passthrough :name: SR-IOV-vf-passthrough
SR-IOV VF Passthrough Architecture In ACRN SR-IOV VF Passthrough Architecture in ACRN
1. The SR-IOV VF device needs to bind the PCI-stub driver instead of the 1. The SR-IOV VF device needs to bind the PCI-stub driver instead of the
vendor-specific VF driver before the device passthrough. vendor-specific VF driver before the device passthrough.
@@ -213,7 +213,7 @@ SR-IOV VF Assignment Policy
1. All SR-IOV PF devices are managed by the Service VM. 1. All SR-IOV PF devices are managed by the Service VM.
2. Currently, the SR-IOV PF cannot passthrough to the User VM. 2. The SR-IOV PF cannot passthrough to the User VM.
3. All VFs can passthrough to the User VM, but we do not recommend 3. All VFs can passthrough to the User VM, but we do not recommend
a passthrough to high privilege VMs because the PF device may impact a passthrough to high privilege VMs because the PF device may impact
@@ -236,7 +236,7 @@ only support LaaG (Linux as a Guest).
:align: center :align: center
:name: 82576-pf :name: 82576-pf
82576 SR-IOV PF devices 82576 SR-IOV PF Devices
#. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs`` #. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs``
command in the Service VM to enable n VF devices for the first PF command in the Service VM to enable n VF devices for the first PF
@@ -249,7 +249,7 @@ only support LaaG (Linux as a Guest).
:align: center :align: center
:name: 82576-vf :name: 82576-vf
82576 SR-IOV VF devices 82576 SR-IOV VF Devices
.. figure:: images/sriov-image11.png .. figure:: images/sriov-image11.png
:align: center :align: center
@@ -140,7 +140,7 @@ details in this `Android keymaster functions document
:width: 600px :width: 600px
:name: keymaster-app :name: keymaster-app
Keystore service and Keymaster HAL Keystore Service and Keymaster HAL
As shown in :numref:`keymaster-app` above, the Keymaster HAL is a As shown in :numref:`keymaster-app` above, the Keymaster HAL is a
dynamically-loadable library used by the Keystore service to provide dynamically-loadable library used by the Keystore service to provide
@@ -318,7 +318,7 @@ provided by secure world (TEE/Trusty). In the current ACRN
implementation, secure storage is built in the RPMB partition in eMMC implementation, secure storage is built in the RPMB partition in eMMC
(or UFS storage). (or UFS storage).
Currently the eMMC in the APL SoC platform only has a single RPMB The eMMC in the APL SoC platform only has a single RPMB
partition for tamper-resistant and anti-replay secure storage. The partition for tamper-resistant and anti-replay secure storage. The
secure storage (RPMB) is virtualized to support multiple guest User VMs. secure storage (RPMB) is virtualized to support multiple guest User VMs.
Although newer generations of flash storage (e.g. UFS 3.0, and NVMe) Although newer generations of flash storage (e.g. UFS 3.0, and NVMe)
@@ -5,14 +5,14 @@ Getting Started Guide for ACRN Hybrid Mode
ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM. launched by a Device Model in the Service VM.
.. figure:: images/ACRN-Hybrid.png .. figure:: images/ACRN-Hybrid.png
:align: center :align: center
:width: 600px :width: 600px
:name: hybrid_scenario_on_nuc :name: hybrid_scenario_on_nuc
The Hybrid scenario on the Intel NUC The Hybrid Scenario on the Intel NUC
The following guidelines The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC, describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
@@ -177,7 +177,7 @@ Hybrid Scenario Startup Check
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell. #. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console. #. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1's Service VM can boot and you can log in. #. Verify that the VM1's Service VM can boot and you can log in.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script. #. ssh to VM1 and launch the post-launched VM2 using the ACRN Device Model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell. #. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console. #. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
#. Verify that VM2 can boot and you can log in. #. Verify that VM2 can boot and you can log in.
@@ -62,7 +62,7 @@ Download Win10 Image and Drivers
- Check **I accept the terms in the license agreement**. Click **Continue**. - Check **I accept the terms in the license agreement**. Click **Continue**.
- From the list, right-click the item labeled **Oracle VirtIO Drivers - From the list, right-click the item labeled **Oracle VirtIO Drivers
Version for Microsoft Windows 1.1.x, yy MB**, and then **Save link as Version for Microsoft Windows 1.1.x, yy MB**, and then **Save link as
...**. Currently, it is named ``V982789-01.zip``. ...**. It is named ``V982789-01.zip``.
- Click **Download**. When the download is complete, unzip the file. You - Click **Download**. When the download is complete, unzip the file. You
will see an ISO named ``winvirtio.iso``. will see an ISO named ``winvirtio.iso``.
@@ -16,7 +16,7 @@ into XML in the scenario file:
- Edit :option:`hv.FEATURES.RDT.RDT_ENABLED` to `y` to enable RDT - Edit :option:`hv.FEATURES.RDT.RDT_ENABLED` to `y` to enable RDT
- Edit :option:`hv.FEATURES.RDT.CDP_ENABLED` to `n` to disable CDP. - Edit :option:`hv.FEATURES.RDT.CDP_ENABLED` to `n` to disable CDP.
Currently vCAT requires CDP to be disabled. vCAT requires CDP to be disabled.
- Edit :option:`hv.FEATURES.RDT.VCAT_ENABLED` to `y` to enable vCAT - Edit :option:`hv.FEATURES.RDT.VCAT_ENABLED` to `y` to enable vCAT
@@ -89,7 +89,7 @@ The ``vcpu_dumpreg <vm_id> <vcpu_id>`` command provides vCPU-related
information such as register values. information such as register values.
In the following example, we dump the vCPU0 RIP register value and get into In the following example, we dump the vCPU0 RIP register value and get into
the Service VM to search for the currently running function, using these the Service VM to search for the running function, using these
commands:: commands::
cat /proc/kallsyms | grep RIP_value cat /proc/kallsyms | grep RIP_value
@@ -9,7 +9,7 @@ Description
``acrnlog`` is a userland tool used to capture an ACRN hypervisor log. It runs ``acrnlog`` is a userland tool used to capture an ACRN hypervisor log. It runs
as a Service VM service at boot, capturing two kinds of logs: as a Service VM service at boot, capturing two kinds of logs:
- log of the currently running hypervisor - log of the running hypervisor
- log of the last running hypervisor if it crashed and the logs remain - log of the last running hypervisor if it crashed and the logs remain
Log files are saved in ``/tmp/acrnlog/``, so the log files would be lost Log files are saved in ``/tmp/acrnlog/``, so the log files would be lost