doc: Minor style cleanup

- Remove "currently"
- Capitalize titles and Device Model

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>

parent 21aeb4f422
commit f5b021b1b5
@@ -821,8 +821,8 @@ C-FN-14: All defined functions shall be used
 All defined functions shall be used, either called explicitly or indirectly
 via the address. Otherwise, the function shall be removed. The following case
 is an exception: Some extra functions may be kept in order to provide a more
-complete library of APIs. These functions may be implemented but not used
-currently. These functions will come in handy in the future. In this case,
+complete library of APIs. These functions may be implemented but not used.
+These functions will come in handy in the future. In this case,
 these functions may remain.

 Compliant example::
@@ -126,7 +126,7 @@ To clone the ACRN hypervisor repository (including the ``hypervisor``,

    $ git clone https://github.com/projectacrn/acrn-hypervisor

-In addition to the ACRN hypervisor and device model itself, you'll also find
+In addition to the ACRN hypervisor and Device Model itself, you'll also find
 the sources for technical documentation available from the
 `ACRN documentation site`_. All of these are available for developers to
 contribute to and enhance.
@@ -3,25 +3,33 @@
 AT Keyboard Controller Emulation
 ################################

-This document describes the AT keyboard controller emulation implementation in the ACRN device model. The Atkbdc device emulates a PS2 keyboard and mouse.
+This document describes the AT keyboard controller emulation implementation in the ACRN Device Model. The Atkbdc device emulates a PS2 keyboard and mouse.

 Overview
 ********

-The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mice to a PC-compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. PS2 keyboard/mouse emulation is based on ACPI Emulation. We can add ACPI description of PS2 keyboard/mouse into virtual DSDT table to emulate keyboard/mouse in the User VM.
+The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and
+mice to a PC-compatible computer system. Its name comes from the IBM Personal
+System/2 series of personal computers, with which it was introduced in 1987. PS2
+keyboard/mouse emulation is based on ACPI emulation. We can add an ACPI
+description of the PS2 keyboard/mouse to the virtual DSDT table to emulate the
+keyboard/mouse in the User VM.

 .. figure:: images/atkbdc-virt-hld.png
    :align: center
    :name: atkbdc-virt-arch

-   AT keyboard controller emulation architecture
+   AT Keyboard Controller Emulation Architecture

 PS2 Keyboard Emulation
 **********************

-ACRN supports the AT keyboard controller for PS2 keyboard that can be accessed through I/O ports (0x60 and 0x64). 0x60 is used to access AT keyboard controller data register; 0x64 is used to access AT keyboard controller address register.
+ACRN supports an AT keyboard controller for PS2 keyboard that can be accessed
+through I/O ports (0x60 and 0x64). 0x60 is used to access the AT keyboard
+controller data register; 0x64 is used to access the AT keyboard controller
+address register.

-The PS2 keyboard ACPI description as below::
+PS2 keyboard ACPI description::

    Device (KBD)
    {
@@ -48,10 +56,12 @@ The PS2 keyboard ACPI description as below::
 PS2 Mouse Emulation
 *******************

-ACRN supports AT keyboard controller for PS2 mouse that can be accessed through I/O ports (0x60 and 0x64).
-0x60 is used to access AT keyboard controller data register; 0x64 is used to access AT keyboard controller address register.
+ACRN supports an AT keyboard controller for PS2 mouse that can be accessed
+through I/O ports (0x60 and 0x64). 0x60 is used to access the AT keyboard
+controller data register; 0x64 is used to access the AT keyboard controller
+address register.

-The PS2 mouse ACPI description as below::
+PS2 mouse ACPI description::

    Device (MOU)
    {
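For orientation while reading these hunks: ports 0x60 and 0x64 form the classic
8042-style interface, and an emulator dispatches on the port number. A minimal
sketch of such a dispatch in C, with invented names and state that are not the
ACRN implementation:

.. code-block:: c

   #include <stdint.h>

   #define KBD_DATA_PORT   0x60U  /* data register */
   #define KBD_STATUS_PORT 0x64U  /* status/command register */

   /* Hypothetical controller state for illustration only. */
   struct atkbdc_sketch {
       uint8_t status;   /* e.g. bit 0 = output buffer full */
       uint8_t outbuf;   /* byte latched for the guest to read */
   };

   /* Invoked when a guest IN on port 0x60/0x64 traps into the emulator. */
   static uint8_t atkbdc_pio_read_sketch(struct atkbdc_sketch *kbdc,
                                         uint16_t port)
   {
       if (port == KBD_STATUS_PORT)
           return kbdc->status;    /* report controller status */
       kbdc->status &= ~0x01U;     /* guest consumed the output buffer */
       return kbdc->outbuf;        /* return the queued scan code */
   }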
@@ -993,7 +993,7 @@ An alternative ACPI resource abstraction option is for the Service VM to
 own all devices and emulate a set of virtual devices for the User VM
 (POST_LAUNCHED_VM).
 This is the most popular ACPI resource model for virtualization,
-as shown in the picture below. ACRN currently
+as shown in the picture below. ACRN
 uses device emulation plus some device passthrough for the User VM.

 .. figure:: images/dm-image52.png
@@ -235,9 +235,9 @@ the hypervisor.
 DMA Emulation
 -------------

-Currently the only fully virtualized devices to the User VM are USB xHCI, UART,
+The only fully virtualized devices to the User VM are USB xHCI, UART,
 and Automotive I/O controller. None of these require emulating
-DMA transactions. ACRN does not currently support virtual DMA.
+DMA transactions. ACRN does not support virtual DMA.

 Hypervisor
 **********
@@ -371,8 +371,7 @@ Refer to :ref:`hld-trace-log` for more details.
 User VM
 *******

-Currently, ACRN can boot Linux and Android guest OSes. For an Android guest OS,
-ACRN
+ACRN can boot Linux and Android guest OSes. For an Android guest OS, ACRN
 provides a VM environment with two worlds: normal world and trusty
 world. The Android OS runs in the normal world. The trusty OS and
 security sensitive applications run in the trusty world. The trusty
@@ -384,7 +383,7 @@ Guest Physical Memory Layout - User VM E820

 DM creates an E820 table for a User VM based on these simple rules:

-- If requested VM memory size < low memory limitation (currently 2 GB,
+- If requested VM memory size < low memory limitation (2 GB,
   defined in DM), then low memory range = [0, requested VM memory
   size]

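The low-memory rule in the hunk above reduces to a one-line computation; a
hedged sketch (the 2 GB constant mirrors the text, the function name is
invented, and the clamping branch for larger requests is an assumption):

.. code-block:: c

   #include <stdint.h>

   #define LOW_MEM_LIMIT_SKETCH (2ULL * 1024ULL * 1024ULL * 1024ULL) /* 2 GB */

   /* Size of the low memory range [0, size) for a requested VM memory size. */
   static uint64_t e820_low_mem_sketch(uint64_t requested)
   {
       /* Rule: if the request fits under the limit, all of it is low memory;
        * otherwise (assumption) it is capped at the limit. */
       return (requested < LOW_MEM_LIMIT_SKETCH) ? requested
                                                 : LOW_MEM_LIMIT_SKETCH;
   }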
@@ -64,9 +64,9 @@ Px/Cx data for User VM P/C-state management:
    :align: center
    :name: vACPItable

-   System block for building vACPI table with Px/Cx data
+   System Block for Building vACPI Table with Px/Cx Data

-Some ioctl APIs are defined for the Device model to query Px/Cx data from
+Some ioctl APIs are defined for the Device Model to query Px/Cx data from
 the Service VM HSM. The Hypervisor needs to provide hypercall APIs to transit
 Px/Cx data from the CPU state table to the Service VM HSM.

@@ -75,11 +75,11 @@ The build flow is:
 1) Use an offline tool (e.g. **iasl**) to parse the Px/Cx data and hard-code to
    a CPU state table in the Hypervisor. The Hypervisor loads the data after
    the system boots.
-2) Before User VM launching, the Device model queries the Px/Cx data from the Service
+2) Before User VM launching, the Device Model queries the Px/Cx data from the Service
    VM HSM via ioctl interface.
 3) HSM transmits the query request to the Hypervisor by hypercall.
 4) The Hypervisor returns the Px/Cx data.
-5) The Device model builds the virtual ACPI table with these Px/Cx data
+5) The Device Model builds the virtual ACPI table with these Px/Cx data

 Intercept Policy
 ================
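Step 2 of the build flow above is an ordinary ioctl round trip from the Device
Model to the HSM; a sketch of that step in C, where the device path, request
code, and payload struct are hypothetical placeholders rather than the real
HSM ABI:

.. code-block:: c

   #include <fcntl.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   #define HSM_GET_PX_DATA_SKETCH 0x1000                /* hypothetical code */
   struct px_data_sketch { unsigned long core_frequency; };  /* placeholder */

   static int query_px_sketch(struct px_data_sketch *px)
   {
       int ret;
       int fd = open("/dev/acrn_hsm", O_RDWR);  /* HSM char device (assumed) */
       if (fd < 0)
           return -1;
       ret = ioctl(fd, HSM_GET_PX_DATA_SKETCH, px);  /* step 2: query Px data */
       close(fd);
       return ret;
   }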
@@ -124,7 +124,7 @@ could customize it according to their hardware/software requirements.
    :align: center
    :name: systempmdiag

-   ACRN System S3/S5 diagram
+   ACRN System S3/S5 Diagram


 System Low Power State Entry Process
@@ -156,7 +156,7 @@ with typical ISD configuration(S3 follows very similar process)
    :align: center
    :name: pmworkflow

-   ACRN system S5 entry workflow
+   ACRN System S5 Entry Workflow

 For system power state entry:

@@ -57,7 +57,7 @@ SoC in-vehicle platform.
    :align: center
    :name: security-vehicle

-   SDC and IVE system In-Vehicle
+   SDC and IVE System In-Vehicle


 In this system, the ACRN hypervisor is running at the most privileged
@@ -125,7 +125,7 @@ launched.
 Note that measured boot (as described well in this `boot security
 technologies document
 <https://firmwaresecurity.com/2015/07/29/survey-of-boot-security-technologies/>`_)
-is not currently supported for ACRN and its guest VMs.
+is not supported for ACRN and its guest VMs.

 Boot Flow
 ---------
@@ -137,7 +137,7 @@ As shown in :numref:`security-bootflow-sbl`, the Converged Security Engine
 Firmware (CSE FW) behaves as the root of trust in this platform boot
 flow. It authenticates and starts the BIOS (SBL), whereupon the SBL is
 responsible for authenticating and verifying the ACRN hypervisor image.
-Currently the Service VM kernel is built together with the ACRN hypervisor as
+The Service VM kernel is built together with the ACRN hypervisor as
 one image bundle, so this whole image signature is verified by SBL
 before launching.

@@ -316,7 +316,7 @@ The ACRN hypervisor has ultimate access control of all the platform
 memory spaces (see :ref:`memmgt-hld`). Note that on the APL platform,
 `SGX <https://www.intel.com/content/www/us/en/developer/tools/software-guard-extensions/overview.html>`_ and `TME
 <https://itpeernetwork.intel.com/memory-encryption/>`_
-are not currently supported.
+are not supported.

 The hypervisor can read and write any physical memory space allocated
 to any guest VM, and can even fetch instructions and execute the code in
@@ -969,7 +969,7 @@ Secure storage is one of the security services provided by the secure world
 on the RPMB partition in eMMC (or UFS, and NVMe storage). Details of how
 RPMB works are out of scope for this document.

-Since currently the eMMC in APL SoC platforms only has a single RPMB
+Since the eMMC in APL SoC platforms only has a single RPMB
 partition for tamper-resistant and anti-replay secure storage, the
 secure storage (RPMB) should be virtualized in order to support multiple
 guest User VMs. However, although future generations of flash storage
@@ -44,7 +44,7 @@ devices such as audio, eAVB/TSN, IPU, and CSMU devices. This section gives
 an overview about virtio history, motivation, and advantages, and then
 highlights virtio key concepts. Second, this section will describe
 ACRN's virtio architectures and elaborate on ACRN virtio APIs. Finally
-this section will introduce all the virtio devices currently supported
+this section will introduce all the virtio devices supported
 by ACRN.

 Virtio Introduction
@@ -99,7 +99,7 @@ Straightforward: virtio devices as standard devices on existing buses
 interrupt the FE driver, on behalf of the BE driver, in case something of
 interest is happening.

-Currently virtio supports PCI/PCIe bus and MMIO bus. In ACRN, only
+The virtio supports PCI/PCIe bus and MMIO bus. In ACRN, only
 PCI/PCIe bus is supported, and all the virtio devices share the same
 vendor ID 0x1AF4.

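Because every ACRN virtio device carries the same vendor ID, a guest-side
probe reduces to one comparison; a trivially small sketch (the helper name is
invented):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   #define ACRN_VIRTIO_VENDOR_ID 0x1AF4U /* shared by all ACRN virtio devices */

   /* vendor_id comes from offset 0x0 of the device's PCI config space. */
   static bool is_virtio_vendor_sketch(uint16_t vendor_id)
   {
       return vendor_id == ACRN_VIRTIO_VENDOR_ID;
   }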
@@ -308,7 +308,7 @@ general workflow of ioeventfd.
    :align: center
    :name: ioeventfd-workflow

-   ioeventfd general workflow
+   Ioeventfd General Workflow

 The workflow can be summarized as:

@@ -334,7 +334,7 @@ signaled. :numref:`irqfd-workflow` shows the general flow for irqfd.
    :align: center
    :name: irqfd-workflow

-   irqfd general flow
+   Irqfd General Flow

 The workflow can be summarized as:

@@ -641,7 +641,7 @@ their temporary IDs are listed in the following table.
    | GPIO         | 0x8086      | 0x8609      | 0x8086      | 0xFFF7      |
    +--------------+-------------+-------------+-------------+-------------+

-The following sections introduce the status of virtio devices currently
+The following sections introduce the status of virtio devices
 supported in ACRN.

 .. toctree::
@@ -15,7 +15,7 @@ The hypervisor console is a text-based terminal accessible from UART.
    :align: center
    :name: console-processing

-   Periodic console processing
+   Periodic Console Processing

 A periodic timer is set on initialization to trigger console processing every 40ms.
 Processing behavior depends on whether the vUART
@@ -43,7 +43,7 @@ the physical UART only when the vUART is deactivated. See
 Virtual UART
 ************

-Currently UART 16550 is owned by the hypervisor itself and used for
+UART 16550 is owned by the hypervisor itself and used for
 debugging purposes. Properties are configured by hypervisor command
 line. Hypervisor emulates a UART device with 0x3F8 address to Service VM that
 acts as the console of Service VM with these features:
@@ -61,7 +61,7 @@ The following diagram shows the activation state transition of vUART.
 .. figure:: images/console-image41.png
    :align: center

-   Periodic console processing
+   Periodic Console Processing

 Specifically:

@@ -120,7 +120,7 @@ The physical CPU assignment is predefined by ``cpu_affinity`` in
 ``vm config``, while post-launched VMs could be launched on pCPUs that are
 a subset of it.

-Currently, the ACRN hypervisor does not support virtual CPU migration to
+The ACRN hypervisor does not support virtual CPU migration to
 different physical CPUs. No changes to the mapping of the virtual CPU to
 physical CPU can happen without first calling ``offline_vcpu``.

@@ -457,7 +457,7 @@ A bitmap in the vCPU structure lists the different requests:

 ACRN provides the function *vcpu_make_request* to make different
 requests, set the bitmap of the corresponding request, and notify the target
-vCPU through the IPI if necessary (when the target vCPU is not currently
+vCPU through the IPI if necessary (when the target vCPU is not
 running). See :ref:`vcpu-request-interrupt-injection` for details.

 .. code-block:: c
@@ -471,7 +471,7 @@ running). See :ref:`vcpu-request-interrupt-injection` for details.
     * if current hostcpu is not the target vcpu's hostcpu, we need
     * to invoke IPI to wake up target vcpu
     *
-    * TODO: Here we just compare with cpuid, since cpuid currently is
+    * TODO: Here we just compare with cpuid, since cpuid is
     * global under pCPU / vCPU 1:1 mapping. If later we enabled vcpu
     * scheduling, we need change here to determine it target vcpu is
     * VMX non-root or root mode
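The code-block above is abbreviated in the diff; the same idea, a per-vCPU
request bitmap plus an IPI only when the target lives on a different host CPU,
can be sketched self-contained in C (the types and the stub are simplified
inventions, not the ACRN definitions):

.. code-block:: c

   #include <stdint.h>

   struct vcpu_sketch {
       uint16_t pcpu_id;       /* host CPU the vCPU runs on */
       uint64_t pending_req;   /* bitmap of pending requests */
   };

   static void send_ipi_sketch(uint16_t pcpu_id) { (void)pcpu_id; /* stub */ }

   static void vcpu_make_request_sketch(struct vcpu_sketch *vcpu,
                                        uint16_t cur_pcpu_id, uint32_t req)
   {
       /* Atomically record the request in the bitmap. */
       __atomic_fetch_or(&vcpu->pending_req, 1UL << req, __ATOMIC_SEQ_CST);
       /* Kick with an IPI only when the target runs on another host CPU. */
       if (vcpu->pcpu_id != cur_pcpu_id)
           send_ipi_sketch(vcpu->pcpu_id);
   }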
@@ -821,16 +821,16 @@ This table describes details for CPUID emulation:
        - EBX, ECX, EDX: reserved to 0

    * - 0AH
-     - - PMU currently disabled
+     - - PMU disabled

    * - 0FH, 10H
-     - - Intel RDT currently disabled
+     - - Intel RDT disabled

    * - 12H
      - - Fill according to SGX virtualization

    * - 14H
-     - - Intel Processor Trace currently disabled
+     - - Intel Processor Trace disabled

    * - Others
      - - Get from per-vm CPUID entries cache
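The table rows changed above describe a filtering policy that is easy to
picture in code; a hedged sketch of a CPUID leaf filter zeroing the disabled
leaves (names invented, not ACRN's handler):

.. code-block:: c

   #include <stdint.h>

   /* Zero the leaves the table reports as disabled for guests:
    * 0AH (PMU), 0FH/10H (Intel RDT), 14H (Intel Processor Trace). */
   static void guest_cpuid_filter_sketch(uint32_t leaf, uint32_t *eax,
                                         uint32_t *ebx, uint32_t *ecx,
                                         uint32_t *edx)
   {
       switch (leaf) {
       case 0x0AU:
       case 0x0FU:
       case 0x10U:
       case 0x14U:
           *eax = *ebx = *ecx = *edx = 0U;
           break;
       default:
           /* Other leaves come from the per-VM CPUID entries cache. */
           break;
       }
   }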
@@ -969,7 +969,7 @@ ACRN emulates ``mov to cr0``, ``mov to cr4``, ``mov to cr8``, and ``mov
 from cr8`` through *cr_access_vmexit_handler* based on
 *VMX_EXIT_REASON_CR_ACCESS*.

-.. note:: Currently ``mov to cr8`` and ``mov from cr8`` are actually
+.. note:: ``mov to cr8`` and ``mov from cr8`` are
    not valid as ``CR8-load/store exiting`` bits are set as 0 in
    *VMX_PROC_VM_EXEC_CONTROLS*.

@@ -1134,7 +1134,7 @@ MMIO (EPT) and APIC access emulation. When such a VM exit is triggered, the
 hypervisor needs to decode the instruction from RIP then attempt the
 corresponding emulation based on its instruction and read/write direction.

-ACRN currently supports emulating instructions for ``mov``, ``movx``,
+ACRN supports emulating instructions for ``mov``, ``movx``,
 ``movs``, ``stos``, ``test``, ``and``, ``or``, ``cmp``, ``sub``, and
 ``bittest`` without support for lock prefix. Real mode emulation is not
 supported.
@@ -21,7 +21,7 @@ discussed here.
 --------

 In the ACRN project, device emulation means emulating all existing
-hardware resources through a software component device model running in
+hardware resources through the Device Model, a software component running in
 the Service VM. Device emulation must maintain the same SW
 interface as a native device, providing transparency to the VM software
 stack. Passthrough implemented in the hypervisor assigns a physical device
@@ -38,7 +38,7 @@ can't support device sharing.
    :align: center
    :name: emu-passthru-diff

-   Difference between emulation and passthrough
+   Difference Between Emulation and Passthrough

 Passthrough in the hypervisor provides the following functionalities to
 allow the VM to access PCI devices directly:
@@ -59,7 +59,7 @@ ACRN for a post-launched VM:
 .. figure:: images/passthru-image22.png
    :align: center

-   Passthrough devices initialization control flow
+   Passthrough Devices Initialization Control Flow

 Passthrough Device Status
 *************************
@@ -70,7 +70,7 @@ passthrough, as detailed here:
 .. figure:: images/passthru-image77.png
    :align: center

-   Passthrough device status
+   Passthrough Device Status

 Owner of Passthrough Devices
 ****************************
@@ -129,12 +129,12 @@ passthrough device to/from a post-launched VM is shown in the following figures:
 .. figure:: images/passthru-image86.png
    :align: center

-   ptdev assignment control flow
+   Ptdev Assignment Control Flow

 .. figure:: images/passthru-image42.png
    :align: center

-   ptdev deassignment control flow
+   Ptdev Deassignment Control Flow

 .. _vtd-posted-interrupt:

@@ -199,7 +199,7 @@ Consider this scenario:

 If an external interrupt from an assigned device destined to vCPU0
 happens at this time, we do not want this interrupt to be incorrectly
-consumed by vCPU1 currently running on pCPU0. This would happen if we
+consumed by vCPU1 running on pCPU0. This would happen if we
 allocate the same Activation Notification Vector (ANV) to all vCPUs.

 To circumvent this issue, ACRN allocates unique ANVs for each vCPU that
@@ -301,7 +301,7 @@ virtual destination, etc. See the following figure for details:
 .. figure:: images/passthru-image91.png
    :align: center

-   Remapping of physical interrupts
+   Remapping of Physical Interrupts

 There are two different types of interrupt sources: IOAPIC and MSI.
 The hypervisor will record different information for interrupt
@@ -315,7 +315,7 @@ done on-demand rather than on hypervisor initialization.
    :align: center
    :name: init-remapping

-   Initialization of remapping of virtual IOAPIC interrupts for Service VM
+   Initialization of Remapping of Virtual IOAPIC Interrupts for Service VM

 :numref:`init-remapping` above illustrates how remapping of (virtual) IOAPIC
 interrupts are remapped for the Service VM. VM exit occurs whenever the Service
@@ -330,7 +330,7 @@ Remapping of (virtual) MSI interrupts are set up in a similar sequence:
 .. figure:: images/passthru-image98.png
    :align: center

-   Initialization of remapping of virtual MSI for Service VM
+   Initialization of Remapping of Virtual MSI for Service VM

 This figure illustrates how mappings of MSI or MSI-X are set up for the
 Service VM. The Service VM is responsible for issuing a hypercall to notify the
@@ -465,7 +465,7 @@ For a post-launched VM, you enable PTM by setting the
    :width: 700
    :name: ptm-flow

-   PTM-enabling workflow in post-launched VM
+   PTM-enabling Workflow in Post-launched VM

 As shown in :numref:`ptm-flow`, PTM is enabled in the root port during the
 hypervisor startup. The Device Model (DM) then checks whether the passthrough
@@ -483,7 +483,7 @@ passing through the device to the post-launched VM.
    :width: 700
    :name: ptm-vrp

-   PTM-enabled PCI device passthrough to post-launched VM
+   PTM-enabled PCI Device Passthrough to Post-launched VM

 :numref:`ptm-vrp` shows that, after enabling PTM, the passthrough device
 connects to the virtual root port instead of the virtual host bridge.
@@ -39,7 +39,7 @@ interrupt to the specific RT VM with passthrough LAPIC.
    :width: 600px
    :name: interrupt-sw-modules

-   ACRN Interrupt SW Modules Overview
+   ACRN Interrupt Software Modules Overview


 The hypervisor implements the following functionalities for handling
@@ -146,7 +146,7 @@ Native PIC is not used in the system.
    :align: center
    :name: hv-pic-config

-   HV PIC/IOAPIC/LAPIC configuration
+   Hypervisor PIC/IOAPIC/LAPIC Configuration

 LAPIC Initialization
 ====================
@@ -224,8 +224,8 @@ The interrupt vectors are assigned as shown here:
    - SPURIOUS_APIC_VECTOR

 Interrupts from either IOAPIC or MSI can be delivered to a target CPU.
-By default they are configured as Lowest Priority (FLAT mode), i.e. they
-are delivered to a CPU core that is currently idle or executing lowest
+By default, they are configured as Lowest Priority (FLAT mode), meaning they
+are delivered to a CPU core that is idle or executing the lowest
 priority ISR. There is no guarantee a device's interrupt will be
 delivered to a specific Guest's CPU. Timer interrupts are an exception -
 these are always delivered to the CPU which programs the LAPIC timer.
@@ -237,7 +237,7 @@ allocation for CPUs is shown here:
 .. figure:: images/interrupt-image89.png
    :align: center

-   FLAT mode vector allocation
+   FLAT Mode Vector Allocation

 IRQ Descriptor Table
 ====================
@@ -290,7 +290,7 @@ Interrupt and IRQ processing flow diagrams are shown below:
    :align: center
    :name: phy-interrupt-processing

-   Processing of physical interrupts
+   Processing of Physical Interrupts

 When a physical interrupt is raised and delivered to a physical CPU, the
 CPU may be running under either VMX root mode or non-root mode.
@@ -341,7 +341,7 @@ conditions:
    :align: center
    :name: request-irq

-   Request IRQ for different conditions
+   Request IRQ for Different Conditions

 .. _ipi-management:

@@ -5,7 +5,7 @@ I/O Emulation High-Level Design

 As discussed in :ref:`intro-io-emulation`, there are multiple ways and
 places to handle I/O emulation, including HV, Service VM Kernel HSM, and Service VM
-user-land device model (acrn-dm).
+user-land Device Model (acrn-dm).

 I/O emulation in the hypervisor provides these functionalities:

@@ -32,7 +32,7 @@ inside the hypervisor:
    :align: center
    :name: io-control-flow

-   Control flow of I/O emulation in the hypervisor
+   Control Flow of I/O Emulation in the Hypervisor

 I/O emulation does not rely on any calibration data.

@@ -219,7 +219,7 @@ Post-Work
 =========

 After an I/O request is completed, some more work needs to be done for
-I/O reads to update guest registers accordingly. Currently the
+I/O reads to update guest registers accordingly. The
 hypervisor re-enters the vCPU thread every time a vCPU is scheduled back
 in, rather than switching to where the vCPU is scheduled out. As a result,
 post-work is introduced for this purpose.
@@ -233,7 +233,7 @@ updating the vCPU guest state to reflect the effect of the I/O reads.
 .. figure:: images/ioem-image100.png
    :align: center

-   Workflow of MMIO I/O request completion
+   Workflow of MMIO I/O Request Completion

 The figure above illustrates the workflow to complete an I/O
 request for MMIO. Once the I/O request is completed, Service VM makes a
@@ -247,7 +247,7 @@ request slot to FREE, and continues execution of the vCPU.
    :align: center
    :name: port-io-completion

-   Workflow of port I/O request completion
+   Workflow of Port I/O Request Completion

 Completion of a port I/O request (shown in :numref:`port-io-completion`
 above) is
@@ -160,8 +160,8 @@ char devices and UART DM immediately.
   is done except that the heartbeat and RTC are only used by the IOC
   mediator and will not be transferred to IOC
   firmware.
-- Currently, IOC mediator only cares about lifecycle, signal, and raw data.
-  Others, e.g., diagnosis, are not used by the IOC mediator.
+- IOC mediator only cares about lifecycle, signal, and raw data.
+  Others, such as diagnosis, are not used by the IOC mediator.

 State Transfer
 --------------
@@ -217,7 +217,7 @@ priority for the frame, then send data to the UART driver.

 The difference between the native and virtualization architectures is
 that the IOC mediator needs to re-compute the checksum and reset
-priority. Currently, priority is not supported by IOC firmware; the
+priority. Priority is not supported by IOC firmware; the
 priority setting by the IOC mediator is based on the priority setting of
 the CBC driver. The Service VM and User VM use the same CBC driver.

@@ -388,8 +388,8 @@ table:
 Wakeup Reason
 +++++++++++++

-The wakeup reasons command contains a bit mask of all reasons, which is
-currently keeping the SoC/IOC active. The SoC itself also has a wakeup
+The wakeup reasons command contains a bitmask of all reasons that are
+keeping the SoC/IOC active. The SoC itself also has a wakeup
 reason, which allows the SoC to keep the IOC active. The wakeup reasons
 should be sent every 1000 ms by the IOC.

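A bitmask of active wakeup reasons, as described in the hunk above, is tested
bit by bit; a minimal sketch (the bit position is a hypothetical example, not
taken from the IOC frame definition):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   #define WK_RSN_SOC_BIT_SKETCH 23U  /* hypothetical: SoC keeps IOC active */

   /* True when the given wakeup reason bit is set in the reasons mask. */
   static bool wakeup_reason_active_sketch(uint32_t reasons, uint32_t bit)
   {
       return (reasons & (UINT32_C(1) << bit)) != 0U;
   }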
@@ -402,7 +402,7 @@ Wakeup reason frame definition is as below:

    Wakeup Reason Frame Definition

-Currently the wakeup reason bits are supported by sources shown here:
+The wakeup reason bits are supported by sources shown here:

 .. list-table:: Wakeup Reason Bits
    :header-rows: 1
@@ -563,8 +563,7 @@ IOC signal type definitions are as below.
   shouldn't be forwarded to the native cbc signal channel. The Service VM
   signal related services should do a real open/reset/close signal channel.
 - Every backend should maintain a passlist for different VMs. The
-  passlist can be stored in the Service VM file system (Read only) in the
-  future, but currently it is hard coded.
+  passlist is hard coded.

 IOC mediator has two passlist tables, one is used for rx
 signals (SoC->IOC), and the other one is used for tx signals. The IOC
@@ -307,7 +307,7 @@ processor does not take interrupts when it is executing in VMX root
 mode. ACRN configures the processor to take vmexit upon external
 interrupt if the processor is executing in VMX non-root mode. Upon an
 external interrupt, after sending EOI to the physical LAPIC, ACRN
-injects the vector into the vLAPIC of the vCPU currently running on the
+injects the vector into the vLAPIC of the vCPU running on the
 processor. Guests using a Linux kernel use vectors less than 0xECh
 for device interrupts.

@@ -187,7 +187,7 @@ are created by the Device Model (DM) in the Service VM. The main steps include:
 Software configuration for Service VM (bzimage software load as example):

 - **ACPI**: HV passes the entire ACPI table from the bootloader to the Service
-  VM directly. Legacy mode is currently supported as the ACPI table
+  VM directly. Legacy mode is supported as the ACPI table
   is loaded at F-Segment.

 - **E820**: HV passes the E820 table from the bootloader through the zero page
@@ -17,7 +17,7 @@ corresponds to one cache way.

 On current generation systems, normally L3 cache is shared by all CPU cores on the same socket and
 L2 cache is generally just shared by the hyperthreads on a core. But when dealing with ACRN
-vCAT COS IDs assignment, it is currently assumed that all the L2/L3 caches (and therefore all COS IDs)
+vCAT COS IDs assignment, it is assumed that all the L2/L3 caches (and therefore all COS IDs)
 are system-wide caches shared by all cores in the system, this is done for convenience and to simplify
 the vCAT configuration process. If vCAT is enabled for a VM (abbreviated as vCAT VM), there should not
 be any COS ID overlap between a vCAT VM and any other VMs. e.g. the vCAT VM has exclusive use of the
@@ -15,7 +15,7 @@ Inter-VM Communication Overview
    :align: center
    :name: ivshmem-architecture-overview

-   ACRN shared memory based inter-VM communication architecture
+   ACRN Shared Memory Based Inter-VM Communication Architecture

 ACRN can emulate the ``ivshmem`` device in two ways:

@@ -117,7 +117,7 @@ MMIO Registers Definition
      - 0x8
      - RO
      - Inter-VM Position register is used to identify the VM ID.
-       Currently its value is zero.
+       Its value is zero.
    * - IVSHMEM\_DOORBELL\_REG
      - 0xC
      - WO
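Taking the two registers in this hunk together, a frontend touches the ivshmem
BAR roughly like this; a sketch only, with the doorbell layout (peer in the
high half, vector in the low half) stated as an assumption:

.. code-block:: c

   #include <stdint.h>

   #define IVSHMEM_IV_POS_OFF   0x8U  /* RO: Inter-VM Position (reads zero) */
   #define IVSHMEM_DOORBELL_OFF 0xCU  /* WO: notify a peer */

   /* base points at the mapped register BAR of the ivshmem device. */
   static uint32_t ivshmem_read_position_sketch(volatile uint8_t *base)
   {
       return *(volatile uint32_t *)(base + IVSHMEM_IV_POS_OFF);
   }

   static void ivshmem_ring_doorbell_sketch(volatile uint8_t *base,
                                            uint16_t peer, uint16_t vector)
   {
       /* Assumed layout: peer ID in bits 31:16, interrupt vector in 15:0. */
       *(volatile uint32_t *)(base + IVSHMEM_DOORBELL_OFF) =
           ((uint32_t)peer << 16) | vector;
   }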
|
@ -24,7 +24,7 @@ Here is how ACRN supports MMIO device passthrough:
|
||||
if not, use ``--mmiodev_pt MMIO_regions``.
|
||||
|
||||
.. note::
|
||||
Currently, the vTPM and PT TPM in the ACRN-DM have the same HID so we
|
||||
The vTPM and PT TPM in the ACRN-DM have the same HID so we
|
||||
can't support them both at the same time. The VM will fail to boot if
|
||||
both are used.
|
||||
|
||||
|
@@ -15,18 +15,18 @@ System timer virtualization architecture
 - In the User VM, vRTC, vHPET, and vPIT are used by the clock event module and the clock
   source module in the kernel space.

-- In the Service VM, all vRTC, vHPET, and vPIT devices are created by the device
-  model in the initialization phase and uses timer\_create and
+- In the Service VM, the Device Model creates all vRTC, vHPET, and vPIT devices
+  in the initialization phase. The Device Model uses timer\_create and
   timerfd\_create interfaces to set up native timers for the trigger timeout
   mechanism.

 System Timer Initialization
 ===========================

-The device model initializes vRTC, vHEPT, and vPIT devices automatically when
-the ACRN device model starts the booting initialization, and the initialization
-flow goes from vrtc\_init to vpit\_init and ends with vhept\_init, see
-below code snippets.::
+The Device Model initializes vRTC, vHEPT, and vPIT devices automatically when
+it starts the booting initialization. The initialization
+flow goes from vrtc\_init to vpit\_init and ends with vhept\_init. See
+the code snippets below.::

    static int
    vm_init_vdevs(struct vmctx *ctx)
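The timer\_create/timerfd\_create interfaces mentioned in this hunk are
standard Linux APIs; a self-contained sketch of arming a periodic timerfd the
way a user-space device model can (the period is an arbitrary example):

.. code-block:: c

   #include <stdint.h>
   #include <sys/timerfd.h>
   #include <unistd.h>

   static int periodic_timer_sketch(void)
   {
       struct itimerspec its = {
           .it_interval = { .tv_sec = 0, .tv_nsec = 1000000 }, /* 1 ms */
           .it_value    = { .tv_sec = 0, .tv_nsec = 1000000 },
       };
       uint64_t expirations = 0;
       int fd = timerfd_create(CLOCK_MONOTONIC, 0);
       if (fd < 0)
           return -1;
       if (timerfd_settime(fd, 0, &its, NULL) < 0) {
           close(fd);
           return -1;
       }
       /* Each read reports how many periods elapsed since the last read. */
       (void)read(fd, &expirations, sizeof(expirations));
       close(fd);
       return 0;
   }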
@@ -26,7 +26,7 @@ The ACRN DM architecture for UART virtualization is shown here:
    :name: uart-arch
    :width: 800px

-   Device Model's UART virtualization architecture
+   Device Model's UART Virtualization Architecture

 There are three objects used to emulate one UART device in DM:
 UART registers, rxFIFO, and backend tty devices.
|
||||
A **FIFO** is implemented to emulate RX. Normally characters are read
|
||||
from the backend tty device when available, then put into the rxFIFO.
|
||||
When the Guest application tries to read from the UART, the access to
|
||||
register ``com_data`` causes a ``vmexit``. Device model catches the
|
||||
register ``com_data`` causes a ``vmexit``. Device Model catches the
|
||||
``vmexit`` and emulates the UART by returning one character from rxFIFO.
|
||||
|
||||
.. note:: When ``com_fcr`` is available, the Guest application can write
|
||||
``0`` to this register to disable rxFIFO. In this case the rxFIFO in
|
||||
device model degenerates to a buffer containing only one character.
|
||||
the Device Model degenerates to a buffer containing only one character.
|
||||
|
||||
When the Guest application tries to send a character to the UART, it
|
||||
writes to the ``com_data`` register, which will cause a ``vmexit`` as
|
||||
well. Device model catches the ``vmexit`` and emulates the UART by
|
||||
well. Device Model catches the ``vmexit`` and emulates the UART by
|
||||
redirecting the character to the **backend tty device**.
|
||||
|
||||
The UART device emulated by the ACRN device model is connected to the system by
|
||||
The UART device emulated by the ACRN Device Model is connected to the system by
|
||||
the LPC bus. In the current implementation, two channel LPC UARTs are I/O mapped to
|
||||
the traditional COM port addresses of 0x3F8 and 0x2F8. These are defined in
|
||||
global variable ``uart_lres``.
|
||||
@ -90,11 +90,11 @@ In the case of UART emulation, the registered handlers are ``uart_read``
|
||||
and ``uart_write``.
|
||||
|
||||
A similar virtual UART device is implemented in the hypervisor.
|
||||
Currently UART16550 is owned by the hypervisor itself and is used for
|
||||
UART16550 is owned by the hypervisor itself and is used for
|
||||
debugging purposes. (The UART properties are configured by parameters
|
||||
to the hypervisor command line.) The hypervisor emulates a UART device
|
||||
with 0x3F8 address to the Service VM and acts as the Service VM console. The general
|
||||
emulation is the same as used in the device model, with the following
|
||||
with 0x3F8 address to the Service VM and acts as the Service VM console. The
|
||||
general emulation is the same as used in the Device Model, with the following
|
||||
differences:
|
||||
|
||||
- PIO region is directly registered to the vmexit handler dispatcher via
|
||||
|
@@ -17,7 +17,7 @@ virtqueue, the size of which is 64, configurable in the source code.
    :width: 900px
    :name: virtio-blk-arch

-   Virtio-blk architecture
+   Virtio-blk Architecture

 The feature bits supported by the BE device are shown as follows:

@@ -63,7 +63,7 @@ asynchronously.
 Usage:
 ******

-The device model configuration command syntax for virtio-blk is::
+The Device Model configuration command syntax for virtio-blk is::

    -s <slot>,virtio-blk,<filepath>[,options]

@@ -7,7 +7,7 @@ The Virtio-console is a simple device for data input and output. The
 console's virtio device ID is ``3`` and can have from 1 to 16 ports.
 Each port has a pair of input and output virtqueues used to communicate
 information between the Front End (FE) and Back end (BE) drivers.
-Currently the size of each virtqueue is 64 (configurable in the source
+The size of each virtqueue is 64 (configurable in the source
 code). The FE driver will place empty buffers for incoming data onto
 the receiving virtqueue, and enqueue outgoing characters onto the
 transmitting virtqueue.
|
||||
:width: 700px
|
||||
:name: virtio-console-arch
|
||||
|
||||
Virtio-console architecture diagram
|
||||
Virtio-console Architecture Diagram
|
||||
|
||||
|
||||
Virtio-console is implemented as a virtio legacy device in the ACRN
|
||||
device model (DM), and is registered as a PCI virtio device to the guest
|
||||
Device Model (DM), and is registered as a PCI virtio device to the guest
|
||||
OS. No changes are required in the frontend Linux virtio-console except
|
||||
that the guest (User VM) kernel should be built with
|
||||
``CONFIG_VIRTIO_CONSOLE=y``.
|
||||
@ -52,7 +52,7 @@ mevent to poll the available data from the backend file descriptor. When
|
||||
new data is available, the BE driver reads it to the receiving virtqueue
|
||||
of the FE, followed by an interrupt injection.
|
||||
|
||||
The feature bits currently supported by the BE device are:
|
||||
The feature bits supported by the BE device are:
|
||||
|
||||
.. list-table:: Feature bits supported by BE drivers
|
||||
:widths: 30 50
|
||||
@ -66,10 +66,10 @@ The feature bits currently supported by the BE device are:
|
||||
- device supports emergency write.
|
||||
|
||||
Virtio-console supports redirecting guest output to various backend
|
||||
devices. Currently the following backend devices are supported in ACRN
|
||||
device model: STDIO, TTY, PTY and regular file.
|
||||
devices. The following backend devices are supported in the ACRN
|
||||
Device Model: STDIO, TTY, PTY and regular file.
|
||||
|
||||
The device model configuration command syntax for virtio-console is::
|
||||
The Device Model configuration command syntax for virtio-console is::
|
||||
|
||||
virtio-console,[@]stdio|tty|pty|file:portname[=portpath]\
|
||||
[,[@]stdio|tty|pty|file:portname[=portpath][:socket_type]]
|
||||
@@ -109,7 +109,7 @@ The following sections elaborate on each backend.
 STDIO
 =====

-1. Add a PCI slot to the device model (``acrn-dm``) command line::
+1. Add a PCI slot to the Device Model (``acrn-dm``) command line::

       -s n,virtio-console,@stdio:stdio_port

@@ -120,7 +120,7 @@ STDIO
 PTY
 ===

-1. Add a PCI slot to the device model (``acrn-dm``) command line::
+1. Add a PCI slot to the Device Model (``acrn-dm``) command line::

       -s n,virtio-console,@pty:pty_port

@@ -185,7 +185,7 @@ TTY

    and detach the TTY by pressing :kbd:`CTRL-A` :kbd:`d`.

-#. Add a PCI slot to the device model (``acrn-dm``) command line
+#. Add a PCI slot to the Device Model (``acrn-dm``) command line
    (changing the ``dev/pts/X`` to match your use case)::

       -s n,virtio-console,@tty:tty_port=/dev/pts/X
|
||||
|
||||
The File backend only supports console output to a file (no input).
|
||||
|
||||
1. Add a PCI slot to the device model (``acrn-dm``) command line,
|
||||
1. Add a PCI slot to the Device Model (``acrn-dm``) command line,
|
||||
adjusting the ``</path/to/file>`` to your use case::
|
||||
|
||||
-s n,virtio-console,@file:file_port=</path/to/file>
|
||||
@ -219,30 +219,31 @@ The File backend only supports console output to a file (no input).
|
||||
SOCKET
|
||||
======
|
||||
|
||||
The virtio-console socket-type can be set as socket server or client. Device model will
|
||||
create a Unix domain socket if appointed the socket_type as server, then server VM or
|
||||
another user VM can bind and listen for communication requirement. If appointed to
|
||||
client, make sure the socket server is ready prior to launch device model.
|
||||
The virtio-console socket-type can be set as socket server or client. The Device
|
||||
Model creates a Unix domain socket if appointed the socket_type as server. Then
|
||||
the Service VM or another User VM can bind and listen for communication
|
||||
requirements. If appointed to client, make sure the socket server is ready
|
||||
before launching the Device Model.
|
||||
|
||||
1. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting
|
||||
1. Add a PCI slot to the Device Model (``acrn-dm``) command line, adjusting
|
||||
the ``</path/to/file.sock>`` to your use case in the VM1 configuration::
|
||||
|
||||
-s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:server
|
||||
|
||||
#. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting
|
||||
#. Add a PCI slot to the Device Model (``acrn-dm``) command line, adjusting
|
||||
the ``</path/to/file.sock>`` to your use case in the VM2 configuration::
|
||||
|
||||
-s n,virtio-console,socket:socket_file_name=</path/to/file.sock>:client
|
||||
|
||||
#. Login to VM1, connect to the virtual port(vport1p0, 1 is decided
|
||||
by front-end driver):
|
||||
#. Log in to VM1, connect to the virtual port (vport1p0, 1 is decided
|
||||
by the front-end driver):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
# minicom -D /dev/vport1p0
|
||||
|
||||
#. Login to VM2, connect to the virtual port(vport3p0, 3 is decided
|
||||
by front-end driver):
|
||||
#. Log in to VM2, connect to the virtual port (vport3p0, 3 is decided
|
||||
by the front-end driver):
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
|
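The server side of the socket backend described in this hunk is a plain Unix
domain listener; a hedged C sketch of creating one (illustrative, not the
acrn-dm code):

.. code-block:: c

   #include <string.h>
   #include <sys/socket.h>
   #include <sys/un.h>
   #include <unistd.h>

   /* Create a listening Unix domain socket at path; caller accept()s later. */
   static int socket_server_sketch(const char *path)
   {
       struct sockaddr_un addr;
       int fd = socket(AF_UNIX, SOCK_STREAM, 0);
       if (fd < 0)
           return -1;
       memset(&addr, 0, sizeof(addr));
       addr.sun_family = AF_UNIX;
       strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
       unlink(path);  /* drop a stale socket file from a previous run */
       if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
           listen(fd, 1) < 0) {
           close(fd);
           return -1;
       }
       return fd;
   }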
@@ -6,7 +6,7 @@ Virtio-GPIO
 Virtio-gpio provides a virtual general-purpose input/output (GPIO) controller
 that can map native GPIOs to a User VM. The User VM can perform GPIO operations
 through it, including set value, get value, set direction, get direction, and
-set configuration. Only Open Source and Open Drain types are currently
+set configuration. Only Open Source and Open Drain types are
 supported. GPIOs are often used as IRQs, typically for wakeup events.
 Virtio-gpio supports level and edge interrupt trigger modes.

@@ -40,7 +40,7 @@ GPIO Mapping
    :align: center
    :name: virtio-gpio-2

-   GPIO mapping
+   GPIO Mapping

 - Each User VM has only one GPIO chip instance. The number of GPIOs is
   based on the acrn-dm command line. The GPIO base always starts from 0.
@@ -17,8 +17,8 @@ the client device driver in the guest OS does not need to change.

    Virtio-i2c Architecture

-Virtio-i2c is implemented as a virtio legacy device in the ACRN device
-model (DM) and is registered as a PCI virtio device to the guest OS. The
+Virtio-i2c is implemented as a virtio legacy device in the ACRN Device
+Model (DM) and is registered as a PCI virtio device to the guest OS. The
 Device ID of virtio-i2c is ``0x860A`` and the Sub Device ID is
 ``0xFFF6``.

@@ -63,8 +63,8 @@ notifies the frontend. The msg process flow is shown in

 ``node``:
    The ACPI node name supported in the current code. You can find the
-   supported name in the ``acpi_node_table[]`` from the source code. Currently,
-   only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
+   supported name in the ``acpi_node_table[]`` from the source code.
+   Only ``cam1``, ``cam2``, and ``hdac`` are supported for MRB. These nodes are
    platform-specific.


@@ -4,44 +4,44 @@ Virtio-Input
 ############

 The virtio input device can be used to create virtual human interface
-devices such as keyboards, mice, and tablets. It basically sends Linux
+devices such as keyboards, mice, and tablets. It sends Linux
 input layer events over virtio.

-The ACRN Virtio-input architecture is shown below.
+The ACRN virtio-input architecture is shown below.

 .. figure:: images/virtio-hld-image53.png
    :align: center

    Virtio-input Architecture on ACRN

-Virtio-input is implemented as a virtio modern device in ACRN device
-model. It is registered as a PCI virtio device to guest OS. No changes
-are required in frontend Linux virtio-input except that guest kernel
+Virtio-input is implemented as a virtio modern device in the ACRN Device
+Model. It is registered as a PCI virtio device to the guest OS. No changes
+are required in frontend Linux virtio-input except that the guest kernel
 must be built with ``CONFIG_VIRTIO_INPUT=y``.

 Two virtqueues are used to transfer input_event between FE and BE. One
 is for the input_events from BE to FE, as generated by input hardware
-devices in Service VM. The other is for status changes from FE to BE, as
-finally sent to input hardware device in Service VM.
+devices in the Service VM. The other is for status changes from FE to BE, as
+finally sent to input hardware devices in the Service VM.

-At the probe stage of FE virtio-input driver, a buffer (used to
+At the probe stage of the FE virtio-input driver, a buffer (used to
 accommodate 64 input events) is allocated together with the driver data.
 Sixty-four descriptors are added to the event virtqueue. One descriptor
 points to one entry in the buffer. Then a kick on the event virtqueue is
 performed.

-Virtio-input BE driver in device model uses mevent to poll the
-availability of the input events from an input device thru evdev char
-device. When an input event is available, BE driver reads it out from the
+The virtio-input BE driver in the Device Model uses mevent to poll the
+availability of the input events from an input device through the evdev char
+device. When an input event is available, the BE driver reads it out from the
 char device and caches it into an internal buffer until an EV_SYN input
-event with SYN_REPORT is received. BE driver then copies all the cached
+event with SYN_REPORT is received. The BE driver then copies all the cached
 input events to the event virtqueue, one by one. These events are added by
-the FE driver following a notification to FE driver, implemented
-as an interrupt injection to User VM.
+the FE driver following a notification to the FE driver, implemented
+as an interrupt injection to the User VM.

-For input events regarding status change, FE driver allocates a
+For input events regarding status change, the FE driver allocates a
 buffer for an input event and adds it to the status virtqueue followed
-by a kick. BE driver reads the input event from the status virtqueue and
+by a kick. The BE driver reads the input event from the status virtqueue and
 writes it to the evdev char device.

 The data transferred between FE and BE is organized as struct
@@ -58,7 +58,7 @@ input_event:

 A structure virtio_input_config is defined and used as the
 device-specific configuration registers. To query a specific piece of
-configuration information FE driver sets "select" and "subsel"
+configuration information, the FE driver sets "select" and "subsel"
 accordingly. Information size is returned in "size" and information data
 is returned in union "u":

@@ -77,15 +77,15 @@ is returned in union "u":
        } u;
    };

-Read/Write to these registers results in a vmexit and cfgread/cfgwrite
-callbacks in struct virtio_ops are called finally in device model.
-Virtio-input BE in device model issues ioctl to evdev char device
-according to the "select" and "subselect" registers to get the
-corresponding device capabilities information from kernel and return
-these information to guest OS.
+Read/Write to these registers results in a vmexit, and cfgread/cfgwrite
+callbacks in struct virtio_ops are called finally in the Device Model. The
+virtio-input BE in the Device Model issues ioctl to the evdev char device
+according to the "select" and "subselect" registers to get the corresponding
+device capabilities information from the kernel. The virtio-input BE returns the
+information to the guest OS.

-All the device-specific configurations are obtained by FE driver at
-probe stage. Based on these information virtio-input FE driver registers
+The FE driver obtains all the device-specific configurations at the
+probe stage. Based on this information, the virtio-input FE driver registers
 an input device to the input subsystem.

 The general command syntax is::
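For reference while reading the select/subsel description above, the shape of
the configuration structure per the virtio specification (field names follow
that spec; this is a reading aid, not a copy of the ACRN source):

.. code-block:: c

   #include <stdint.h>

   struct virtio_input_config_sketch {
       uint8_t select;        /* which piece of information to query */
       uint8_t subsel;        /* sub-selector, e.g. an event type */
       uint8_t size;          /* size of the payload returned in u */
       uint8_t reserved[5];
       union {
           char    string[128];  /* e.g. device name */
           uint8_t bitmap[128];  /* e.g. supported event bits */
       } u;
   };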
@@ -93,7 +93,7 @@ The general command syntax is::
    -s n,virtio-input,/dev/input/eventX[,serial]

 - /dev/input/eventX is used to specify the evdev char device node in
-  Service VM.
+  the Service VM.

-- "serial" is an optional string. When it is specified it will be used
-  as the Uniq of guest virtio input device.
+- "serial" is an optional string. When it is specified, it will be used
+  as the Uniq of the guest virtio input device.
@@ -4,7 +4,7 @@ Virtio-Net
 ##########

 Virtio-net is the para-virtualization solution used in ACRN for
-networking. The ACRN device model emulates virtual NICs for User VM and the
+networking. The ACRN Device Model emulates virtual NICs for User VM and the
 frontend virtio network driver, simulating the virtual NIC and following
 the virtio specification. (Refer to :ref:`introduction` and
 :ref:`virtio-hld` background introductions to ACRN and Virtio.)
|
||||
Let's explore these components further.
|
||||
|
||||
Service VM/User VM Network Stack:
|
||||
This is the standard Linux TCP/IP stack, currently the most
|
||||
This is the standard Linux TCP/IP stack and the most
|
||||
feature-rich TCP/IP implementation.
|
||||
|
||||
virtio-net Frontend Driver:
|
||||
@ -61,7 +61,7 @@ ACRN Hypervisor:
|
||||
|
||||
HSM Module:
|
||||
The Hypervisor Service Module (HSM) is a kernel module in the
|
||||
Service VM acting as a middle layer to support the device model
|
||||
Service VM acting as a middle layer to support the Device Model
|
||||
and hypervisor. The HSM forwards a IOREQ to the virtio-net backend
|
||||
driver for processing.
|
||||
|
||||
@ -81,7 +81,7 @@ IGB Driver:
|
||||
NIC.
|
||||
|
||||
The virtual network card (NIC) is implemented as a virtio legacy device
|
||||
in the ACRN device model (DM). It is registered as a PCI virtio device
|
||||
in the ACRN Device Model (DM). It is registered as a PCI virtio device
|
||||
to the guest OS (User VM) and uses the standard virtio-net in the Linux kernel as
|
||||
its driver (the guest kernel should be built with
|
||||
``CONFIG_VIRTIO_NET=y``).
|
||||
@ -479,7 +479,7 @@ Run ``brctl show`` to see the bridge ``acrn-br0`` and attached devices:
|
||||
acrn-br0 8000.b25041fef7a3 no tap0
|
||||
enp3s0
|
||||
|
||||
Add a PCI slot to the device model acrn-dm command line (mac address is
|
||||
Add a PCI slot to the Device Model acrn-dm command line (mac address is
|
||||
optional):
|
||||
|
||||
.. code-block:: none
|
||||
@@ -529,7 +529,7 @@ where ``eth0`` is the name of the physical network interface, and
 sure the MacVTap interface name includes the keyword ``tap``.)

 Once the MacVTap interface is created, the User VM can be launched by adding
-a PCI slot to the device model acrn-dm as shown below.
+a PCI slot to the Device Model acrn-dm as shown below.

 .. code-block:: none

@@ -539,11 +539,11 @@ Performance Estimation
 ======================

 We've introduced the network virtualization solution in ACRN, from the
-top level architecture to the detailed TX and RX flow. Currently, the
-control plane and data plane are all processed in ACRN device model,
+top level architecture to the detailed TX and RX flow. The
+control plane and data plane are all processed in ACRN Device Model,
 which may bring some overhead. But this is not a bottleneck for 1000Mbit
 NICs or below. Network bandwidth for virtualization can be very close to
-the native bandwidth. For high speed NIC (e.g. 10Gb or above), it is
+the native bandwidth. For a high-speed NIC (for example, 10Gb or above), it is
 necessary to separate the data plane from the control plane. We can use
 vhost for acceleration. For most IoT scenarios, processing in user space
 is simple and reasonable.
@@ -42,7 +42,7 @@ Check to see if the frontend virtio_rng driver is available in the User VM:
    # cat /sys/class/misc/hw_random/rng_available
    virtio_rng.0

-Check to see if the frontend virtio_rng is currently connected to ``/dev/random``:
+Check to see if the frontend virtio_rng is connected to ``/dev/random``:

 .. code-block:: console

@@ -7,7 +7,7 @@ Architecture
 ************

 A vUART is a virtual 16550 UART implemented in the hypervisor. It can work as a
-console or a communication port. Currently, the vUART is mapped to the
+console or a communication port. The vUART is mapped to the
 traditional COM port address. A UART driver in the kernel can auto detect the
 port base and IRQ.

@@ -15,10 +15,10 @@ port base and IRQ.
    :align: center
    :name: uart-arch-pic

-   UART virtualization architecture
+   UART Virtualization Architecture

 Each vUART has two FIFOs: 8192 bytes TX FIFO and 256 bytes RX FIFO.
-Currently, we only provide 4 ports for use.
+We only provide 4 ports for use.

 - COM1 (port base: 0x3F8, irq: 4)

|
||||
*************
|
||||
|
||||
A vUART can be used as a console port, and it can be activated by
|
||||
a ``vm_console <vm_id>`` command in the hypervisor console.
|
||||
a ``vm_console <vm_id>`` command in the hypervisor console.
|
||||
:numref:`console-uart-arch` shows only one physical UART, but four console
|
||||
vUARTs (green color blocks). A hypervisor console is implemented above the
|
||||
physical UART, and it works in polling mode. The hypervisor console has a
|
||||
@ -47,7 +47,7 @@ FIFOs is overwritten if it is not taken out in time.
|
||||
:align: center
|
||||
:name: console-uart-arch
|
||||
|
||||
console vUART architecture
|
||||
Console vUART Architecture
|
||||
|
||||
Communication vUART
|
||||
*******************
|
||||
@ -88,7 +88,7 @@ Operations in VM1
|
||||
:align: center
|
||||
:name: communication-uart-arch
|
||||
|
||||
communication vUART architecture
|
||||
Communication vUART Architecture
|
||||
|
||||
Usage
|
||||
*****
|
||||
|
@@ -4,7 +4,7 @@ Watchdog Virtualization in Device Model
 #######################################

 This document describes the watchdog virtualization implementation in
-ACRN device model.
+ACRN Device Model.

 Overview
 ********
|
||||
:width: 900px
|
||||
:name: watchdog-device
|
||||
|
||||
Watchdog device flow
|
||||
Watchdog Device Flow
|
||||
|
||||
The DM in the Service VM treats the watchdog as a passive device.
|
||||
It receives read/write commands from the watchdog driver, does the
|
||||
@ -56,7 +56,7 @@ from a User VM to the Service VM and return back:
|
||||
:width: 900px
|
||||
:name: watchdog-workflow
|
||||
|
||||
Watchdog operation workflow
|
||||
Watchdog Operation Workflow
|
||||
|
||||
Implementation in ACRN and How to Use It
|
||||
****************************************
|
||||
|
@ -113,10 +113,8 @@ the Android guest.
Affected Processors
===================

L1TF affects a range of Intel processors, but Intel Atom |reg| processors
(including Apollo Lake) are immune to it. Currently, ACRN hypervisor
supports only Apollo Lake. Support for other core-based platforms is
planned, so we still need a mitigation plan in ACRN.
L1TF affects a range of Intel processors, but Intel Atom |reg| processors
are immune to it.

Processors that have the RDCL_NO bit set to one (1) in the
IA32_ARCH_CAPABILITIES MSR are not susceptible to the L1TF

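
As a hedged aside (not in the original text): bit 0 of MSR ``0x10A``
(IA32_ARCH_CAPABILITIES) is RDCL_NO, and on a running Linux system it can be
read with the ``msr-tools`` package; the output below is illustrative::

   # modprobe msr
   # rdmsr -f 0:0 0x10a
   1

A result of ``1`` means the processor reports RDCL_NO and is not susceptible.
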
@ -165,7 +163,7 @@ EPT Sanitization
EPT is sanitized to avoid pointing to valid host memory in PTEs that have
the present bit cleared or reserved bits set.

For non-present PTEs, ACRN currently sets PFN bits to ZERO, which means
For non-present PTEs, ACRN sets PFN bits to ZERO, which means
that page ZERO might be at risk if it contains security information.
ACRN reserves page ZERO (0~4K) from the page allocator; thus page ZERO won't
be used by anybody for a valid purpose. This sanitization logic is always

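
To make this concrete, here is an editorial sketch (not ACRN's actual code) of
what sanitizing a non-present entry amounts to::

   #include <stdint.h>

   /* Illustrative mask for the PFN field of a 4-level EPT entry. */
   #define EPT_PFN_MASK 0x000FFFFFFFFFF000UL

   /* Clear the PFN bits of a non-present entry so a speculative L1TF load
    * can only ever observe page ZERO, which is kept out of the allocator. */
   static inline uint64_t sanitize_nonpresent_pte(uint64_t pte)
   {
       return pte & ~EPT_PFN_MASK;
   }
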
@ -76,7 +76,7 @@ Glossary of Terms
Interrupt Service Routine: Also known as an interrupt handler, an ISR
is a callback function whose execution is triggered by a hardware
interrupt (or software interrupt instructions) and is used to handle
high-priority conditions that require interrupting the code currently
high-priority conditions that require interrupting the code that is
executing on the processor.

Passthrough Device

@ -21,12 +21,12 @@ define post-launched User VM settings. This document describes these option sett
Specify the User VM memory size in megabytes.

``vbootloader``:
Virtual bootloader type; currently only supports OVMF.
Virtual bootloader type; only supports OVMF.

``vuart0``:
Specify whether the device model emulates the vUART0 (vCOM1); refer to
Specify whether the Device Model emulates the vUART0 (vCOM1); refer to
:ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is
emulated by the device model; if set to ``Disable``, the vUART0 is
emulated by the Device Model; if set to ``Disable``, the vUART0 is
emulated by the hypervisor if it is configured in the scenario XML.

``enable_ptm``:

@ -57,7 +57,7 @@ define post-launched User VM settings. This document describes these option sett
:ref:`vuart_config` for details.

``passthrough_devices``:
Select the passthrough device from the PCI device list. Currently we support:
Select the passthrough device from the PCI device list. We support:
``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``,
``cse``, ``wifi``, ``bluetooth``, ``sd_card``,
``ethernet``, ``sata``, and ``nvme``.

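
For orientation, a launch XML fragment tying these options together might look
as follows; this is an editorial sketch, and the element names and nesting are
assumptions, not the actual schema::

   <user_vm id="1">
       <mem_size>2048</mem_size>          <!-- megabytes -->
       <vbootloader>ovmf</vbootloader>    <!-- OVMF is the only supported type -->
       <vuart0>Enable</vuart0>            <!-- vCOM1 emulated by the Device Model -->
       <passthrough_devices>
           <usb_xdci/>
           <ethernet/>
       </passthrough_devices>
   </user_vm>
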
@ -10,7 +10,7 @@ embedded development through an open source platform. Check out the

The project ACRN reference code can be found on GitHub in
https://github.com/projectacrn. It includes the ACRN hypervisor, the
ACRN device model, and documentation.
ACRN Device Model, and documentation.

.. rst-class:: rst-columns

@ -7,7 +7,7 @@ Introduction
************

The goal of CPU Sharing is to fully utilize the physical CPU resource to
support more virtual machines. Currently, ACRN only supports 1 to 1
support more virtual machines. ACRN only supports 1 to 1
mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs).
Because of the lack of CPU sharing ability, the number of VMs is
limited. To support CPU Sharing, we have introduced a scheduling

@ -40,7 +40,7 @@ Scheduling initialization is invoked in the hardware management layer.
CPU Affinity
*************

Currently, we do not support vCPU migration; the assignment of vCPU mapping to
We do not support vCPU migration; the assignment of vCPU mapping to
pCPU is fixed at the time the VM is launched. The statically configured
cpu_affinity in the VM configuration defines a superset of pCPUs that
the VM is allowed to run on. One bit in this bitmap indicates that one pCPU

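
As an editorial illustration of the bitmap: allowing a VM to run on pCPU1 and
pCPU3 corresponds to a cpu_affinity bitmap of ``0b1010``. With the ``acrn-dm``
launch option this is typically written as a pCPU list (treat the exact option
spelling as an assumption)::

   acrn-dm ... --cpu_affinity 1,3 ...
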
@ -34,7 +34,7 @@ Ubuntu as the ACRN Service VM.
Supported Hardware Platform
***************************

Currently, ACRN has enabled GVT-d on the following platforms:
ACRN has enabled GVT-d on the following platforms:

* Kaby Lake
* Whiskey Lake

@ -20,7 +20,7 @@ and :ref:`vuart_config`).
:align: center
:name: Inter-VM vUART communication

Inter-VM vUART communication
Inter-VM vUART Communication

- Pros:

  - POSIX APIs; development-friendly (easily used programmatically

@ -37,7 +37,7 @@ Inter-VM network communication

Inter-VM network communication is based on the network stack. ACRN supports
both pass-through NICs to VMs and Virtio-Net solutions. (Refer to :ref:`virtio-net`
for background introductions of ACRN Virtio-Net Architecture and Design).

:numref:`Inter-VM network communication` shows the Inter-VM network communication overview:

@ -45,7 +45,7 @@ background introductions of ACRN Virtio-Net Architecture and Design).
:align: center
:name: Inter-VM network communication

Inter-VM network communication
Inter-VM Network Communication

- Pros:

  - Socket-based APIs; development-friendly (easily used programmatically

@ -61,7 +61,7 @@ Inter-VM shared memory communication (ivshmem)
**********************************************

Inter-VM shared memory communication is based on a shared memory mechanism
to transfer data between VMs. The ACRN device model or hypervisor emulates
to transfer data between VMs. The ACRN Device Model or hypervisor emulates
a virtual PCI device (called an ``ivshmem device``) to expose this shared memory's
base address and size. (Refer to :ref:`ivshmem-hld` and :ref:`enable_ivshmem` for the
background introductions).

@ -72,7 +72,7 @@ background introductions).
:align: center
:name: Inter-VM shared memory communication

Inter-VM shared memory communication
Inter-VM Shared Memory Communication

- Pros:

  - Shared memory is exposed to VMs via PCI MMIO Bar and is mapped and accessed directly.

@ -224,7 +224,7 @@ a data transfer notification mechanism between the VMs.
/* set eventfds of msix to kernel driver by ioctl */
p_ivsh_dev_ctx->irq_data[i].vector = i;
p_ivsh_dev_ctx->irq_data[i].fd = evt_fd;
ioctl(p_ivsh_dev_ctx->uio_dev_fd, UIO_IRQ_DATA, &p_ivsh_dev_ctx->irq_data[i]);

/* create epoll */
p_ivsh_dev_ctx->epfds_irq[i] = epoll_create1(0);

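
To show how a receiver typically consumes such an eventfd, here is a
self-contained editorial sketch (the function name and surrounding context are
simplified assumptions, not the original sample code)::

   #include <stdint.h>
   #include <stdio.h>
   #include <sys/epoll.h>
   #include <unistd.h>

   /* Wait for one doorbell interrupt delivered through an eventfd that was
    * registered with epoll_ctl() with ev.data.fd set, as in the sample above. */
   static int wait_for_doorbell(int epfd)
   {
       struct epoll_event ev;
       int n = epoll_wait(epfd, &ev, 1, -1);   /* block until the peer rings */
       if (n == 1) {
           uint64_t count;
           /* Reading the eventfd clears the pending notification. */
           if (read(ev.data.fd, &count, sizeof(count)) == sizeof(count)) {
               printf("doorbell: %llu event(s)\n", (unsigned long long)count);
               return 0;
           }
       }
       return -1;
   }
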
@ -323,7 +323,7 @@ after ivshmem device is initialized.
:align: center
:name: Inter-VM ivshmem data transfer state machine

Inter-VM ivshmem data transfer state machine
Inter-VM Ivshmem Data Transfer State Machine

:numref:`Inter-VM ivshmem handshake communication` shows the handshake communication between two machines:

@ -331,7 +331,7 @@ after ivshmem device is initialized.
:align: center
:name: Inter-VM ivshmem handshake communication

Inter-VM ivshmem handshake communication
Inter-VM Ivshmem Handshake Communication


Reference Sender and Receiver Sample Code Based on Doorbell Mode

@ -33,7 +33,7 @@ RTVM With HV Emulated Device
****************************

ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
logging output or command send/receive. Currently, the vUART only works in polling mode, but
logging output or command send/receive. The vUART only works in polling mode, but
may be extended to support interrupt mode in a future release. In the meantime, for better RT
behavior, the RT application using the vUART shall reserve a margin of CPU cycles to accommodate
the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles

@ -261,7 +261,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://

Now is a great time to take a snapshot of the container using ``lxc
snapshot``. If the OpenStack installation fails, manually rolling back
to the previous state can be difficult. Currently, no step exists to
to the previous state can be difficult. No step exists to
reliably restart OpenStack after restarting the container.

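
For instance (the container name ``openstack`` and the snapshot name are
illustrative)::

   $ lxc snapshot openstack pre-devstack
   $ lxc info openstack    # the new snapshot is listed here
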
5. Install OpenStack::

@ -40,7 +40,7 @@ No Enclave in a Hypervisor
--------------------------

ACRN does not support running an enclave in a hypervisor since the whole
hypervisor is currently running in VMX root mode, ring 0, and an enclave must
hypervisor is running in VMX root mode, ring 0, and an enclave must
run in ring 3. ACRN SGX virtualization provides the capability to
non-Service VMs.

@ -124,7 +124,7 @@ CPUID Leaf 07H
* CPUID_07H.EBX[2] SGX: Supports Intel Software Guard Extensions if 1. If SGX
is supported in Guest, this bit will be set.

* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1. Currently,
* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1.
ACRN does not support the SGX Launch Configuration. This bit will not be
set. Thus, the Launch Enclave must be signed by the Intel SGX Launch Enclave
Key.

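
A guest can verify both bits itself; a minimal editorial sketch using GCC's
``<cpuid.h>`` helper::

   #include <cpuid.h>
   #include <stdio.h>

   int main(void)
   {
       unsigned int eax, ebx, ecx, edx;

       /* CPUID leaf 07H, sub-leaf 0 */
       if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
           return 1;

       printf("SGX    (EBX[2]):  %u\n", (ebx >> 2) & 1);
       printf("SGX_LC (ECX[30]): %u\n", (ecx >> 30) & 1);  /* 0 under ACRN */
       return 0;
   }
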
@ -172,7 +172,7 @@ The hypervisor will opt in to SGX for VM if SGX is enabled for VM.
IA32_SGXLEPUBKEYHASH[0-3]
-------------------------

This is read-only since SGX LC is currently not supported.
This is read-only since SGX LC is not supported.

SGXOWNEREPOCH[0-1]
------------------

@ -245,7 +245,8 @@ PAUSE Exiting

Future Development
******************
Following are some currently unplanned areas of interest for future

Following are some unplanned areas of interest for future
ACRN development around SGX virtualization.

Launch Configuration Support

@ -135,7 +135,7 @@ SR-IOV Passthrough VF Architecture in ACRN
:align: center
:name: SR-IOV-vf-passthrough

SR-IOV VF Passthrough Architecture In ACRN
SR-IOV VF Passthrough Architecture in ACRN

1. The SR-IOV VF device needs to bind the PCI-stub driver instead of the
vendor-specific VF driver before the device passthrough.

@ -213,7 +213,7 @@ SR-IOV VF Assignment Policy

1. All SR-IOV PF devices are managed by the Service VM.

2. Currently, the SR-IOV PF cannot passthrough to the User VM.
2. The SR-IOV PF cannot passthrough to the User VM.

3. All VFs can passthrough to the User VM, but we do not recommend
a passthrough to high privilege VMs because the PF device may impact

@ -236,7 +236,7 @@ only support LaaG (Linux as a Guest).
:align: center
:name: 82576-pf

82576 SR-IOV PF devices
82576 SR-IOV PF Devices

#. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs``
command in the Service VM to enable n VF devices for the first PF

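
For example, to create two VFs for that PF and confirm the count (interface
name as in the step above)::

   # echo 2 > /sys/class/net/enp109s0f0/device/sriov_numvfs
   # cat /sys/class/net/enp109s0f0/device/sriov_numvfs
   2
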
@ -249,7 +249,7 @@ only support LaaG (Linux as a Guest).
:align: center
:name: 82576-vf

82576 SR-IOV VF devices
82576 SR-IOV VF Devices

.. figure:: images/sriov-image11.png
:align: center

@ -140,7 +140,7 @@ details in this `Android keymaster functions document
:width: 600px
:name: keymaster-app

Keystore service and Keymaster HAL
Keystore Service and Keymaster HAL

As shown in :numref:`keymaster-app` above, the Keymaster HAL is a
dynamically-loadable library used by the Keystore service to provide

@ -318,7 +318,7 @@ provided by secure world (TEE/Trusty). In the current ACRN
implementation, secure storage is built in the RPMB partition in eMMC
(or UFS storage).

Currently the eMMC in the APL SoC platform only has a single RPMB
The eMMC in the APL SoC platform only has a single RPMB
partition for tamper-resistant and anti-replay secure storage. The
secure storage (RPMB) is virtualized to support multiple guest User VMs.
Although newer generations of flash storage (e.g. UFS 3.0, and NVMe)

@ -5,14 +5,14 @@ Getting Started Guide for ACRN Hybrid Mode

ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM.
launched by a Device Model in the Service VM.

.. figure:: images/ACRN-Hybrid.png
:align: center
:width: 600px
:name: hybrid_scenario_on_nuc

The Hybrid scenario on the Intel NUC
The Hybrid Scenario on the Intel NUC

The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,

@ -109,7 +109,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo

}


.. note:: The module ``/boot/zephyr.elf`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``

@ -138,7 +138,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
module2 /boot/zephyr.elf Zephyr_ElfImage
module2 /boot/bzImage Linux_bzImage
module2 /boot/ACPI_VM0.bin ACPI_VM0

}

#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu

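
For instance, typical ``/etc/default/grub`` settings that keep the menu
visible (an editorial sketch; exact values depend on the setup)::

   GRUB_DEFAULT=0
   #GRUB_TIMEOUT_STYLE=hidden   # comment out so the menu is shown
   GRUB_TIMEOUT=5               # seconds to display the menu
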
@ -177,7 +177,7 @@ Hybrid Scenario Startup Check
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1's Service VM can boot and you can log in.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN Device Model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
#. Verify that VM2 can boot and you can log in.

|
@ -62,7 +62,7 @@ Download Win10 Image and Drivers
|
||||
- Check **I accept the terms in the license agreement**. Click **Continue**.
|
||||
- From the list, right check the item labeled **Oracle VirtIO Drivers
|
||||
Version for Microsoft Windows 1.1.x, yy MB**, and then **Save link as
|
||||
...**. Currently, it is named ``V982789-01.zip``.
|
||||
...**. It is named ``V982789-01.zip``.
|
||||
- Click **Download**. When the download is complete, unzip the file. You
|
||||
will see an ISO named ``winvirtio.iso``.
|
||||
|
||||
|
@ -16,7 +16,7 @@ into XML in the scenario file:
|
||||
- Edit :option:`hv.FEATURES.RDT.RDT_ENABLED` to `y` to enable RDT
|
||||
|
||||
- Edit :option:`hv.FEATURES.RDT.CDP_ENABLED` to `n` to disable CDP.
|
||||
Currently vCAT requires CDP to be disabled.
|
||||
vCAT requires CDP to be disabled.
|
||||
|
||||
- Edit :option:`hv.FEATURES.RDT.VCAT_ENABLED` to `y` to enable vCAT
|
||||
|
||||
|
@ -85,11 +85,11 @@ state (init, paused, running, zombie, or unknown).
vcpu_dumpreg
============

The ``vcpu_dumpreg <vm_id> <vcpu_id>`` command provides vCPU-related
information such as register values.

In the following example, we dump the vCPU0 RIP register value and get into
the Service VM to search for the currently running function, using these
the Service VM to search for the running function, using these
commands::

cat /proc/kallsyms | grep RIP_value

@ -185,7 +185,7 @@ IRQ vector number, etc.
pt
==

The ``pt`` command provides detailed passthrough information, such as the
virtual machine number, interrupt type, interrupt request, interrupt vector,
and trigger mode.

@ -197,7 +197,7 @@ and trigger mode.
int
===

The ``int`` command provides interrupt information on all CPUs and their
corresponding interrupt vector.

.. figure:: images/shell_image17.png

@ -9,7 +9,7 @@ Description

``acrnlog`` is a userland tool used to capture an ACRN hypervisor log. It runs
as a Service VM service at boot, capturing two kinds of logs:

- log of the currently running hypervisor
- log of the running hypervisor
- log of the last running hypervisor if it crashed and the logs remain

Log files are saved in ``/tmp/acrnlog/``, so the log files would be lost
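
As a quick check, the saved logs can be listed in the Service VM (the file
names shown are illustrative)::

   # ls /tmp/acrnlog/
   acrnlog_cur.0  acrnlog_cur.1  acrnlog_last.0
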