doc: update release_2.7 branch with updated docs from master

Tracked-On: #5692

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This commit is contained in:
David B. Kinder 2021-12-16 09:14:23 -08:00 committed by David Kinder
parent ab8274bab1
commit 3af09b0120
125 changed files with 1637 additions and 1327 deletions

View File

@ -10,6 +10,8 @@ This document briefly summarizes the full `Contribution
Guidelines <http://projectacrn.github.io/latest/developer-guides/contribute_guidelines.html>`_
documentation.
.. start_include_here
* ACRN uses the permissive open source `BSD 3-Clause license`_
that allows you to freely use, modify, distribute and sell your own products
that include such licensed software.
@ -35,5 +37,21 @@ documentation.
* The `ACRN user mailing list`_ is a great place to engage with the
community, ask questions, discuss issues, and help each other.
.. _tsc_members:
Technical Steering Committee (TSC)
**********************************
The Technical Steering Committee (TSC) is responsible for technical oversight of
the open source ACRN project. The role and rules governing the operations of
the TSC and its membership are described in the project's `technical-charter`_.
These are the current TSC voting members and chair person:
- Anthony Xu (chair): anthony.xu@intel.com
- Helmut Buchsbaum: helmut.buchsbaum@tttech-industrial.com
- Thomas Gleixner: tglx@linutronix.de
.. _ACRN user mailing list: https://lists.projectacrn.org/g/acrn-user
.. _BSD 3-Clause license: https://github.com/projectacrn/acrn-hypervisor/blob/master/LICENSE
.. _technical-charter: https://projectacrn.org/technical-charter/

View File

@ -3,6 +3,21 @@
Security Advisory
#################
Addressed in ACRN v2.7
************************
We recommend that all developers upgrade to this v2.7 release (or later), which
addresses the following security issue discovered in previous releases:
-----
- Heap-use-after-free happens in ``MEVENT mevent_handle``
The file descriptor of ``mevent`` could be closed in another thread while being
monitored by ``epoll_wait``. This causes a heap-use-after-free error in
the ``mevent_handle()`` function.
**Affected Release:** v2.6 and earlier
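The sketch below is a generic illustration of one way this class of race can be avoided in epoll-based code: deregister the descriptor from the epoll instance and serialize teardown against the dispatch path before closing it. It is not the actual ACRN patch; the structure and the ``mevent_teardown()``/``mevent_dispatch_one()`` names are hypothetical.

.. code-block:: c

   /* Illustrative only: a defensive pattern against the race described
    * above (an fd closed by one thread while another thread is still
    * dispatching epoll events for it). Not the actual ACRN fix.
    */
   #include <pthread.h>
   #include <stdbool.h>
   #include <sys/epoll.h>
   #include <unistd.h>

   struct mevent_example {
           int fd;
           bool closing;
           pthread_mutex_t lock;
   };

   /* Teardown path: used instead of a bare close(fd). */
   static void mevent_teardown(int epfd, struct mevent_example *ev)
   {
           pthread_mutex_lock(&ev->lock);
           ev->closing = true;
           /* Stop epoll from reporting this fd before it goes away. */
           epoll_ctl(epfd, EPOLL_CTL_DEL, ev->fd, NULL);
           close(ev->fd);
           ev->fd = -1;
           pthread_mutex_unlock(&ev->lock);
   }

   /* Dispatch path: called for each event reported by epoll_wait(). */
   static void mevent_dispatch_one(struct mevent_example *ev)
   {
           pthread_mutex_lock(&ev->lock);
           if (!ev->closing && ev->fd >= 0) {
                   /* ... handle the event using ev->fd ... */
           }
           pthread_mutex_unlock(&ev->lock);
   }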
Addressed in ACRN v2.6
************************

View File

@ -70,7 +70,7 @@ master_doc = 'index'
# General information about the project.
project = u'Project ACRN™'
copyright = u'2018-' + str(datetime.now().year) + u', ' + project
copyright = u'2018-' + str(datetime.now().year) + u' ' + project + ', a Series of LF Projects, LLC'
author = u'Project ACRN developers'
# The version info for the project you're documenting, acts as replacement for
@ -389,4 +389,8 @@ html_redirect_pages = [
('getting-started/rt_industry', 'getting-started/getting-started'),
('getting-started/rt_industry_ubuntu', 'getting-started/getting-started'),
('getting-started/building-from-source', 'getting-started/getting-started'),
('tutorials/using_vx_works_as_uos', 'tutorials/using_vx_works_as_user_vm'),
('tutorials/using_xenomai_as_uos', 'tutorials/using_xenomai_as_user_vm'),
('tutorials/using_zephyr_as_uos', 'tutorials/using_zephyr_as_user_vm'),
('tutorials/using_windows_as_uos', 'tutorials/using_windows_as_user_vm')
]

View File

@ -25,8 +25,6 @@ also find details about specific architecture topics.
developer-guides/sw_design_guidelines
developer-guides/trusty
developer-guides/l1tf
developer-guides/VBSK-analysis
Contribute Guides
*****************

View File

@ -35,12 +35,12 @@ User VM Tutorials
.. toctree::
:maxdepth: 1
tutorials/using_windows_as_uos
tutorials/using_windows_as_user_vm
tutorials/running_ubun_as_user_vm
tutorials/running_deb_as_user_vm
tutorials/using_xenomai_as_uos
tutorials/using_vxworks_as_uos
tutorials/using_zephyr_as_uos
tutorials/using_xenomai_as_user_vm
tutorials/using_vxworks_as_user_vm
tutorials/using_zephyr_as_user_vm
Configuration Tutorials
***********************
@ -74,6 +74,7 @@ Advanced Features
tutorials/nvmx_virtualization
tutorials/vuart_configuration
tutorials/rdt_configuration
tutorials/vcat_configuration
tutorials/waag-secure-boot
tutorials/enable_s5
tutorials/cpu_sharing

View File

@ -1,144 +0,0 @@
.. _vbsk-overhead:
VBS-K Framework Virtualization Overhead Analysis
################################################
Introduction
************
The ACRN Hypervisor follows the Virtual I/O Device (virtio) specification to
realize I/O virtualization for many performance-critical devices supported in
the ACRN project. The hypervisor provides the virtio backend service (VBS)
APIs, which make it very straightforward to implement a virtio device in the
hypervisor. We can evaluate the virtio backend service in kernel-land (VBS-K)
framework overhead through a test virtual device called virtio-echo. The
total overhead of a frontend-backend application based on VBS-K consists of
VBS-K framework overhead and application-specific overhead. The
application-specific overhead depends on the specific frontend-backend design,
from microseconds to seconds. In our hardware case, the overall VBS-K
framework overhead is on the microsecond level, sufficient to meet the needs
of most applications.
Architecture of VIRTIO-ECHO
***************************
virtio-echo is a virtual device based on virtio, and designed for testing
ACRN virtio backend services in the kernel (VBS-K) framework. It includes a
virtio-echo frontend driver, a virtio-echo driver in ACRN device model (DM)
for initialization, and a virtio-echo driver based on VBS-K for data reception
and transmission. For more virtualization background introduction, refer to:
* :ref:`introduction`
* :ref:`virtio-hld`
virtio-echo is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
(User VM). The virtio-echo software has three parts:
- **virtio-echo Frontend Driver**: This driver runs in the User VM. It
prepares the RXQ and notifies the backend for receiving incoming data when
the User VM starts. Second, it copies the received data from the RXQ to TXQ
and sends them to the backend. After receiving the message that the
transmission is completed, it starts again another round of reception
and transmission, and keeps running until a specified number of cycles
is reached.
- **virtio-echo Driver in DM**: This driver is used for initialization
configuration. It simulates a virtual PCI device for the frontend
driver use, and sets necessary information such as the device
configuration and virtqueue information to the VBS-K. After
initialization, all data exchange is taken over by the VBS-K
vbs-echo driver.
- **vbs-echo Backend Driver**: This driver sets all frontend RX buffers to
be a specific value and sends the data to the frontend driver. After
receiving the data in RXQ, the frontend driver copies the data to the
TXQ, and then sends them back to the backend. The backend driver then
notifies the frontend driver that the data in the TXQ has been successfully
received. In virtio-echo, the backend driver doesn't process or use the
received data.
:numref:`vbsk-virtio-echo-arch` shows the whole architecture of virtio-echo.
.. figure:: images/vbsk-image2.png
:width: 900px
:align: center
:name: vbsk-virtio-echo-arch
virtio-echo Architecture
Virtualization Overhead Analysis
********************************
Let's analyze the overhead of the VBS-K framework. As we know, the VBS-K
handles notifications in the Service VM kernel instead of in the Service VM
user space DM. This can avoid overhead from switching between kernel space
and user space. Virtqueues are allocated by User VM, and virtqueue
information is configured to VBS-K backend by the virtio-echo driver in DM;
thus virtqueues can be shared between User VM and Service VM. There is no
copy overhead in this sense. The overhead of VBS-K framework mainly contains
two parts: kick overhead and notify overhead.
- **Kick Overhead**: The User VM gets trapped when it executes sensitive
instructions that notify the hypervisor first. The notification is
assembled into an IOREQ, saved in a shared IO page, and then
forwarded to the HSM module by the hypervisor. The HSM notifies its
client for this IOREQ, in this case, the client is the vbs-echo
backend driver. Kick overhead is defined as the interval from the
beginning of User VM trap to a specific VBS-K driver, e.g. when
virtio-echo gets notified.
- **Notify Overhead**: After the data in the virtqueue is processed by the
backend driver, vbs-echo calls the HSM module to inject an interrupt
into the frontend. The HSM then uses the hypercall provided by the
hypervisor, which causes a User VM VMEXIT. The hypervisor finally injects
an interrupt into the vLAPIC of the User VM and resumes it. The User VM
therefore receives the interrupt notification. Notify overhead is
defined as the interval from the beginning of the interrupt injection
to when the User VM starts interrupt processing.
The overhead of a specific application based on VBS-K includes two parts:
VBS-K framework overhead and application-specific overhead.
- **VBS-K Framework Overhead**: As defined above, VBS-K framework overhead
refers to kick overhead and notify overhead.
- **Application-Specific Overhead**: A specific virtual device has its own
frontend driver and backend driver. The application-specific overhead
depends on its own design.
:numref:`vbsk-virtio-echo-e2e` shows the overhead of one end-to-end
operation in virtio-echo. Overhead of steps marked in red is caused by
the virtualization scheme based on the VBS-K framework. Costs of one "kick"
operation and one "notify" operation are both on a microsecond level.
Overhead of steps marked in blue depends on the specific frontend and backend
virtual device drivers. For virtio-echo, the whole end-to-end process
(from step 1 to step 9) costs about four dozen microseconds. That's
because virtio-echo performs small operations in its frontend and backend
driver that are just for testing, and there is very little process overhead.
.. figure:: images/vbsk-image1.png
:width: 600px
:align: center
:name: vbsk-virtio-echo-e2e
End to End Overhead of virtio-echo
:numref:`vbsk-virtio-echo-path` details the path of kick and notify
operation shown in :numref:`vbsk-virtio-echo-e2e`. The VBS-K framework
overhead is caused by operations through these paths. As we can see, all
these operations are processed in kernel mode, which avoids the extra
overhead of passing IOREQ to userspace processing.
.. figure:: images/vbsk-image3.png
:width: 900px
:align: center
:name: vbsk-virtio-echo-path
Path of VBS-K Framework Overhead
Conclusion
**********
Unlike VBS-U processing in user mode, VBS-K moves processing into the kernel
mode and can be used to accelerate processing. A virtual device virtio-echo
based on the VBS-K framework is used to evaluate the VBS-K framework overhead.
In our test, the VBS-K framework overhead (one kick operation and one
notify operation) is on the microsecond level, which can meet the needs of
most applications.

View File

@ -12,6 +12,11 @@ This document explains how to participate in project conversations, log
and track bugs and enhancement requests, and submit patches to the
project so that your patch will be accepted quickly into the codebase.
Here's a quick summary:
.. include:: ../../../../CONTRIBUTING.rst
:start-after: start_include_here
Licensing
*********

View File

@ -522,6 +522,59 @@ Keep the line length for documentation less than 80 characters to make it
easier to review in GitHub. Long lines due to URL references are an
allowed exception.
Background Colors
*****************
We've defined some CSS styles for use as background colors for paragraphs.
These styles can be applied with the ``.. rst-class`` directive using one of
these style names. You can also use the defined ``centered`` style to place the
text centered within the element, useful for centering text within a table cell
or column span:
.. rst-class:: bg-acrn-green centered
\.\. rst-class:: bg-acrn-green centered
.. rst-class:: bg-acrn-lightgreen centered
\.\. rst-class:: bg-acrn-lightgreen centered
.. rst-class:: bg-acrn-brown centered
\.\. rst-class:: bg-acrn-brown centered
.. rst-class:: bg-acrn-lightbrown centered
\.\. rst-class:: bg-acrn-lightbrown centered
.. rst-class:: bg-acrn-blue centered
\.\. rst-class:: bg-acrn-blue centered
.. rst-class:: bg-acrn-red centered
\.\. rst-class:: bg-acrn-red centered
.. rst-class:: bg-acrn-gradient centered
\.\. rst-class:: bg-acrn-gradient centered
.. rst-class:: bg-lightyellow centered
\.\. rst-class:: bg-lightyellow centered
.. rst-class:: bg-lightgreen centered
\.\. rst-class:: bg-lightgreen centered
.. rst-class:: bg-lavender centered
\.\. rst-class:: bg-lavender centered
.. rst-class:: bg-lightgrey centered
\.\. rst-class:: bg-lightgrey centered
Drawings
********

View File

@ -3,9 +3,9 @@
Device Model High-Level Design
##############################
Hypervisor Device Model (DM) is a QEMU-like application in Service VM
responsible for creating a User VM and then performing devices emulation
based on command line configurations.
The Device Model (DM) ``acrn-dm`` is a QEMU-like application in the Service VM
responsible for creating a User VM and then performing device emulation
based on command-line configurations.
.. figure:: images/dm-image75.png
:align: center
@ -13,8 +13,8 @@ based on command line configurations.
Device Model Framework
:numref:`dm-framework` above gives a big picture overview of DM
framework. There are 3 major subsystems in Service VM:
:numref:`dm-framework` above gives a big picture overview of the DM
framework. There are 3 major subsystems in the Service VM:
- **Device Emulation**: DM provides backend device emulation routines for
frontend User VM device drivers. These routines register their I/O
@ -25,36 +25,36 @@ framework. There are 3 major subsystems in Service VM:
- I/O Path in Service VM:
- HV initializes an I/O request and notifies HSM driver in Service VM
through upcall.
- Hypervisor initializes an I/O request and notifies the HSM driver in the
Service VM through upcall.
- HSM driver dispatches I/O requests to I/O clients and notifies the
clients (in this case the client is the DM, which is notified
through char device)
- DM I/O dispatcher calls corresponding I/O handlers
- I/O dispatcher notifies HSM driver the I/O request is completed
through char device
- HSM driver notifies HV on the completion through hypercall
- DM injects VIRQ to User VM frontend device through hypercall
through char device).
- DM I/O dispatcher calls corresponding I/O handlers.
- I/O dispatcher notifies the HSM driver that the I/O request is completed
through char device.
- HSM driver notifies the hypervisor on the completion through hypercall.
- DM injects VIRQ to the User VM frontend device through hypercall.
- HSM: Hypervisor Service Module is a kernel module in Service VM as a
middle layer to support DM. Refer to :ref:`virtio-APIs` for details
- HSM: Hypervisor Service Module is a kernel module in the Service VM and is a
middle layer to support the DM. Refer to :ref:`virtio-APIs` for details.
This section introduces how the acrn-dm application is configured and
This section introduces how the ``acrn-dm`` application is configured and
walks through the DM overall flow. We'll then elaborate on device,
ISA, and PCI emulation.
Configuration
*************
The acrn-dm runs using these command line configuration
The ``acrn-dm`` runs using these command-line configuration
options:
.. code-block:: none
acrn-dm [-hAWYv] [-B bootargs] [-E elf_image_path]
acrn-dm [-hAWYv] [-B bootargs] [-E elf_image_path]
[-G GVT_args] [-i ioc_mediator_parameters] [-k kernel_image_path]
[-l lpc] [-m mem] [-r ramdisk_image_path]
[-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
[-s pci] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
[--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting]
[--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
[--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
@ -72,7 +72,6 @@ options:
-m: memory size in MB
-r: ramdisk image path
-s: <slot,driver,configinfo> PCI slot config
-U: uuid
-v: version
-W: force virtio to use single-vector MSI
-Y: disable MPtable generation
@ -117,7 +116,7 @@ Here's an example showing how to run a VM with:
-s 0:0,hostbridge \
-s 1:0,lpc -l com1,stdio \
-s 5,virtio-console,@pty:pty_port \
-s 3,virtio-blk,b,/home/acrn/uos.img \
-s 3,virtio-blk,b,/home/acrn/UserVM.img \
-s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
--acpidev_pt MSFT0101 \
--intr_monitor 10000,10,1,100 \
@ -140,48 +139,49 @@ DM Initialization
- **DM Start**: DM application starts to run.
- **Option Parsing**: DM parse options from command line inputs.
- **Option Parsing**: DM parses options from command-line inputs.
- **VM Create**: DM calls ioctl to Service VM HSM, then Service VM HSM makes
hypercalls to HV to create a VM, it returns a vmid for a
- **VM Create**: DM calls ioctl to the Service VM HSM, then the Service VM HSM
makes hypercalls to the hypervisor to create a VM. It returns a vmid for a
dedicated VM.
- **Set I/O Request Buffer**: the I/O request buffer is a page buffer
allocated by DM for a specific VM in user space. This buffer is
shared between DM, HSM and HV. **Set I/O Request buffer** calls
- **Set I/O Request Buffer**: The I/O request buffer is a page buffer
allocated by the DM for a specific VM in user space. This buffer is
shared among the DM, HSM, and hypervisor. **Set I/O Request Buffer** calls
an ioctl executing a hypercall to share this unique page buffer
with HSM and HV. Refer to :ref:`hld-io-emulation` and
:ref:`IO-emulation-in-sos` for more details.
with the HSM and hypervisor. Refer to :ref:`hld-io-emulation` and
:ref:`IO-emulation-in-service-vm` for more details.
- **Memory Setup**: User VM memory is allocated from Service VM
memory. This section of memory will use Service VM hugetlbfs to allocate
contiguous host physical addresses for guest memory. It will
try to get the page size as big as possible to guarantee maximum
utilization of TLB. It then invokes a hypercall to HV for its EPT
utilization of TLB. It then invokes a hypercall to the hypervisor for its EPT
mapping, and maps the memory segments into user space.
- **PIO/MMIO Handler Init**: PIO/MMIO handlers provide callbacks for
trapped PIO/MMIO requests that are triggered from I/O request
server in HV for DM-owned device emulation. This is the endpoint
of I/O path in DM. After this initialization, device emulation
driver in DM could register its MMIO handler by *register_mem()*
API and PIO handler by *register_inout()* API or INOUT_PORT()
trapped PIO/MMIO requests that are triggered from the I/O request
server in the hypervisor for DM-owned device emulation. This is the endpoint
of the I/O path in the DM. After this initialization, the device emulation
driver in the DM can register its MMIO handler by the ``register_mem()``
API and its PIO handler by the ``register_inout()`` API or ``INOUT_PORT()``
macro.
- **PCI Init**: PCI initialization scans the PCI bus/slot/function to
identify each configured PCI device on the acrn-dm command line
identify each configured PCI device on the ``acrn-dm`` command line
and initializes their configuration space by calling their
dedicated vdev_init() function. For more details on the DM PCI
dedicated ``vdev_init()`` function. For more details on the DM PCI
emulation, refer to `PCI Emulation`_.
- **ACPI Build**: If there is "-A" option in acrn-dm command line, DM
will build ACPI table into its VM's F-Segment (0xf2400). This
- **ACPI Build**: If there is an "-A" option in the ``acrn-dm`` command line,
the DM
will build an ACPI table into its VM's F-Segment (0xf2400). This
ACPI table includes full tables for RSDP, RSDT, XSDT, MADT, FADT,
HPET, MCFG, FACS, and DSDT. All these items are programmed
according to acrn-dm command line configuration and derived from
according to the ``acrn-dm`` command-line configuration and derived from
their default value.
- **SW Load**: DM prepares User VM's SW configuration such as kernel,
- **SW Load**: DM prepares the User VM's software configuration such as kernel,
ramdisk, and zeropage, according to these memory locations:
.. code-block:: c
@ -194,8 +194,8 @@ DM Initialization
For example, if the User VM memory is set to 800M, then **SW Load**
will prepare its ramdisk (if there is one) at 0x31c00000 (796M), bootargs at
0x31ffe000 (800M - 8K), kernel entry at 0x31ffe800(800M - 6K) and zero
page at 0x31fff000 (800M - 4K). The hypervisor will finally run VM based
0x31ffe000 (800M - 8K), kernel entry at 0x31ffe800 (800M - 6K), and zero
page at 0x31fff000 (800M - 4K). The hypervisor will finally run the VM based
on these configurations.
Note that the zero page above also includes e820 setting for this VM.
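As a quick cross-check of the 800M example above, the following sketch reproduces the quoted addresses from the top of guest RAM. The 4 MB / 8 KB / 6 KB / 4 KB offsets are inferred only from the numbers in the text and are illustrative, not a statement of the DM's actual layout policy.

.. code-block:: c

   /* Reproduces the address arithmetic of the 800M example above. */
   #include <inttypes.h>
   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
           uint64_t mem_size = 800ULL << 20;            /* 800 MB of User VM RAM */

           uint64_t ramdisk  = mem_size - (4ULL << 20); /* 0x31c00000 (796 MB)   */
           uint64_t bootargs = mem_size - (8ULL << 10); /* 0x31ffe000            */
           uint64_t kernel   = mem_size - (6ULL << 10); /* 0x31ffe800            */
           uint64_t zeropage = mem_size - (4ULL << 10); /* 0x31fff000            */

           printf("ramdisk  0x%08" PRIx64 "\n", ramdisk);
           printf("bootargs 0x%08" PRIx64 "\n", bootargs);
           printf("kernel   0x%08" PRIx64 "\n", kernel);
           printf("zeropage 0x%08" PRIx64 "\n", zeropage);
           return 0;
   }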
@ -226,8 +226,8 @@ DM Initialization
* 6: HIGHRAM_START_ADDR - mmio64 start RAM ctx->highmem
*/
- **VM Loop Thread**: DM kicks this VM loop thread to create I/O
request client for DM, runs VM, and then enters I/O request
- **VM Loop Thread**: DM kicks this VM loop thread to create an I/O
request client for the DM, runs the VM, and enters the I/O request
handling loop:
.. code-block:: c
@ -279,7 +279,7 @@ DM Initialization
pr_err("VM loop exit\n");
}
- **Mevent Dispatch Loop**: It's the final loop of the main acrn-dm
- **Mevent Dispatch Loop**: It's the final loop of the main ``acrn-dm``
thread. The mevent dispatcher polls for potential asynchronous
events.
@ -291,19 +291,19 @@ HSM
HSM Overview
============
Device Model manages User VM by accessing interfaces exported from HSM
module. HSM module is a Service VM kernel driver. The ``/dev/acrn_hsm`` node is
created when HSM module is initialized. Device Model follows the standard
Linux char device API (ioctl) to access the functionality of HSM.
The Device Model manages a User VM by accessing interfaces exported from the HSM
module. The HSM module is a Service VM kernel driver. The ``/dev/acrn_hsm``
node is created when the HSM module is initialized. The Device Model follows
the standard Linux char device API (ioctl) to access HSM functionality.
In most of ioctl, HSM converts the ioctl command to a corresponding
For most ioctls, the HSM converts the ioctl command to a corresponding
hypercall to the hypervisor. There are two exceptions:
- I/O request client management is implemented in HSM.
- I/O request client management is implemented in the HSM.
- For memory range management of User VM, HSM needs to save all memory
range info of User VM. The subsequent memory mapping update of User VM
needs this information.
- For memory range management of a User VM, the HSM needs to save all memory
range information of the User VM. The subsequent memory mapping update of
the User VM needs this information.
.. figure:: images/dm-image108.png
:align: center
@ -316,14 +316,14 @@ HSM ioctl Interfaces
.. note:: Reference API documents for General interface, VM Management,
IRQ and Interrupts, Device Model management, Guest Memory management,
PCI assignment, and Power management
PCI assignment, and Power management.
.. _IO-emulation-in-sos:
.. _IO-emulation-in-service-vm:
I/O Emulation in Service VM
***************************
I/O requests from the hypervisor are dispatched by HSM in the Service VM kernel
The HSM in the Service VM kernel dispatches I/O requests from the hypervisor
to a registered client, responsible for further processing the
I/O access and notifying the hypervisor on its completion.
@ -332,10 +332,10 @@ Initialization of Shared I/O Request Buffer
For each VM, there is a shared 4-KByte memory region used for I/O request
communication between the hypervisor and Service VM. Upon initialization
of a VM, the DM (acrn-dm) in Service VM userland first allocates a 4-KByte
page and passes the GPA of the buffer to HV via hypercall. The buffer is
used as an array of 16 I/O request slots with each I/O request being
256 bytes. This array is indexed by vCPU ID. Thus, each vCPU of the VM
of a VM, the DM (``acrn-dm``) in the Service VM userland first allocates a
4-KByte page and passes the GPA of the buffer to the hypervisor via hypercall.
The buffer is used as an array of 16 I/O request slots with each I/O request
being 256 bytes. This array is indexed by vCPU ID. Thus, each vCPU of the VM
corresponds to one I/O request slot in the request buffer since a vCPU
cannot issue multiple I/O requests at the same time.
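The layout described above can be pictured with the following sketch: one 4-KByte page divided into 16 slots of 256 bytes each, indexed by vCPU ID. The structure and field names are hypothetical stand-ins, not ACRN's actual request definitions.

.. code-block:: c

   /* Hypothetical model of the shared I/O request page described above. */
   #include <assert.h>
   #include <stdint.h>

   #define IOREQ_SLOT_SIZE   256
   #define IOREQ_SLOT_COUNT  16

   struct ioreq_slot {
           uint32_t type;                     /* PIO, MMIO, PCI config, ...   */
           uint32_t state;                    /* FREE/PENDING/COMPLETE/...    */
           uint64_t addr;                     /* guest address being accessed */
           uint64_t size;
           uint64_t value;
           uint8_t  pad[IOREQ_SLOT_SIZE - 32];
   };

   struct ioreq_page {
           struct ioreq_slot slot[IOREQ_SLOT_COUNT];   /* one slot per vCPU */
   };

   static_assert(sizeof(struct ioreq_slot) == IOREQ_SLOT_SIZE,
                 "each request occupies 256 bytes");
   static_assert(sizeof(struct ioreq_page) == 4096,
                 "the request buffer is one 4-KByte page");

   /* Each vCPU uses the slot matching its ID. */
   static inline struct ioreq_slot *slot_for_vcpu(struct ioreq_page *p,
                                                  unsigned int vcpu_id)
   {
           return &p->slot[vcpu_id % IOREQ_SLOT_COUNT];
   }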
@ -344,13 +344,13 @@ cannot issue multiple I/O requests at the same time.
I/O Clients
===========
An I/O client is either a Service VM userland application or a Service VM kernel space
module responsible for handling I/O access whose address
An I/O client is either a Service VM userland application or a Service VM
kernel space module responsible for handling an I/O access whose address
falls in a certain range. Each VM has an array of registered I/O
clients that are initialized with a fixed I/O address range, plus a PCI
BDF on VM creation. There is a special client in each VM, called the
fallback client, that handles all I/O requests that do not fit into
the range of any other client. In the current design, the device model
BDF on VM creation. In each VM, a special client, called the
fallback client, handles all I/O requests that do not fit into
the range of any other client. In the current design, the Device Model
acts as the fallback client for any VM.
Each I/O client can be configured to handle the I/O requests in the
@ -363,27 +363,27 @@ specifically created for this purpose.
:align: center
:name: hsm-interaction
Interaction of in-kernel I/O clients and HSM
Interaction of In-kernel I/O Clients and HSM
- On registration, the client requests a fresh ID, registers a
handler, adds the I/O range (or PCI BDF) to be emulated by this
client, and finally attaches it to HSM that kicks off
client, and finally attaches it to the HSM. The HSM kicks off
a new kernel thread.
- The kernel thread waits for any I/O request to be handled. When a
pending I/O request is assigned to the client by HSM, the kernel
- The kernel thread waits for any I/O request to be handled. When the HSM
assigns a pending I/O request to the client, the kernel
thread wakes up and calls the registered callback function
to process the request.
- Before the client is destroyed, HSM ensures that the kernel
- Before the client is destroyed, the HSM ensures that the kernel
thread exits.
An I/O client can also handle I/O requests in its own thread context.
:numref:`dm-hsm-interaction` shows the interactions in such a case, using the
device model as an example. No callback is registered on
registration and the I/O client (device model in the example) attaches
itself to HSM every time it is ready to process additional I/O requests.
Device Model as an example. No callback is registered on
registration and the I/O client (Device Model in the example) attaches
itself to the HSM every time it is ready to process additional I/O requests.
Note also that the DM runs in userland and talks to HSM via the ioctl
interface in `HSM ioctl interfaces`_.
@ -401,16 +401,16 @@ Processing I/O Requests
.. figure:: images/dm-image96.png
:align: center
:name: io-sequence-sos
:name: io-sequence-service-vm
I/O request handling sequence in Service VM
I/O Request Handling Sequence in Service VM
:numref:`io-sequence-sos` above illustrates the interactions among the
:numref:`io-sequence-service-vm` above illustrates the interactions among the
hypervisor, HSM,
and the device model for handling I/O requests. The main interactions
and the Device Model for handling I/O requests. The main interactions
are as follows:
1. The hypervisor makes an upcall to Service VM as an interrupt
1. The hypervisor makes an upcall to the Service VM as an interrupt
handled by the upcall handler in HSM.
2. The upcall handler schedules the execution of the I/O request
@ -423,7 +423,8 @@ are as follows:
all clients that have I/O requests to be processed. The flow is
illustrated in more detail in :numref:`io-dispatcher-flow`.
4. The woken client (the DM in :numref:`io-sequence-sos` above) handles the
4. The awakened client (the DM in :numref:`io-sequence-service-vm` above)
handles the
assigned I/O requests, updates their state to COMPLETE, and notifies
the HSM of the completion via ioctl. :numref:`dm-io-flow` shows this
flow.
@ -435,13 +436,13 @@ are as follows:
:align: center
:name: io-dispatcher-flow
I/O dispatcher control flow
I/O Dispatcher Control Flow
.. figure:: images/dm-image74.png
:align: center
:name: dm-io-flow
Device model control flow on handling I/O requests
Device Model Control Flow on Handling I/O Requests
Emulation of Accesses to PCI Configuration Space
@ -462,7 +463,7 @@ The following table summarizes the emulation of accesses to I/O port
+=================+========================+===========================+
| Load from 0xcf8 | Return value previously stored to port 0xcf8 |
+-----------------+------------------------+---------------------------+
| Store to 0xcf8 | If MSB of value is 1, cache BFD and offset, |
| Store to 0xcf8 | If MSB of value is 1, cache BDF and offset; |
| | otherwise, invalidate cache. |
+-----------------+------------------------+---------------------------+
| Load from 0xcfc | Assigned to client | Return all 1's |
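The table's behavior corresponds to the classic 0xcf8/0xcfc mechanism: a store to 0xcf8 with the most significant bit set latches the BDF and register offset for a later 0xcfc data access. The sketch below illustrates that decode; the structure and function names are illustrative, not the DM's implementation.

.. code-block:: c

   /* Sketch of the 0xcf8/0xcfc decode summarized above. */
   #include <stdbool.h>
   #include <stdint.h>

   struct cf8_latch_example {
           bool     valid;    /* MSB of the last 0xcf8 store was 1       */
           uint8_t  bus;
           uint8_t  dev;
           uint8_t  func;
           uint16_t offset;   /* dword-aligned config space offset       */
   };

   static struct cf8_latch_example latch;
   static uint32_t cf8_value;             /* returned on loads from 0xcf8 */

   static void store_cf8_example(uint32_t val)
   {
           cf8_value = val;
           if (val & 0x80000000u) {        /* enable bit: cache BDF+offset */
                   latch.valid  = true;
                   latch.bus    = (val >> 16) & 0xff;
                   latch.dev    = (val >> 11) & 0x1f;
                   latch.func   = (val >> 8)  & 0x07;
                   latch.offset = val & 0xfc;
           } else {
                   latch.valid = false;    /* otherwise invalidate cache   */
           }
   }

   static uint32_t load_cfc_example(void)
   {
           if (!latch.valid)
                   return 0xffffffffu;     /* nothing latched: all 1's     */
           /* ...dispatch to the client registered for latch.bus/dev/func... */
           return 0;
   }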
@ -473,7 +474,7 @@ The following table summarizes the emulation of accesses to I/O port
I/O Client Interfaces
=====================
.. note:: replace with reference to API documentation
.. note:: Replace with reference to API documentation.
The APIs for I/O client development are as follows:
@ -502,8 +503,8 @@ Device Emulation
****************
The DM emulates different kinds of devices, such as RTC,
LPC, UART, PCI devices, virtio block device, etc. It is important
for device emulation can handle I/O requests
LPC, UART, PCI devices, and virtio block device. It is important
that device emulation can handle I/O requests
from different devices including PIO, MMIO, and PCI CFG
SPACE access. For example, a CMOS RTC device may access 0x70/0x71 PIO to
get CMOS time, a GPU PCI device may access its MMIO or PIO bar space to
@ -511,24 +512,24 @@ complete its framebuffer rendering, or the bootloader may access a PCI
device's CFG SPACE for BAR reprogramming.
The DM needs to inject interrupts/MSIs to its frontend devices whenever
necessary. For example, an RTC device needs get its ALARM interrupt, or a
necessary. For example, an RTC device needs to get its ALARM interrupt, or a
PCI device with MSI capability needs to get its MSI.
DM also provides a PIRQ routing mechanism for platform devices.
The DM also provides a PIRQ routing mechanism for platform devices.
PIO/MMIO/CFG SPACE Handler
==========================
This chapter will do a quick introduction of different I/O requests.
This chapter provides a quick introduction of different I/O requests.
PIO Handler Register
--------------------
A PIO range structure in DM is like below, it's the parameter needed to
register PIO handler for special PIO range:
A PIO range structure in the DM is shown below. It's the parameter needed to
register a PIO handler for a specific PIO range:
.. note:: this should be references to API documentation in
devicemodel/include/inout.h
.. note:: This should be a reference to the API documentation in
``devicemodel/include/inout.h``.
.. code-block:: c
@ -551,9 +552,9 @@ A PIO emulation handler is defined as:
typedef int (*inout_func_t)(struct vmctx *ctx, int vcpu, int in, int port, int bytes, uint32_t *eax, void *arg);
The DM pre-registers the PIO emulation handlers through MACRO
INOUT_PORT, or registers the PIO emulation handlers through
register_inout() function after init_inout():
The DM pre-registers the PIO emulation handlers through the macro
``INOUT_PORT``, or registers the PIO emulation handlers through the
``register_inout()`` function after ``init_inout()``:
.. code-block:: c
@ -575,7 +576,7 @@ MMIO Handler Register
---------------------
An MMIO range structure is defined below. As with PIO, it's the
parameter needed to register MMIO handler for special MMIO range:
parameter needed to register an MMIO handler for a specific MMIO range:
.. code-block:: c
@ -596,7 +597,7 @@ An MMIO emulation handler is defined as:
typedef int (*mem_func_t)(struct vmctx *ctx, int vcpu, int dir, uint64_t addr,
int size, uint64_t *val, void *arg1, long arg2);
DM needs to call register_mem() function to register its emulated
The DM needs to call the ``register_mem()`` function to register its emulated
device's MMIO handler:
.. code-block:: c
@ -625,7 +626,7 @@ has no need to update this function.
Interrupt Interface
===================
DM calls these interrupt functions to send level, edge or MSI interrupt
The DM calls these interrupt functions to send a level, edge, or MSI interrupt
to destination emulated devices:
.. code-block:: c
@ -653,7 +654,7 @@ PIRQ Routing
:numref:`pirq-routing` shows a PCI device PIRQ routing example. On a platform,
there could be more PCI devices than available IRQ pin resources on its
PIC or IOAPIC interrupt controller. ICH HW provides a PIRQ Routing
PIC or IOAPIC interrupt controller. ICH hardware provides a PIRQ Routing
mechanism to share IRQ pin resources between different PCI devices.
.. figure:: images/dm-image33.png
@ -663,7 +664,7 @@ mechanism to share IRQ pin resources between different PCI devices.
PIRQ Routing
DM calls pci_lintr_route() to emulate this PIRQ routing:
The DM calls ``pci_lintr_route()`` to emulate this PIRQ routing:
.. code-block:: c
@ -705,17 +706,19 @@ The PIRQ routing for IOAPIC and PIC is dealt with differently.
* For IOAPIC, the IRQ pin is allocated in a round-robin fashion within the
pins permitted for PCI devices. The IRQ information will be built
into ACPI DSDT table then passed to guest VM.
into the ACPI DSDT table then passed to the guest VM.
* For PIC, the ``pin2irq`` information is maintained in a ``pirqs[]`` array (the array size is 8
* For PIC, the ``pin2irq`` information is maintained in a ``pirqs[]`` array
(the array size is 8
representing 8 shared PIRQs). When a PCI device tries to allocate a
pIRQ pin, it will do a balancing calculation to figure out a best pin
vs. IRQ pair. The IRQ number will be programed into PCI INTLINE config space
and the pin number will be built into ACPI DSDT table then passed to guest VM.
vs. IRQ pair. The IRQ number will be programmed into the PCI INTLINE config space,
and the pin number will be built into the ACPI DSDT table then passed to
the guest VM.
.. note:: "IRQ" here is also called as "GSI" in ACPI terminology.
.. note:: "IRQ" here is also called "GSI" in ACPI terminology.
Regarding to INT A/B/C/D for PCI devices, DM just allocates them evenly
Regarding INT A/B/C/D for PCI devices, the DM just allocates them evenly
prior to pIRQ routing and then programs them into the PCI INTPIN config space.
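The "balancing calculation" mentioned above can be illustrated with a small sketch that picks, among the eight shared PIRQ pins, the one currently used by the fewest devices. The names and data structures are hypothetical, not the DM's actual code.

.. code-block:: c

   /* Minimal sketch of the pin-balancing idea for the 8 shared PIC PIRQs. */
   #include <stdint.h>

   #define NUM_PIRQS 8

   struct pirq_example {
           int irq;        /* IRQ number programmed into PCI INTLINE */
           int use_count;  /* how many PCI devices share this pin    */
   };

   static struct pirq_example pirqs[NUM_PIRQS];

   /* Return the pin index (0..7) to use for a new PCI device. */
   static int pirq_alloc_pin_example(void)
   {
           int best = 0;

           for (int i = 1; i < NUM_PIRQS; i++) {
                   if (pirqs[i].use_count < pirqs[best].use_count)
                           best = i;
           }
           pirqs[best].use_count++;
           return best;
   }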
ISA and PCI Emulation
@ -745,17 +748,17 @@ PCI emulation takes care of three interfaces:
The core PCI emulation structures are:
.. note:: reference ``struct businfo`` API from devicemodel/hw/pci/core.c
.. note:: Reference ``struct businfo`` API from ``devicemodel/hw/pci/core.c``.
During PCI initialization, ACRN DM will scan each PCI bus, slot and
function and identify the PCI devices configured by acrn-dm command
During PCI initialization, the DM will scan each PCI bus, slot, and
function and identify the PCI devices configured by ``acrn-dm`` command
line. The corresponding PCI device's initialization function will
be called to initialize its config space, allocate its BAR resources and
IRQ, and set up its IRQ routing.
.. note:: reference API documentation for pci_vdev, pci_vdef_ops
.. note:: Reference API documentation for ``pci_vdev, pci_vdef_ops``.
The pci_vdev_ops of the pci_vdev structure could be installed by
The ``pci_vdev_ops`` of the ``pci_vdev`` structure could be installed by
customized handlers for cfgwrite/cfgread and barwrite/barread.
The cfgwrite/cfgread handlers will be called from the configuration
@ -768,8 +771,8 @@ its interrupt injection.
PCI Host Bridge and Hierarchy
=============================
There is PCI host bridge emulation in DM. The bus hierarchy is
determined by acrn-dm command line input. Using this command line, as an
The DM provides PCI host bridge emulation. The ``acrn-dm`` command-line
input determines the bus hierarchy. Using this command line, as an
example:
.. code-block:: bash
@ -778,7 +781,7 @@ example:
-s 0:0,hostbridge \
-s 1:0,lpc -l com1,stdio \
-s 5,virtio-console,@pty:pty_port \
-s 3,virtio-blk,b,/home/acrn/uos.img \
-s 3,virtio-blk,b,/home/acrn/UserVM.img \
-s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
-B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
@ -818,8 +821,8 @@ Functions implemented by ACPI include:
- Battery management
- Thermal management
All critical functions depends on ACPI tables.
On an APL platform with Linux installed we can see these tables using:
All critical functions depend on ACPI tables.
On an Apollo Lake platform with Linux installed, we can see these tables using:
.. code-block:: console
@ -829,12 +832,12 @@ On an APL platform with Linux installed we can see these tables using:
These tables provide different information and functions:
- Advanced Programmable Interrupt Controller (APIC) for Symmetric
Multiprocessor systems (SMP),
Multiprocessor systems (SMP)
- DMA remapping (DMAR) for Intel |reg| Virtualization Technology for
Directed I/O (VT-d),
- Non-HD Audio Link Table (NHLT) for supporting audio device,
- and Differentiated System Description Table (DSDT) for system
configuration info. DSDT is a major ACPI table used to describe what
Directed I/O (VT-d)
- Non-HD Audio Link Table (NHLT) for supporting audio device
- Differentiated System Description Table (DSDT) for system
configuration information. DSDT is a major ACPI table used to describe what
peripherals the machine has, and information on PCI IRQ mappings and
power management
@ -844,7 +847,7 @@ ACPI functionality is provided in ACPI Machine Language (AML) bytecode
stored in the ACPI tables. To make use of these tables, Linux implements
an interpreter for the AML bytecode. When the BIOS is built, AML
bytecode is compiled from the ASL (ACPI Source Language) code. To
dissemble the ACPI table, use the ``iasl`` tool:
disassemble the ACPI table, use the ``iasl`` tool:
.. code-block:: console
@ -872,10 +875,10 @@ dissemble the ACPI table, use the ``iasl`` tool:
[038h 0056 8] Register Base Address : 00000000FED64000
From the displayed ASL, we can see some generic table fields, such as
version info, and one VTd remapping engine description with FED64000 as
version info, and one VT-d remapping engine description with FED64000 as
base address.
We can modify DMAR.dsl and assemble it again to AML:
We can modify ``DMAR.dsl`` and assemble it again to AML:
.. code-block:: console
@ -890,24 +893,24 @@ We can modify DMAR.dsl and assemble it again to AML:
A new AML file ``DMAR.aml`` is created.
There are many ACPI tables in the system, linked together via table
pointers. In all ACPI-compatible system, the OS can enumerate all
pointers. In all ACPI-compatible systems, the OS can enumerate all
needed tables starting with the Root System Description Pointer (RSDP)
provided at a known place in the system low address space, and pointing
to an XSDT (Extended System Description Table). The following picture
shows a typical ACPI table layout in an Intel APL platform:
shows a typical ACPI table layout in an Apollo Lake platform:
.. figure:: images/dm-image36.png
:align: center
Typical ACPI table layout on Intel APL platform
Typical ACPI Table Layout on Apollo Lake Platform
ACPI Virtualization
===================
Most modern OSes requires ACPI, so we need ACPI virtualization to
emulate one ACPI-capable virtual platform for guest OS. To achieve this,
there are two options, depending on the way to abstract physical device and
ACPI resources: Partitioning and Emulation.
Most modern OSes require ACPI, so we need ACPI virtualization to
emulate one ACPI-capable virtual platform for a guest OS. To achieve this,
there are two options, depending on the method used to abstract the physical
device and ACPI resources: Partitioning and Emulation.
ACPI Partitioning
-----------------
@ -984,8 +987,8 @@ tables for other VMs. Opregion also must be copied for different VMs.
For each table, we make modifications, based on the physical table, to
reflect the assigned devices to this VM. As shown in the figure below,
we keep SP2(0:19.1) for VM0, and SP1(0:19.0)/SP3(0:19.2) for VM1.
Anytime partition policy changes we must modify both tables again,
including dissembling, modifying, and assembling, which is tricky and
Any time the partition policy changes, we must modify both tables again,
including disassembling, modifying, and assembling, which is tricky and
potentially error-prone.
.. figure:: images/dm-image43.png
@ -996,10 +999,11 @@ ACPI Emulation
--------------
An alternative ACPI resource abstraction option is for the Service VM to
own all devices and emulate a set of virtual devices for the User VM (POST_LAUNCHED_VM).
own all devices and emulate a set of virtual devices for the User VM
(POST_LAUNCHED_VM).
This is the most popular ACPI resource model for virtualization,
as shown in the picture below. ACRN currently
uses device emulation plus some device passthrough for User VM.
uses device emulation plus some device passthrough for the User VM.
.. figure:: images/dm-image52.png
:align: center
@ -1009,20 +1013,20 @@ uses device emulation plus some device passthrough for User VM.
For ACPI virtualization in ACRN, different policies are used for
different components:
- **Hypervisor** - ACPI is transparent to the Hypervisor, and has no knowledge
- **Hypervisor** - ACPI is transparent to the hypervisor, and has no knowledge
of ACPI at all.
- **Service VM** - All ACPI resources are physically owned by Service VM, and enumerates
all ACPI tables and devices.
- **Service VM** - The Service VM owns all physical ACPI resources
and enumerates all ACPI tables and devices.
- **User VM** - Virtual ACPI resources, exposed by device model, are owned by
User VM.
- **User VM** - Virtual ACPI resources, exposed by the Device Model, are owned
by the User VM.
ACPI emulation code of device model is found in
The ACPI emulation code of the Device Model is found in
``hw/platform/acpi/acpi.c``
Each entry in ``basl_ftables`` is related to each virtual ACPI table,
including following elements:
including the following elements:
- wsect - output handler to write related ACPI table contents to
specific file
@ -1046,7 +1050,7 @@ including following elements:
{ basl_fwrite_facs, FACS_OFFSET, true },
{ basl_fwrite_nhlt, NHLT_OFFSET, false }, /*valid with audio ptdev*/
{ basl_fwrite_tpm2, TPM2_OFFSET, false },
{ basl_fwrite_psds, PSDS_OFFSET, false }, /*valid when psds present in sos */
{ basl_fwrite_psds, PSDS_OFFSET, false }, /*valid when psds present in Service VM */
{ basl_fwrite_dsdt, DSDT_OFFSET, true }
};
@ -1109,9 +1113,9 @@ The main function to create virtual ACPI tables is ``acpi_build`` that calls
After handling each entry, virtual ACPI tables are present in User VM
memory.
For passthrough dev in User VM, we may need to add some ACPI description
in virtual DSDT table. There is one hook (passthrough_write_dsdt) in
``hw/pci/passthrough.c`` for this. The following source code, shows
For passthrough devices in the User VM, we may need to add some ACPI description
in the virtual DSDT table. There is one hook (``passthrough_write_dsdt``) in
``hw/pci/passthrough.c`` for this. The following source code
calls different functions to add different contents for each vendor and
device id:
@ -1152,9 +1156,9 @@ device id:
}
For instance, write_dsdt_urt1 provides ACPI contents for Bluetooth
UART device when passthroughed to User VM. It provides virtual PCI
device/function as _ADR. With other description, it could be used for
For instance, ``write_dsdt_urt1`` provides ACPI contents for a Bluetooth
UART device when passed through to the User VM. It provides the virtual PCI
device/function as ``_ADR``. With another description, it could be used for
Bluetooth UART enumeration.
.. code-block:: c
@ -1185,24 +1189,26 @@ Bluetooth UART enumeration.
PM in Device Model
******************
PM module in Device Model emulates the User VM low power state transition.
The power management (PM) module in the Device Model emulates the User VM
low-power state transition.
Each time User VM writes an ACPI control register to initialize low power
state transition, the writing operation is trapped to DM as an I/O
Each time the User VM writes an ACPI control register to initiate a low-power
state transition, the write operation is trapped to the DM as an I/O
emulation request by the I/O emulation framework.
To emulate User VM S5 entry, DM will destroy I/O request client, release
allocated User VM memory, stop all created threads, destroy User VM, and exit
DM. To emulate S5 exit, a fresh DM start by VM manager is used.
To emulate User VM S5 entry, the DM destroys the I/O request client, releases
allocated User VM memory, stops all created threads, destroys the User VM, and
exits the DM. To emulate S5 exit, a fresh DM started by the VM manager is used.
To emulate User VM S3 entry, DM pauses the User VM, stops the User VM watchdog,
and waits for a resume signal. When the User VM should exit from S3, DM will
get a wakeup signal and reset the User VM to emulate the User VM exit from
To emulate User VM S3 entry, the DM pauses the User VM, stops the User VM
watchdog,
and waits for a resume signal. When the User VM should exit from S3, the DM
gets a wakeup signal and resets the User VM to emulate the User VM exit from
S3.
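The trapped register write mentioned above is typically an access to the PM1a control register. The sketch below shows how such a write could be decoded into an S3 or S5 request. The bit positions follow the ACPI specification, but the SLP_TYP values are placeholders since the real encodings come from the virtual DSDT; none of this is ACRN's actual handler.

.. code-block:: c

   /* Sketch of decoding a trapped PM1a control register write. */
   #include <stdint.h>

   #define PM1_SLP_TYP_SHIFT 10
   #define PM1_SLP_TYP_MASK  0x7
   #define PM1_SLP_EN        (1u << 13)

   /* Placeholder SLP_TYP encodings; real values come from the _S3/_S5
    * packages in the guest's DSDT. */
   #define SLP_TYP_S3_EXAMPLE 5
   #define SLP_TYP_S5_EXAMPLE 7

   enum sleep_request { SLEEP_NONE, SLEEP_S3, SLEEP_S5 };

   static enum sleep_request decode_pm1a_write(uint16_t value)
   {
           if (!(value & PM1_SLP_EN))
                   return SLEEP_NONE;

           switch ((value >> PM1_SLP_TYP_SHIFT) & PM1_SLP_TYP_MASK) {
           case SLP_TYP_S3_EXAMPLE:
                   return SLEEP_S3;  /* pause VM, stop watchdog, wait for resume */
           case SLP_TYP_S5_EXAMPLE:
                   return SLEEP_S5;  /* destroy VM resources and exit the DM     */
           default:
                   return SLEEP_NONE;
           }
   }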
Passthrough in Device Model
****************************
You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model and :ref:`mmio-device-passthrough` for MMIO passthrough realization
in device model and ACRN Hypervisor.
Refer to :ref:`hv-device-passthrough` for passthrough realization
in the Device Model and :ref:`mmio-device-passthrough` for MMIO passthrough
realization in the Device Model and ACRN hypervisor.

View File

@ -25,4 +25,5 @@ Hypervisor High-Level Design
Hypercall / HSM upcall <hv-hypercall>
Compile-time configuration <hv-config>
RDT support <hv-rdt>
vCAT support <hv-vcat>
Split-locked Access handling <hld-splitlock>

View File

@ -416,14 +416,14 @@ from different huge pages in the Service VM as shown in
:numref:`overview-mem-layout`.
As the Service VM knows the size of these huge pages,
GPA\ :sup:`SOS` and GPA\ :sup:`UOS`, it works with the hypervisor
GPA\ :sup:`service_vm` and GPA\ :sup:`user_vm`, it works with the hypervisor
to complete the User VM's host-to-guest mapping using this pseudo code:
.. code-block:: none
for x in allocated huge pages do
x.hpa = gpa2hpa_for_sos(x.sos_gpa)
host2guest_map_for_uos(x.hpa, x.uos_gpa, x.size)
x.hpa = gpa2hpa_for_service_vm(x.service_vm_gpa)
host2guest_map_for_user_vm(x.hpa, x.user_vm_gpa, x.size)
end
Virtual Slim Bootloader

View File

@ -254,7 +254,7 @@ In ACRN, User VM Secure Boot can be enabled as follows:
#. Sign the User VM images with `db.key` and `db.crt`.
#. Boot the User VM with Secure Boot enabled.
.. _sos_hardening:
.. _service_vm_hardening:
Service VM Hardening
--------------------
@ -732,7 +732,7 @@ must be disabled in a production release. Users who want to use this
feature must possess the private signing key to re-sign the image after
enabling the configuration.
.. _uos_suspend_resume:
.. _user_vm_suspend_resume:
User VM Suspend/Resume
~~~~~~~~~~~~~~~~~~~~~~
@ -793,7 +793,7 @@ Extract-and-Expand Key Derivation Function), `RFC5869
The parameters of HKDF derivation in the hypervisor are:
#. VMInfo= vm-uuid (from the hypervisor configuration file)
#. VMInfo= vm name (from the hypervisor configuration file)
#. theHash=SHA-256
#. OutSeedLen = 64 in bytes
#. Guest Dev and User SEED (dvSEED/uvSEED)
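For illustration, the following sketch derives a 64-byte output with HKDF-SHA256 using the parameters listed above, assuming the OpenSSL (1.1.1 or later) EVP HKDF interface. The input key material and VM name are placeholders, not real SEED values, and this is not the hypervisor's implementation.

.. code-block:: c

   /* HKDF-SHA256 sketch: 64-byte output, VM name as context info. */
   #include <openssl/evp.h>
   #include <openssl/kdf.h>
   #include <stdio.h>
   #include <string.h>

   int main(void)
   {
           unsigned char seed[64];
           size_t seed_len = sizeof(seed);
           const unsigned char ikm[] = "placeholder-platform-seed";
           const char *vm_info = "placeholder-vm-name";

           EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);

           if (ctx == NULL ||
               EVP_PKEY_derive_init(ctx) <= 0 ||
               EVP_PKEY_CTX_set_hkdf_md(ctx, EVP_sha256()) <= 0 ||
               EVP_PKEY_CTX_set1_hkdf_key(ctx, ikm, sizeof(ikm) - 1) <= 0 ||
               EVP_PKEY_CTX_add1_hkdf_info(ctx, (const unsigned char *)vm_info,
                                           strlen(vm_info)) <= 0 ||
               EVP_PKEY_derive(ctx, seed, &seed_len) <= 0) {
                   fprintf(stderr, "HKDF derivation failed\n");
                   EVP_PKEY_CTX_free(ctx);
                   return 1;
           }
           EVP_PKEY_CTX_free(ctx);

           printf("derived %zu-byte seed\n", seed_len);
           return 0;
   }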
@ -1041,7 +1041,7 @@ Note that there are some security considerations in this design:
Keeping the Service VM system as secure as possible is a very important goal in
the system security design. Follow the recommendations in
:ref:`sos_hardening`.
:ref:`service_vm_hardening`.
SEED Derivation
---------------
@ -1065,7 +1065,7 @@ the restore state hypercall is called only by vBIOS when the User VM is ready to
resume from suspend state.
For security design considerations of handling secure world S3,
read the previous section: :ref:`uos_suspend_resume`.
read the previous section: :ref:`user_vm_suspend_resume`.
Platform Security Feature Virtualization and Enablement
=======================================================

View File

@ -92,9 +92,6 @@ ACRNTrace application includes a binary to retrieve trace data from
sbuf, and Python scripts to convert trace data from raw format into
readable text, and do analysis.
.. note:: There was no Figure showing the sequence of trace
initialization and trace data collection.
With a debug build, trace components are initialized at boot
time. After initialization, the HV writes trace event data into sbuf
until sbuf is full, which can happen easily if the ACRNTrace app is not
@ -104,9 +101,6 @@ Once ACRNTrace is launched, for each physical CPU a consumer thread is
created to periodically read RAW trace data from sbuf and write to a
file.
.. note:: TODO figure is missing
Figure 2.2 Sequence of trace init and trace data collection
These are the Python scripts provided:
- **acrntrace_format.py** converts RAW trace data to human-readable

View File

@ -214,26 +214,6 @@ virtqueues, feature mechanisms, configuration space, and buses.
Virtio Frontend/Backend Layered Architecture
Virtio Framework Considerations
===============================
How to implement the virtio framework is specific to a
hypervisor implementation. In ACRN, the virtio framework implementations
can be classified into two types, virtio backend service in userland
(VBS-U) and virtio backend service in kernel-land (VBS-K), according to
where the virtio backend service (VBS) is located. Although different in BE
drivers, both VBS-U and VBS-K share the same FE drivers. The reason
behind the two virtio implementations is to meet the requirement of
supporting a large number of diverse I/O devices in ACRN project.
When developing a virtio BE device driver, the device owner should choose
carefully between the VBS-U and VBS-K. Generally VBS-U targets
non-performance-critical devices, but enables easy development and
debugging. VBS-K targets performance critical devices.
The next two sections introduce ACRN's two implementations of the virtio
framework.
Userland Virtio Framework
==========================
@ -266,49 +246,15 @@ virtqueue through the user-level vring service API helpers.
Kernel-Land Virtio Framework
============================
ACRN supports two kernel-land virtio frameworks:
ACRN supports one kernel-land virtio framework:
* VBS-K, designed from scratch for ACRN
* Vhost, compatible with Linux Vhost
VBS-K Framework
---------------
The architecture of ACRN VBS-K is shown in
:numref:`kernel-virtio-framework` below.
Generally VBS-K provides acceleration towards performance critical
devices emulated by VBS-U modules by handling the "data plane" of the
devices directly in the kernel. When VBS-K is enabled for certain
devices, the kernel-land vring service API helpers, instead of the
userland helpers, are used to access the virtqueues shared by the FE
driver. Compared to VBS-U, this eliminates the overhead of copying data
back-and-forth between userland and kernel-land within the Service VM, but
requires the extra implementation complexity of the BE drivers.
Except for the differences mentioned above, VBS-K still relies on VBS-U
for feature negotiations between FE and BE drivers. This means the
"control plane" of the virtio device still remains in VBS-U. When
feature negotiation is done, which is determined by the FE driver setting up
an indicative flag, the VBS-K module will be initialized by VBS-U.
Afterward, all request handling will be offloaded to the VBS-K in the
kernel.
Finally the FE driver is not aware of how the BE driver is implemented,
either in VBS-U or VBS-K. This saves engineering effort regarding FE
driver development.
.. figure:: images/virtio-hld-image54.png
:align: center
:name: kernel-virtio-framework
ACRN Kernel-Land Virtio Framework
Vhost Framework
---------------
Vhost is similar to VBS-K. Vhost is a common solution upstreamed in the
Linux kernel, with several kernel mediators based on it.
Vhost is a common solution upstreamed in the Linux kernel,
with several kernel mediators based on it.
Architecture
~~~~~~~~~~~~
@ -448,51 +394,6 @@ DM, and DM finds other key data structures through it. The ``struct
virtio_ops`` abstracts a series of virtio callbacks to be provided by the
device owner.
VBS-K Key Data Structures
=========================
The key data structures for VBS-K are listed as follows, and their
relationships are shown in :numref:`VBS-K-data`.
``struct vbs_k_rng``
In-kernel VBS-K component handling data plane of a
VBS-U virtio device, for example, virtio random_num_generator.
``struct vbs_k_dev``
In-kernel VBS-K component common to all VBS-K.
``struct vbs_k_vq``
In-kernel VBS-K component for working with kernel
vring service API helpers.
``struct vbs_k_dev_inf``
Virtio device information to be synchronized
from VBS-U to VBS-K kernel module.
``struct vbs_k_vq_info``
A single virtqueue information to be
synchronized from VBS-U to VBS-K kernel module.
``struct vbs_k_vqs_info``
Virtqueue information, of a virtio device,
to be synchronized from VBS-U to VBS-K kernel module.
.. figure:: images/virtio-hld-image8.png
:width: 900px
:align: center
:name: VBS-K-data
VBS-K Key Data Structures
In VBS-K, the struct vbs_k_xxx represents the in-kernel component
handling a virtio device's data plane. It presents a char device for VBS-U
to open and register device status after feature negotiation with the FE
driver.
The device status includes negotiated features, number of virtqueues,
interrupt information, and more. All these statuses will be synchronized
from VBS-U to VBS-K. In VBS-U, the ``struct vbs_k_dev_info`` and ``struct
vbs_k_vqs_info`` will collect all the information and notify VBS-K through
ioctls. In VBS-K, the ``struct vbs_k_dev`` and ``struct vbs_k_vq``, which are
common to all VBS-K modules, are the counterparts to preserve the
related information. The related information is necessary to kernel-land
vring service API helpers.
VHOST Key Data Structures
=========================
@ -547,8 +448,7 @@ VBS APIs
========
The VBS APIs are exported by VBS related modules, including VBS, DM, and
Service VM kernel modules. They can be classified into VBS-U and VBS-K APIs
listed as follows.
Service VM kernel modules.
VBS-U APIs
----------
@ -583,12 +483,6 @@ the virtio framework within DM will invoke them appropriately.
.. doxygenfunction:: virtio_config_changed
:project: Project ACRN
VBS-K APIs
----------
The VBS-K APIs are exported by VBS-K related modules. Users can use
the following APIs to implement their VBS-K modules.
APIs Provided by DM
~~~~~~~~~~~~~~~~~~~
@ -674,10 +568,7 @@ VQ APIs
The virtqueue APIs, or VQ APIs, are used by a BE device driver to
access the virtqueues shared by the FE driver. The VQ APIs abstract the
details of virtqueues so that users don't need to worry about the data
structures within the virtqueues. In addition, the VQ APIs are designed
to be identical between VBS-U and VBS-K, so that users don't need to
learn different APIs when implementing BE drivers based on VBS-U and
VBS-K.
structures within the virtqueues.
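A typical BE service loop built on these VQ APIs follows a get-chain / process / release-chain pattern, sketched below. The helper names and signatures are simplified placeholders rather than the documented functions; see the API references that follow for the real interfaces.

.. code-block:: c

   /* Simplified shape of a BE driver's virtqueue service loop. The helper
    * names below are placeholders for the real VQ APIs.
    */
   #include <stdint.h>
   #include <sys/uio.h>

   struct example_vq;                       /* opaque virtqueue handle */

   /* Placeholder helpers: fetch the next available descriptor chain,
    * return buffers to the guest, raise an interrupt. */
   int  example_vq_getchain(struct example_vq *vq, struct iovec *iov,
                            int max_iov, uint16_t *idx);
   void example_vq_relchain(struct example_vq *vq, uint16_t idx, uint32_t len);
   void example_vq_interrupt(struct example_vq *vq);

   static void be_service_queue(struct example_vq *vq)
   {
           struct iovec iov[8];
           uint16_t idx;
           int n;

           /* Drain every available chain the FE driver has posted. */
           while ((n = example_vq_getchain(vq, iov, 8, &idx)) > 0) {
                   uint32_t done = 0;

                   for (int i = 0; i < n; i++)
                           done += (uint32_t)iov[i].iov_len;   /* "handle" data */

                   example_vq_relchain(vq, idx, done);  /* hand buffers back */
           }
           example_vq_interrupt(vq);                    /* notify the FE driver */
   }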
.. doxygenfunction:: vq_interrupt
:project: Project ACRN

View File

@ -6,7 +6,10 @@ Hostbridge Emulation
Overview
********
Hostbridge emulation is based on PCI emulation; however, the hostbridge emulation only sets the PCI configuration space. The device model sets the PCI configuration space for hostbridge in the Service VM and then exposes it to the User VM to detect the PCI hostbridge.
Hostbridge emulation is based on PCI emulation; however, the hostbridge
emulation only sets the PCI configuration space. The Device Model (DM) sets the
PCI configuration space for hostbridge in the Service VM and then exposes it to
the User VM to detect the PCI hostbridge.
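A config-space-only device of this kind can be sketched as follows: the emulation fills in the standard PCI header so the guest enumerates a host bridge, and nothing else. The vendor and device IDs below are placeholders, not the values ACRN advertises.

.. code-block:: c

   /* Sketch of a config-space-only hostbridge: fill the standard PCI
    * header so the guest sees a host bridge; no BARs or I/O behind it.
    */
   #include <stdint.h>
   #include <string.h>

   #define PCI_CFG_SPACE_SIZE 256

   /* Standard configuration header offsets (PCI spec). */
   #define PCIR_VENDOR    0x00
   #define PCIR_DEVICE    0x02
   #define PCIR_SUBCLASS  0x0a
   #define PCIR_CLASS     0x0b
   #define PCIR_HDRTYPE   0x0e

   static void cfg_write16(uint8_t *cfg, int off, uint16_t val)
   {
           cfg[off]     = (uint8_t)(val & 0xff);
           cfg[off + 1] = (uint8_t)(val >> 8);
   }

   static void hostbridge_init_example(uint8_t cfg[PCI_CFG_SPACE_SIZE])
   {
           memset(cfg, 0, PCI_CFG_SPACE_SIZE);
           cfg_write16(cfg, PCIR_VENDOR, 0x8086);  /* placeholder vendor ID */
           cfg_write16(cfg, PCIR_DEVICE, 0x1234);  /* placeholder device ID */
           cfg[PCIR_SUBCLASS] = 0x00;              /* host bridge           */
           cfg[PCIR_CLASS]    = 0x06;              /* bridge device class   */
           cfg[PCIR_HDRTYPE]  = 0x00;              /* type 0 header         */
   }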
PCI Host Bridge and Hierarchy
*****************************
@ -17,7 +20,7 @@ There is PCI host bridge emulation in DM. The bus hierarchy is determined by ``a
-s 2,pci-gvt -G "$2" \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,/home/acrn/uos.img \
-s 3,virtio-blk,/home/acrn/UserVM.img \
-s 4,virtio-net,tap0 \
-s 7,virtio-rnd \
--ovmf /usr/share/acrn/bios/OVMF.fd \

View File

@ -19,7 +19,7 @@ peripherals.
The main purpose of IOC virtualization is to transfer data between
native Carrier Board Communication (CBC) char devices and a virtual
UART. IOC virtualization is implemented as full virtualization so the
user OS can directly reuse native CBC driver.
User VM can directly reuse the native CBC driver.
The IOC Mediator has several virtualization requirements, such as S3/S5
wakeup reason emulation, CBC link frame packing/unpacking, signal
@ -72,14 +72,14 @@ different serial connections, such as SPI or UART.
:align: center
:name: ioc-cbc-frame-def
IOC Native - CBC frame definition
IOC Native - CBC Frame Definition
The CBC protocol is based on a four-layer system:
- The **Physical layer** is a serial interface with full
- The **Physical Layer** is a serial interface with full
duplex capabilities. A hardware handshake is required. The required
bit rate depends on the peripherals connected, e.g. UART, and SPI.
- The **Link layer** handles the length and payload verification.
bit rate depends on the peripherals connected, e.g., UART and SPI.
- The **Link Layer** handles the length and payload verification.
- The **Address Layer** is used to distinguish between the general data
transferred. It is placed in front of the underlying Service Layer
and contains Multiplexer (MUX) and Priority fields.
@ -100,7 +100,7 @@ devices.
:align: center
:name: ioc-software-arch
IOC Native - Software architecture
IOC Native - Software Architecture
Virtualization Architecture
---------------------------
@ -120,7 +120,7 @@ device as its backend.
:align: center
:name: ioc-virt-software-arch
IOC Virtualization - Software architecture
IOC Virtualization - Software Architecture
High-Level Design
=================
@ -131,7 +131,7 @@ There are five parts in this high-level design:
* State transfer introduces IOC mediator work states
* CBC protocol illustrates the CBC data packing/unpacking
* Power management involves boot/resume/suspend/shutdown flows
* Emulated CBC commands introduces some commands workflow
* Emulated CBC commands introduce the workflow of some commands
IOC mediator has three threads to transfer data between User VM and Service VM. The
core thread is responsible for data reception, and Tx and Rx threads are
@ -144,7 +144,7 @@ char devices and UART DM immediately.
:align: center
:name: ioc-med-sw-data-flow
IOC Mediator - Software data flow
IOC Mediator - Software Data Flow
- For Tx direction, the data comes from IOC firmware. IOC mediator
receives service data from native CBC char devices such as
@ -161,7 +161,7 @@ char devices and UART DM immediately.
mediator and will not be transferred to IOC
firmware.
- Currently, IOC mediator only cares about lifecycle, signal, and raw data.
Others, e.g. diagnosis, are not used by the IOC mediator.
Others, e.g., diagnosis, are not used by the IOC mediator.
State Transfer
--------------
@ -201,7 +201,7 @@ virtualization, as shown in the detailed flow below:
:align: center
:name: ioc-cbc-frame-usage
IOC Native - CBC frame usage
IOC Native - CBC Frame Usage
In the native architecture, the CBC link frame is unpacked by CBC
driver. The usage services only get the service data from the CBC char
@ -213,7 +213,7 @@ priority for the frame, then send data to the UART driver.
:align: center
:name: ioc-cbc-prot
IOC Virtualizaton - CBC protocol virtualization
IOC Virtualization - CBC Protocol Virtualization
The difference between the native and virtualization architectures is
that the IOC mediator needs to re-compute the checksum and reset
@ -240,7 +240,7 @@ Boot Flow
:align: center
:name: ioc-virt-boot
IOC Virtualizaton - Boot flow
IOC Virtualization - Boot Flow
#. Press ignition button for booting.
#. Service VM lifecycle service gets a "booting" wakeup reason.
@ -275,7 +275,7 @@ Suspend & Shutdown Flow
#. PM DM executes User VM suspend/shutdown request based on ACPI.
#. VM Manager queries each VM state from PM DM. Suspend request maps
to a paused state and shutdown request maps to a stop state.
#. VM Manager collects all VMs state, and reports it to Service VM lifecycle
#. VM Manager collects all VMs' state, and reports it to Service VM lifecycle
service.
#. Service VM lifecycle sends inactive heartbeat to IOC firmware with
suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service
@ -289,9 +289,9 @@ Resume Flow
:align: center
:name: ioc-resume
IOC Virtualizaton - Resume flow
IOC Virtualization - Resume Flow
The resume reason contains both the ignition button and RTC, and have
The resume reason contains both the ignition button and RTC, and has
the same flow blocks.
For ignition resume flow:
@ -324,7 +324,7 @@ For RTC resume flow
#. PM DM resumes User VM.
#. User VM lifecycle service gets the wakeup reason 0x000200, and sends
initial or active heartbeat. The User VM gets wakeup reason 0x800200
after resuming..
after resuming.
System Control Data
-------------------
@ -338,7 +338,7 @@ control includes Wakeup Reasons, Heartbeat, Boot Selector, Suppress
Heartbeat Check, and Set Wakeup Timer functions. Details are in this
table:
.. list-table:: System control SVC values
.. list-table:: System Control SVC Values
:header-rows: 1
* - System Control
@ -354,22 +354,22 @@ table:
* - 2
- Heartbeat
- Heartbeat
- Soc to IOC
- SoC to IOC
* - 3
- Boot Selector
- Boot Selector
- Soc to IOC
- SoC to IOC
* - 4
- Suppress Heartbeat Check
- Suppress Heartbeat Check
- Soc to IOC
- SoC to IOC
* - 5
- Set Wakeup Timer
- Set Wakeup Timer in AIOC firmware
- Soc to IOC
- SoC to IOC
- IOC mediator only supports wakeup reasons Heartbeat and Set Wakeup
Timer.
@ -413,21 +413,21 @@ Currently the wakeup reason bits are supported by sources shown here:
* - wakeup_button
- 5
- Get from IOC FW, forward to User VM
- Get from IOC firmware, forward to User VM
* - RTC wakeup
- 9
- Get from IOC FW, forward to User VM
- Get from IOC firmware, forward to User VM
* - car door wakeup
* - Car door wakeup
- 11
- Get from IOC FW, forward to User VM
- Get from IOC firmware, forward to User VM
* - SoC wakeup
- 23
- Emulation (depends on the User VM's heartbeat message)
- CBC_WK_RSN_BTN (bit 5): ignition button.
- CBC_WK_RSN_BTN (bit 5): Ignition button.
- CBC_WK_RSN_RTC (bit 9): RTC timer.
- CBC_WK_RSN_DOR (bit 11): Car door.
- CBC_WK_RSN_SOC (bit 23): SoC active/inactive.
@ -437,7 +437,7 @@ Currently the wakeup reason bits are supported by sources shown here:
:align: center
:name: ioc-wakeup-flow
IOC Mediator - Wakeup reason flow
IOC Mediator - Wakeup Reason Flow
Bit 23 is for the SoC wakeup indicator and should not be forwarded
directly because every VM has a different heartbeat status.
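
Expressed as bit masks (macro names taken from the list above; the exact
definitions in the IOC mediator sources may differ), the wakeup reasons look
like this, which also explains the ``0x20`` initial wakeup reason used in the
usage example later in this section:

.. code-block:: c

   /* Bit positions as listed above; names mirror the CBC_WK_RSN_* reasons. */
   #define CBC_WK_RSN_BTN  (1U << 5)    /* ignition button */
   #define CBC_WK_RSN_RTC  (1U << 9)    /* RTC timer */
   #define CBC_WK_RSN_DOR  (1U << 11)   /* car door */
   #define CBC_WK_RSN_SOC  (1U << 23)   /* SoC active/inactive (emulated) */

   /* Example: an initial wakeup reason of 0x20 corresponds to
    * CBC_WK_RSN_BTN, i.e., a boot triggered by the ignition button. */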
@ -445,7 +445,7 @@ directly because every VM has a different heartbeat status.
Heartbeat
+++++++++
The Heartbeat is used for SOC watchdog, indicating the SOC power
The Heartbeat is used for SoC watchdog, indicating the SoC power
reset behavior. Heartbeat needs to be sent every 1000 ms by
the SoC.
@ -454,7 +454,7 @@ the SoC.
:align: center
:name: ioc-heartbeat
System control - Heartbeat
System Control - Heartbeat
Heartbeat frame definition is shown here:
@ -513,7 +513,7 @@ Heartbeat frame definition is shown here:
RTC
+++
RTC timer is used to wakeup SoC when the timer is expired. (A use
The RTC timer is used to wake up the SoC when the timer expires. (A use
case is for an automatic software upgrade with a specific time.) RTC frame
definition is as below.
@ -530,16 +530,16 @@ definition is as below.
:align: center
:name: ioc-rtc-flow
IOC Mediator - RTC flow
IOC Mediator - RTC Flow
Signal Data
-----------
Signal channel is an API between the SOC and IOC for
Signal channel is an API between the SoC and IOC for
miscellaneous requirements. The process data includes all vehicle bus and
carrier board data (GPIO, sensors, and so on). It supports
transportation of single signals and group signals. Each signal consists
of a signal ID (reference), its value, and its length. IOC and SOC need
of a signal ID (reference), its value, and its length. IOC and SoC need
agreement on the definition of signal IDs that can be treated as API
interface definitions.
@ -550,24 +550,24 @@ IOC signal type definitions are as below.
:align: center
:name: ioc-process-data-svc-val
Process Data SVC values
Process Data SVC Values
.. figure:: images/ioc-image2.png
:width: 900px
:align: center
:name: ioc-med-signal-flow
IOC Mediator - Signal flow
IOC Mediator - Signal Flow
- The IOC backend needs to emulate the channel open/reset/close message which
shouldn't be forward to the native cbc signal channel. The Service VM signal
related services should do a real open/reset/close signal channel.
shouldn't be forwarded to the native CBC signal channel. The Service VM
signal-related services should do a real open/reset/close of the signal channel.
- Every backend should maintain a passlist for different VMs. The
passlist can be stored in the Service VM file system (Read only) in the
future, but currently it is hard coded.
IOC mediator has two passlist tables, one is used for rx
signals(SOC->IOC), and the other one is used for tx signals. The IOC
signals (SoC->IOC), and the other one is used for tx signals. The IOC
mediator drops the single signals and group signals if the signals are
not defined in the passlist. For multi signal, IOC mediator generates a
new multi signal, which contains the signals in the passlist.
@ -577,37 +577,37 @@ new multi signal, which contains the signals in the passlist.
:align: center
:name: ioc-med-multi-signal
IOC Mediator - Multi-Signal passlist
IOC Mediator - Multi-Signal Passlist
Raw Data
--------
The OEM raw channel is only assigned to a specific User VM following the OEM
configuration. The IOC Mediator will directly forward all read/write
message from IOC firmware to User VM without any modification.
messages from IOC firmware to the User VM without any modification.
IOC Mediator Usage
******************
The device model configuration command syntax for IOC mediator is as
The Device Model configuration command syntax for IOC mediator is as
follows::
-i,[ioc_channel_path],[wakeup_reason]
-l,[lpc_port],[ioc_channel_path]
The "ioc_channel_path" is an absolute path for communication between
The ``ioc_channel_path`` is an absolute path for communication between
IOC mediator and UART DM.
The "lpc_port" is "com1" or "com2", IOC mediator needs one unassigned
The ``lpc_port`` is ``com1`` or ``com2``. IOC mediator needs one unassigned
lpc port for data transfer between User VM and Service VM.
The "wakeup_reason" is IOC mediator boot reason, each bit represents
The ``wakeup_reason`` is the IOC mediator boot reason. Each bit represents
one wakeup reason.
For example, the following commands are used to enable IOC feature, the
For example, the following commands are used to enable the IOC feature. The
initial wakeup reason is the ignition button and cbc_attach uses ttyS1
for TTY line discipline in User VM::
for TTY line discipline in the User VM::
-i /run/acrn/ioc_$vm_name,0x20
-l com2,/run/acrn/ioc_$vm_name

View File

@ -16,7 +16,7 @@ translate a guest-physical address into a host-physical address. The HV enables
EPT and VPID hardware virtualization features, establishes EPT page
tables for Service and User VMs, and provides EPT page tables operation interfaces to others.
In the ACRN hypervisor system, there are few different memory spaces to
In the ACRN hypervisor system, there are a few different memory spaces to
consider. From the hypervisor's point of view:
- **Host Physical Address (HPA)**: the native physical address space.
@ -42,7 +42,7 @@ From the Guest OS running on a hypervisor:
:numref:`mem-overview` provides an overview of the ACRN system memory
mapping, showing:
- GVA to GPA mapping based on vMMU on a VCPU in a VM
- GVA to GPA mapping based on vMMU on a vCPU in a VM
- GPA to HPA mapping based on EPT for a VM in the hypervisor
- HVA to HPA mapping based on MMU in the hypervisor
@ -52,7 +52,8 @@ inside the hypervisor and from a VM:
- How ACRN hypervisor manages host memory (HPA/HVA)
- How ACRN hypervisor manages the Service VM guest memory (HPA/GPA)
- How ACRN hypervisor and the Service VM DM manage the User MV guest memory (HPA/GPA)
- How ACRN hypervisor and the Service VM Device Model (DM) manage the User VM
guest memory (HPA/GPA)
Hypervisor Physical Memory Management
*************************************
@ -60,8 +61,9 @@ Hypervisor Physical Memory Management
In ACRN, the HV initializes MMU page tables to manage all physical
memory and then switches to the new MMU page tables. After MMU page
tables are initialized at the platform initialization stage, no updates
are made for MMU page tables except when hv_access_memory_region_update is called.
However, the memory region updated by hv_access_memory_region_update
are made for MMU page tables except when ``set_paging_supervisor/nx/x`` is
called.
However, the memory region updated by ``set_paging_supervisor/nx/x``
must not be accessed by the ACRN hypervisor in advance because access could
make mapping in the TLB and there is no TLB flush mechanism for the ACRN HV memory.
@ -91,12 +93,12 @@ Hypervisor Memory Initialization
The ACRN hypervisor runs in paging mode. After the bootstrap
processor (BSP) gets the platform E820 table, the BSP creates its MMU page
table based on it. This is done by the function *init_paging()*.
table based on it. This is done by the function ``init_paging()``.
After the application processor (AP) receives the IPI CPU startup
interrupt, it uses the MMU page tables created by the BSP. In order to bring
the memory access rights into effect, some other APIs are provided:
enable_paging will enable IA32_EFER.NXE and CR0.WP, enable_smep will
enable CR4.SMEP, and enable_smap will enable CR4.SMAP.
``enable_paging`` will enable IA32_EFER.NXE and CR0.WP, ``enable_smep`` will
enable CR4.SMEP, and ``enable_smap`` will enable CR4.SMAP.
:numref:`hv-mem-init` describes the hypervisor memory initialization for the BSP
and APs.
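
For orientation, the call sequence described above can be sketched as follows.
The function names come from the text; the zero-argument prototypes are
assumptions for illustration.

.. code-block:: c

   /* Prototypes assumed for illustration; the real declarations live in the
    * hypervisor sources. */
   void init_paging(void);     /* build MMU page tables from the E820 table */
   void enable_paging(void);   /* enables IA32_EFER.NXE and CR0.WP */
   void enable_smep(void);     /* enables CR4.SMEP */
   void enable_smap(void);     /* enables CR4.SMAP */

   static void bsp_mem_init_example(void)
   {
       init_paging();      /* BSP creates the page tables */
       enable_paging();
       enable_smep();
       enable_smap();
   }

   static void ap_mem_init_example(void)
   {
       /* APs reuse the BSP's page tables and only turn on the access-right
        * enforcement on their own cores. */
       enable_paging();
       enable_smep();
       enable_smap();
   }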
@ -114,9 +116,9 @@ The following memory mapping policy used is:
and execute-disable access right
- Remap [0, low32_max_ram) regions to WRITE-BACK type
- Remap [4G, high64_max_ram) regions to WRITE-BACK type
- set the paging-structure entries' U/S flag to
- Set the paging-structure entries' U/S flag to
supervisor-mode for hypervisor-owned memory
(exclude the memory reserve for trusty)
(exclude the memory reserved for trusty)
- Remove 'NX' bit for pages that contain the hv code section
.. figure:: images/mem-image69.png
@ -145,7 +147,7 @@ support map linear addresses to 4-KByte pages.
address space mapping and 2MB hugepage can be used, the corresponding
PDT entry shall be set for this 2MB hugepage.
If the memory type or access rights of a page is updated, or some virtual
If the memory type or access rights of a page are updated, or some virtual
address space is deleted, it will lead to splitting of the corresponding
page. The hypervisor will still keep using minimum memory pages to map from
the virtual address space into the physical address space.
@ -228,7 +230,7 @@ The hypervisor:
Memory Virtualization Capability Checking
=========================================
In the hypervisor, memory virtualization provides EPT/VPID capability
In the hypervisor, memory virtualization provides an EPT/VPID capability
checking service and an EPT hugepage supporting checking service. Before the HV
enables memory virtualization and uses the EPT hugepage, these services need
to be invoked by other units.
@ -247,9 +249,10 @@ instruction data.
Access GPA From Hypervisor
--------------------------
When the hypervisor needs to access the GPA for data transfer, the caller from guest
When the hypervisor needs to access the GPA for data transfer, the caller from
a guest
must make sure this memory range's GPA is continuous. But for HPA in the
hypervisor, it could be discontinuous (especially for User VM under hugetlb
hypervisor, it could be discontinuous (especially for a User VM under hugetlb
allocation mechanism). For example, a 4M GPA range may map to 2
different 2M huge host-physical pages. The ACRN hypervisor must take
care of this kind of data transfer by doing EPT page walking based on
@ -278,13 +281,13 @@ space.
- If both 1GB hugepage and 2MB hugepage can't be used for GPA
space mapping, the corresponding EPT PT entry shall be set.
If memory type or access rights of a page is updated or some GPA space
If memory type or access rights of a page are updated or some GPA space
is deleted, it will lead to the corresponding EPT page being split. The
hypervisor should still keep to using minimum EPT pages to map from GPA
space into HPA space.
The hypervisor provides EPT guest-physical mappings adding service, EPT
guest-physical mappings modifying/deleting service and EPT guest-physical
The hypervisor provides an EPT guest-physical mappings adding service, EPT
guest-physical mappings modifying/deleting service, and EPT guest-physical
mappings invalidation service.
Virtual MTRR
@ -301,14 +304,14 @@ hypervisor uses the default memory type in the MTRR (Write-Back).
When the guest disables MTRRs, the HV sets the guest address memory type
as UC.
If the guest physical address is in fixed range (0~1MB), the HV sets
memory type according to the fixed virtual MTRRs.
If the guest physical address is in the fixed range (0~1MB), the HV sets
the memory type according to the fixed virtual MTRRs.
When the guest enable MTRRs, MTRRs have no effect on the memory type
When the guest enables MTRRs, MTRRs have no effect on the memory type
used for access to GPA. The HV first intercepts MTRR MSR registers
access through MSR access VM exit and updates EPT memory type field in EPT
PTE according to the memory type selected by MTRRs. This combines with
PAT entry in the PAT MSR (which is determined by PAT, PCD, and PWT bits
access through MSR access VM exit and updates the EPT memory type field in EPT
PTE according to the memory type selected by MTRRs. This combines with the
PAT entry in the PAT MSR (which is determined by the PAT, PCD, and PWT bits
from the guest paging structures) to determine the effective memory
type.
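
The memory type decision described above can be condensed into the following
illustrative logic. This is not the actual hypervisor code; the enum and helper
names are placeholders.

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   enum vmtrr_type_example {
       TYPE_UC,             /* uncached */
       TYPE_FIXED_RANGE,    /* taken from the fixed virtual MTRRs */
       TYPE_MTRR_SELECTED   /* variable MTRR selection, combined with PAT */
   };

   static enum vmtrr_type_example
   guest_mem_type_example(bool guest_mtrr_enabled, uint64_t gpa)
   {
       if (!guest_mtrr_enabled)
           return TYPE_UC;           /* MTRRs disabled: UC */
       if (gpa < 0x100000ULL)
           return TYPE_FIXED_RANGE;  /* 0-1MB: fixed-range virtual MTRRs */
       /* Otherwise the intercepted MTRR writes drive the EPT memory type,
        * combined with the PAT entry from the guest paging structures. */
       return TYPE_MTRR_SELECTED;
   }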
@ -466,15 +469,16 @@ VPID
.. doxygenfunction:: flush_vpid_global
:project: Project ACRN
Service OS Memory Management
Service VM Memory Management
****************************
After the ACRN hypervisor starts, it creates the Service VM as its first
VM. The Service VM runs all the native device drivers, manages the
hardware devices, and provides I/O mediation to guest VMs. The Service
OS is in charge of the memory allocation for Guest VMs as well.
hardware devices, and provides I/O mediation to post-launched User VMs. The
Service VM is in charge of the memory allocation for post-launched User VMs as
well.
ACRN hypervisor passes the whole system memory access (except its own
The ACRN hypervisor passes the whole system memory access (except its own
part) to the Service VM. The Service VM must be able to access all of
the system memory except the hypervisor part.
@ -482,28 +486,28 @@ Guest Physical Memory Layout - E820
===================================
The ACRN hypervisor passes the original E820 table to the Service VM
after filtering out its own part. So from Service VM's view, it sees
after filtering out its own part. From the Service VM's view, it sees
almost all the system memory as shown here:
.. figure:: images/mem-image3.png
:align: center
:width: 900px
:name: sos-mem-layout
:name: service-vm-mem-layout
Service VM Physical Memory Layout
Host to Guest Mapping
=====================
ACRN hypervisor creates the Service OS's guest (GPA) to host (HPA) mapping
(EPT mapping) through the function ``prepare_sos_vm_memmap()``
The ACRN hypervisor creates the Service VM's guest (GPA) to host (HPA) mapping
(EPT mapping) through the function ``prepare_service_vm_memmap()``
when it creates the Service VM. It follows these rules:
- Identical mapping
- Map all memory range with UNCACHED type
- Map all memory ranges with UNCACHED type
- Remap RAM entries in E820 (revised) with WRITE-BACK type
- Unmap ACRN hypervisor memory range
- Unmap all platform EPC resource
- Unmap all platform EPC resources
- Unmap ACRN hypervisor emulated vLAPIC/vIOAPIC MMIO range
The guest to host mapping is static for the Service VM; it will not
@ -515,9 +519,9 @@ in the hypervisor for Service VM.
Trusty
******
For an Android User OS, there is a secure world named trusty world
support, whose memory must be secured by the ACRN hypervisor and
must not be accessible by the Seervice/User VM normal world.
For an Android User VM, there is a secure world named trusty world,
whose memory must be secured by the ACRN hypervisor and
must not be accessible by the Service VM and User VM normal world.
.. figure:: images/mem-image18.png
:align: center

View File

@ -3,13 +3,13 @@
Partition Mode
##############
ACRN is a type-1 hypervisor that supports running multiple guest operating
ACRN is a type 1 hypervisor that supports running multiple guest operating
systems (OS). Typically, the platform BIOS/bootloader boots ACRN, and
ACRN loads single or multiple guest OSes. Refer to :ref:`hv-startup` for
details on the start-up flow of the ACRN hypervisor.
ACRN supports two modes of operation: Sharing mode and Partition mode.
This document describes ACRN's high-level design for Partition mode
ACRN supports two modes of operation: sharing mode and partition mode.
This document describes ACRN's high-level design for partition mode
support.
.. contents::
@ -23,10 +23,10 @@ In partition mode, ACRN provides guests with exclusive access to cores,
memory, cache, and peripheral devices. Partition mode enables developers
to dedicate resources exclusively among the guests. However, there is no
support today in x86 hardware or in ACRN to partition resources such as
peripheral buses (e.g. PCI). On x86 platforms that support Cache
Allocation Technology (CAT) and Memory Bandwidth Allocation(MBA), resources
such as Cache and memory bandwidth can be used by developers to partition
L2, Last Level Cache (LLC), and memory bandwidth among the guests. Refer to
peripheral buses (e.g., PCI). On x86 platforms that support Cache
Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA), developers
can partition Level 2 (L2) cache, Last Level Cache (LLC), and memory bandwidth
among the guests. Refer to
:ref:`hv_rdt` for more details on ACRN RDT high-level design and
:ref:`rdt_configuration` for RDT configuration.
@ -34,15 +34,15 @@ L2, Last Level Cache (LLC), and memory bandwidth among the guests. Refer to
ACRN expects static partitioning of resources either by code
modification for guest configuration or through compile-time config
options. All the devices exposed to the guests are either physical
resources or are emulated in the hypervisor. So, there is no need for a
device-model and Service OS. :numref:`pmode2vms` shows a partition mode
resources or are emulated in the hypervisor. There is no need for a
Device Model and Service VM. :numref:`pmode2vms` shows a partition mode
example of two VMs with exclusive access to physical resources.
.. figure:: images/partition-image3.png
:align: center
:name: pmode2vms
Partition Mode example with two VMs
Partition Mode Example with Two VMs
Guest Info
**********
@ -51,12 +51,14 @@ ACRN uses multi-boot info passed from the platform bootloader to know
the location of each guest kernel in memory. ACRN creates a copy of each
guest kernel into each of the guests' memory. Current implementation of
ACRN requires developers to specify kernel parameters for the guests as
part of guest configuration. ACRN picks up kernel parameters from guest
part of the guest configuration. ACRN picks up kernel parameters from the guest
configuration and copies them to the corresponding guest memory.
.. figure:: images/partition-image18.png
:align: center
Guest Info
ACRN Setup for Guests
*********************
@ -65,9 +67,9 @@ Cores
ACRN requires the developer to specify the number of guests and the
cores dedicated for each guest. Also, the developer needs to specify
the physical core used as the Boot Strap Processor (BSP) for each guest. As
the physical core used as the bootstrap processor (BSP) for each guest. As
the processors are brought to life in the hypervisor, it checks if they are
configured as BSP for any of the guests. If a processor is BSP of any of
configured as BSP for any of the guests. If a processor is the BSP of any of
the guests, ACRN proceeds to build the memory mapping for the guest,
mptable, E820 entries, and zero page for the guest. As described in
`Guest info`_, ACRN creates copies of guest kernel and kernel
@ -78,7 +80,7 @@ events in chronological order.
:align: center
:name: partBSPsetup
Event Order for Processor Set Up
Event Order for Processor Setup
Memory
======
@ -103,7 +105,7 @@ E820 and Zero Page Info
A default E820 is used for all the guests in partition mode. This table
shows the reference E820 layout. Zero page is created with this
e820 info for all the guests.
E820 info for all the guests.
+------------------------+
| RAM |
@ -146,9 +148,9 @@ host-bridge at BDF (Bus Device Function) 0.0:0 to each guest. Access to
I/O - Passthrough Devices
=========================
ACRN, in partition mode, supports passing thru PCI devices on the
ACRN, in partition mode, supports passing through PCI devices on the
platform. All the passthrough devices are exposed as child devices under
the virtual host bridge. ACRN does not support either passing thru
the virtual host bridge. ACRN does not support either passing through
bridges or emulating virtual bridges. Passthrough devices should be
statically allocated to each guest using the guest configuration. ACRN
expects the developer to provide the virtual BDF to BDF of the
@ -158,11 +160,11 @@ configuration.
Runtime ACRN Support for Guests
*******************************
ACRN, in partition mode, supports an option to passthrough LAPIC of the
ACRN, in partition mode, supports an option to pass through LAPIC of the
physical CPUs to the guest. ACRN expects developers to specify if the
guest needs LAPIC passthrough using guest configuration. When guest
guest needs LAPIC passthrough using guest configuration. When the guest
configures vLAPIC as x2APIC, and if the guest configuration has LAPIC
passthrough enabled, ACRN passes the LAPIC to the guest. Guest can access
passthrough enabled, ACRN passes the LAPIC to the guest. The guest can access
the LAPIC hardware directly without hypervisor interception. During
runtime of the guest, this option differentiates how ACRN supports
inter-processor interrupt handling and device interrupt handling. This
@ -171,13 +173,14 @@ will be discussed in detail in the corresponding sections.
.. figure:: images/partition-image16.png
:align: center
LAPIC Passthrough
Guest SMP Boot Flow
===================
The core APIC IDs are reported to the guest using mptable info. SMP boot
flow is similar to sharing mode. Refer to :ref:`vm-startup`
for guest SMP boot flow in ACRN. Partition mode guests startup is same as
for guest SMP boot flow in ACRN. Partition mode guest startup is the same as
the Service VM startup in sharing mode.
Inter-Processor Interrupt (IPI) Handling
@ -195,7 +198,7 @@ Guests With LAPIC Passthrough
ACRN supports passthrough if and only if the guest is using x2APIC mode
for the vLAPIC. In LAPIC passthrough mode, writes to the Interrupt Command
Register (ICR) x2APIC MSR is intercepted. Guest writes the IPI info,
Register (ICR) x2APIC MSR are intercepted. The guest writes the IPI info,
including vector, and destination APIC IDs to the ICR. Upon an IPI request
from the guest, ACRN does a sanity check on the destination processors
programmed into the ICR. If the destination is a valid target for the guest,
@ -205,6 +208,7 @@ corresponding to the destination processor info in the ICR.
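
A simplified sketch of this intercept path is shown below. The x2APIC ICR
layout (vector in bits 7:0, destination in bits 63:32) is architectural; the
helper names and the validation details are assumptions.

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* Assumed helpers, standing in for the hypervisor's own routines. */
   bool dest_belongs_to_guest(uint32_t dest_apic_id);
   void send_phys_ipi(uint32_t dest_apic_id, uint8_t vector);

   static void icr_write_intercept_example(uint64_t icr)
   {
       uint8_t  vector = (uint8_t)(icr & 0xFFU);
       uint32_t dest   = (uint32_t)(icr >> 32);

       /* Sanity check the programmed destination; only forward the IPI if
        * the target is a valid member of the requesting guest. */
       if (dest_belongs_to_guest(dest))
           send_phys_ipi(dest, vector);
   }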
.. figure:: images/partition-image14.png
:align: center
IPI Handling for Guests With LAPIC Passthrough
Passthrough Device Support
==========================
@ -224,6 +228,7 @@ passthrough devices. Refer to the `I/O`_ section below for more details.
.. figure:: images/partition-image1.png
:align: center
Configuration Space Access
DMA
---
@ -247,12 +252,13 @@ ACRN supports I/O for passthrough devices with two restrictions.
As the guest PCI sub-system scans the PCI bus and assigns a Guest Physical
Address (GPA) to the MMIO BAR, ACRN maps the GPA to the address in the
physical BAR of the passthrough device using EPT. The following timeline chart
explains how PCI devices are assigned to guest and BARs are mapped upon
explains how PCI devices are assigned to the guest and how BARs are mapped upon
guest initialization.
.. figure:: images/partition-image13.png
:align: center
I/O for Passthrough Devices
Interrupt Configuration
-----------------------
@ -265,21 +271,21 @@ INTx Support
ACRN expects developers to identify the interrupt line info (0x3CH) from
the physical BAR of the passthrough device and build an interrupt entry in
the mptable for the corresponding guest. As guest configures the vIOAPIC
the mptable for the corresponding guest. As the guest configures the vIOAPIC
for the interrupt RTE, ACRN writes the info from the guest RTE into the
physical IOAPIC RTE. Upon the guest kernel request to mask the interrupt,
ACRN writes to the physical RTE to mask the interrupt at the physical
IOAPIC. When guest masks the RTE in vIOAPIC, ACRN masks the interrupt
IOAPIC. When the guest masks the RTE in vIOAPIC, ACRN masks the interrupt
RTE in the physical IOAPIC. Level triggered interrupts are not
supported.
MSI Support
~~~~~~~~~~~
Guest reads/writes to PCI configuration space for configuring MSI
interrupts using an address. Data and control registers are passthrough to
The guest reads/writes to the PCI configuration space to configure MSI
interrupts using an address. Data and control registers are passed through to
the physical BAR of the passthrough device. Refer to `Configuration
space access`_ for details on how the PCI configuration space is emulated.
Space Access`_ for details on how the PCI configuration space is emulated.
Virtual Device Support
======================
@ -296,18 +302,19 @@ Interrupt Delivery
Guests Without LAPIC Passthrough
--------------------------------
In partition mode of ACRN, interrupts stay disabled after a vmexit. The
In ACRN partition mode, interrupts stay disabled after a vmexit. The
processor does not take interrupts when it is executing in VMX root
mode. ACRN configures the processor to take vmexit upon external
interrupt if the processor is executing in VMX non-root mode. Upon an
external interrupt, after sending EOI to the physical LAPIC, ACRN
injects the vector into the vLAPIC of vCPU currently running on the
processor. Guests using Linux as kernel, uses vectors less than 0xECh
injects the vector into the vLAPIC of the vCPU currently running on the
processor. Guests using a Linux kernel use vectors less than 0xECh
for device interrupts.
.. figure:: images/partition-image20.png
:align: center
Interrupt Delivery for Guests Without LAPIC Passthrough
Guests With LAPIC Passthrough
-----------------------------
@ -320,7 +327,7 @@ Hypervisor IPI Service
======================
ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual
device interrupts (e.g. vUART to vCPUs), and others.
device interrupts (e.g., vUART to vCPUs), and others.
Guests Without LAPIC Passthrough
--------------------------------
@ -330,7 +337,7 @@ Hypervisor IPIs work the same way as in sharing mode.
Guests With LAPIC Passthrough
-----------------------------
Since external interrupts are passthrough to the guest IDT, IPIs do not
Since external interrupts are passed through to the guest IDT, IPIs do not
trigger vmexit. ACRN uses NMI delivery mode and the NMI exiting is
chosen for vCPUs. At the time of NMI interrupt on the target processor,
if the processor is in non-root mode, vmexit happens on the processor
@ -339,7 +346,7 @@ and the event mask is checked for servicing the events.
Debug Console
=============
For details on how hypervisor console works, refer to
For details on how the hypervisor console works, refer to
:ref:`hv-console`.
For a guest console in partition mode, ACRN provides an option to pass
@ -356,16 +363,16 @@ Hypervisor Console
ACRN uses the TSC deadline timer to provide a timer service. The hypervisor
console uses a timer on CPU0 to poll characters on the serial device. To
support LAPIC passthrough, the TSC deadline MSR is passthrough and the local
support LAPIC passthrough, the TSC deadline MSR is passed through and the local
timer interrupt is also delivered to the guest IDT. Instead of the TSC
deadline timer, ACRN uses the VMX preemption timer to poll the serial device.
Guest Console
=============
ACRN exposes vUART to partition mode guests. vUART uses vPIC to inject
interrupt to the guest BSP. In cases of the guest having more than one core,
ACRN exposes vUART to partition mode guests. vUART uses vPIC to inject an
interrupt to the guest BSP. If the guest has more than one core,
during runtime, vUART might need to inject an interrupt to the guest BSP from
another core (other than BSP). As mentioned in section <Hypervisor IPI
service>, ACRN uses NMI delivery mode for notifying the CPU running the BSP
another core (other than BSP). As mentioned in section `Hypervisor IPI
Service`_, ACRN uses NMI delivery mode for notifying the CPU running the BSP
of the guest.

View File

@ -0,0 +1,139 @@
.. _hv_vcat:
Enable vCAT
###########
vCAT refers to the virtualization of Cache Allocation Technology (CAT), one of the
RDT (Resource Director Technology) technologies.
ACRN vCAT is built on top of ACRN RDT: ACRN RDT provides a number of physical CAT resources
(COS IDs + cache ways), and ACRN vCAT exposes a number of virtual CAT resources to VMs
and then transparently maps them to the assigned physical CAT resources in the ACRN hypervisor.
A VM can take advantage of vCAT to prioritize and partition virtual cache ways for its own tasks.
In the current CAT implementation, one COS ID corresponds to one ``IA32_type_MASK_n`` (type: L2 or L3,
n ranges from 0 to ``MAX_CACHE_CLOS_NUM_ENTRIES`` - 1) MSR and a bit in a capacity bitmask (CBM)
corresponds to one cache way.
On current generation systems, the L3 cache is normally shared by all CPU cores on the same socket, and
the L2 cache is generally shared only by the hyperthreads on a core. However, when dealing with ACRN
vCAT COS ID assignment, it is currently assumed that all the L2/L3 caches (and therefore all COS IDs)
are system-wide caches shared by all cores in the system; this is done for convenience and to simplify
the vCAT configuration process. If vCAT is enabled for a VM (abbreviated as vCAT VM), there should not
be any COS ID overlap between a vCAT VM and any other VM, i.e., the vCAT VM has exclusive use of the
assigned COS IDs.
When assigning cache ways, however, the VM can be given exclusive, shared, or mixed access to the cache
ways depending on particular performance needs. For example, use dedicated cache ways for an RTVM, and use
shared cache ways among low-priority VMs.
In ACRN, the CAT resources allocated for vCAT VMs are determined in :ref:`vcat_configuration`.
For further details on the RDT, refer to the ACRN RDT high-level design :ref:`hv_rdt`.
High Level ACRN vCAT Design
***************************
ACRN CAT virtualization support can be divided into two parts:
- CAT Capability Exposure to Guest VM
- CAT Resources (COS IDs + Cache Ways) Management
The figure below shows the high-level design of vCAT in ACRN:
.. figure:: images/vcat-hld.png
:align: center
CAT Capability Exposure to Guest VM
***********************************
ACRN exposes CAT capability and resource to a Guest VM via vCPUID and vMSR, as explained
in the following sections.
vCPUID
======
CPUID Leaf 07H
--------------
- CPUID.(EAX=07H, ECX=0).EBX.PQE[bit 15]: Supports RDT capability if 1. This bit will be set for a vCAT VM.
CPUID Leaf 10H
--------------
**CAT Resource Type and Capability Enumeration**
- CPUID.(EAX=10H, ECX=0):EBX[1]: If 1, indicates L3 CAT support for a vCAT VM.
- CPUID.(EAX=10H, ECX=0):EBX[2]: If 1, indicates L2 CAT support for a vCAT VM.
- CPUID.(EAX=10H, ECX=1): CAT capability enumeration sub-leaf for L3. Reports L3 COS_MAX and CBM_LEN to a vCAT VM.
- CPUID.(EAX=10H, ECX=2): CAT capability enumeration sub-leaf for L2. Reports L2 COS_MAX and CBM_LEN to a vCAT VM.
vMSR
====
The following CAT MSRs will be virtualized for a vCAT VM:
- IA32_PQR_ASSOC
- IA32_type_MASK_0 ~ IA32_type_MASK_n
By default, after reset, all CPU cores are assigned to COS 0 and all IA32_type_MASK_n MSRs
are programmed to allow fill into all cache ways.
CAT Resources (COS IDs + Cache Ways) Management
************************************************
All accesses to the CAT MSRs are intercepted by vMSR and control is passed to vCAT, which will perform
the following actions:
- Intercept IA32_PQR_ASSOC MSR to re-map virtual COS ID to physical COS ID.
Upon writes, store the re-mapped physical COS ID into its vCPU ``msr_store_area``
data structure guest part. It will be loaded to physical IA32_PQR_ASSOC on each VM-Enter.
- Intercept IA32_type_MASK_n MSRs to re-map virtual CBM to physical CBM. Upon writes,
program the re-mapped physical CBM into the corresponding physical IA32_type_MASK_n MSR.
Several vCAT P2V (physical to virtual) and V2P (virtual to physical)
mappings exist, as illustrated in the following pseudocode:
.. code-block:: none
struct acrn_vm_config *vm_config = get_vm_config(vm_id)
max_pcbm = vm_config->max_type_pcbm (type: l2 or l3)
mask_shift = ffs64(max_pcbm)
vcosid = vmsr - MSR_IA32_type_MASK_0
pcosid = vm_config->pclosids[vcosid]
pmsr = MSR_IA32_type_MASK_0 + pcosid
pcbm = vcbm << mask_shift
vcbm = pcbm >> mask_shift
Where
``vm_config->pclosids[]``: array of physical COS IDs, where each corresponds to one ``vcpu_clos`` that
is defined in the scenario file
``max_pcbm``: a bitmask that selects all the physical cache ways assigned to the VM, corresponds to
the nth ``CLOS_MASK`` that is defined in the scenario file, where n = the first physical COS ID assigned
= ``vm_config->pclosids[0]``
``ffs64(max_pcbm)``: find the first (least significant) bit set in ``max_pcbm`` and return
the index of that bit.
``MSR_IA32_type_MASK_0``: 0xD10 for L2, 0xC90 for L3
``vcosid``: virtual COS ID, always starts from 0
``pcosid``: corresponding physical COS ID for a given ``vcosid``
``vmsr``: virtual MSR address, passed to vCAT handlers by the
caller functions ``rdmsr_vmexit_handler()``/``wrmsr_vmexit_handler()``
``pmsr``: physical MSR address
``vcbm``: virtual CBM, passed to vCAT handlers by the
caller functions ``rdmsr_vmexit_handler()``/``wrmsr_vmexit_handler()``
``pcbm``: physical CBM
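
The same mapping can be written as a small C helper, shown here for L3. The
structure fields, their types, and the array size are assumptions that mirror
the names used in the pseudocode above.

.. code-block:: c

   #include <stdint.h>

   #define MSR_IA32_L2_MASK_0  0xD10U   /* base MSR for L2 CBMs */
   #define MSR_IA32_L3_MASK_0  0xC90U   /* base MSR for L3 CBMs */

   struct vm_config_example {
       uint64_t max_l3_pcbm;   /* all physical L3 cache ways given to the VM */
       uint16_t pclosids[4];   /* physical COS IDs assigned to the VM */
   };

   /* Map a guest write of virtual CBM (vcbm) for virtual COS ID (vcosid)
    * onto the physical MSR address and physical CBM. */
   static void vcat_v2p_example(const struct vm_config_example *cfg,
                                uint16_t vcosid, uint64_t vcbm,
                                uint32_t *pmsr, uint64_t *pcbm)
   {
       /* ffs64(max_pcbm): index of the first physical cache way assigned */
       unsigned int mask_shift = (unsigned int)__builtin_ctzll(cfg->max_l3_pcbm);
       uint16_t pcosid = cfg->pclosids[vcosid];

       *pmsr = MSR_IA32_L3_MASK_0 + pcosid;  /* physical IA32_L3_MASK_n */
       *pcbm = vcbm << mask_shift;           /* virtual CBM -> physical CBM */
   }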

View File

@ -3,7 +3,10 @@
RTC Virtualization
##################
This document describes the RTC virtualization implementation in
ACRN device model.
This document describes the real-time clock (RTC) virtualization implementation
in the ACRN Device Model.
vRTC is a read-only RTC for the pre-launched VM, Service OS, and post-launched RT VM. It supports RW for the CMOS address port 0x70 and RO for the CMOS data port 0x71. Reads to the CMOS RAM offsets are fetched by reading the CMOS h/w directly and writes to CMOS offsets are discarded.
vRTC is a read-only RTC for the pre-launched VM, Service VM, and post-launched
RTVM. It supports read/write (RW) for the CMOS address port 0x70 and read only
(RO) for the CMOS data port 0x71. Reads to the CMOS RAM offsets are fetched from
the CMOS hardware directly. Writes to the CMOS offsets are discarded.
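
A condensed illustration of that behavior is shown below; the function names
and the CMOS read helper are placeholders rather than the actual Device Model
code.

.. code-block:: c

   #include <stdint.h>

   static uint8_t cmos_addr;              /* last value written to port 0x70 */
   uint8_t cmos_hw_read(uint8_t offset);  /* assumed: reads the physical CMOS */

   void vrtc_addr_port_write_example(uint8_t val)
   {
       cmos_addr = val;                   /* port 0x70 is read/write */
   }

   uint8_t vrtc_data_port_read_example(void)
   {
       return cmos_hw_read(cmos_addr);    /* port 0x71 reads the real CMOS */
   }

   void vrtc_data_port_write_example(uint8_t val)
   {
       (void)val;                         /* writes to port 0x71 are discarded */
   }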

View File

@ -12,7 +12,7 @@ and their peripheral devices.
:align: center
:name: usb-virt-arch
USB architecture overview
USB Architecture Overview
The ACRN USB virtualization includes
@ -21,55 +21,30 @@ emulation of three components, described here and shown in
- **xHCI DM** (Host Controller Interface) provides multiple
instances of virtual xHCI controllers to share among multiple User
OSes, each USB port can be assigned and dedicated to a VM by user
VMs; each USB port can be assigned and dedicated to a VM by user
settings.
- **xDCI controller** (Device Controller Interface)
can be passed through to the
specific User OS with I/O MMU assistance.
specific User VM with I/O MMU assistance.
- **DRD DM** (Dual Role Device) emulates the PHY MUX control
logic. The sysfs interface in a User VM is used to trap the switch operation
into DM, and the the sysfs interface in the Service VM is used to operate on the physical
registers to switch between DCI and HCI role.
into DM, and the sysfs interface in the Service VM is used to operate on the
physical registers to switch between DCI and HCI role.
On Intel Apollo Lake platform, the sysfs interface path is
``/sys/class/usb_role/intel_xhci_usb_sw/role``. If user echos string
``device`` to role node, the usb phy will be connected with xDCI controller as
device mode. Similarly, by echoing ``host``, the usb phy will be
connected with xHCI controller as host mode.
On Apollo Lake platforms, the sysfs interface path is
``/sys/class/usb_role/intel_xhci_usb_sw/role``. If the user echos the string
``device`` to the role node, the USB PHY will be connected with the xDCI
controller as
device mode. Similarly, by echoing ``host``, the USB PHY will be
connected with the xHCI controller as host mode.
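
For example, the role switch can also be driven programmatically by writing to
that sysfs node. This is a minimal sketch, assuming sufficient privileges in
the VM that owns the node.

.. code-block:: c

   #include <stdio.h>

   /* Write "host" or "device" to the role node named above. */
   static int set_usb_role_example(const char *role)
   {
       FILE *f = fopen("/sys/class/usb_role/intel_xhci_usb_sw/role", "w");

       if (f == NULL)
           return -1;
       fputs(role, f);
       fclose(f);
       return 0;
   }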
An xHCI register access from a User VM will induce EPT trap from the User VM to
An xHCI register access from a User VM will induce an EPT trap from the User VM
to
DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make
the subsystem run.
USB Devices Supported by USB Mediator
*************************************
The following USB devices are supported for the WaaG and LaaG operating systems.
+--------------+---------+---------+
| Device | WaaG OS | LaaG OS |
+==============+=========+=========+
| USB Storage | Y | Y |
+--------------+---------+---------+
| USB Mouse | Y | Y |
+--------------+---------+---------+
| USB Keyboard | Y | Y |
+--------------+---------+---------+
| USB Camera | Y | Y |
+--------------+---------+---------+
| USB Headset | Y | Y |
+--------------+---------+---------+
| USB Hub | Y | Y |
| (20 ports max| | |
| per VM) | | |
+--------------+---------+---------+
.. note::
The above information is current as of ACRN 1.4.
USB Host Virtualization
***********************
@ -80,28 +55,28 @@ USB host virtualization is implemented as shown in
:align: center
:name: xhci-dm-arch
xHCI DM software architecture
xHCI DM Software Architecture
The yellow-colored components make up the ACRN USB stack supporting xHCI
The following components make up the ACRN USB stack supporting xHCI
DM:
- **xHCI DM** emulates the xHCI controller logic following the xHCI spec;
- **xHCI DM** emulates the xHCI controller logic following the xHCI spec.
- **USB core** is a middle abstract layer to isolate the USB controller
emulators and USB device emulators.
- **USB Port Mapper** maps the specific native physical USB
ports to virtual USB ports. It communicate with
native USB ports though libusb.
ports to virtual USB ports. It communicates with
native USB ports through libusb.
All the USB data buffers from a User VM are in the form of TRB
(Transfer Request Blocks), according to xHCI spec. xHCI DM will fetch
these data buffers when the related xHCI doorbell registers are set.
These data will convert to *struct usb_data_xfer* and, through USB core,
forward to the USB port mapper module which will communicate with native USB
The data will be converted to ``struct usb_data_xfer`` and, through USB core,
forwarded to the USB port mapper module, which will communicate with the native USB
stack over libusb.
The device model configuration command syntax for xHCI is as follows::
The Device Model configuration command syntax for xHCI is as follows::
-s <slot>,xhci,[bus1-port1,bus2-port2]
@ -124,34 +99,34 @@ USB DRD (Dual Role Device) emulation works as shown in this figure:
.. figure:: images/usb-image31.png
:align: center
xHCI DRD DM software architecture
xHCI DRD DM Software Architecture
ACRN emulates the DRD hardware logic of an Intel Apollo Lake platform to
support the dual role requirement. The DRD feature is implemented as xHCI
ACRN emulates the DRD hardware logic of an Apollo Lake platform to
support the dual role requirement. The DRD feature is implemented as an xHCI
vendor extended capability. ACRN emulates
the same way, so the native driver can be reused in a User VM. When a User VM DRD
driver reads or writes the related xHCI extended registers, these access will
driver reads or writes the related xHCI extended registers, these accesses will
be captured by xHCI DM. xHCI DM uses the native DRD related
sysfs interface to do the Host/Device mode switch operations.
The device model configuration command syntax for xHCI DRD is as
The Device Model configuration command syntax for xHCI DRD is as
follows::
-s <slot>,xhci,[bus1-port1,bus2-port2],cap=platform
- *cap*: cap means virtual xHCI capability. This parameter
indicates virtual xHCI should emulate the named platform's xHCI
capabilities.
indicates virtual xHCI should emulate the named platform's xHCI
capabilities.
A simple example::
-s 7,xhci,1-2,2-2,cap=apl
This configuration means the virtual xHCI should emulate xHCI
capabilities for the Intel Apollo Lake platform, which supports DRD
capabilities for the Apollo Lake platform, which supports the DRD
feature.
Interface Specification
***********************
.. note:: reference doxygen-generated API content
.. note:: Reference the Doxygen-generated API content.

View File

@ -414,7 +414,7 @@ prepared in the Service VM before we start. We need to create a bridge and at
least one TAP device (two TAP devices are needed to create a dual
virtual NIC) and attach a physical NIC and TAP device to the bridge.
.. figure:: images/network-virt-sos-infrastruct.png
.. figure:: images/network-virt-service-vm-infrastruct.png
:align: center
:width: 900px
:name: net-virt-infra

View File

@ -13,14 +13,14 @@ Architecture
The green components are parts of the ACRN solution while the gray
components are parts of Linux software or third party tools.
virtio-rnd is implemented as a virtio legacy device in the ACRN device
model (DM), and is registered as a PCI virtio device to the guest OS
virtio-rnd is implemented as a virtio legacy device in the ACRN Device
Model (DM), and is registered as a PCI virtio device to the guest OS
(User VM). Tools such as :command:`od` (dump a file in octal or other format) can
be used to read random values from ``/dev/random``. This device file in the
User VM is bound with the frontend virtio-rng driver. (The guest kernel must
be built with ``CONFIG_HW_RANDOM_VIRTIO=y``). The backend
virtio-rnd reads the HW random value from ``/dev/random`` in the SOS and sends
them to the frontend.
virtio-rnd reads the HW random values from ``/dev/random`` in the Service
VM and sends them to the frontend.
.. figure:: images/virtio-hld-image61.png
:align: center
@ -31,7 +31,7 @@ them to the frontend.
How to Use
**********
Add a PCI slot to the device model acrn-dm command line; for example::
Add a PCI slot to the Device Model acrn-dm command line; for example::
-s <slot_number>,virtio-rnd
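
Inside the User VM, a quick sanity check that the frontend is delivering
entropy is to read a few bytes from ``/dev/random``, either with the ``od``
command mentioned above or with a small program such as this sketch:

.. code-block:: c

   #include <stdio.h>

   int main(void)
   {
       unsigned char buf[16];
       FILE *f = fopen("/dev/random", "rb");

       if (f == NULL)
           return 1;
       if (fread(buf, 1, sizeof(buf), f) == sizeof(buf)) {
           /* Print the 16 random bytes in hex. */
           for (size_t i = 0; i < sizeof(buf); i++)
               printf("%02x ", buf[i]);
           printf("\n");
       }
       fclose(f);
       return 0;
   }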

View File

@ -2,9 +2,10 @@ digraph G {
rankdir=LR;
rank=same;
bgcolor="transparent";
uosl1 [label="UOS_Loader"]
uservml1 [label="User VM OS\nBootloader"]
acrn_init [shape=box style="rounded,filled" label="ACRN"]
acrn_switch [shape=box style="rounded,filled" label="ACRN"]
uosl2 [label="UOS_Loader"]
uosl1 -> acrn_init -> "Trusty" -> acrn_switch -> uosl2;
uservml2 [label="User VM OS\nBootloader"]
uservml1 -> acrn_init -> "Trusty" -> acrn_switch -> uservml2;
}

View File

@ -113,8 +113,8 @@ level are shown below.
**Return error code**
The hypervisor shall return an error code to the VM when the below cases
occur. The error code shall indicate the error type detected (e.g. invalid
parameter, device not found, device busy, resource unavailable, etc).
occur. The error code shall indicate the error type detected (e.g., invalid
parameter, device not found, device busy, and resource unavailable).
This method applies to the following case:
@ -123,8 +123,8 @@ level are shown below.
**Inform the safety VM through specific register or memory area**
The hypervisor shall inform the safety VM through a specific register or
memory area when the below cases occur. The VM will decide how to handle
the related error. This shall be done only after the VM (Safety OS or
Service OS) dedicated to error handling has started.
the related error. This shall be done only after the VM (Safety VM or
Service VM) dedicated to error handling has started.
This method applies to the following cases:
@ -273,7 +273,7 @@ The rules of error detection and error handling on a module level are shown in
| Resource Class | Failure | Error Detection via | Error Handling Policy | Example |
| | Mode | Hypervisor | | |
+====================+===========+============================+===========================+=========================+
| Internal data of | N/A | Partial. | The hypervisor shall use | virtual PCI device |
| Internal data of | N/A | Partial. | The hypervisor shall use | Virtual PCI device |
| the hypervisor | | The related pre-conditions | the internal resource/data| information, defined |
| | | are required. | directly. | with array |
| | | | | ``pci_vdevs[]`` |
@ -570,7 +570,7 @@ The following table shows some use cases of module level configuration design:
- This module is used to virtualize part of LAPIC functionalities.
It can be done via APICv or software emulation depending on CPU
capabilities.
For example, KBL Intel NUC doesn't support virtual-interrupt delivery,
For example, Kaby Lake NUC doesn't support virtual-interrupt delivery,
while other platforms support it.
- If a function pointer is used, the prerequisite is
"hv_operation_mode == OPERATIONAL".

View File

@ -32,10 +32,13 @@ Trusty Architecture
.. figure:: images/trusty-arch.png
:align: center
:width: 800px
:name: Trusty Architectural diagram
:name: trusty-architectural-diagram
Trusty Architectural Diagram
.. note::
Trusty OS is running in Secure World in the architecture drawing above.
The Trusty OS is running in the Secure World in the architecture drawing
above.
.. _trusty-hypercalls:
@ -51,7 +54,7 @@ There are a few :ref:`hypercall_apis` that are related to Trusty.
Trusty Boot Flow
****************
By design, the User OS bootloader (``UOS_Loader``) will trigger the Trusty
By design, the User VM OS bootloader will trigger the Trusty
boot process. The complete boot flow is illustrated below.
.. graphviz:: images/trusty-boot-flow.dot
@ -62,12 +65,12 @@ boot process. The complete boot flow is illustrated below.
As shown in the above figure, here are some details about the Trusty
boot flow processing:
1. UOS_Loader
1. User VM OS bootloader
a. Load and verify Trusty image from virtual disk
#. Allocate runtime memory for trusty
#. Do ELF relocation of trusty image and get entry address
#. Call ``hcall_initialize_trusty`` with trusty memory base and
#. Allocate runtime memory for Trusty
#. Do ELF relocation of Trusty image and get entry address
#. Call ``hcall_initialize_trusty`` with Trusty memory base and
entry address
#. ACRN (``hcall_initialize_trusty``)
@ -83,41 +86,44 @@ boot flow processing:
a. Save World context for the World that caused this ``vmexit``
(Secure World)
#. Restore World context for next World (Normal World (UOS_Loader))
#. Resume to next World (UOS_Loader)
#. UOS_Loader
#. Restore World context for next World (Normal World: User VM OS bootloader)
#. Resume to next World (User VM OS bootloader)
#. User VM OS bootloader
a. Continue to boot
EPT Hierarchy
*************
As per the Trusty design, Trusty can access Normal World's memory, but Normal
World cannot access Secure World's memory. Hence it means Secure World EPTP
page table hierarchy must contain normal world GPA address space, while Trusty
world's GPA address space must be removed from the Normal world EPTP page
table hierarchy.
As per the Trusty design, Trusty can access the Normal World's memory, but the
Normal World cannot access the Secure World's memory. Hence it means the Secure
World EPTP page table hierarchy must contain the Normal World GPA address space,
while the Trusty world's GPA address space must be removed from the Normal World
EPTP page table hierarchy.
Design
======
Put Secure World's GPA to very high position: 511 GB - 512 GB. The PML4/PDPT
for Trusty World are separated from Normal World. PD/PT for low memory
(< 511 GB) are shared in both Trusty World's EPT and Normal World's EPT.
PD/PT for high memory (>= 511 GB) are valid for Trusty World's EPT only.
Place the Secure World's GPA range at a very high position: 511 GB - 512 GB. The
PML4/PDPT for the Trusty World are separate from the Normal World's. PD and PT
for low memory (< 511 GB) are shared in both the Trusty World's EPT and the
Normal World's EPT. PD and PT for high memory (>= 511 GB) are valid for the
Trusty World's EPT only.
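As a rough back-of-the-envelope view (an illustration, not taken from the ACRN
sources): with a 4-level EPT, each PML4 entry spans 512 GB and each PDPT entry
spans 1 GB, so a Secure World window at 511 GB - 512 GB touches only the top two
levels of the hierarchy:
.. code-block:: none
   PML4 entry span = 512 GB   ->  all GPAs below 512 GB fall under PML4 entry 0
   PDPT entry span = 1 GB     ->  [511 GB, 512 GB) occupies PDPT entry 511 only
   PD/PT below 511 GB         ->  identical in both worlds, so they can be shared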
Benefit
=======
This design will benefit the EPT changes of Normal World. There are
requirements to modify Normal World's EPT during runtime such as increasing
memory, changing attributes, etc. If such behavior happened, only PD and PT
for Normal World need to be updated.
This design will benefit the EPT changes of the Normal World. There are
requirements to modify the Normal World's EPT during runtime such as increasing
memory and changing attributes. If such behavior happens, only PD and PT
for the Normal World need to be updated.
.. figure:: images/ept-hierarchy.png
:align: center
:width: 800px
:name: EPT hierarchy pic
:name: ept-hierarchy
EPT Hierarchy
API
===

View File

@ -163,10 +163,10 @@ To set up the ACRN build environment on the development computer:
cd ~/acrn-work
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout v2.6
git checkout v2.7
cd ..
git clone --depth 1 --branch release_2.6 https://github.com/projectacrn/acrn-kernel.git
git clone --depth 1 --branch release_2.7 https://github.com/projectacrn/acrn-kernel.git
.. _gsg-board-setup:
@ -180,7 +180,7 @@ information extracted from the target system. The file is used to configure the
ACRN hypervisor, because each hypervisor instance is specific to your target
hardware.
You use the **board inspector tool** to generate the board
You use the **Board Inspector tool** to generate the board
configuration file.
.. important::
@ -192,7 +192,7 @@ configuration file.
Install OS on the Target
============================
The target system needs Ubuntu 18.04 to run the board inspector tool.
The target system needs Ubuntu 18.04 to run the Board Inspector tool.
To install Ubuntu 18.04:
@ -248,7 +248,7 @@ Configure Target BIOS Settings
Generate a Board Configuration File
=========================================
#. On the target system, install the board inspector dependencies:
#. On the target system, install the Board Inspector dependencies:
.. code-block:: bash
@ -281,7 +281,7 @@ Generate a Board Configuration File
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash idle=nomwait iomem=relaxed intel_idle.max_cstate=0 intel_pstate=disable"
These settings allow the board inspector tool to
These settings allow the Board Inspector tool to
gather important information about the board.
#. Save and close the file.
@ -293,7 +293,7 @@ Generate a Board Configuration File
sudo update-grub
reboot
#. Copy the board inspector tool folder from the development computer to the
#. Copy the Board Inspector tool folder from the development computer to the
target via USB disk as follows:
a. Move to the development computer.
@ -311,32 +311,32 @@ Generate a Board Configuration File
Confirm that only one disk name appears. You'll use that disk name in
the following steps.
#. Copy the board inspector tool folder from the acrn-hypervisor source code to the USB disk:
#. Copy the Board Inspector tool folder from the acrn-hypervisor source code to the USB disk:
.. code-block:: bash
cd ~/acrn-work/
disk="/media/$USER/"$(ls /media/$USER)
cp -r acrn-hypervisor/misc/config_tools/board_inspector/ $disk/
sync && sudo umount $disk
cp -r acrn-hypervisor/misc/config_tools/board_inspector/ "$disk"/
sync && sudo umount "$disk"
#. Insert the USB disk into the target system.
#. Copy the board inspector tool from the USB disk to the target:
#. Copy the Board Inspector tool from the USB disk to the target:
.. code-block:: bash
mkdir -p ~/acrn-work
disk="/media/$USER/"$(ls /media/$USER)
cp -r $disk/board_inspector ~/acrn-work
cp -r "$disk"/board_inspector ~/acrn-work
#. On the target, load the ``msr`` driver, used by the board inspector:
#. On the target, load the ``msr`` driver, used by the Board Inspector:
.. code-block:: bash
sudo modprobe msr
#. Run the board inspector tool ( ``board_inspector.py``)
#. Run the Board Inspector tool (``board_inspector.py``)
to generate the board configuration file. This
example uses the parameter ``my_board`` as the file name.
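(The exact command appears in the unchanged part of this step. As a reminder,
the invocation follows this general pattern, assuming the tool folder was copied
to ``~/acrn-work/board_inspector`` as in the previous steps; treat it as a
sketch.)
.. code-block:: bash
   cd ~/acrn-work/board_inspector/
   sudo python3 board_inspector.py my_board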
@ -360,8 +360,8 @@ Generate a Board Configuration File
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp ~/acrn-work/board_inspector/my_board.xml $disk/
sync && sudo umount $disk
cp ~/acrn-work/board_inspector/my_board.xml "$disk"/
sync && sudo umount "$disk"
#. Insert the USB disk into the development computer.
@ -370,8 +370,8 @@ Generate a Board Configuration File
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp $disk/my_board.xml ~/acrn-work
sudo umount $disk
cp "$disk"/my_board.xml ~/acrn-work
sudo umount "$disk"
.. _gsg-dev-setup:
@ -380,7 +380,7 @@ Generate a Board Configuration File
Generate a Scenario Configuration File and Launch Scripts
*********************************************************
You use the **ACRN configurator** to generate scenario configuration files and
You use the **ACRN Configurator** to generate scenario configuration files and
launch scripts.
A **scenario configuration file** is an XML file that holds the parameters of
@ -388,18 +388,18 @@ a specific ACRN configuration, such as the number of VMs that can be run,
their attributes, and the resources they have access to.
A **launch script** is a shell script that is used to configure and create a
User VM. Each User VM has its own launch script.
post-launched User VM. Each User VM has its own launch script.
To generate a scenario configuration file and launch scripts:
#. On the development computer, install ACRN configurator dependencies:
#. On the development computer, install ACRN Configurator dependencies:
.. code-block:: bash
cd ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app
sudo pip3 install -r requirements
#. Launch the ACRN configurator:
#. Launch the ACRN Configurator:
.. code-block:: bash
@ -407,7 +407,7 @@ To generate a scenario configuration file and launch scripts:
#. Your web browser should open the website `<http://127.0.0.1:5001/>`__
automatically, or you may need to visit this website manually.
The ACRN configurator is supported on Chrome and Firefox.
The ACRN Configurator is supported on Chrome and Firefox.
#. Click the **Import Board XML** button and browse to the board configuration
file ``my_board.xml`` previously generated. When it is successfully
@ -461,9 +461,10 @@ To generate a scenario configuration file and launch scripts:
.. image:: ./images/gsg_config_launch_default.png
:class: drop-shadow
#. In the dialog box, select **shared_launch_6uos** as the default launch
setting and click **OK**. Because our sample ``shared`` scenario defines six
User VMs, we're using this ``shared_launch_6uos`` launch XML configuration.
#. In the dialog box, select **shared_launch_6user_vm** as the default launch
setting and click **OK**. Because our sample ``shared`` scenario defines
six User VMs, we're using this ``shared_launch_6user_vm`` launch XML
configuration.
.. image:: ./images/gsg_config_launch_load.png
:class: drop-shadow
@ -480,10 +481,10 @@ To generate a scenario configuration file and launch scripts:
.. image:: ./images/gsg_config_launch_save.png
:class: drop-shadow
#. Confirm that ``launch_uos_id3.sh`` appears in the expected output
#. Confirm that ``launch_user_vm_id3.sh`` appears in the expected output
directory::
ls ~/acrn-work/my_board/output/launch_uos_id3.sh
ls ~/acrn-work/my_board/output/launch_user_vm_id3.sh
#. Close the browser and press :kbd:`CTRL` + :kbd:`C` to terminate the
``acrn_configurator.py`` program running in the terminal window.
@ -510,7 +511,7 @@ Build ACRN
.. code-block:: bash
cd ~/acrn-work/acrn-kernel
cp kernel_config_uefi_sos .config
cp kernel_config_service_vm .config
make olddefconfig
make -j $(nproc) targz-pkg
@ -525,58 +526,73 @@ Build ACRN
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp linux-5.10.52-acrn-sos-x86.tar.gz $disk/
cp ~/acrn-work/acrn-hypervisor/build/hypervisor/acrn.bin $disk/
cp ~/acrn-work/my_board/output/launch_uos_id3.sh $disk/
cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl $disk/
cp ~/acrn-work/acrn-hypervisor/build/acrn-2.6-unstable.tar.gz $disk/
sync && sudo umount $disk/
cp linux-5.10.65-acrn-service-vm-x86.tar.gz "$disk"/
cp ~/acrn-work/acrn-hypervisor/build/hypervisor/acrn.bin "$disk"/
cp ~/acrn-work/my_board/output/launch_user_vm_id3.sh "$disk"/
cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl "$disk"/
cp ~/acrn-work/acrn-hypervisor/build/acrn-2.7-unstable.tar.gz "$disk"/
sync && sudo umount "$disk"/
Even though our sample default scenario defines six User VMs, we're only
going to launch one of them, so we'll only need the one launch script.
.. note:: The :file:`serial.conf` file is generated only if non-standard
vUARTs (not COM1-COM4) are configured for the Service VM in the scenario XML
file. Copy the ``serial.conf`` file using::
cp ~/acrn-work/acrn-hypervisor/build/hypervisor/serial.conf "$disk"/
#. Insert the USB disk you just used into the target system and run these
commands to copy the tar files locally:
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp $disk/linux-5.10.52-acrn-sos-x86.tar.gz ~/acrn-work
cp $disk/acrn-2.6-unstable.tar.gz ~/acrn-work
cp "$disk"/linux-5.10.65-acrn-service-vm-x86.tar.gz ~/acrn-work
cp "$disk"/acrn-2.7-unstable.tar.gz ~/acrn-work
#. Extract the Service VM files onto the target system:
.. code-block:: bash
cd ~/acrn-work
sudo tar -zxvf linux-5.10.52-acrn-sos-x86.tar.gz -C / --keep-directory-symlink
sudo tar -zxvf linux-5.10.65-acrn-service-vm-x86.tar.gz -C / --keep-directory-symlink
This tar extraction replaces parts of the Ubuntu installation we installed
and used for running the board inspector, with the Linux kernel we built
and used for running the Board Inspector, with the Linux kernel we built
based on the board and scenario configuration.
#. Extract the ACRN tools and images:
.. code-block:: bash
sudo tar -zxvf acrn-2.6-unstable.tar.gz -C / --keep-directory-symlink
sudo tar -zxvf acrn-2.7-unstable.tar.gz -C / --keep-directory-symlink
#. Copy a few additional ACRN files to the expected locations:
.. code-block:: bash
sudo mkdir -p /boot/acrn/
sudo cp $disk/acrn.bin /boot/acrn
sudo cp $disk/iasl /usr/sbin/
cp $disk/launch_uos_id3.sh ~/acrn-work
sudo umount $disk/
sudo cp "$disk"/acrn.bin /boot/acrn
sudo cp "$disk"/serial.conf /etc
sudo cp "$disk"/iasl /usr/sbin/
cp "$disk"/launch_user_vm_id3.sh ~/acrn-work
sudo umount "$disk"/
.. rst-class:: numbered-step
Install ACRN
************
In the following steps, you will configure GRUB on the target system.
In the following steps, you will install the serial configuration tool and
configure GRUB on the target system.
#. Install the serial configuration tool in the target system as follows:
.. code-block:: bash
sudo apt install setserial
#. On the target, find the root filesystem (rootfs) device name by using the
``lsblk`` command:
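A sketch of this step (the device name below is only an example; use the one
reported on your system):
.. code-block:: bash
   lsblk                       # note the device mounted at /, e.g. nvme0n1p2
   sudo blkid /dev/nvme0n1p2   # record its UUID and PARTUUID for the GRUB entry below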
@ -629,15 +645,15 @@ In the following steps, you will configure GRUB on the target system.
#. Add the ACRN Service VM to the GRUB boot menu:
a. Edit the GRUB 40_custom file. The following command uses ``vi``, but
a. Edit the GRUB ``40_custom`` file. The following command uses ``vi``, but
you can use any text editor.
.. code-block:: bash
sudo vi /etc/grub.d/40_custom
#. Add the following text at the end of the file. Replace ``<UUID>`` and
``<PARTUUID>`` with the output from the previous step.
#. Add the following text at the end of the file. Replace ``UUID`` and
``PARTUUID`` with the output from the previous step.
.. code-block:: bash
:emphasize-lines: 6,8
@ -650,12 +666,10 @@ In the following steps, you will configure GRUB on the target system.
search --no-floppy --fs-uuid --set "UUID"
echo 'loading ACRN...'
multiboot2 /boot/acrn/acrn.bin root=PARTUUID="PARTUUID"
module2 /boot/vmlinuz-5.10.52-acrn-sos Linux_bzImage
module2 /boot/vmlinuz-5.10.65-acrn-service-vm Linux_bzImage
}
#. Save and close the file.
#. Correct example image
Example:
.. code-block:: console
@ -667,9 +681,11 @@ In the following steps, you will configure GRUB on the target system.
search --no-floppy --fs-uuid --set "3cac5675-e329-4cal-b346-0a3e65f99016"
echo 'loading ACRN...'
multiboot2 /boot/acrn/acrn.bin root=PARTUUID="03db7f45-8a6c-454b-adf7-30343d82c4f4"
module2 /boot/vmlinuz-5.10.52-acrn-sos Linux_bzImage
module2 /boot/vmlinuz-5.10.65-acrn-service-vm Linux_bzImage
}
#. Save and close the file.
#. Make the GRUB menu visible when
booting and make it load the Service VM kernel by default:
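The exact values are shown in the unchanged part of this step. As a hedged
sketch, the usual approach on Ubuntu is to edit :file:`/etc/default/grub` (the
``GRUB_DEFAULT`` value below is a placeholder for your ACRN menu entry) and then
regenerate the GRUB configuration:
.. code-block:: bash
   sudo vi /etc/default/grub
   #   GRUB_DEFAULT="<title of the ACRN menu entry added above>"   <- placeholder
   #   GRUB_TIMEOUT=5
   #   comment out GRUB_TIMEOUT_STYLE=hidden so the menu is displayed
   sudo update-grub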
@ -759,23 +775,41 @@ Launch the User VM
.. code-block:: bash
vi ~/acrn-work/launch_uos_id3.sh
vi ~/acrn-work/launch_user_vm_id3.sh
#. Look for the line that contains the term ``virtio-blk`` and replace the
existing image file path with your ISO image file path. In the following
example, the ISO image file path is
``/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso``.
``/home/acrn/acrn-work/ubuntu-18.04.6-desktop-amd64.iso``. Here is the
``launch_user_vm_id3.sh`` before editing:
.. code-block:: bash
:emphasize-lines: 4
acrn-dm -A -m $mem_size -s 0:0,hostbridge -U 615db82a-e189-4b4f-8dbb-d321343e4ab3 \
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
--mac_seed $mac_seed \
$logger_setting \
-s 7,virtio-blk,/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso \
-s 8,virtio-net,tap_YaaG3 \
-s 6,virtio-console,@stdio:stdio_port \
-s 9,virtio-blk,./YaaG.img \
-s 10,virtio-net,tap_YaaG3 \
-s 8,virtio-console,@stdio:stdio_port \
--ovmf /usr/share/acrn/bios/OVMF.fd \
--cpu_affinity 0,1 \
-s 1:0,lpc \
$vm_name
And here is the example ``launch_user_vm_id3.sh`` after editing:
.. code-block:: bash
:emphasize-lines: 4
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
--mac_seed $mac_seed \
$logger_setting \
-s 9,virtio-blk,/home/acrn/acrn-work/ubuntu-18.04.6-desktop-amd64.iso \
-s 10,virtio-net,tap_YaaG3 \
-s 8,virtio-console,@stdio:stdio_port \
--ovmf /usr/share/acrn/bios/OVMF.fd \
--cpu_affinity 0,1 \
-s 1:0,lpc \
$vm_name
@ -785,10 +819,10 @@ Launch the User VM
.. code-block:: bash
sudo chmod +x ~/acrn-work/launch_uos_id3.sh
sudo chmod +x ~/acrn-work/launch_user_vm_id3.sh
sudo chmod +x /usr/bin/acrn-dm
sudo chmod +x /usr/sbin/iasl
sudo ~/acrn-work/launch_uos_id3.sh
sudo ~/acrn-work/launch_user_vm_id3.sh
#. It will take a few seconds for the User VM to boot and start running the
Ubuntu image. Confirm that you see the console of the User VM on the Service
@ -837,3 +871,4 @@ Next Steps
:ref:`overview_dev` describes the ACRN configuration process, with links to
additional details.

View File

@ -6,10 +6,9 @@ Glossary of Terms
.. glossary::
:sorted:
AaaG
LaaG
WaaG
Acronyms for Android, Linux, and Windows as a Guest VM. ACRN supports a
Acronyms for Linux and Windows as a Guest VM. ACRN supports a
variety of :term:`User VM` OS choices. Your choice depends on the
needs of your application. For example, Windows is popular for
Human-Machine Interface (HMI) applications in industrial applications,
@ -172,7 +171,7 @@ Glossary of Terms
User VM
A :term:`VM` where user-defined environments and applications run. User VMs can
run different OSes based on their needs, including for example, Ubuntu for
an AI application, Android or Windows for a Human-Machine Interface, or a
an AI application, Windows for a Human-Machine Interface, or a
hard real-time control OS such as Zephyr, VxWorks, or RT-Linux for soft or
hard real-time control. There are three types of ACRN User VMs: pre-launched,
post-launched standard, and post-launched real-time. *(Historically, a

View File

@ -76,7 +76,7 @@ ACRN has these key capabilities and benefits:
non-safety-critical domains coexisting on one SoC using Intel VT-backed
isolation.
* **Adaptable and Flexible**: ACRN has multi-OS support with efficient
virtualization for VM OSs including Linux, Android, Zephyr, and Windows, as
virtualization for VM OSs including Linux, Zephyr, and Windows, as
needed for a variety of application use cases. ACRN scenario configurations
support shared, partitioned, and hybrid VM models to support a variety of
application use cases.
@ -148,7 +148,7 @@ shared among the Service VM and User VMs. The Service VM is launched by the
hypervisor after any pre-launched VMs are launched. The Service VM can access
remaining hardware resources directly by running native drivers and provides
device sharing services to the User VMs, through the Device Model. These
post-launched User VMs can run one of many OSs including Ubuntu, Android,
post-launched User VMs can run one of many OSs including Ubuntu or
Windows, or a real-time OS such as Zephyr, VxWorks, or Xenomai. Because of its
real-time capability, a real-time VM (RTVM) can be used for software
programmable logic controller (PLC), inter-process communication (IPC), or

View File

@ -6,10 +6,10 @@ Launch Configuration Options
As explained in :ref:`acrn_configuration_tool`, launch configuration files
define post-launched User VM settings. This document describes these option settings.
``uos``:
``user_vm``:
Specify the User VM ``id`` to the Service VM.
``uos_type``:
``user_vm_type``:
Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``,
``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``YOCTO``, ``UBUNTU``,
``ZEPHYR`` or ``VXWORKS``.

View File

@ -1,196 +1,119 @@
.. _hardware:
Supported Hardware
##################
We welcome community contributions to help build Project ACRN support
for a broad collection of architectures and platforms.
Minimum System Requirements for Installing ACRN
***********************************************
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Hardware | Minimum Requirements | Recommended |
+========================+===================================+=================================================================================+
| Processor | Compatible x86 64-bit processor | 2 core with Intel Hyper-threading Technology enabled in the BIOS or more cores |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| System memory | 4GB RAM | 8GB or more (< 32G) |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Storage capabilities | 20GB | 120GB or more |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
Minimum Requirements for Processor
**********************************
1 GB Large pages
Known Limitations
*****************
Platforms with multiple PCI segments are not supported.
ACRN assumes the following conditions are satisfied from the Platform BIOS:
* All the PCI device BARs must be assigned resources, including SR-IOV VF BARs if a device supports it.
* Bridge windows for PCI bridge devices and the resources for root bus must be programmed with values
that enclose resources used by all the downstream devices.
* There should be no conflict in resources among the PCI devices or with other platform devices.
.. _hardware_tested:
Tested Platforms by ACRN Release
********************************
These platforms have been tested by the development team with the noted ACRN
release version and may not work as expected on later ACRN releases.
.. _NUC11TNHi5:
https://ark.intel.com/content/www/us/en/ark/products/205594/intel-nuc-11-pro-kit-nuc11tnhi5.html
.. _NUC6CAYH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc6cayh.html
.. _NUC7i5BNH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/NUC7i5BNH.html
.. _NUC7i7BNH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/NUC7i7BNH.html
.. _NUC7i5DNH:
https://ark.intel.com/content/www/us/en/ark/products/122488/intel-nuc-kit-nuc7i5dnhe.html
.. _NUC7i7DNH:
https://ark.intel.com/content/www/us/en/ark/products/130393/intel-nuc-kit-nuc7i7dnhe.html
.. _WHL-IPC-I7:
http://www.maxtangpc.com/industrialmotherboards/142.html#parameters
.. _UP2-N3350:
.. _UP2-N4200:
.. _UP2-x5-E3940:
.. _UP2 Shop:
https://up-shop.org/home/270-up-squared.html
For general instructions setting up ACRN on supported hardware platforms, visit the :ref:`gsg` page.
.. list-table:: Supported Target Platforms
:widths: 20 20 12 5 5
:header-rows: 1
* - Intel x86 Platform Family
- Product / Kit Name
- Board configuration
- ACRN Release
- Graphics
* - **Tiger Lake**
- `NUC11TNHi5`_ |br| (Board: NUC11TNBi5)
- :acrn_file:`nuc11tnbi5.xml <misc/config_tools/data/nuc11tnbi5/nuc11tnbi5.xml>`
- v2.5
- GVT-d
* - **Whiskey Lake**
- `WHL-IPC-I7`_ |br| (Board: WHL-IPC-I7)
-
- v2.0
- GVT-g
* - **Kaby Lake** |br| (Dawson Canyon)
- `NUC7i7DNH`_ |br| (board: NUC7i7DNB)
-
- v1.6.1
- GVT-g
* - **Apollo Lake**
- `NUC6CAYH`_, |br| `UP2-N3350`_, `UP2-N4200`_, |br| `UP2-x5-E3940`_
-
- v1.0
- GVT-g
If an XML file is not provided by project ACRN for your board, we recommend you
use the board inspector tool to generate an XML file specifically for your board.
Refer to :ref:`board_inspector_tool` for more details on using the board inspector
tool.
Tested Hardware Specifications Detail
*************************************
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
| Platform (Intel x86) | Product/Kit Name | Hardware Class | Description |
+===========================+========================+========================+============================================================+
| | **Tiger Lake** | | NUC11TNHi5 | Processor | - Intel |copy| Core |trade| i5-113G7 CPU (8M Cache, |
| | | | (Board: NUC11TNBi5) | | up to 4.2 GHz) |
| | +------------------------+------------------------------------------------------------+
| | | Graphics | - Dual HDMI 2.0b w/HDMI CEC, Dual DP 1.4a via Type C |
| | | | - Supports 4 displays |
| | +------------------------+------------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 64 GB, 3200 MHz), 1.2V |
| | +------------------------+------------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for storage |
| | | | 22x80 NVMe (M), 22x42 SATA (B) |
| | +------------------------+------------------------------------------------------------+
| | | Serial Port | - Yes |
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
| | **Whiskey Lake** | | WHL-IPC-I7 | Processor | - Intel |copy| Core |trade| i7-8565U CPU @ 1.80GHz (4C8T) |
| | | | (Board: WHL-IPC-I7) | | |
| | +------------------------+------------------------------------------------------------+
| | | Graphics | - HD Graphics 610/620 |
| | | | - ONE HDMI\* 1.4a ports supporting 4K at 60 Hz |
| | +------------------------+------------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+------------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
| | | | - One M.2 connector for 3G/4G module, supporting |
| | | | LTE Category 6 and above |
| | | | - One M.2 connector for 2242 SSD |
| | | | - TWO SATA3 port (only one if Celeron onboard) |
| | +------------------------+------------------------------------------------------------+
| | | Serial Port | - Yes |
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
| | **Kaby Lake** | | NUC7i7DNH | Processor | - Intel |copy| Core |trade| i7-8650U Processor |
| | (Dawson Canyon) | | (Board: NUC7i7DNB) | | (8M Cache, up to 4.2 GHz) |
| | +------------------------+------------------------------------------------------------+
| | | Graphics | - Dual HDMI 2.0a, 4-lane eDP 1.4 |
| | | | - Supports 2 displays |
| | +------------------------+------------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+------------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector supporting 22x80 M.2 SSD |
| | | | - One M.2 connector supporting 22x30 M.2 card |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | +------------------------+------------------------------------------------------------+
| | | Serial Port | - Yes |
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
| | **Apollo Lake** | | NUC6CAYH | Processor | - Intel |copy| Celeron |trade| CPU J3455 @ 1.50GHz (4C4T) |
| | (Arches Canyon) | | (Board: NUC6CAYB) | | |
| | +------------------------+------------------------------------------------------------+
| | | Graphics | - Intel |copy| HD Graphics 500 |
| | | | - VGA (HDB15); HDMI 2.0 |
| | +------------------------+------------------------------------------------------------+
| | | System memory | - Two DDR3L SO-DIMM sockets |
| | | | (up to 8 GB, 1866 MHz), 1.35V |
| | +------------------------+------------------------------------------------------------+
| | | Storage capabilities | - SDXC slot with UHS-I support on the side |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | | | (up to 9.5 mm thickness) |
| | +------------------------+------------------------------------------------------------+
| | | Serial Port | - No |
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
| | **Apollo Lake** | | UP2 - N3350 | Processor | - Intel |copy| Celeron |trade| N3350 (2C2T, up to 2.4 GHz)|
| | | UP2 - N4200 | | - Intel |copy| Pentium |trade| N4200 (4C4T, up to 2.5 GHz)|
| | | UP2 - x5-E3940 | | - Intel |copy| Atom |trade| x5-E3940 (4C4T) |
| | | | (up to 1.8GHz)/x7-E3950 (4C4T, up to 2.0GHz) |
| | +------------------------+------------------------------------------------------------+
| | | Graphics | - 2GB (single channel) LPDDR4 |
| | | | - 4GB/8GB (dual channel) LPDDR4 |
| | +------------------------+------------------------------------------------------------+
| | | System memory | - Intel |copy| Gen 9 HD, supporting 4K Codec |
| | | | Decode and Encode for HEVC4, H.264, VP8 |
| | +------------------------+------------------------------------------------------------+
| | | Storage capabilities | - 32 GB / 64 GB / 128 GB eMMC |
| | +------------------------+------------------------------------------------------------+
| | | Serial Port | - Yes |
+---------------------------+------------------------+------------------------+------------------------------------------------------------+
.. # vim: tw=200
.. _hardware:
Supported Hardware
##################
The ACRN project development team is continually adding support for new hardware
products, as documented below. As we add new hardware, we also lower our support
level for older hardware products. We welcome community contributions to help
build ACRN support for a broad collection of architectures and platforms.
.. _hardware_tested:
Selecting Hardware
******************
When you are selecting hardware to use with ACRN, consider the
following:
* When the development team is working on a new ACRN version, we focus our
development and testing on one product. The product is typically a board
or kit from the latest processor family.
* We also provide a level of maintenance for some older products.
* For all products, we welcome and encourage the community to contribute support
by submitting patches for code, documentation, tests, and more.
The following table shows supported processor families, along with the
products that the development team has tested. The products are categorized
into three support levels: Release, Maintenance, and Community. Each
level includes the activities described in the lower levels.
.. _NUC11TNHi5:
https://ark.intel.com/content/www/us/en/ark/products/205594/intel-nuc-11-pro-kit-nuc11tnhi5.html
.. _NUC6CAYH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc6cayh.html
.. _NUC7i5BNH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/NUC7i5BNH.html
.. _NUC7i7BNH:
https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/NUC7i7BNH.html
.. _NUC7i5DNH:
https://ark.intel.com/content/www/us/en/ark/products/122488/intel-nuc-kit-nuc7i5dnhe.html
.. _NUC7i7DNHE:
https://ark.intel.com/content/www/us/en/ark/products/130393/intel-nuc-kit-nuc7i7dnhe.html
.. _WHL-IPC-I5:
http://www.maxtangpc.com/industrialmotherboards/142.html#parameters
.. _UP2-N3350:
.. _UP2-N4200:
.. _UP2-x5-E3940:
.. _UP2 Shop:
https://up-shop.org/home/270-up-squared.html
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| | | .. rst-class:: centered |
| | | |
| | | ACRN Version |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Intel Processor Family | Tested Product | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | centered | centered | centered | centered | centered | centered |
| | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 |
+========================+====================================+========================+========================+========================+========================+========================+========================+
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: |
| | | | | | centered | centered |
| | | | | | | |
| | | | | | Release | Maintenance |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | centered | centered | centered |
| | | | | | | |
| | | | | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+-------------------------------------------------+-------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | `UP2-N3350`_, | centered | centered | centered |
| | | `UP2-N4200`_, | | | |
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+---------------------------------------------------------------------------------------------------+
* **Release**: New ACRN features are complete and tested for the listed product.
This product is recommended for this ACRN version. Support for older products
will transition to the maintenance category as development continues for newer
products.
* **Maintenance**: For new ACRN versions with maintenance-level support, we
verify our :ref:`gsg` instructions to ensure the baseline development workflow
works and the hypervisor will boot on the listed products. While we don't
verify that all new features will work on this product, we will do best-effort
support on reported issues. Maintenance support for a hardware product
is typically done for two subsequent ACRN releases (about six months).
* **Community**: The community provides best-effort support for reported bugs
on the listed product for that ACRN version.
Urgent bug and security fixes are targeted to the latest release only.
Developers should either update to the most current release or back-port these
fixes to their own production release.
When you start to explore ACRN, we recommend you select
the latest product from the table above. You can also choose
other products and give them a try. In either case, use the
:ref:`board_inspector_tool` to generate a board configuration file
you will use to configure the ACRN hypervisor, as described in the
:ref:`gsg`. We encourage you to share your findings about unlisted products on
the acrn-user@lists.projectacrn.org mailing list.
.. # vim: tw=200

View File

@ -74,7 +74,7 @@ New and updated reference documents are available, including:
* :ref:`asa`
* GVT-g-porting (obsolete with v2.6)
* :ref:`vbsk-overhead`
* VBS-K Framework Virtualization Overhead Analysis
* :ref:`asm_coding_guidelines`
* :ref:`c_coding_guidelines`
* :ref:`contribute_guidelines`

View File

@ -1,7 +1,7 @@
.. _release_notes_2.7:
ACRN v2.7 (DRAFT)
#################
ACRN v2.7 (Dec 2021)
####################
We are pleased to announce the release of the Project ACRN hypervisor
version 2.7.
@ -85,6 +85,24 @@ Update Scenario Names
VMs and the Service VM provides resource emulation and sharing for
post-launched User VMs, all in the same system configuration.
User-Friendly VM Names
Instead of using a UUID as the User VM identifier, we're now using a
user-friendly VM name.
Extend Use of CAT Cache Tuning to VMs
In previous releases, Cache Allocation Technology (CAT) was available only
at the hypervisor level and with per-pCPU granularity. In this v2.7 release,
each VM with exclusive cache resources can partition them with
per-thread granularity and allocate cache resources to prioritized tasks.
Expand Passthrough Device Use Cases to Pre-Launched VMs
We now allow pre-launched VMs (in partitioned or hybrid scenarios) to use
graphics device passthrough for improved performance, a feature previously
available to only post-launched VMs.
Trusted Platform Module (TPM) 2.0 and its associated resources can also be
passed through to post-launched VMs.
Upgrading to v2.7 From Previous Releases
****************************************
@ -99,18 +117,42 @@ that is essential to build ACRN. Compared to previous versions, ACRN v2.7 adds
the following hardware information to board XMLs to support new features and
fixes.
- list features here
- Always initialize ``hw_ignore`` when parsing ``DMAR``.
The new board XML can be generated using the ACRN board inspector in the same
way as ACRN v2.6. Refer to :ref:`acrn_config_workflow` for a complete list of
steps to deploy and run the tool.
Add New Configuration Options
=============================
Update Configuration Options
============================
In v2.7, the following elements are added to scenario XML files.
- list elements here
- :option:`vm.name` (This is a required element. Names must be unique, up to 15
characters long, and contain no space characters.)
- :option:`hv.CAPACITIES.MAX_VM_NUM` (Default value is ``8``)
- :option:`hv.FEATURES.RDT.VCAT_ENABLED` (Default value is ``n``)
The following elements were removed.
- ``KATA_VM`` VM type.
- ``hv.CAPACITIES.MAX_EFI_MMAP_ENTRIES``
- ``hv.MEMORY.HV_RAM_SIZE`` (Hypervisor RAM size is now computed by the linker)
As part of replacing the obsolete terms UOS and SOS with consistent names, we
also changed configuration option names and values that used these terms:
- The :option:`vm.vm_type` option value ``SOS_VM`` is now ``SERVICE_VM``
- The :option:`vm.legacy_vuart.base` option value ``SOS_VM_COM1_BASE`` is now
``SERVICE_VM_COM1_BASE``, with the same change for COM2, COM3, and COM4 base
and for the :option:`vm.legacy_vuart.irq` option values.
In v2.7, the ``acrn-dm`` command line parameter ``--cpu_affinity`` is now mandatory
when launching a User VM. If the launch XML settings used to generate the launch
scripts do not specify a ``cpu_affinity`` value, the ACRN Configurator will look for
it in the scenario XML settings. Verify that your existing launch scripts
specify this ``--cpu_affinity`` parameter, because ``acrn-dm`` will now complain if
it's missing.
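For reference, here is a hedged illustration of where the parameter appears in a
generated launch script (the fragment below is condensed from the sample script
in the Getting Started Guide; the slot numbers and affinity values are examples
only):
.. code-block:: bash
   # illustrative fragment; the affinity values come from your launch/scenario XML
   acrn-dm -A -m $mem_size -s 0:0,hostbridge \
      --cpu_affinity 0,1 \
      -s 1:0,lpc \
      $vm_name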
Document Updates
****************
@ -144,6 +186,73 @@ Fixed Issues Details
.. comment example item
- :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected
- :acrn-issue:`5112` - ACRN debug shell help output behavior, line length, and misspellings
- :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected
- :acrn-issue:`5692` - Update config option documentation in schema definition files
- :acrn-issue:`6012` - [Mainline][PTCM] [ConfigTool]Obsolete terms cleanup for SSRAM
- :acrn-issue:`6024` - [TGL][Master][IVSHMEM] Only one share memory device in SOS while enabled two from the scenario xml
- :acrn-issue:`6270` - [ADL-S][Industry][Yocto] WaaG boot up but no UI display with more than 1G memory
- :acrn-issue:`6284` - [v2.6] vulnerable coding style in hypervisor and DM
- :acrn-issue:`6340` - [EF]Invalid LPC entry prevents GOP driver from working properly in WaaG for DP3
- :acrn-issue:`6360` - ACRN Makefile missing dependencies
- :acrn-issue:`6366` - TPM pass-thru shall be able to support start method 6, not only support Start Method of 7
- :acrn-issue:`6387` - enable GVT-d for pre-launched linux guest
- :acrn-issue:`6405` - [ADL-S][Industry][Yocto] WaaG BSOD in startup when run reboot or create/destroy stability test.
- :acrn-issue:`6417` - ACRN ConfigTool improvement from DX view
- :acrn-issue:`6428` - [acrn-configuration-tool] Fail to generate launch script when disable CPU sharing
- :acrn-issue:`6431` - virtio_console use-after-free
- :acrn-issue:`6434` - HV panic when SOS VM boot 5.4 kernel
- :acrn-issue:`6442` - [EF]Post-launched VMs do not boot with "EFI Network" enabled
- :acrn-issue:`6461` - [config_tools] kernel load addr/entry addr should not be configurable for kernel type KERNEL_ELF
- :acrn-issue:`6473` - [HV]HV can't be used after dumpreg rtvm vcpu
- :acrn-issue:`6476` - [hypercube][TGL][ADL]pci_xhci_insert_event SEGV on read from NULL
- :acrn-issue:`6481` - ACRN on QEMU can't boot up with v2.6 branch
- :acrn-issue:`6482` - [ADL-S][RTVM]rtvm poweroff causes sos to crash
- :acrn-issue:`6494` - acrn_trace build failure with latest e2fsprogs v1.46.2 version
- :acrn-issue:`6502` - [ADL][HV][UC lock] SoS kernel panic when #GP for UC lock enabled
- :acrn-issue:`6508` - [HV]Refine pass-thru device PIO BAR handling
- :acrn-issue:`6518` - [hypercube][ADL]acrn-dm program crash during hypercube testing
- :acrn-issue:`6528` - [TGL][HV][hybrid_rt] dmidecode Fail on pre-launched RTVM
- :acrn-issue:`6530` - [ADL-S][EHL][Hybrid]Path of sos rootfs in hybrid.xml is wrong
- :acrn-issue:`6533` - [hypercube][tgl][ADL] mem leak while poweroff in guest
- :acrn-issue:`6542` - [hypercube][tgl][ADL] mem leak while poweroff in guest
- :acrn-issue:`6562` - [ADL-S][Config tool] fail to tpm_getcap -l
- :acrn-issue:`6565` - [acrn-configuration-tool] "modprobe pci_stub" should be executed before unbinding passthru devices
- :acrn-issue:`6572` - [ADL-S][Acrntrace]failed to run acrntrace test
- :acrn-issue:`6584` - HV:check vmx capability
- :acrn-issue:`6592` - [doc] failed to make hvdiffconfig
- :acrn-issue:`6610` - [config tool vUART] IRQ of vUART of pnp 8250 is not generated correctly
- :acrn-issue:`6620` - acrn-config: pass-thru device PIO BAR identical mapping
- :acrn-issue:`6663` - Current HV_RAM_SIZE calculation algorithm sometimes cause build failure
- :acrn-issue:`6674` - [TGL][HV][hybrid] (v2.7 only) during boot zephyr64.elf find HV error: "Unable to copy HPA 0x100000 to GPA 0x7fe00000 in VM0"
- :acrn-issue:`6677` - Service VM shall not have capability to access IOMMU
- :acrn-issue:`6704` - [ADL-S][Partitioned]Kernel panic when boot Pre-launched RTVM with 8 pci devices passthru
- :acrn-issue:`6709` - Issues for platform ICX-D HCC enabling
- :acrn-issue:`6719` - Board Inspector tool crashes if cpuid is not installed
- :acrn-issue:`6724` - (v2.7 only) Remove the GET_PLATFORM_INFO support in ACRN
- :acrn-issue:`6736` - Improved readability desirable for the Board Inspector tool
- :acrn-issue:`6743` - acrn-crashlog/acrnprobe compilation failure with OpenSSL 3.0
- :acrn-issue:`6752` - ACRN HV shows multiple PCIe devices with "out of mmio window" warnings - false alert
- :acrn-issue:`6755` - [icx-d lcc]CAT_capability enable RDT fail
- :acrn-issue:`6767` - [acrn-configuration-tool] Getting duplicate PT_SLOT value If generate launch script continuously through the UI
- :acrn-issue:`6769` - [v2.7] vulnerable coding style in hypervisor and DM
- :acrn-issue:`6778` - [ADL][SSRAM][Master]Error messages output during RTCM unit test
- :acrn-issue:`6780` - [ADL][SSRAM][Master]ACRN boot crash with SSRAM enabled
- :acrn-issue:`6799` - [REG][ADL-S][VxWorks] SOS force reboot while launching vxworks
- :acrn-issue:`6834` - [Acrn-hypervisor][Debug release]Failed to build hypervisor with hv_debug_release enable
- :acrn-issue:`6848` - [ADL][RTVM]ACPI error while launching rtvm
- :acrn-issue:`6851` - [DM] segfault on virtio_console_control_tx()
- :acrn-issue:`6877` - [DM][ASAN] UAF in mevent_handle()
- :acrn-issue:`6885` - adl-s-shared sos can't get in
- :acrn-issue:`6888` - [ADL-S]Yaag reboots too slowly
- :acrn-issue:`6899` - [ADL-S][shared] Core type error when launch RTVM use atom core.
- :acrn-issue:`6907` - [ADL-S][ICX-D][shared][Regression]Multi RT launch failed with V2.7_RC3 build.
- :acrn-issue:`6908` - [ADL-S][Multi_RT]Shutdown one RT and others will hang when launch multi RT.
- :acrn-issue:`6919` - [hypercube][ADL] mem leak while power off in guest (phase-II)
- :acrn-issue:`6931` - [ADL][CPUID] RTVM CPUID 0x2 EBX value is not equal to HV cpuid 0x2 EBX
Known Issues
************
- :acrn-issue:`6631` - [KATA][5.10 Kernel]failed to start docker with Service VM 5.10 kernel
- :acrn-issue:`6978` - [TGL] openstack failed with ACRN v2.7

View File

@ -332,3 +332,48 @@ img.drop-shadow {
.lastupdated {
float:right;
}
/* some custom classes used in rst-class directives */
.centered {
text-align: center;
}
/* colors from ACRN brand pallet */
.bg-acrn-green {
background-color: #006368;
color: white;
}
.bg-acrn-lightgreen {
background-color: #69BFAD;
}
.bg-acrn-brown {
background-color: #998265;
color: white;
}
.bg-acrn-lightbrown {
background-color: #D7AF96;
}
.bg-acrn-blue {
background-color: #232256;
color: white;
}
.bg-acrn-red {
background-color: #7F0F24;
color: white;
}
.bg-acrn-gradient {
background: linear-gradient(135deg, #232256 0%, #69BFAD 100%);
color: white;
}
.bg-lightyellow {
background-color: lightyellow;
}
.bg-lightgreen {
background-color: #D0F0C0; /* tea green */
}
.bg-lavender {
background-color: lavender;
}
.bg-lightgrey {
background-color: lightgrey;
}

View File

@ -5,16 +5,16 @@ Enable ACRN Secure Boot With GRUB
This document shows how to enable ACRN secure boot with GRUB including:
- ACRN Secure Boot Sequence
- Generate GPG Key
- Setup Standalone GRUB EFI Binary
- Enable UEFI Secure Boot
- `ACRN Secure Boot Sequence`_
- `Generate GPG Key`_
- `Setup Standalone GRUB EFI Binary`_
- `Enable UEFI Secure Boot`_
**Validation Environment:**
- Hardware Platform: TGL-I7, Supported hardware described in
- Hardware Platform: Tiger Lake, supported hardware described in
:ref:`hardware`.
- ACRN Scenario: Industry
- ACRN Scenario: Shared
- Service VM: Yocto & Ubuntu
- GRUB: 2.04
@ -25,7 +25,7 @@ This document shows how to enable ACRN secure boot with GRUB including:
ACRN Secure Boot Sequence
*************************
ACRN can be booted by Multiboot compatible bootloader, following diagram
ACRN can be booted by a multiboot compatible bootloader. The following diagram
illustrates the boot sequence of ACRN with GRUB:
.. image:: images/acrn_secureboot_flow.png
@ -35,16 +35,16 @@ illustrates the boot sequence of ACRN with GRUB:
For details on enabling GRUB on ACRN, see :ref:`using_grub`.
From a secureboot point of view:
From a secure boot point of view:
- UEFI firmware verifies shim/GRUB
- GRUB verifies ACRN, Service VM kernel, and pre-launched User VM kernel
- Service VM OS kernel verifies the Device Model (``acrn-dm``) and User
VM OVMF bootloader (with the help of ``acrn-dm``)
- User VM virtual bootloader (e.g. OVMF) starts the guest side verified boot process
- User VM virtual bootloader (e.g., OVMF) starts the guest side verified boot process
This document shows you how to enable GRUB to
verify ACRN binaries such ``acrn.bin``, Service VM kernel (``bzImage``), and
verify ACRN binaries such as ``acrn.bin``, Service VM kernel (``bzImage``), and
if present, a pre-launched User VM kernel image.
.. rst-class:: numbered-step
@ -185,9 +185,9 @@ For example::
Use the output of the :command:`blkid` to find the right values for the
UUID (``--set``) and PARTUUID (``root=PARTUUID=`` parameter) of the root
partition (e.g. `/dev/nvme0n1p2`) according to your your hardware.
partition (e.g., ``/dev/nvme0n1p2``) according to your hardware.
Copy this new :file:`grub.cfg` to your ESP (e.g. `/boot/efi/EFI/`).
Copy this new :file:`grub.cfg` to your ESP (e.g., ``/boot/efi/EFI/``).
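For example, assuming the ESP is mounted at ``/boot/efi`` (a common default;
adjust the path to your system)::
   sudo cp grub.cfg /boot/efi/EFI/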
Sign grub.cfg and ACRN Binaries
@ -196,11 +196,11 @@ Sign grub.cfg and ACRN Binaries
The :file:`grub.cfg` and all ACRN binaries that will be loaded by GRUB
**must** be signed with the same GPG key.
Here's sequence example of signing the individual binaries::
Here's a sequence example of signing the individual binaries::
gpg --homedir keys --detach-sign path/to/grub.cfg
gpg --homedir keys --detach-sign path/to/acrn.bin
gpg --homedir keys --detach-sign path/to/sos_kernel/bzImage
gpg --homedir keys --detach-sign path/to/service_vm_kernel/bzImage
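``gpg --detach-sign`` writes a ``.sig`` file next to each input file. If you want
to double-check a signature before rebooting, a verification along these lines
should work (shown here as a hedged example, not a required step)::
   gpg --homedir keys --verify path/to/acrn.bin.sig path/to/acrn.bin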
Now, you can reboot and the system will boot with the signed GRUB EFI binary.
GRUB will refuse to boot if any files it attempts to load have been tampered
@ -215,25 +215,25 @@ Enable UEFI Secure Boot
Creating UEFI Secure Boot Key
=============================
-Generate your own keys for Secure Boot::
- Generate your own keys for Secure Boot::
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=PK/" -keyout PK.key -out PK.crt -days 7300 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=KEK/" -keyout KEK.key -out KEK.crt -days 7300 -nodes -sha256
openssl req -new -x509 -newkey rsa:2048 -subj "/CN=db/" -keyout db.key -out db.crt -days 7300 -nodes -sha256
-Convert ``*.crt`` keys to the ESL format understood for UEFI::
- Convert ``*.crt`` keys to the ESL format understood for UEFI::
cert-to-efi-sig-list PK.crt PK.esl
cert-to-efi-sig-list KEK.crt KEK.esl
cert-to-efi-sig-list db.crt db.esl
-Sign ESL files::
- Sign ESL files::
sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
sign-efi-sig-list -k PK.key -c PK.crt KEK KEK.esl KEK.auth
sign-efi-sig-list -k KEK.key -c KEK.crt db db.esl db.auth
-Convert to DER format::
- Convert to DER format::
openssl x509 -outform DER -in PK.crt -out PK.der
openssl x509 -outform DER -in KEK.crt -out KEK.der
@ -246,6 +246,8 @@ The keys to sign bootloader image: :file:`grubx64.efi`, :file:`db.key` , :file:`
Sign GRUB Image With db Key
===========================
Command example::
sbsign --key db.key --cert db.crt path/to/grubx64.efi
:file:`grubx64.efi.signed` will be created; it will be your bootloader.

View File

@ -13,9 +13,9 @@ ACRN configuration consists of the following key components.
* A configuration toolset that helps users to generate and edit configuration
data. The toolset includes:
- **Board inspector**: Collects board-specific information on target
- **Board Inspector**: Collects board-specific information on target
machines.
- **ACRN configurator**: Enables you to edit configuration data via a
- **ACRN Configurator**: Enables you to edit configuration data via a
web-based UI.
The following sections introduce the concepts and tools of ACRN configuration
@ -121,8 +121,8 @@ Using ACRN Configuration Toolset
The ACRN configuration toolset enables you to create
and edit configuration data. The toolset consists of the following:
* :ref:`Board inspector tool <board_inspector_tool>`
* :ref:`ACRN configurator tool <acrn_configurator_tool>`
* :ref:`Board Inspector <board_inspector_tool>`
* :ref:`ACRN Configurator <acrn_configurator_tool>`
As introduced in :ref:`overview_dev`, configuration takes place at
:ref:`overview_dev_board_config` and :ref:`overview_dev_config_editor` in
@ -162,7 +162,7 @@ The ``board`` attribute defines the board name and must match the
configuration file. The file name of the board configuration file
(example: ``my_board.xml``) doesn't affect the board name.
Board XML files are input to the ACRN configurator tool and the build system,
Board XML files are input to the ACRN Configurator tool and the build system,
and are not intended for end users to modify.
Scenario XML Format
@ -188,11 +188,11 @@ Launch XML Format
=================
The launch XML has an ``acrn-config`` root element as well as
``board``, ``scenario`` and ``uos_launcher`` attributes:
``board``, ``scenario``, and ``user_vm_launcher`` attributes:
.. code-block:: xml
<acrn-config board="BOARD" scenario="SCENARIO" uos_launcher="UOS_NUMBER">
<acrn-config board="BOARD" scenario="SCENARIO" user_vm_launcher="USER_VM_NUMBER">
The ``board`` attribute specifies the board name and must match the ``board``
attribute in the board configuration file and the scenario configuration file.
@ -200,8 +200,8 @@ attribute in the board configuration file and the scenario configuration file.
The ``scenario`` attribute specifies the scenario name and must match the
``scenario`` attribute in the scenario configuration file.
The ``uos_launcher`` attribute specifies the number of post-launched User VMs
in a scenario.
The ``user_vm_launcher`` attribute specifies the number of post-launched User
VMs in a scenario.
See :ref:`launch-config-options` for a full explanation of available launch
XML elements.

View File

@ -8,7 +8,7 @@ This guide describes all features and uses of the tool.
About the ACRN Configurator Tool
*********************************
The ACRN configurator tool ``acrn_configurator.py`` provides a web-based
The ACRN Configurator tool ``acrn_configurator.py`` provides a web-based
user interface to help you customize your
:ref:`ACRN configuration <acrn_configuration_tool>`. Capabilities:
@ -26,7 +26,7 @@ dependencies among the different types of configuration files. Here's an
overview of what to expect:
#. Import the board configuration file that you generated via the
:ref:`board inspector tool <board_inspector_tool>`.
:ref:`Board Inspector tool <board_inspector_tool>`.
#. Customize your scenario configuration file by defining hypervisor and
VM settings that will be used to build the ACRN hypervisor.
@ -39,19 +39,19 @@ overview of what to expect:
a. Configure settings for all post-launched User VMs in your scenario
and save the configuration in a launch configuration file.
#. Generate the launch scripts. The ACRN configurator creates one
#. Generate the launch scripts. The ACRN Configurator creates one
launch script for each VM defined in the launch configuration file.
Generate a Scenario Configuration File and Launch Scripts
*********************************************************
The following steps describe all options in the ACRN configurator for generating
The following steps describe all options in the ACRN Configurator for generating
a custom scenario configuration file and launch scripts.
#. Make sure the development computer is set up and ready to launch the ACRN
configurator, according to :ref:`gsg-dev-setup` in the Getting Started Guide.
Configurator, according to :ref:`gsg-dev-setup` in the Getting Started Guide.
#. Launch the ACRN configurator. This example assumes the tool is in the
#. Launch the ACRN Configurator. This example assumes the tool is in the
``~/acrn-work/`` directory. Feel free to modify the command as needed.
.. code-block:: bash
@ -60,12 +60,13 @@ a custom scenario configuration file and launch scripts.
#. Your web browser should open the website `<http://127.0.0.1:5001/>`_
automatically, or you may need to visit this website manually. The ACRN
configurator is supported on Chrome and Firefox.
Configurator is supported on Chrome and Firefox.
#. Click the **Import Board XML** button and browse to your board
configuration file. After the file is uploaded, make sure the board name
is selected in the **Board info** drop-down list and the board information
appears.
#. Click the **Import Board XML** button and browse to the board
configuration file that you generated via the
:ref:`Board Inspector <board_inspector_tool>`. After the file is uploaded,
make sure the board name is selected in the **Board info** drop-down list
and the board information appears.
#. Start the scenario configuration process by selecting an option from the
**Scenario Settings** menu on the top banner of the UI or by importing a
@ -83,6 +84,7 @@ a custom scenario configuration file and launch scripts.
.. image:: images/choose_scenario.png
:align: center
:class: drop-shadow
* Click the **Import XML** button to import a customized scenario
configuration file.
@ -96,6 +98,7 @@ a custom scenario configuration file and launch scripts.
.. image:: images/configure_scenario.png
:align: center
:class: drop-shadow
* You can edit these items directly in the text boxes, or you can choose
single or even multiple items from the drop-down list.
@ -112,30 +115,32 @@ a custom scenario configuration file and launch scripts.
* Click **Remove this VM** in a VM's settings to remove the VM from the
scenario.
When a VM is added or removed, the configurator reassigns the VM IDs for
When a VM is added or removed, the ACRN Configurator reassigns the VM IDs for
the remaining VMs by the order of pre-launched User VMs, Service VM, and
post-launched User VMs.
.. image:: images/configure_vm_add.png
:align: center
:class: drop-shadow
#. Click **Export XML** to save the scenario configuration file. A dialog box
appears, enabling you to save the file to a specific folder by inputting the
absolute path to this folder. If you don't specify a path, the file will be
saved to the default folder: ``acrn-hypervisor/../user_config/<board name>``.
Before saving the scenario configuration file, the configurator validates
the configurable items. If errors exist, the configurator lists all
Before saving the scenario configuration file, the Configurator validates
the configurable items. If errors exist, the Configurator lists all
incorrectly configured items and shows the errors. Example:
.. image:: images/err_acrn_configuration.png
:align: center
:class: drop-shadow
After the scenario is saved, the page automatically displays the saved
scenario configuration file.
#. To delete a scenario configuration file, click **Export XML** > **Remove**.
The configurator will delete the loaded file, even if you change the name of
The Configurator will delete the loaded file, even if you change the name of
the file in the dialog box.
#. If your scenario has post-launched User VMs, continue to the next step
@ -158,6 +163,7 @@ a custom scenario configuration file and launch scripts.
.. image:: images/choose_launch.png
:align: center
:class: drop-shadow
* Click the **Import XML** button to import a customized launch
configuration file.
@ -171,6 +177,7 @@ a custom scenario configuration file and launch scripts.
.. image:: images/configure_launch.png
:align: center
:class: drop-shadow
* You can edit these items directly in the text boxes, or you can choose
single or even multiple items from the drop-down list.
@ -179,14 +186,15 @@ a custom scenario configuration file and launch scripts.
* Hover the mouse cursor over the item to see the description.
#. Add or remove User VM (UOS) launch scripts:
#. Add or remove User VM launch scripts:
* Click **Configure an UOS below** to add a User VM launch script.
* Click **Configure a User VM below** to add a User VM launch script.
* Click **Remove this VM** to remove a User VM launch script.
.. image:: images/configure_launch_add.png
:align: center
:class: drop-shadow
#. Click **Export XML** to save the launch configuration file. A dialog box
appears, enabling you to save the file to a specific folder by inputting the
@ -194,12 +202,12 @@ a custom scenario configuration file and launch scripts.
be saved to the default folder:
``acrn-hypervisor/../user_config/<board name>``.
Before saving the launch configuration file, the configurator validates the
configurable items. If errors exist, the configurator lists all incorrectly
Before saving the launch configuration file, the Configurator validates the
configurable items. If errors exist, the Configurator lists all incorrectly
configured items and shows the errors.
#. To delete a launch configuration file, click **Export XML** > **Remove**.
The configurator will delete the loaded file, even if you change the name of
The Configurator will delete the loaded file, even if you change the name of
the file in the dialog box.
#. Click **Generate Launch Script** to save the current launch configuration
@ -208,6 +216,7 @@ a custom scenario configuration file and launch scripts.
.. image:: images/generate_launch_script.png
:align: center
:class: drop-shadow
#. Confirm that the launch scripts appear in the
``<board name>/output`` directory.

View File

@ -11,9 +11,10 @@ configuration.
This setup was tested with the following configuration:
- ACRN hypervisor: ``v2.6`` tag
- ACRN kernel: ``v2.6`` tag
- ACRN hypervisor: ``v2.7`` tag
- ACRN kernel: ``v2.7`` tag
- QEMU emulator version: 4.2.1
- Host OS: Ubuntu 20.04
- Service VM/User VM OS: Ubuntu 20.04
- Platforms tested: Kaby Lake, Skylake
@ -131,26 +132,26 @@ Install ACRN Hypervisor
#. Install the ACRN build tools and dependencies following the :ref:`gsg`.
#. Switch to the ACRN hypervisor ``v2.6`` tag.
#. Switch to the ACRN hypervisor ``v2.7`` tag.
.. code-block:: none
cd ~
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout v2.6
git checkout v2.7
#. Build ACRN for QEMU:
.. code-block:: none
make BOARD=qemu SCENARIO=sdc
make BOARD=qemu SCENARIO=shared
For more details, refer to the :ref:`gsg`.
#. Install the ACRN Device Model and tools:
.. code-block::
.. code-block:: none
sudo make install
@ -161,9 +162,9 @@ Install ACRN Hypervisor
sudo cp build/hypervisor/acrn.32.out /boot
#. Clone and configure the Service VM kernel repository following the
instructions in the :ref:`gsg` and using the ``v2.6`` tag. The User VM (L2
instructions in the :ref:`gsg` and using the ``v2.7`` tag. The User VM (L2
guest) uses the ``virtio-blk`` driver to mount the rootfs. This driver is
included in the default kernel configuration as of the ``v2.6`` tag.
included in the default kernel configuration as of the ``v2.7`` tag.
#. Update GRUB to boot the ACRN hypervisor and load the Service VM kernel.
Append the following configuration to the :file:`/etc/grub.d/40_custom`.
@ -238,21 +239,55 @@ Bring Up User VM (L2 Guest)
#. Transfer the ``UserVM.img`` or ``UserVM.iso`` User VM disk image to the
Service VM (L1 guest).
#. Launch the User VM using the ``launch_ubuntu.sh`` script.
#. Copy ``OVMF.fd``, which is used to launch the User VM:
.. code-block:: none
cp ~/acrn-hypervisor/misc/config_tools/data/samples_launch_scripts/launch_ubuntu.sh ~/
cp ~/acrn-hypervisor/devicemodel/bios/OVMF.fd ~/
#. Update the script to use your disk image (``UserVM.img`` or ``UserVM.iso``).
.. code-block:: none
#!/bin/bash
# Copyright (C) 2020 Intel Corporation.
# SPDX-License-Identifier: BSD-3-Clause
function launch_ubuntu()
{
vm_name=ubuntu_vm$1
logger_setting="--logger_setting console,level=5;kmsg,level=6;disk,level=5"
#check if the vm is running or not
vm_ps=$(pgrep -a -f acrn-dm)
result=$(echo $vm_ps | grep "${vm_name}")
if [[ "$result" != "" ]]; then
echo "$vm_name is running, can't create twice!"
exit
fi
#for memsize setting
mem_size=1024M
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 3,virtio-blk,~/UserVM.img \
-s 4,virtio-net,tap0 \
--cpu_affinity 1 \
-s 5,virtio-console,@stdio:stdio_port \
--ovmf ~/OVMF.fd \
$logger_setting \
$vm_name
}
# Offline Service VM CPUs except the BSP before launching the User VM
for i in `ls -d /sys/devices/system/cpu/cpu[1-99]`; do
online=`cat $i/online`
idx=`echo $i | tr -cd "[1-99]"`
echo cpu$idx online=$online
if [ "$online" = "1" ]; then
echo 0 > $i/online
# during boot time, cpu hotplug may be disabled by pci_device_probe during a pci module insmod
while [ "$online" = "1" ]; do
sleep 1
echo 0 > $i/online
online=`cat $i/online`
done
echo $idx > /sys/devices/virtual/misc/acrn_hsm/remove_cpu
fi
done
launch_ubuntu 1
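For reference, a typical way to run the updated launch script from the Service VM
is shown below. The file name and location are assumptions; adjust them to match
where you saved the script:

.. code-block:: none

   chmod +x ~/launch_ubuntu.sh
   sudo ~/launch_ubuntu.sh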

View File

@ -8,7 +8,7 @@ This guide describes all features and uses of the tool.
About the Board Inspector Tool
******************************
The board inspector tool ``board_inspector.py`` enables you to generate a board
The Board Inspector tool ``board_inspector.py`` enables you to generate a board
configuration file on the target system. The board configuration file stores
hardware-specific information extracted from the target platform and is used to
customize your :ref:`ACRN configuration <acrn_configuration_tool>`.
@ -22,19 +22,19 @@ Generate a Board Configuration File
additional memory, or PCI devices, you must generate a new board
configuration file.
The following steps describe all options in the board inspector for generating
The following steps describe all options in the Board Inspector for generating
a board configuration file.
#. Make sure the target system is set up and ready to run the board inspector,
#. Make sure the target system is set up and ready to run the Board Inspector,
according to :ref:`gsg-board-setup` in the Getting Started Guide.
#. Load the ``msr`` driver, used by the board inspector:
#. Load the ``msr`` driver, used by the Board Inspector:
.. code-block:: bash
sudo modprobe msr
#. Run the board inspector tool (``board_inspector.py``) to generate the board
#. Run the Board Inspector tool (``board_inspector.py``) to generate the board
configuration file. This example assumes the tool is in the
``~/acrn-work/`` directory and ``my_board`` is the desired file
name. Feel free to modify the commands as needed.
@ -44,11 +44,11 @@ a board configuration file.
cd ~/acrn-work/board_inspector/
sudo python3 board_inspector.py my_board
Upon success, the tool displays the following message:
Upon success, the tool displays a message similar to this example:
.. code-block:: console
PTCT table has been saved to PTCT successfully!
my_board.xml saved successfully!
#. Confirm that the board configuration file ``my_board.xml`` was generated in
the current directory.
@ -58,8 +58,8 @@ a board configuration file.
Command-Line Options
********************
You can configure the board inspector via command-line options. Running the
board inspector with the ``-h`` option yields the following usage message:
You can configure the Board Inspector via command-line options. Running the
Board Inspector with the ``-h`` option yields the following usage message:
.. code-block::
@ -94,11 +94,11 @@ Details about certain arguments:
* - ``--out``
- Optional. Specify a file path where the board configuration file will be
saved (example: ``~/acrn_work``). If only a filename is provided in this
option, the board inspector will generate the file in the current
option, the Board Inspector will generate the file in the current
directory.
* - ``--basic``
- Optional. By default, the board inspector parses the ACPI namespace when
- Optional. By default, the Board Inspector parses the ACPI namespace when
generating board configuration files. This option provides a way to
disable ACPI namespace parsing in case the parsing blocks the generation
of board configuration files.
@ -110,6 +110,6 @@ Details about certain arguments:
* - ``--check-device-status``
- Optional. On some boards, the device status (reported by the _STA
object) returns 0 while the device object is still useful for
pass-through devices. By default, the board inspector includes the
pass-through devices. By default, the Board Inspector includes the
devices in the board configuration file. This option filters out the
devices, so that they cannot be used.
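For illustration, the options described above can be combined in a single run.
The working directory and output path below are examples only:

.. code-block:: none

   cd ~/acrn-work/board_inspector/
   sudo python3 board_inspector.py my_board --out ~/acrn-work --basic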

View File

@ -86,8 +86,9 @@ Scheduler
*********
The below block diagram shows the basic concept for the scheduler. There
are two kinds of schedulers in the diagram: NOOP (No-Operation) scheduler
and BVT (Borrowed Virtual Time) scheduler.
are four kinds of schedulers in the diagram: the NOOP (No-Operation) scheduler,
the IO-sensitive Round Robin scheduler, the priority-based scheduler, and
the BVT (Borrowed Virtual Time) scheduler. By default, BVT is used.
- **No-Operation scheduler**:
@ -99,16 +100,27 @@ and BVT (Borrowed Virtual Time) scheduler.
tries to keep resources busy, and will run once it is ready. The idle thread
can run when the vCPU thread is blocked.
- **IO sensitive Round Robin scheduler**:
The IORR (IO-sensitive Round Robin) scheduler supports multiple vCPUs running
on one pCPU, scheduled by an IO-sensitive round robin policy.
- **Priority based scheduler**:
The priority-based scheduler supports vCPU scheduling based on pre-configured
priorities. A vCPU can run only if there is no higher-priority vCPU running
on the same pCPU. For example, with two VMs, one can be configured to use
**PRIO_LOW** and the other **PRIO_HIGH**. A vCPU of the **PRIO_LOW** VM can
run only when the vCPU of the **PRIO_HIGH** VM voluntarily relinquishes
usage of the pCPU.
- **Borrowed Virtual Time scheduler**:
BVT (Borrowed Virtual Time) is a virtual-time-based scheduling
algorithm; it dispatches the runnable thread with the earliest
effective virtual time.
TODO: BVT scheduler will be built on top of prioritized scheduling
mechanism, i.e. higher priority threads get scheduled first, and same
priority tasks are scheduled per BVT.
- **Virtual time**: The thread with the earliest effective virtual
time (EVT) is dispatched first.
- **Warp**: a latency-sensitive thread is allowed to warp back in

View File

@ -13,10 +13,10 @@ Enable Ivshmem Support
**********************
The ``ivshmem`` solution is disabled by default in ACRN. You can enable
it using the :ref:`ACRN configurator tool <acrn_configurator_tool>` with these
it using the :ref:`ACRN Configurator <acrn_configurator_tool>` with these
steps:
- Enable ``ivshmem`` via ACRN configurator tool GUI.
- Enable ``ivshmem`` via ACRN Configurator GUI.
- Set :option:`hv.FEATURES.IVSHMEM.IVSHMEM_ENABLED` to ``y``
@ -63,21 +63,21 @@ where
There are two ways to insert the above boot parameter for ``acrn-dm``:
- Manually edit the launch script file. In this case, ensure that both
``shm_name`` and ``shm_size`` match those defined via the ACRN configurator
``shm_name`` and ``shm_size`` match those defined via the ACRN Configurator
tool.
- Use the following command to create a launch script, when IVSHMEM is enabled
and :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` is properly configured via
the ACRN configurator tool.
the ACRN Configurator.
.. code-block:: none
:emphasize-lines: 5
python3 misc/config_tools/launch_config/launch_cfg_gen.py \
--board <path_to_your_boardxml> \
--scenario <path_to_your_scenarioxml> \
--board <path_to_your_board_xml> \
--scenario <path_to_your_scenario_xml> \
--launch <path_to_your_launch_script_xml> \
--uosid <desired_single_vmid_or_0_for_all_vmids>
--user_vmid <desired_single_vmid_or_0_for_all_vmids>
.. note:: This device can be used with real-time VM (RTVM) as well.
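For illustration only, a concrete invocation of the command above might look like
the following; the file names and paths are placeholders, not required locations:

.. code-block:: none

   python3 misc/config_tools/launch_config/launch_cfg_gen.py \
       --board ~/acrn-work/my_board.xml \
       --scenario ~/acrn-work/scenario.xml \
       --launch ~/acrn-work/launch.xml \
       --user_vmid 0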
@ -105,7 +105,7 @@ Hypervisor:
target VM by target Peer ID and inject MSI interrupt to the target VM.
Notification Receiver (VM):
VM receives MSI interrupt and forward it to related application.
VM receives the MSI interrupt and forwards it to the related application.
ACRN supports up to 8 (MSI-X) interrupt vectors for ivshmem device.
Guest VMs shall implement their own mechanism to forward MSI interrupts
@ -139,7 +139,7 @@ Linux-based post-launched VMs (VM1 and VM2).
-s 2,pci-gvt -G "$2" \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,/home/acrn/uos1.img \
-s 3,virtio-blk,/home/acrn/UserVM1.img \
-s 4,virtio-net,tap0 \
-s 6,ivshmem,dm:/test,2 \
-s 7,virtio-rnd \
@ -154,7 +154,7 @@ Linux-based post-launched VMs (VM1 and VM2).
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 3,virtio-blk,/home/acrn/uos2.img \
-s 3,virtio-blk,/home/acrn/UserVM2.img \
-s 4,virtio-net,tap0 \
-s 5,ivshmem,dm:/test,2 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
@ -169,9 +169,9 @@ Linux-based post-launched VMs (VM1 and VM2).
the ``ivshmem`` device vendor ID is ``1af4`` (Red Hat) and device ID is ``1110``
(Inter-VM shared memory). Use these commands to probe the device::
$ sudo modprobe uio
$ sudo modprobe uio_pci_generic
$ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id
sudo modprobe uio
sudo modprobe uio_pci_generic
sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id
.. note:: These commands are applicable to Linux-based guests with ``CONFIG_UIO`` and ``CONFIG_UIO_PCI_GENERIC`` enabled.
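As a quick sanity check in the guest (assuming ``lspci`` from ``pciutils`` is
available), you can confirm that the ivshmem device is present and bound by
matching the vendor and device IDs mentioned above:

.. code-block:: none

   lspci -nn | grep "1af4:1110"
   ls /sys/bus/pci/drivers/uio_pci_generic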
@ -222,7 +222,7 @@ Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 3,virtio-blk,/home/acrn/uos2.img \
-s 3,virtio-blk,/home/acrn/UserVM2.img \
-s 4,virtio-net,tap0 \
-s 5,ivshmem,hv:/shm_region_0,2 \
--ovmf /usr/share/acrn/bios/OVMF.fd \

View File

@ -9,8 +9,8 @@ Introduction
S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_
that refers to the system being shut down (although some power may still be
supplied to certain devices). In this document, S5 means the function to
shut down the **User VMs**, **the Service VM**, the hypervisor, and the
hardware. In most cases, directly shutting down the power of a computer
shut down the **User VMs**, **Service VM**, the hypervisor, and the
hardware. In most cases, directly powering off a computer
system is not advisable because it can damage some components. It can cause
corruption and put the system in an unknown or unstable state. On ACRN, the
User VM must be shut down before powering off the Service VM. Especially for
@ -31,135 +31,200 @@ The diagram below shows the overall architecture:
S5 overall architecture
- **Scenario I**:
- **vUART channel**:
The User VM's serial port device (``ttySn``) is emulated in the
Device Model, the channel from the Service VM to the User VM:
.. graphviz:: images/s5-scenario-1.dot
:name: s5-scenario-1
- **Scenario II**:
The User VM's (like RT-Linux or other RT-VMs) serial port device
(``ttySn``) is emulated in the Hypervisor,
the channel from the Service OS to the User VM:
The User VM's serial port device (``/dev/ttySn``) is emulated in the
Hypervisor. The channel from the Service VM to the User VM:
.. graphviz:: images/s5-scenario-2.dot
:name: s5-scenario-2
Initiate a system S5 from within a User VM (e.g. HMI)
=====================================================
Lifecycle Manager Overview
==========================
As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in Linux,
``life_mngr_win.exe`` in Windows) runs in the Service VM and User VMs to implement S5.
An operator or user can run the ``s5_trigger_linux.py`` or ``s5_trigger_win.py`` script to initiate
a system S5 from the Service VM or a User VM. The Lifecycle Manager in the Service VM and
User VMs waits for system S5 requests on a local socket port.
Initiate a System S5 from within a User VM (e.g., HMI)
======================================================
As shown in :numref:`s5-architecture`, a request to the Service VM initiates the shutdown flow.
This could come from a User VM, most likely the HMI (running Windows or Linux).
When a human operator initiates the flow, the Lifecycle Manager (``life_mngr``) running in that
User VM will send the request via the vUART to the Lifecycle Manager in the Service VM which in
turn acknowledges the request and triggers the following flow.
When a human operator initiates the flow by running ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
the Lifecycle Manager (``life_mngr``) running in that User VM sends the system S5 request via
the vUART to the Lifecycle Manager in the Service VM, which in turn acknowledges the request.
The Lifecycle Manager in the Service VM then sends a ``poweroff_cmd`` request to the User VMs. When the
Lifecycle Manager in a User VM receives the ``poweroff_cmd`` request, it sends ``ack_poweroff`` to the
Service VM and then shuts down that User VM. If a User VM is not ready to shut down, it can ignore the
``poweroff_cmd`` request.
.. note:: The User VM need to be authorized to be able to request a Shutdown, this is achieved by adding
``--pm_notify_channel uart,allow_trigger_s5`` in the launch script of that VM.
And, there is only one VM in the system can be configured to request a shutdown. If there is a second User
VM launched with ``--pm_notify_channel uart,allow_trigger_s5``, ACRN will stop launching it and throw
out below error message:
``initiate a connection on a socket error``
``create socket to connect life-cycle manager failed``
.. note:: The User VM needs to be authorized to request a system S5. This is achieved
by configuring ``ALLOW_TRIGGER_S5`` in the Lifecycle Manager service configuration :file:`/etc/life_mngr.conf`
in the Service VM. Only one User VM in the system can be configured to request a shutdown.
If this configuration is wrong, the system S5 request from the User VM is rejected by the
Lifecycle Manager of the Service VM, and the following error message is recorded in the
Lifecycle Manager log :file:`/var/log/life_mngr.log` of the Service VM:
``The user VM is not allowed to trigger system shutdown``
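If a system S5 request from a User VM appears to be ignored, you can check the
Lifecycle Manager log in the Service VM for this message (using the log path noted
above):

.. code-block:: none

   sudo tail -n 20 /var/log/life_mngr.log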
Trigger the User VM's S5
========================
Initiate a System S5 within the Service VM
==========================================
On the Service VM side, it uses the ``acrnctl`` tool to trigger the User VM's S5 flow:
``acrnctl stop user-vm-name``. Then, the Device Model sends a ``shutdown`` command
to the User VM through a channel. If the User VM receives the command, it will send an ``ACKED``
to the Device Model. It is the Service VM's responsibility to check whether the User VMs
shut down successfully or not, and to decide when to shut the Service VM itself down.
On the Service VM side, the ``s5_trigger_linux.py`` script is used to trigger the system S5 flow. The
Lifecycle Manager in the Service VM then sends a ``poweroff_cmd`` request to the Lifecycle Manager in each
User VM through the vUART channel. When a User VM receives this request, it sends ``ack_poweroff``
to the Lifecycle Manager in the Service VM. It is the Service VM's responsibility to check whether the
User VMs shut down successfully and to decide when to shut down the Service VM itself.
User VM "Lifecycle Manager"
===========================
As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in Linux,
``life_mngr_win.exe`` in Windows) runs in the User VM to implement S5. It waits for the shutdown
request from the Service VM on the serial port. The simple protocol between the Service VM and
User VM is as follows: when the daemon receives ``shutdown``, it sends ``ACKED`` to the Service VM;
then it shuts down the User VM. If the User VM is not ready to shut down,
it can ignore the ``shutdown`` command.
.. note:: The Service VM is allowed to trigger a system S5 by default.
.. _enable_s5:
Enable S5
*********
The procedure for enabling S5 is specific to the particular OS:
1. Configure communication vUART for Service VM and User VMs:
* For Linux (LaaG) or Windows (WaaG), include these lines in the launch script:
Add these lines to the hypervisor scenario XML file manually:
.. code-block:: bash
Example::
# Power Management (PM) configuration using vUART channel
pm_channel="--pm_notify_channel uart"
pm_by_vuart="--pm_by_vuart pty,/run/acrn/life_mngr_"$vm_name
pm_vuart_node="-s 1:0,lpc -l com2,/run/acrn/life_mngr_"$vm_name
/* VM0 */
<vm_type>SERVICE_VM</vm_type>
...
<legacy_vuart id="1">
<type>VUART_LEGACY_PIO</type>
<base>CONFIG_COM_BASE</base>
<irq>0</irq>
<target_vm_id>1</target_vm_id>
<target_uart_id>1</target_uart_id>
</legacy_vuart>
<legacy_vuart id="2">
<type>VUART_LEGACY_PIO</type>
<base>CONFIG_COM_BASE</base>
<irq>0</irq>
<target_vm_id>2</target_vm_id>
<target_uart_id>2</target_uart_id>
</legacy_vuart>
...
/* VM1 */
<vm_type>POST_STD_VM</vm_type>
...
<legacy_vuart id="1">
<type>VUART_LEGACY_PIO</type>
<base>COM2_BASE</base>
<irq>COM2_IRQ</irq>
<target_vm_id>0</target_vm_id>
<target_uart_id>1</target_uart_id>
</legacy_vuart>
...
/* VM2 */
<vm_type>POST_STD_VM</vm_type>
...
<legacy_vuart id="1">
<type>VUART_LEGACY_PIO</type>
<base>INVALID_COM_BASE</base>
<irq>COM2_IRQ</irq>
<target_vm_id>0</target_vm_id>
<target_uart_id>2</target_uart_id>
</legacy_vuart>
<legacy_vuart id="2">
<type>VUART_LEGACY_PIO</type>
<base>COM2_BASE</base>
<irq>COM2_IRQ</irq>
<target_vm_id>0</target_vm_id>
<target_uart_id>2</target_uart_id>
</legacy_vuart>
...
/* VM3 */
...
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
...
$pm_channel \
$pm_by_vuart \
$pm_vuart_node \
...
.. note:: These vUARTs are emulated in the hypervisor and exposed as ``/dev/ttySn`` nodes.
For the User VM with the lowest VM ID, the communication vUART ID should be 1.
For other User VMs, the vUART with ID 1 should be configured as invalid, and the communication
vUART ID should be 2 or higher.
* For RT-Linux, include these lines in the launch script:
2. Build the Lifecycle Manager daemon, ``life_mngr``:
.. code-block:: bash
.. code-block:: none
# Power Management (PM) configuration
pm_channel="--pm_notify_channel uart"
pm_by_vuart="--pm_by_vuart tty,/dev/ttyS1"
cd acrn-hypervisor
make life_mngr
/usr/bin/acrn-dm -A -m $mem_size -s 0:0,hostbridge \
...
$pm_channel \
$pm_by_vuart \
...
#. For the Service VM, LaaG VM, and RT-Linux VM, run the Lifecycle Manager daemon:
.. note:: For RT-Linux, the vUART is emulated in the hypervisor; expose the node as ``/dev/ttySn``.
#. For LaaG and RT-Linux VMs, run the lifecycle manager daemon:
a. Use these commands to build the lifecycle manager daemon, ``life_mngr``.
a. Copy ``life_mngr.conf``, ``s5_trigger_linux.py``, ``user_vm_shutdown.py``, ``life_mngr``,
and ``life_mngr.service`` into the Service VM and User VMs.
.. code-block:: none
$ cd acrn-hypervisor
$ make life_mngr
scp build/misc/services/s5_trigger_linux.py root@<target board address>:~/
scp build/misc/services/life_mngr root@<target board address>:/usr/bin/
scp build/misc/services/life_mngr.conf root@<target board address>:/etc/life_mngr/
scp build/misc/services/life_mngr.service root@<target board address>:/lib/systemd/system/
#. Copy ``life_mngr`` and ``life_mngr.service`` into the User VM:
scp misc/services/life_mngr/user_vm_shutdown.py root@<target board address>:~/
.. note:: :file:`user_vm_shutdown.py` only needs to be copied into the Service VM.
#. Edit options in ``/etc/life_mngr/life_mngr.conf`` in the Service VM.
.. code-block:: none
$ scp build/misc/services/life_mngr root@<test board address>:/usr/bin/life_mngr
$ scp build/misc/services/life_mngr.service root@<test board address>:/lib/systemd/system/life_mngr.service
VM_TYPE=service_vm
VM_NAME=Service_VM
DEV_NAME=tty:/dev/ttyS8,/dev/ttyS9,/dev/ttyS10,/dev/ttyS11,/dev/ttyS12,/dev/ttyS13,/dev/ttyS14
ALLOW_TRIGGER_S5=/dev/ttySn
#. Use the below commands to enable ``life_mngr.service`` and restart the User VM.
.. note:: The mapping between the User VM ID and the communication serial device name (``/dev/ttySn``)
is defined in :file:`/etc/serial.conf`. If a ``/dev/ttySn`` device is configured in ``ALLOW_TRIGGER_S5``,
system shutdown is allowed to be triggered from the corresponding User VM.
#. Edit options in ``/etc/life_mngr/life_mngr.conf`` in the User VM.
.. code-block:: none
# chmod +x /usr/bin/life_mngr
# systemctl enable life_mngr.service
# reboot
VM_TYPE=user_vm
VM_NAME=<User VM name>
DEV_NAME=tty:/dev/ttyS1
#ALLOW_TRIGGER_S5=/dev/ttySn
.. note:: The User VM name in this configuration file should be consistent with the VM name in the
launch script for a Post-launched User VM, or with the VM name specified in the hypervisor
scenario XML for a Pre-launched User VM.
#. Use the following commands to enable ``life_mngr.service`` and restart the Service VM and User VMs.
.. code-block:: none
sudo chmod +x /usr/bin/life_mngr
sudo systemctl enable life_mngr.service
sudo reboot
.. note:: For a Pre-launched User VM, you need to restart the Lifecycle Manager service manually
after the Lifecycle Manager in the Service VM starts.
#. For the WaaG VM, run the lifecycle manager daemon:
a) Build the ``life_mngr_win.exe`` application::
a) Build the ``life_mngr_win.exe`` application and ``s5_trigger_win.py``::
$ cd acrn-hypervisor
$ make life_mngr
cd acrn-hypervisor
make life_mngr
.. note:: If there is no ``x86_64-w64-mingw32-gcc`` compiler, you can run ``sudo apt install gcc-mingw-w64-x86-64``
on Ubuntu to install it.
.. note:: If there is no ``x86_64-w64-mingw32-gcc`` compiler, you can run
``sudo apt install gcc-mingw-w64-x86-64`` on Ubuntu to install it.
#) Copy ``s5_trigger_win.py`` into the WaaG VM.
#) Set up a Windows environment:
I) Download the :kbd:`Visual Studio 2019` tool from `<https://visualstudio.microsoft.com/downloads/>`_,
1) Download Python 3 from `<https://www.python.org/downloads/release/python-3810/>`_ and install
"Python 3.8.10" in WaaG.
#) If Lifecycle Manager for WaaG will be built in Windows,
download the Visual Studio 2019 tool from `<https://visualstudio.microsoft.com/downloads/>`_,
and choose the two options in the below screenshots to install "Microsoft Visual C++ Redistributable
for Visual Studio 2015, 2017 and 2019 (x86 or X64)" in WaaG:
@ -167,6 +232,8 @@ The procedure for enabling S5 is specific to the particular OS:
.. figure:: images/Microsoft-Visual-C-install-option-2.png
.. note:: If the Lifecycle Manager for WaaG is built in Linux, the Visual Studio 2019 tool is not needed for WaaG.
#) In WaaG, use the :kbd:`Windows + R` shortcut key, input
``shell:startup``, click :kbd:`OK`
and then copy the ``life_mngr_win.exe`` application into this directory.
@ -179,15 +246,15 @@ The procedure for enabling S5 is specific to the particular OS:
.. figure:: images/open-com-success.png
#. If the Service VM is being shut down (transitioning to the S5 state), it can call
``acrnctl stop vm-name`` to shut down the User VMs.
#. If ``s5_trigger_linux.py`` is run in the Service VM, the Service VM will shut down (transitioning to
the S5 state) after sending poweroff requests to shut down the User VMs.
.. note:: S5 state is not automatically triggered by a Service VM shutdown; this needs
to be run before powering off the Service VM.
to be done by running ``s5_trigger_linux.py`` in the Service VM.
How to Test
***********
As described in :ref:`vuart_config`, two vUARTs are defined in
As described in :ref:`vuart_config`, two vUARTs are defined for the User VM in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and
vUART1/ttyS1 for S5-related communication (as shown in :ref:`s5-architecture`).
@ -204,49 +271,46 @@ How to Test
#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the User VMs.
.. note:: RT-Linux's UUID must use ``495ae2e5-2603-4d64-af76-d4bc5a8ec0e5``. Also, the
shared EFI image is required for launching the RT-Linux VM.
.. note:: Use the ``systemctl status life_mngr.service`` command to ensure the service is working on the LaaG or RT-Linux:
.. code-block:: console
* life_mngr.service - ACRN lifemngr daemon
Loaded: loaded (/usr/lib/systemd/system/life_mngr.service; enabled; vendor p>
Active: active (running) since Tue 2019-09-10 07:15:06 UTC; 1min 11s ago
Main PID: 840 (life_mngr)
Loaded: loaded (/lib/systemd/system/life_mngr.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-11-11 12:43:53 CST; 36s ago
Main PID: 197397 (life_mngr)
.. note:: For WaaG, close ``windbg`` by using the ``bcdedit /set debug off`` command
if you executed ``bcdedit /set debug on`` when you set up WaaG, because it occupies ``COM2``.
#. Use the ``acrnctl stop`` command on the Service VM to trigger S5 to the User VMs:
#. Use the ``user_vm_shutdown.py`` script in the Service VM to shut down the User VMs:
.. code-block:: console
.. code-block:: none
# acrnctl stop vm1
sudo python3 ~/user_vm_shutdown.py <User VM name>
.. note:: The User VM name is configured in the :file:`life_mngr.conf` of the User VM.
For the WaaG VM, the User VM name is "windows".
#. Use the ``acrnctl list`` command to check the User VM status.
.. code-block:: console
.. code-block:: none
# acrnctl list
vm1 stopped
sudo acrnctl list
<User VM name> stopped
System Shutdown
***************
Using a coordinating script, ``misc/life_mngr/s5_trigger.sh``, in conjunction with
the lifecycle manager in each VM, graceful system shutdown can be performed.
Using a coordinating script, ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
in conjunction with the Lifecycle Manager in each VM, graceful system shutdown
can be performed.
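For example, on a Linux-based VM that is authorized to trigger system S5 and where
the script was copied to the home directory as described in :ref:`enable_s5` (the
path and argument-free invocation are assumptions), the flow can be started with:

.. code-block:: none

   sudo python3 ~/s5_trigger_linux.py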
.. note:: Please install ``s5_trigger.sh`` manually to root's home directory.
.. code-block:: none
$ sudo install -p -m 0755 -t ~root misc/life_mngr/s5_trigger.sh
In the ``hybrid_rt`` scenario, the script can send a shutdown command via ``ttyS1``
in the Service VM, which is connected to ``ttyS1`` in the pre-launched VM. The
lifecycle manager in the pre-launched VM receives the shutdown command, sends an
In the ``hybrid_rt`` scenario, the operator can use the script to send a system shutdown
request via ``/var/lib/life_mngr/monitor.sock`` to the User VM that is allowed to
trigger system S5. This request is forwarded to the Service VM, and the
Service VM sends a poweroff request to each User VM (Pre-launched or Post-launched)
through vUART. The Lifecycle Manager in each User VM receives the poweroff request, sends an
ack message, and proceeds to shut itself down accordingly.
.. figure:: images/system_shutdown.png
@ -254,22 +318,28 @@ ack message, and proceeds to shut itself down accordingly.
Graceful system shutdown flow
#. The HMI Windows Guest uses the lifecycle manager to send a shutdown request to
the Service VM
#. The lifecycle manager in the Service VM responds with an ack message and
executes ``s5_trigger.sh``
#. After receiving the ack message, the lifecycle manager in the HMI Windows Guest
shuts down the guest
#. The ``s5_trigger.sh`` script in the Service VM shuts down the Linux Guest by
using ``acrnctl`` to send a shutdown request
#. After receiving the shutdown request, the lifecycle manager in the Linux Guest
responds with an ack message and shuts down the guest
#. The ``s5_trigger.sh`` script in the Service VM shuts down the Pre-launched RTVM
by sending a shutdown request to its ``ttyS1``
#. After receiving the shutdown request, the lifecycle manager in the Pre-launched
RTVM responds with an ack message
#. The lifecycle manager in the Pre-launched RTVM shuts down the guest using
standard PM registers
#. After receiving the ack message, the ``s5_trigger.sh`` script in the Service VM
shuts down the Service VM
#. The hypervisor shuts down the system after all of its guests have shut down
#. The HMI in the Windows VM uses ``s5_trigger_win.py`` to send a
system shutdown request to its local Lifecycle Manager, which
forwards the request to the Lifecycle Manager in the Service VM.
#. The Lifecycle Manager in the Service VM responds with an ack message and
sends a ``poweroff_cmd`` request to the Windows VM.
#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the HMI
Windows VM responds with an ack message and then shuts down the VM.
#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to
the Linux User VM.
#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the
Linux User VM responds with an ack message and then shuts down the VM.
#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to
the Pre-launched RTVM.
#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in
the Pre-launched RTVM responds with an ack message.
#. The Lifecycle Manager in the Pre-launched RTVM shuts down the VM using
ACPI PM registers.
#. After receiving the ack messages from all User VMs, the Lifecycle Manager
in the Service VM shuts down the Service VM.
#. The hypervisor shuts down the system after all VMs have shut down.
.. note:: If one or more virtual functions (VFs) of an SR-IOV device, e.g., the GPU on an Alder
Lake platform, are assigned to User VMs, the user must take extra steps to
disable all VFs before the Service VM shuts down. Otherwise, the Service VM may fail to
shut down due to the still-enabled VFs.
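As a minimal sketch, the VFs of such a device can typically be disabled from the
Service VM through the standard Linux sysfs interface before shutting down; the PCI
address below is only a placeholder for the SR-IOV physical function:

.. code-block:: none

   # Writing 0 to sriov_numvfs removes all VFs of this physical function
   echo 0 | sudo tee /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs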

View File

@ -2,5 +2,5 @@ digraph G {
node [shape=plaintext fontsize=12];
rankdir=LR;
bgcolor="transparent";
"ACRN-DM" -> "Service VM:/dev/ttyS1" -> "ACRN hypervisor" -> "User VM:/dev/ttyS1" [arrowsize=.5];
"Service VM:/dev/ttyS8" -> "ACRN hypervisor" -> "User VM:/dev/ttyS1" [arrowsize=.5];
}
