doc: fix all headings to use title case

While we hoped to make the headings consistent over time while doing
other edits, we should instead just do them all at once, or they'll
likely never be made consistent.

A Python script was used to find the headings, and the titlecase
package (https://pypi.org/project/titlecase) was used to transform each
title. A visual inspection was then used to tweak a few unexpected results.
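
The approach described above might look roughly like this. This is a hypothetical reconstruction, not the script actually used in the commit; ``titlecase`` is the PyPI package linked above, and the underline-matching heuristic is an assumption (a fallback is included so the sketch runs even without the package installed):

```python
import re

try:
    from titlecase import titlecase  # PyPI package named in the commit message
except ImportError:
    # Minimal stand-in (assumption, not the real package): capitalize plain
    # lowercase words, leave small words and mixed-case identifiers
    # (e.g. "vGPU", "sysfs" stays only until capitalized) alone.
    SMALL = {"a", "an", "and", "as", "at", "but", "by", "for", "in",
             "of", "on", "or", "the", "to", "vs", "with"}
    def titlecase(text):
        words = text.split()
        def fix(word, i):
            if 0 < i < len(words) - 1 and word in SMALL:
                return word
            return word if any(c.isupper() for c in word) else word.capitalize()
        return " ".join(fix(w, i) for i, w in enumerate(words))

# A reST heading is a line of text followed by an underline of one repeated
# punctuation character at least as long as the title (overlines not handled).
UNDERLINE = re.compile(r"^([=#*~^-])\1+\s*$")

def retitle_headings(text):
    lines = text.splitlines()
    for i in range(len(lines) - 1):
        title, under = lines[i], lines[i + 1]
        if title and UNDERLINE.match(under) and len(under) >= len(title):
            lines[i] = titlecase(title)
            # Re-pad the underline in case the title length changed.
            lines[i + 1] = under[0] * max(len(under), len(lines[i]))
    return "\n".join(lines)
```

The underline re-padding explains why some hunks below touch only the underline row.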

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2021-02-12 16:27:24 -08:00 committed by David Kinder
parent 6e655d098b
commit 0bd384d41b
119 changed files with 576 additions and 638 deletions

View File

@ -28,8 +28,8 @@ and the `Graphics Execution Manager(GEM)`_ parts of `i915 driver`_.
.. _i915 driver: https://01.org/linuxgraphics/gfx-docs/drm/gpu/i915.html
Intel GVT-g Guest Support(vGPU)
===============================
Intel GVT-g Guest Support (vGPU)
================================
.. kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c
:doc: Intel GVT-g guest support
@ -37,8 +37,8 @@ Intel GVT-g Guest Support(vGPU)
.. kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c
:internal:
Intel GVT-g Host Support(vGPU device model)
===========================================
Intel GVT-g Host Support (vGPU Device Model)
============================================
.. kernel-doc:: drivers/gpu/drm/i915/intel_gvt.c
:doc: Intel GVT-g host support
@ -47,7 +47,7 @@ Intel GVT-g Host Support(vGPU device model)
:internal:
VHM APIs called from AcrnGT
VHM APIs Called From AcrnGT
****************************
The Virtio and Hypervisor Service Module (VHM) is a kernel module in the
@ -83,7 +83,7 @@ responses to user space modules, notified by vIRQ injections.
.. _MPT_interface:
AcrnGT mediated passthrough (MPT) interface
AcrnGT Mediated Passthrough (MPT) Interface
*******************************************
AcrnGT receives requests from the GVT module through the MPT interface. Refer to the
@ -145,7 +145,7 @@ This section describes the wrap functions:
.. _intel_gvt_ops_interface:
GVT-g intel_gvt_ops interface
GVT-g Intel_gvt_ops Interface
*****************************
This section contains APIs for GVT-g intel_gvt_ops interface. Sources are found
@ -186,23 +186,23 @@ in the `ACRN kernel GitHub repo`_
.. _sysfs_interface:
AcrnGT sysfs interface
AcrnGT Sysfs Interface
***********************
This section contains APIs for the AcrnGT sysfs interface. Sources are found
in the `ACRN kernel GitHub repo`_
sysfs nodes
Sysfs Nodes
===========
In below examples all accesses to these interfaces are via bash command
``echo`` or ``cat``. This is a quick and easy way to get/control things. But
when these operations fails, it is impossible to get respective error code by
In the following examples, all accesses to these interfaces are via bash command
``echo`` or ``cat``. This is a quick and easy way to get or control things. But
when these operations fail, it is impossible to get respective error code by
this way.
When accessing sysfs entries, people should use library functions such as
``read()`` or ``write()``.
When accessing sysfs entries, use library functions such as
``read()`` or ``write()`` instead.
On **success**, the returned value of ``read()`` or ``write()`` indicates how
many bytes have been transferred. On **error**, the returned value is ``-1``
@ -210,33 +210,17 @@ and the global ``errno`` will be set appropriately. This is the only way to
figure out what kind of error occurs.
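
As a concrete illustration of that advice, here is an editor's sketch (not from the ACRN sources) of reading a sysfs node with ``read()`` so the error code is available; the node path in the usage comment is hypothetical and exists only after a VM is created:

```python
import os

def read_sysfs_node(path, bufsize=4096):
    """Read a sysfs node via read(), returning (data, errno_value).

    Unlike shell `cat`, a failure here surfaces the exact error code:
    data is None and errno_value holds the errno from the failed
    open()/read(). On success, errno_value is 0.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as err:
        return None, err.errno
    try:
        # os.read() returns the bytes actually transferred.
        return os.read(fd, bufsize), 0
    except OSError as err:
        return None, err.errno
    finally:
        os.close(fd)

# Hypothetical usage on a node described in this section:
# data, err = read_sysfs_node("/sys/kernel/gvt/vm1/vgpu_id")
```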
/sys/kernel/gvt/
----------------
- The ``/sys/kernel/gvt/`` class sub-directory belongs to AcrnGT and provides a
centralized sysfs interface for configuring vGPU properties.
The ``/sys/kernel/gvt/`` class sub-directory belongs to AcrnGT and provides a
centralized sysfs interface for configuring vGPU properties.
- The ``/sys/kernel/gvt/control/`` sub-directory contains all the necessary
switches for different purposes.
- The ``/sys/kernel/gvt/control/create_gvt_instance`` node is used by ACRN-DM to
create/destroy a vGPU instance.
/sys/kernel/gvt/control/
------------------------
- After a VM is created, a new sub-directory ``/sys/kernel/GVT/vmN`` ("N" is the VM id) will be
created.
The ``/sys/kernel/gvt/control/`` sub-directory contains all the necessary
switches for different purposes.
/sys/kernel/gvt/control/create_gvt_instance
-------------------------------------------
The ``/sys/kernel/gvt/control/create_gvt_instance`` node is used by ACRN-DM to
create/destroy a vGPU instance.
/sys/kernel/gvt/vmN/
--------------------
After a VM is created, a new sub-directory ``vmN`` ("N" is the VM id) will be
created.
/sys/kernel/gvt/vmN/vgpu_id
---------------------------
The ``/sys/kernel/gvt/vmN/vgpu_id`` node is to get vGPU id from VM which id is
N.
- The ``/sys/kernel/gvt/vmN/vgpu_id`` node is to get vGPU id from VM which id is
N.

View File

@ -3,13 +3,13 @@
Security Advisory
#################
Addressed in ACRN v2.3
Addressed in ACRN V2.3
************************
We recommend that all developers upgrade to this v2.3 release (or later), which
addresses the following security issue that was discovered in previous releases:
------
-----
- NULL Pointer Dereference in ``devicemodel\hw\pci\virtio\virtio_mei.c``
``vmei_proc_tx()`` function tries to find the ``iov_base`` by calling
@ -19,13 +19,13 @@ addresses the following security issue that was discovered in previous releases:
**Affected Release:** v2.2 and earlier.
Addressed in ACRN v2.1
Addressed in ACRN V2.1
************************
We recommend that all developers upgrade to this v2.1 release (or later), which
addresses the following security issue that was discovered in previous releases:
------
-----
- Missing access control restrictions in the Hypervisor component
A malicious entity with root access in the Service VM
@ -36,13 +36,13 @@ addresses the following security issue that was discovered in previous releases:
**Affected Release:** v2.0 and v1.6.1.
Addressed in ACRN v1.6.1
Addressed in ACRN V1.6.1
************************
We recommend that all developers upgrade to this v1.6.1 release (or later), which
addresses the following security issue that was discovered in previous releases:
------
-----
- Service VM kernel Crashes When Fuzzing HC_ASSIGN_PCIDEV and HC_DEASSIGN_PCIDEV
NULL pointer dereference due to invalid address of PCI device to be assigned or
@ -52,13 +52,13 @@ addresses the following security issue that was discovered in previous releases:
**Affected Release:** v1.6.
Addressed in ACRN v1.6
Addressed in ACRN V1.6
**********************
We recommend that all developers upgrade to this v1.6 release (or later), which
addresses the following security issues that were discovered in previous releases:
------
-----
- Hypervisor Crashes When Fuzzing HC_DESTROY_VM
The input 'vdev->pdev' should be validated properly when handling
@ -84,13 +84,13 @@ addresses the following security issues that were discovered in previous release
**Affected Release:** v1.4 and earlier.
Addressed in ACRN v1.4
Addressed in ACRN V1.4
**********************
We recommend that all developers upgrade to this v1.4 release (or later), which
addresses the following security issues that were discovered in previous releases:
------
-----
- Mitigation for Machine Check Error on Page Size Change
Improper invalidation for page table updates by a virtual guest operating

View File

@ -24,7 +24,7 @@ For simplicity, in the rest of this document, the term GVT is used to
refer to the core device model component of GVT-g, specifically
corresponding to ``gvt.ko`` when built as a module.
Purpose of this document
Purpose of This Document
************************
This document explains the relationship between components of GVT-g in
@ -94,11 +94,11 @@ VHM module
GVT-g components and interfaces
Core scenario interaction sequences
Core Scenario Interaction Sequences
***********************************
vGPU creation scenario
vGPU Creation Scenario
======================
In this scenario, AcrnGT receives a create request from ACRN-DM. It
@ -111,14 +111,14 @@ configure space of the vGPU (virtual device 0:2:0) via VHM's APIs.
Finally, the AcrnGT module launches an AcrnGT emulation thread to
listen to I/O trap notifications from VHM and the ACRN hypervisor.
vGPU destroy scenario
vGPU Destroy Scenario
=====================
In this scenario, AcrnGT receives a destroy request from ACRN-DM. It
calls GVT's :ref:`intel_gvt_ops_interface` to inform GVT of the vGPU destroy
request, and cleans up all vGPU resources.
vGPU PCI configure space write scenario
vGPU PCI Configure Space Write Scenario
=======================================
ACRN traps the vGPU's PCI config space write, notifies AcrnGT's
@ -133,26 +133,26 @@ config space write:
corresponding part in the host's aperture.
#. Otherwise, write to the virtual PCI configuration space of the vGPU.
PCI configure space read scenario
PCI Configure Space Read Scenario
=================================
Call sequence is almost the same as the write scenario above,
but instead it calls the GVT's :ref:`intel_gvt_ops_interface`
``emulate_cfg_read`` to emulate the vGPU PCI config space read.
GGTT read/write scenario
GGTT Read/Write Scenario
========================
GGTT's trap is set up in the PCI configure space write
scenario above.
MMIO read/write scenario
MMIO Read/Write Scenario
========================
MMIO's trap is set up in the PCI configure space write
scenario above.
PPGTT write-protection page set/unset scenario
PPGTT Write-Protection Page Set/Unset Scenario
==============================================
PPGTT write-protection page is set by calling ``acrn_ioreq_add_iorange``
@ -161,13 +161,13 @@ allowing read without trap.
PPGTT write-protection page is unset by calling ``acrn_ioreq_del_range``.
PPGTT write-protection page write
PPGTT Write-Protection Page Write
=================================
In the VHM module, ioreq for PPGTT WP and MMIO trap is the same. It will
also be trapped into the routine ``intel_vgpu_emulate_mmio_write()``.
API details
API Details
***********
APIs of each component interface can be found in the :ref:`GVT-g_api`

View File

@ -114,7 +114,7 @@ platforms such as GitHub.
If you haven't already done so, you'll need to create a (free) GitHub account
on https://github.com and have Git tools available on your development system.
Repository layout
Repository Layout
*****************
To clone the ACRN hypervisor repository (including the ``hypervisor``,
@ -166,7 +166,7 @@ Contribution Tools and Git Setup
.. _Git send-email documentation:
https://git-scm.com/docs/git-send-email
git-send-email
Git-Send-Email
==============
If you'll be submitting code patches, you may need to install
@ -178,7 +178,7 @@ for example use::
and then configure Git with your SMTP server information as
described in the `Git send-email documentation`_.
Signed-off-by
Signed-Off-By
=============
The name in the commit message ``Signed-off-by:`` line and your email must

View File

@ -162,7 +162,7 @@ Would be rendered as:
Remove all generated output, restoring the folders to a
clean state.
Multi-column lists
Multi-Column Lists
******************
If you have a long bullet list of items, where each item is short, you
@ -282,7 +282,7 @@ columns, you can specify ``:widths: 1 2 2``. If you'd like the browser
to set the column widths automatically based on the column contents, you
can use ``:widths: auto``.
File names and Commands
File Names and Commands
***********************
Sphinx extends reST by supporting additional inline markup elements (called
@ -484,7 +484,7 @@ as needed, generally at least 500 px wide but no more than 1000 px, and
no more than 250 KB unless a particularly large image is needed for
clarity.
Tabs, spaces, and indenting
Tabs, Spaces, and Indenting
***************************
Indenting is significant in reST file content, and using spaces is
@ -641,7 +641,7 @@ without this ``rst-class`` directive will not be numbered.) For example::
.. rst-class:: numbered-step
First instruction step
First Instruction Step
**********************
This is the first instruction step material. You can do the usual paragraphs and
@ -651,7 +651,7 @@ can move steps around easily if needed).
.. rst-class:: numbered-step
Second instruction step
Second Instruction Step
***********************
This is the second instruction step.

View File

@ -1,6 +1,6 @@
.. _graphviz-examples:
Drawings using graphviz
Drawings Using Graphviz
#######################
We support using the Sphinx `graphviz extension`_ for creating simple
@ -35,7 +35,7 @@ and the generated output would appear as this:
Let's look at some more examples and then we'll get into more details
about the dot language and drawing options.
Simple directed graph
Simple Directed Graph
*********************
For simple drawings with shapes and lines, you can put the graphviz
@ -77,7 +77,7 @@ colors, as shown.
.. _standard HTML color names:
https://www.w3schools.com/colors/colors_hex.asp
Adding edge labels
Adding Edge Labels
******************
Here's an example of a drawing with labels on the edges (arrows)

View File

@ -1,6 +1,6 @@
.. _atkbdc_virt_hld:
AT keyboard controller emulation
AT Keyboard Controller Emulation
################################
This document describes the AT keyboard controller emulation implementation in the ACRN device model. The Atkbdc device emulates a PS2 keyboard and mouse.
@ -16,7 +16,7 @@ The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mic
AT keyboard controller emulation architecture
PS2 keyboard emulation
PS2 Keyboard Emulation
**********************
ACRN supports an AT keyboard controller for a PS2 keyboard that can be accessed through I/O ports (0x60 and 0x64). Port 0x60 is used to access the AT keyboard controller data register; port 0x64 is used to access the AT keyboard controller address register.
@ -45,7 +45,7 @@ The PS2 keyboard ACPI description as below::
})
}
PS2 mouse emulation
PS2 Mouse Emulation
*******************
ACRN supports an AT keyboard controller for a PS2 mouse that can be accessed through I/O ports (0x60 and 0x64).

View File

@ -1,12 +1,12 @@
.. _APL_GVT-g-hld:
GVT-g high-level design
GVT-g High-Level Design
#######################
Introduction
************
Purpose of this Document
Purpose of This Document
========================
This high-level design (HLD) document describes the usage requirements
@ -919,7 +919,7 @@ OS and an Android Guest OS.
Full picture of the AcrnGT
AcrnGT in kernel
AcrnGT in Kernel
=================
The AcrnGT module in the Service VM kernel acts as an adaption layer to connect

View File

@ -1,6 +1,6 @@
.. _hld-devicemodel:
Device Model high-level design
Device Model High-Level Design
##############################
Hypervisor Device Model (DM) is a QEMU-like application in Service VM
@ -279,7 +279,7 @@ DM Initialization
VHM
***
VHM overview
VHM Overview
============
Device Model manages User VM by accessing interfaces exported from VHM
@ -302,7 +302,7 @@ hypercall to the hypervisor. There are two exceptions:
Architecture of ACRN VHM
VHM ioctl interfaces
VHM Ioctl Interfaces
====================
.. note:: Reference API documents for General interface, VM Management,
@ -756,7 +756,7 @@ called from the PIO/MMIO handler.
The PCI emulation device will make use of interrupt APIs as well for
its interrupt injection.
PCI Host Bridge and hierarchy
PCI Host Bridge and Hierarchy
=============================
There is PCI host bridge emulation in DM. The bus hierarchy is
@ -892,7 +892,7 @@ shows a typical ACPI table layout in an Intel APL platform:
Typical ACPI table layout on Intel APL platform
ACPI virtualization
ACPI Virtualization
===================
Most modern OSes require ACPI, so we need ACPI virtualization to

View File

@ -1,6 +1,6 @@
.. _hld-emulated-devices:
Emulated devices high-level design
Emulated Devices High-Level Design
##################################
Full virtualization device models can typically

View File

@ -1,6 +1,6 @@
.. _hld-hypervisor:
Hypervisor high-level design
Hypervisor High-Level Design
############################

View File

@ -1,6 +1,6 @@
.. _hld-overview:
ACRN high-level design overview
ACRN High-Level Design Overview
###############################
ACRN is an open source reference hypervisor (HV) that runs on top of
@ -28,7 +28,7 @@ The Instrument Control (IC) system manages graphic displays of:
- alerts of low fuel or tire pressure
- rear-view camera (RVC) and surround-camera view for driving assistance
In-vehicle Infotainment
In-Vehicle Infotainment
=======================
A typical In-vehicle Infotainment (IVI) system supports:
@ -419,7 +419,7 @@ to complete the User VM's host-to-guest mapping using this pseudo code:
host2guest_map_for_uos(x.hpa, x.uos_gpa, x.size)
end
Virtual Slim bootloader
Virtual Slim Bootloader
=======================
The Virtual Slim bootloader (vSBL) is the virtual bootloader that supports
@ -451,7 +451,7 @@ For an Android VM, the vSBL will load and verify trusty OS first, and
trusty OS will then load and verify Android OS according to the Android
OS verification mechanism.
OVMF bootloader
OVMF Bootloader
=======================
Open Virtual Machine Firmware (OVMF) is the virtual bootloader that supports
@ -536,7 +536,7 @@ Boot Flow
Power Management
****************
CPU P-state & C-state
CPU P-State & C-State
=====================
In ACRN, CPU P-state and C-state (Px/Cx) are controlled by the guest OS.
@ -562,7 +562,7 @@ This diagram shows CPU P/C-state management blocks:
CPU P/C-state management block diagram
System power state
System Power State
==================
ACRN supports ACPI standard defined power state: S3 and S5 in system

View File

@ -1,12 +1,12 @@
.. _hld-power-management:
Power Management high-level design
Power Management High-Level Design
##################################
P-state/C-state management
P-State/C-State Management
**************************
ACPI Px/Cx data
ACPI Px/Cx Data
===============
CPU P-state/C-state are controlled by the guest OS. The ACPI
@ -54,7 +54,7 @@ Hypervisor module named CPU state table:
With these Px/Cx data, the Hypervisor is able to intercept the guest's
P/C-state requests with desired restrictions.
Virtual ACPI table build flow
Virtual ACPI Table Build Flow
=============================
:numref:`vACPItable` shows how to build the virtual ACPI table with the
@ -127,7 +127,7 @@ could customize it according to their hardware/software requirements.
ACRN System S3/S5 diagram
System low power state entry process
System Low Power State Entry Process
====================================
Each time the lifecycle manager of the User VM starts a power state transition,
@ -171,7 +171,7 @@ For system power state entry:
6. OSPM in ACRN hypervisor checks all guests are in S5 state and shuts down
whole system.
System low power state exit process
System Low Power State Exit Process
===================================
The low power state exit process is in reverse order. The ACRN

View File

@ -1,6 +1,6 @@
.. _hld-security:
Security high-level design
Security High-Level Design
##########################
.. primary author: Bing Zhu
@ -131,7 +131,7 @@ Boot Flow
---------
ACRN supports two verified boot sequences.
1) Verified Boot Sequence with SBL
1) Verified Boot Sequence With SBL
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As shown in :numref:`security-bootflow-sbl`, the Converged Security Engine
Firmware (CSE FW) behaves as the root of trust in this platform boot
@ -148,7 +148,7 @@ before launching.
ACRN Boot Flow with SBL
2) Verified Boot Sequence with UEFI
2) Verified Boot Sequence With UEFI
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As shown in :numref:`security-bootflow-uefi`, in this boot sequence, UEFI
authenticates and starts the ACRN hypervisor first, and the hypervisor will return
@ -193,7 +193,7 @@ partners are responsible for image signing, ensuring the key strength
meets security requirements, and storing the secret RSA private key
securely.
Guest Secure Boot with OVMF
Guest Secure Boot With OVMF
---------------------------
Open Virtual Machine Firmware (OVMF) is an EDK II based project to enable UEFI
support for virtual machines in a virtualized environment. In ACRN, OVMF is
@ -677,7 +677,7 @@ virtual power life cycle management is out of scope in this document.
This subsection is intended to describe the security issues for those
power cycles.
User VM Power On and Shutdown
User VM Power on and Shutdown
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The memory of the User VM is allocated dynamically by the DM
@ -744,7 +744,7 @@ for secure-world is preserved too. The physical memory region of
secure world is removed from EPT paging tables of any guest VM,
even including the Service VM.
Third-party libraries
Third-Party Libraries
---------------------
All the third-party libraries must be examined before use to verify
@ -754,7 +754,7 @@ can be used to search for known vulnerabilities.
.. _platform_root_of_trust:
Platform Root of Trust Key/SEED Derivation
Platform Root of Trust Key/Seed Derivation
==========================================
For security reason, each guest VM requires a root key, which is used to
@ -880,7 +880,7 @@ memory (>=511G) are valid for Trusty World's EPT only.
Memory View for User VM non-secure World and Secure World
Trusty/TEE Hypercalls
Trusty/Tee Hypercalls
---------------------
Two hypercalls are introduced to assist in secure world (Trusty/TEE)
@ -1039,7 +1039,7 @@ SEED Derivation
Refer to the previous section: :ref:`platform_root_of_trust`.
Trusty/TEE S3 (Suspend To RAM)
Trusty/Tee S3 (Suspend to RAM)
------------------------------
Secure world S3 design is not yet finalized. However, there is a

View File

@ -1,6 +1,6 @@
.. _hld_splitlock:
Handling Split-locked Access in ACRN
Handling Split-Locked Access in ACRN
####################################
A split lock is any atomic operation whose operand crosses two cache
@ -12,7 +12,7 @@ system performance.
This document explains Split-locked Access, how to detect it, and how
ACRN handles it.
Split-locked Access Introduction
Split-Locked Access Introduction
********************************
Intel-64 and IA32 multiple-processor systems support locked atomic
operations on locations in system memory. For example, The LOCK instruction
@ -38,7 +38,7 @@ Split-locked Access can cause unexpected long latency to ordinary memory
operations by other CPUs while the bus is locked. This degraded system
performance can be hard to investigate.
Split-locked Access Detection
Split-Locked Access Detection
*****************************
The `Intel Tremont Microarchitecture
<https://newsroom.intel.com/news/intel-introduces-tremont-microarchitecture>`_
@ -70,7 +70,7 @@ MSR registers.
- The 29th bit of TEST_CTL MSR(0x33) controls enabling and disabling #AC for Split-locked
Access.
ACRN Handling Split-locked Access
ACRN Handling Split-Locked Access
*********************************
Split-locked Access is not expected in the ACRN hypervisor itself, and
should never happen. However, such access could happen inside a VM. ACRN
@ -92,7 +92,7 @@ support for handling split-locked access follows these design principles:
native OS). The real-time (RT) guest must avoid a Split-locked Access
and consider it a software bug.
Enable Split-Locked Access handling early
Enable Split-Locked Access Handling Early
==========================================
This feature is enumerated at the Physical CPU (pCPU) pre-initialization
stage, where ACRN detects CPU capabilities. If the pCPU supports this
@ -128,7 +128,7 @@ problem by reporting a warning message that the VM tried writing to
TEST_CTRL MSR.
Disable Split-locked Access Detection
Disable Split-Locked Access Detection
=====================================
If the CPU supports Split-locked Access detection, the ACRN hypervisor
uses it to prevent any VM running with potential system performance

View File

@ -1,6 +1,6 @@
.. _hld-trace-log:
Tracing and Logging high-level design
Tracing and Logging High-Level Design
#####################################
Both Trace and Log are built on top of a mechanism named shared
@ -30,7 +30,7 @@ is allowed to put data into that sbuf in HV, and a single consumer is
allowed to get data from sbuf in Service VM. Therefore, no lock is required to
synchronize access by the producer and consumer.
sbuf APIs
Sbuf APIs
=========
The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``.
@ -128,7 +128,7 @@ kinds of logs:
- Current runtime logs;
- Logs remaining in the buffer, from the last crashed run.
Architectural diagram
Architectural Diagram
=====================
Similar to the design of ACRN Trace, ACRN Log is built on top of
@ -149,7 +149,7 @@ up:
Architectural diagram of ACRN Log
ACRN log support in Hypervisor
ACRN Log Support in Hypervisor
==============================
To support ``acrnlog``, the following adaption was made to hypervisor log

View File

@ -1,7 +1,7 @@
.. _hld-virtio-devices:
.. _virtio-hld:
Virtio devices high-level design
Virtio Devices High-Level Design
################################
The ACRN Hypervisor follows the `Virtual I/O Device (virtio)
@ -47,7 +47,7 @@ ACRN's virtio architectures, and elaborates on ACRN virtio APIs. Finally
this section will introduce all the virtio devices currently supported
by ACRN.
Virtio introduction
Virtio Introduction
*******************
Virtio is an abstraction layer over devices in a para-virtualized
@ -268,7 +268,7 @@ Kernel-Land Virtio Framework
ACRN supports two kernel-land virtio frameworks: VBS-K, designed from
scratch for ACRN, the other called Vhost, compatible with Linux Vhost.
VBS-K framework
VBS-K Framework
---------------
The architecture of ACRN VBS-K is shown in
@ -301,7 +301,7 @@ driver development.
ACRN Kernel Land Virtio Framework
Vhost framework
Vhost Framework
---------------
Vhost is similar to VBS-K. Vhost is a common solution upstreamed in the
@ -346,7 +346,7 @@ can be described as:
virtqueue, which results in an event_signal on the kick fd by VHM ioeventfd.
5. vhost device in kernel signals on the irqfd to notify the guest.
Ioeventfd implementation
Ioeventfd Implementation
~~~~~~~~~~~~~~~~~~~~~~~~
Ioeventfd module is implemented in VHM, and can enhance a registered
@ -372,7 +372,7 @@ The workflow can be summarized as:
corresponding eventfd.
7. trigger the signal to related eventfd.
Irqfd implementation
Irqfd Implementation
~~~~~~~~~~~~~~~~~~~~
The irqfd module is implemented in VHM, and can enhance a registered
@ -584,7 +584,7 @@ VBS-K APIs
The VBS-K APIs are exported by VBS-K related modules. Users could use
the following APIs to implement their VBS-K modules.
APIs provided by DM
APIs Provided by DM
~~~~~~~~~~~~~~~~~~~
.. doxygenfunction:: vbs_kernel_reset
@ -596,7 +596,7 @@ APIs provided by DM
.. doxygenfunction:: vbs_kernel_stop
:project: Project ACRN
APIs provided by VBS-K modules in service OS
APIs Provided by VBS-K Modules in Service OS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. kernel-doc:: include/linux/vbs/vbs.h
@ -611,7 +611,7 @@ APIs provided by VBS-K modules in service OS
VHOST APIS
==========
APIs provided by DM
APIs Provided by DM
-------------------
.. doxygenfunction:: vhost_dev_init
@ -626,7 +626,7 @@ APIs provided by DM
.. doxygenfunction:: vhost_dev_stop
:project: Project ACRN
Linux vhost IOCTLs
Linux Vhost IOCTLs
------------------
``#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)``
@ -658,7 +658,7 @@ Linux vhost IOCTLs
This IOCTL is used to set the eventfd that vhost uses to inject a
virtual interrupt.
VHM eventfd IOCTLs
VHM Eventfd IOCTLs
------------------
.. doxygenstruct:: acrn_ioeventfd

View File

@ -1,4 +1,4 @@
.. _hld-vsbl:
Virtual Slim-Bootloader high-level design
Virtual Slim-Bootloader High-Level Design
#########################################

View File

@ -1,6 +1,6 @@
.. _hostbridge_virt_hld:
Hostbridge emulation
Hostbridge Emulation
####################
Overview
@ -8,7 +8,7 @@ Overview
Hostbridge emulation is based on PCI emulation; however, the hostbridge emulation only sets the PCI configuration space. The device model sets the PCI configuration space for hostbridge in the Service VM and then exposes it to the User VM to detect the PCI hostbridge.
PCI Host Bridge and hierarchy
PCI Host Bridge and Hierarchy
*****************************
There is PCI host bridge emulation in DM. The bus hierarchy is determined by ``acrn-dm`` command line input. Using this command line, as an example::

View File

@ -1,6 +1,6 @@
.. _hv-config:
Compile-time Configuration
Compile-Time Configuration
##########################
.. note:: With ACRN release 2.4, ACRN configuration has changed

View File

@ -1,11 +1,11 @@
.. _hv-console-shell-uart:
Hypervisor console, hypervisor shell, and virtual UART
Hypervisor Console, Hypervisor Shell, and Virtual UART
######################################################
.. _hv-console:
Hypervisor console
Hypervisor Console
******************
The hypervisor console is a text-based terminal accessible from UART.
@ -32,7 +32,7 @@ is active:
configured at compile time. In the release version, the console is
disabled and the physical UART is not used by the hypervisor or Service VM.
Hypervisor shell
Hypervisor Shell
****************
For debugging, the hypervisor shell provides commands to list some

View File

@ -31,7 +31,7 @@ Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU
- **simple schedule**: a well-designed scheduler framework that allows ACRN
to adopt different scheduling policies, such as the **noop** and **round-robin**:
- **noop scheduler**: only two thread loops are maintained for a CPU: a
- **noop scheduler**: only two thread loops are maintained for a CPU: a
vCPU thread and a default idle thread. A CPU runs most of the time in
the vCPU thread for emulating a guest CPU, switching between VMX root
mode and non-root mode. A CPU schedules out to default idle when an
@ -45,7 +45,7 @@ Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU
itself as well, such as when it executes "PAUSE" instruction.
Static CPU partitioning
Static CPU Partitioning
***********************
CPU partitioning is a policy for mapping a virtual
@ -75,7 +75,7 @@ VM.
See :ref:`cpu_sharing` for more information.
CPU management in the Service VM under static CPU partitioning
CPU Management in the Service VM Under Static CPU Partitioning
==============================================================
With ACRN, all ACPI table entries are passed through to the Service VM, including
@ -96,7 +96,7 @@ Here is an example flow of CPU allocation on a multi-core platform.
CPU allocation on a multi-core platform
CPU management in the Service VM under flexible CPU sharing
CPU Management in the Service VM Under Flexible CPU Sharing
===========================================================
As all Service VM CPUs could share with different User VMs, ACRN can still passthrough
@ -105,7 +105,7 @@ MADT to Service VM, and the Service VM is still able to see all physical CPUs.
But under CPU sharing, the Service VM does not need to offline/release the physical
CPUs intended for User VM use.
CPU management in the User VM
CPU Management in the User VM
=============================
``cpu_affinity`` in ``vm config`` defines a set of pCPUs that a User VM
@ -113,7 +113,7 @@ is allowed to run on. acrn-dm could choose to launch on only a subset of the pCP
or on all pCPUs listed in cpu_affinity, but it can't assign
any pCPU that is not included in it.
CPU assignment management in HV
CPU Assignment Management in HV
===============================
The physical CPU assignment is pre-defined by ``cpu_affinity`` in
@ -169,7 +169,7 @@ lifecycle:
:project: Project ACRN
vCPU Scheduling under static CPU partitioning
vCPU Scheduling Under Static CPU Partitioning
*********************************************
.. figure:: images/hld-image35.png
@ -225,7 +225,7 @@ Some example scenario flows are shown here:
*hcall_notify_ioreq_finish->resume_vcpu* and makes the vCPU
schedule back to *vcpu_thread* to continue its guest execution.
vCPU Scheduling under flexible CPU sharing
vCPU Scheduling Under Flexible CPU Sharing
******************************************
To be added.
@ -53,7 +53,7 @@ for post-launched VM:
Passthrough devices initialization control flow
Passthrough Device status
Passthrough Device Status
*************************
Most common devices on supported platforms are enabled for
@ -129,7 +129,7 @@ a passthrough device to/from a post-launched VM is shown in the following figure
.. _vtd-posted-interrupt:
VT-d Interrupt-remapping
VT-d Interrupt-Remapping
************************
The VT-d interrupt-remapping architecture enables system software to
@ -252,7 +252,7 @@ There is one exception, MSI-X table is also in a MMIO BAR. Hypervisor needs to t
accesses to MSI-X table. So the page(s) having MSI-X table should not be accessed by guest
directly. EPT mapping is not built for these pages having MSI-X table.
Device configuration emulation
Device Configuration Emulation
******************************
The PCI configuration space can be accessed by a PCI-compatible
@ -260,7 +260,7 @@ Configuration Mechanism (IO port 0xCF8/CFC) and the PCI Express Enhanced
Configuration Access Mechanism (PCI MMCONFIG). The ACRN hypervisor traps
this PCI configuration space access and emulates it. Refer to :ref:`split-device-model` for details.
MSI-X table emulation
MSI-X Table Emulation
*********************
VM accesses to MSI-X table should be trapped so that hypervisor has the
@ -386,7 +386,7 @@ The platform GSI information is in devicemodel/hw/pci/platform_gsi_info.c
for limited platform (currently, only APL MRB). For other platforms, the platform
specific GSI information should be added to activate the checking of GSI sharing violation.
Data structures and interfaces
Data Structures and Interfaces
******************************
The following APIs are common APIs provided to initialize interrupt remapping for
@ -1,6 +1,6 @@
.. _hv-hypercall:
Hypercall / VHM upcall
Hypercall / VHM Upcall
######################
The hypercall/upcall is used to request services between the Guest VM and the hypervisor.
@ -28,7 +28,7 @@ injected to Service VM vCPU0. The Service VM will register the IRQ handler for v
module in the Service VM once the IRQ is triggered.
View the detailed upcall process at :ref:`ipi-management`
Hypercall APIs reference:
Hypercall APIs Reference:
*************************
:ref:`hypercall_apis` for the Service VM
@ -1,6 +1,6 @@
.. _interrupt-hld:
Physical Interrupt high-level design
Physical Interrupt High-Level Design
####################################
Overview
@ -374,7 +374,7 @@ IPI vector 0xF3 upcall. The virtual interrupt injection uses IPI vector 0xF0.
.. _hv_interrupt-data-api:
Data structures and interfaces
Data Structures and Interfaces
******************************
IOAPIC
@ -1,6 +1,6 @@
.. _hld-io-emulation:
I/O Emulation high-level design
I/O Emulation High-Level Design
###############################
As discussed in :ref:`intro-io-emulation`, there are multiple ways and
@ -215,7 +215,7 @@ Note that there is no state to represent a 'failed' I/O request. Service VM
should return all 1's for reads and ignore writes whenever it cannot
handle the I/O request, and change the state of the request to COMPLETE.
Post-work
Post-Work
=========
After an I/O request is completed, some more work needs to be done for
@ -1,6 +1,6 @@
.. _IOC_virtualization_hld:
IOC Virtualization high-level design
IOC Virtualization High-Level Design
####################################
@ -31,7 +31,7 @@ IOC Mediator Design
Architecture Diagrams
=====================
IOC introduction
IOC Introduction
----------------
.. figure:: images/ioc-image12.png
@ -57,7 +57,7 @@ IOC introduction
IOC for storing persistent data. The IOC is in charge of accessing NVM
following the SoC's requirements.
CBC protocol introduction
CBC Protocol Introduction
-------------------------
The Carrier Board Communication (CBC) protocol multiplexes and
@ -85,7 +85,7 @@ The CBC protocol is based on a four-layer system:
and contains Multiplexer (MUX) and Priority fields.
- The **Service Layer** contains the payload data.
Native architecture
Native Architecture
-------------------
In the native architecture, the IOC controller connects to UART
@ -102,7 +102,7 @@ devices.
IOC Native - Software architecture
Virtualization architecture
Virtualization Architecture
---------------------------
In the virtualization architecture, the IOC Device Model (DM) is
@ -163,7 +163,7 @@ char devices and UART DM immediately.
- Currently, IOC mediator only cares about lifecycle, signal, and raw data.
Others, e.g. diagnosis, are not used by the IOC mediator.
State transfer
State Transfer
--------------
IOC mediator has four states and five events for state transfer.
@ -190,7 +190,7 @@ IOC mediator has four states and five events for state transfer.
sleep until a RESUME event is triggered to re-open the closed native
CBC char devices and transition to the INIT state.
CBC protocol
CBC Protocol
------------
IOC mediator needs to pack/unpack the CBC link frame for IOC
@ -221,7 +221,7 @@ priority. Currently, priority is not supported by IOC firmware; the
priority setting by the IOC mediator is based on the priority setting of
the CBC driver. The Service VM and User VM use the same CBC driver.
Power management virtualization
Power Management Virtualization
-------------------------------
In acrn-dm, the IOC power management architecture involves PM DM, IOC
@ -232,7 +232,7 @@ and wakeup reason flow is used to indicate IOC power state to the OS.
UART DM transfers all IOC data between the Service VM and User VM. These modules
complete boot/suspend/resume/shutdown functions.
Boot flow
Boot Flow
+++++++++
.. figure:: images/ioc-image19.png
@ -251,7 +251,7 @@ Boot flow
#. PM DM starts User VM.
#. User VM lifecycle gets a "booting" wakeup reason.
Suspend & Shutdown flow
Suspend & Shutdown Flow
+++++++++++++++++++++++
.. figure:: images/ioc-image21.png
@ -281,7 +281,7 @@ Suspend & Shutdown flow
suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service
policy.
Resume flow
Resume Flow
+++++++++++
.. figure:: images/ioc-image22.png
@ -326,7 +326,7 @@ For RTC resume flow
initial or active heartbeat. The User VM gets wakeup reason 0x800200
after resuming.
System control data
System Control Data
-------------------
IOC mediator has several emulated CBC commands, including wakeup reason,
@ -385,7 +385,7 @@ table:
disable any watchdog on the CBC heartbeat messages during this period
of time.
Wakeup reason
Wakeup Reason
+++++++++++++
The wakeup reasons command contains a bit mask of all reasons, which is
@ -532,7 +532,7 @@ definition is as below.
IOC Mediator - RTC flow
Signal data
Signal Data
-----------
Signal channel is an API between the SOC and IOC for
@ -579,7 +579,7 @@ new multi signal, which contains the signals in the passlist.
IOC Mediator - Multi-Signal passlist
Raw data
Raw Data
--------
OEM raw channel only assigns to a specific User VM following that OEM
@ -613,7 +613,7 @@ for TTY line discipline in User VM::
-l com2,/run/acrn/ioc_$vm_name
Porting and adaptation to different platforms
Porting and Adaptation to Different Platforms
*********************************************
TBD
@ -1,6 +1,6 @@
.. _memmgt-hld:
Memory Management high-level design
Memory Management High-Level Design
###################################
This document describes memory management for the ACRN hypervisor.
@ -233,7 +233,7 @@ checking service and an EPT hugepage supporting checking service. Before the HV
enables memory virtualization and uses the EPT hugepage, these services need
to be invoked by other units.
Data Transfer between Different Address Spaces
Data Transfer Between Different Address Spaces
==============================================
In ACRN, different memory space management is used in the hypervisor,
@ -244,7 +244,7 @@ transferring, or when the hypervisor does instruction emulation: the HV
needs to access the guest instruction pointer register to fetch guest
instruction data.
Access GPA from Hypervisor
Access GPA From Hypervisor
--------------------------
When the hypervisor needs to access the GPA for data transfer, the caller from guest
@ -255,7 +255,7 @@ different 2M huge host-physical pages. The ACRN hypervisor must take
care of this kind of data transfer by doing EPT page walking based on
its HPA.
Access GVA from Hypervisor
Access GVA From Hypervisor
--------------------------
When the hypervisor needs to access GVA for data transfer, it's likely both
@ -312,7 +312,7 @@ PAT entry in the PAT MSR (which is determined by PAT, PCD, and PWT bits
from the guest paging structures) to determine the effective memory
type.
VPID operations
VPID Operations
===============
Virtual-processor identifier (VPID) is a hardware feature to optimize
@ -376,7 +376,7 @@ Interfaces Design
The memory virtualization unit interacts with external units through VM
exit and APIs.
VM Exit about EPT
VM Exit About EPT
=================
There are two VM exit handlers for EPT violation and EPT
@ -395,7 +395,7 @@ Here is a list of major memory related APIs in the HV:
EPT/VPID Capability Checking
----------------------------
Data Transferring between hypervisor and VM
Data Transferring Between Hypervisor and VM
-------------------------------------------
.. doxygenfunction:: copy_from_gpa
@ -1,6 +1,6 @@
.. _partition-mode-hld:
Partition mode
Partition Mode
##############
ACRN is a type-1 hypervisor that supports running multiple guest operating
@ -44,7 +44,7 @@ example of two VMs with exclusive access to physical resources.
Partition Mode example with two VMs
Guest info
Guest Info
**********
ACRN uses multi-boot info passed from the platform bootloader to know
@ -57,7 +57,7 @@ configuration and copies them to the corresponding guest memory.
.. figure:: images/partition-image18.png
:align: center
ACRN setup for guests
ACRN Setup for Guests
*********************
Cores
@ -96,7 +96,7 @@ for assigning host memory to the guests:
ACRN creates EPT mapping for the guest between GPA (0, memory size) and
HPA (starting address in guest configuration, memory size).
E820 and zero page info
E820 and Zero Page Info
=======================
A default E820 is used for all the guests in partition mode. This table
@ -123,7 +123,7 @@ e820 info for all the guests.
| RESERVED |
+------------------------+
Platform info - mptable
Platform Info - Mptable
=======================
ACRN, in partition mode, uses mptable to convey platform info to each
@ -132,7 +132,7 @@ guest, and whether the guest needs devices with INTX, ACRN builds
mptable and copies it to the guest memory. In partition mode, ACRN uses
physical APIC IDs to pass to the guests.
I/O - Virtual devices
I/O - Virtual Devices
=====================
Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART
@ -141,7 +141,7 @@ Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART
host-bridge at BDF (Bus Device Function) 0.0:0 to each guest. Access to
256 bytes of config space for virtual host bridge is emulated.
I/O - Passthrough devices
I/O - Passthrough Devices
=========================
ACRN, in partition mode, supports passing thru PCI devices on the
@ -153,7 +153,7 @@ expects the developer to provide the virtual BDF to BDF of the
physical device mapping for all the passthrough devices as part of each guest
configuration.
Runtime ACRN support for guests
Runtime ACRN Support for Guests
*******************************
ACRN, in partition mode, supports an option to passthrough LAPIC of the
@ -170,7 +170,7 @@ will be discussed in detail in the corresponding sections.
:align: center
Guest SMP boot flow
Guest SMP Boot Flow
===================
The core APIC IDs are reported to the guest using mptable info. SMP boot
@ -178,17 +178,17 @@ flow is similar to sharing mode. Refer to :ref:`vm-startup`
for guest SMP boot flow in ACRN. Partition mode guests startup is same as
the Service VM startup in sharing mode.
Inter-processor Interrupt (IPI) Handling
Inter-Processor Interrupt (IPI) Handling
========================================
Guests w/o LAPIC passthrough
Guests W/O LAPIC Passthrough
----------------------------
For guests without LAPIC passthrough, IPIs between guest CPUs are handled in
the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld`
for more details.
Guests w/ LAPIC passthrough
Guests W/ LAPIC Passthrough
---------------------------
ACRN supports passthrough if and only if the guest is using x2APIC mode
@ -204,10 +204,10 @@ corresponding to the destination processor info in the ICR.
:align: center
Passthrough device support
Passthrough Device Support
==========================
Configuration space access
Configuration Space Access
--------------------------
ACRN emulates Configuration Space Address (0xcf8) I/O port and
@ -258,7 +258,7 @@ Interrupt Configuration
ACRN supports both legacy (INTx) and MSI interrupts for passthrough
devices.
INTx support
INTx Support
~~~~~~~~~~~~
ACRN expects developers to identify the interrupt line info (0x3CH) from
@ -271,7 +271,7 @@ IOAPIC. When guest masks the RTE in vIOAPIC, ACRN masks the interrupt
RTE in the physical IOAPIC. Level triggered interrupts are not
supported.
MSI support
MSI Support
~~~~~~~~~~~
Guest reads/writes to PCI configuration space for configuring MSI
@ -279,7 +279,7 @@ interrupts using an address. Data and control registers are passthrough to
the physical BAR of the passthrough device. Refer to `Configuration
space access`_ for details on how the PCI configuration space is emulated.
Virtual device support
Virtual Device Support
======================
ACRN provides read-only vRTC support for partition mode guests. Writes
@ -288,10 +288,10 @@ to the data port are discarded.
For port I/O to ports other than vPIC, vRTC, or vUART, reads return 0xFF and
writes are discarded.
Interrupt delivery
Interrupt Delivery
==================
Guests w/o LAPIC passthrough
Guests W/O LAPIC Passthrough
----------------------------
In partition mode of ACRN, interrupts stay disabled after a vmexit. The
@ -307,25 +307,25 @@ for device interrupts.
:align: center
Guests w/ LAPIC passthrough
Guests W/ LAPIC Passthrough
---------------------------
For guests with LAPIC passthrough, ACRN does not configure vmexit upon
external interrupts. There is no vmexit upon device interrupts and they are
handled by the guest IDT.
Hypervisor IPI service
Hypervisor IPI Service
======================
ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual
device interrupts (e.g. vUART to vCPUs), and others.
Guests w/o LAPIC passthrough
Guests W/O LAPIC Passthrough
----------------------------
Hypervisor IPIs work the same way as in sharing mode.
Guests w/ LAPIC passthrough
Guests W/ LAPIC Passthrough
---------------------------
Since external interrupts are passthrough to the guest IDT, IPIs do not
@ -344,7 +344,7 @@ For a guest console in partition mode, ACRN provides an option to pass
``vmid`` as an argument to ``vm_console``. vmid is the same as the one
developers use in the guest configuration.
Guests w/o LAPIC passthrough
Guests W/O LAPIC Passthrough
----------------------------
Works the same way as sharing mode.
@ -3,7 +3,7 @@
Power Management
################
System PM module
System PM Module
****************
The PM module in the hypervisor does three things:
@ -38,7 +38,7 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. These configurations can be
done in scenario XML file under ``FEATURES`` section as shown in the below example.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.
@ -137,17 +137,17 @@ needs to be set in the scenario XML file under ``VM`` section.
misconfiguration errors.
CAT and MBA high-level design in ACRN
CAT and MBA High-Level Design in ACRN
*************************************
Data structures
Data Structures
===============
The below figure shows the RDT data structure to store enumerated resources.
.. figure:: images/mba_data_structures.png
:align: center
Enabling CAT, MBA software flow
Enabling CAT, MBA Software Flow
===============================
The hypervisor enumerates RDT capabilities and sets up mask arrays; it also
@ -11,7 +11,7 @@ limited timer management services:
- A timer can only be added on the logical CPU for a process or thread. Timer
scheduling or timer migrating is not supported.
How it works
How It Works
************
When the system boots, we check that the hardware supports lapic
@ -113,7 +113,7 @@ These APIs will finish by making a vCPU request.
.. doxygenfunction:: vlapic_receive_intr
:project: Project ACRN
EOI processing
EOI Processing
==============
EOI virtualization is enabled if APICv virtual interrupt delivery is
@ -129,7 +129,7 @@ indicate that is a level triggered interrupt.
.. _lapic_passthru:
LAPIC passthrough based on vLAPIC
LAPIC Passthrough Based on vLAPIC
=================================
LAPIC passthrough is supported based on vLAPIC; the guest OS first boots with
@ -280,7 +280,7 @@ window is not present, HV would enable
VM Enter directly. The injection will be done on next VM Exit once Guest
issues ``STI (GuestRFLAG.IF=1)``.
Data structures and interfaces
Data Structures and Interfaces
******************************
There is no data structure exported to the other components in the
@ -8,7 +8,7 @@ running VM, and a series VM APIs like create_vm, start_vm, reset_vm, shutdown_vm
etc are used to switch a VM to the right state, according to the requirements of
applications or system power operations.
VM structure
VM Structure
************
The ``acrn_vm`` structure is defined to manage a VM instance; this structure
@ -22,7 +22,7 @@ platform level cpuid entries.
The ``acrn_vm`` structure instance will be created by ``create_vm`` API, and then
work as the first parameter for other VM APIs.
VM state
VM State
********
Generally, a VM is not running at the beginning: it is in a 'powered off'
@ -44,7 +44,7 @@ please refer to :ref:`hv-cpu-virt` for related VCPU state.
VM State Management
*******************
Pre-launched and Service VM
Pre-Launched and Service VM
===========================
The hypervisor is the owner to control pre-launched and Service VM's state
@ -52,7 +52,7 @@ by calling VM APIs directly, following the design of system power
management. Please refer to ACRN power management design for more details.
Post-launched User VMs
Post-Launched User VMs
======================
DM takes control of post-launched User VMs' state transition after the Service VM
@ -28,7 +28,7 @@ First-level/nested translation.
DMAR Engines Discovery
**********************
DMA Remapping Report ACPI table
DMA Remapping Report ACPI Table
===============================
For generic platforms, the ACRN hypervisor retrieves DMAR information from
@ -43,13 +43,13 @@ the devices under the scope of a remapping hardware unit, as shown in
DMA Remapping Reporting Structure
Pre-parsed DMAR information
Pre-Parsed DMAR Information
===========================
For specific platforms, the ACRN hypervisor uses pre-parsed DMA remapping
reporting information directly to save hypervisor bootup time.
DMA remapping unit for integrated graphics device
DMA Remapping Unit for Integrated Graphics Device
=================================================
Generally, there is a dedicated remapping hardware unit for the Intel
@ -167,7 +167,7 @@ Other domains
EPT table of the VM only allows devices to access the memory
allocated for the Normal world of the VM.
Page-walk coherency
Page-Walk Coherency
===================
For the VT-d hardware, which doesn't support page-walk coherency, the
@ -182,14 +182,14 @@ memory:
ACRN flushes the related cache line after these structures are updated
if the VT-d hardware doesn't support page-walk coherency.
Super-page support
Super-Page Support
==================
The ACRN VT-d reuses the EPT table as the address translation table. VT-d
capability or super-page support should be identical with the usage of the
EPT table.
Snoop control
Snoop Control
=============
If VT-d hardware supports snoop control, iVT-d can control the
@ -272,7 +272,7 @@ translation for DMAR unit(s) if they are not marked as ignored.
.. _device-assignment:
Device assignment
Device Assignment
*****************
All devices are initially added to the SOS_VM domain. To assign a device
@ -286,7 +286,7 @@ device is removed from the VM domain related to the User OS and then added
back to the SOS_VM domain; this changes the address translation table from
the EPT of the User OS to the EPT of the SOS_VM for the device.
Power Management support for S3
Power Management Support for S3
*******************************
During platform S3 suspend and resume, the VT-d register values are
@ -309,10 +309,10 @@ registered for the IRQ. DMAR unit supports report fault event via MSI.
When a fault event occurs, a MSI is generated, so that the DMAR fault
handler will be called to report the error event.
Data structures and interfaces
Data Structures and Interfaces
******************************
initialization and deinitialization
Initialization and Deinitialization
===================================
The following APIs are provided during initialization and
@ -321,7 +321,7 @@ deinitialization:
.. doxygenfunction:: init_iommu
:project: Project ACRN
runtime
Runtime
=======
The following APIs are provided during runtime:
@ -1,6 +1,6 @@
.. _ivshmem-hld:
ACRN Shared Memory Based Inter-VM Communication
ACRN Shared Memory Based Inter-VM Communication
###############################################
ACRN supports inter-virtual machine communication based on a shared
@ -8,7 +8,7 @@ memory mechanism. The ACRN device model or hypervisor emulates a virtual
PCI device (called an ``ivshmem`` device) to expose the base address and
size of this shared memory.
Inter-VM Communication Overview
Inter-VM Communication Overview
*******************************
.. figure:: images/ivshmem-architecture.png
@ -129,7 +129,7 @@ Usage
For usage information, see :ref:`enable_ivshmem`
Inter-VM Communication Security hardening (BKMs)
Inter-VM Communication Security Hardening (BKMs)
************************************************
As previously highlighted, ACRN 2.0 provides the capability to create shared
@ -1,6 +1,6 @@
.. _system-timer-hld:
System timer virtualization
System Timer Virtualization
###########################
ACRN supports RTC (Real-time clock), HPET (High Precision Event Timer),
@ -20,7 +20,7 @@ System timer virtualization architecture
timerfd\_create interfaces to set up native timers for the trigger timeout
mechanism.
System Timer initialization
System Timer Initialization
===========================
The device model initializes vRTC, vHPET, and vPIT devices automatically when
@ -48,7 +48,7 @@ below code snippets.::
...
}
PIT emulation
PIT Emulation
=============
The ACRN emulated Intel 8253 Programmable Interval Timer includes a chip
@ -83,7 +83,7 @@ I/O ports definition::
#define TIMER_CNTR2 (IO_TIMER1_PORT + TIMER_REG_CNTR2)
#define TIMER_MODE (IO_TIMER1_PORT + TIMER_REG_MODE)
RTC emulation
RTC Emulation
=============
ACRN supports RTC (real-time clock) that can only be accessed through
@ -114,7 +114,7 @@ The RTC ACPI description as below::
dsdt_line("}");
}
HPET emulation
HPET Emulation
==============
ACRN supports HPET (High Precision Event Timer) which is a higher resolution
@ -43,7 +43,7 @@ An xHCI register access from a User VM will induce EPT trap from the User VM to
DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make
the subsystem run.
USB devices supported by USB mediator
USB Devices Supported by USB Mediator
*************************************
The following USB devices are supported for the WaaG and LaaG operating systems.
@ -70,7 +70,7 @@ The following USB devices are supported for the WaaG and LaaG operating systems.
The above information is current as of ACRN 1.4.
USB host virtualization
USB Host Virtualization
***********************
USB host virtualization is implemented as shown in
@ -116,7 +116,7 @@ This configuration means the virtual xHCI will appear in PCI slot 7
in the User VM, and any physical USB device attached on 1-2 or 2-2 will be
detected by a User VM and used as expected.
USB DRD virtualization
USB DRD Virtualization
**********************
USB DRD (Dual Role Device) emulation works as shown in this figure:
@ -1,6 +1,6 @@
.. _virtio-blk:
Virtio-blk
Virtio-BLK
##########
The virtio-blk device is a simple virtual block device. The FE driver
@ -35,7 +35,7 @@ The feature bits supported by the BE device are shown as follows:
Device can toggle its cache between writeback and writethrough modes.
Virtio-blk-BE design
Virtio-BLK-BE Design
********************
.. figure:: images/virtio-blk-image02.png
@ -1,6 +1,6 @@
.. _virtio-console:
Virtio-console
Virtio-Console
##############
The Virtio-console is a simple device for data input and output. The
@ -142,7 +142,7 @@ PTY
.. code-block:: console
# minicom -D /dev/pts/0
or:
.. code-block:: console
@ -162,7 +162,7 @@ TTY
/dev/pts/0
# sleep 2d
- If you do not have network access to your device, use screen
to create a new TTY:
@ -1,6 +1,6 @@
.. _virtio-gpio:
Virtio-gpio
Virtio-GPIO
###########
virtio-gpio provides a virtual GPIO controller, which will map part of
@ -33,7 +33,7 @@ irq_set_type of irqchip) will trigger a virtqueue_kick on its own
virtqueue. If some gpio has been set to interrupt mode, the interrupt
events will be handled within the IRQ virtqueue callback.
GPIO mapping
GPIO Mapping
************
.. figure:: images/virtio-gpio-2.png
@ -1,6 +1,6 @@
.. _virtio-i2c:
Virtio-i2c
Virtio-I2C
##########
Virtio-i2c provides a virtual I2C adapter that supports mapping multiple
@ -1,6 +1,6 @@
.. _virtio-input:
Virtio-input
Virtio-Input
############
The virtio input device can be used to create virtual human interface
@ -1,6 +1,6 @@
.. _virtio-net:
Virtio-net
Virtio-Net
##########
Virtio-net is the para-virtualization solution used in ACRN for
@ -110,7 +110,7 @@ Initialization in Device Model
- Setup data plan callbacks, including TX, RX
- Setup TAP backend
Initialization in virtio-net Frontend Driver
Initialization in Virtio-Net Frontend Driver
============================================
**virtio_pci_probe**
@ -1,6 +1,6 @@
.. _virtio-rnd:
Virtio-rnd
Virtio-RND
##########
Virtio-rnd provides a virtual hardware random source for the User VM. It simulates a PCI device
@ -34,7 +34,7 @@ It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from User VM
watchdog driver.
User VM watchdog workflow
User VM Watchdog Workflow
*************************
When the User VM does a read or write operation on the watchdog device's
@ -58,7 +58,7 @@ from a User VM to the Service VM and return back:
Watchdog operation workflow
Implementation in ACRN and how to use it
Implementation in ACRN and How to Use It
****************************************
In ACRN, the Intel 6300ESB watchdog device emulation is added into the
@ -67,7 +67,7 @@ to protect itself from malicious user space attack.
Intel SGX/SMM related attacks are mitigated by using latest microcode.
There is no additional action in ACRN hypervisor.
Guest -> hypervisor Attack
Guest -> Hypervisor Attack
==========================
ACRN always enables EPT for all guests (Service VM and User VM), thus a malicious
@ -84,7 +84,7 @@ a malicious guest running on one logical processor can attack the data which
is brought into L1D by the context which runs on the sibling thread of
the same physical core. This context can be any code in hypervisor.
Guest -> guest Attack
Guest -> Guest Attack
=====================
The possibility of guest -> guest attack varies on specific configuration,
@ -144,7 +144,7 @@ not all of them apply to a specific ACRN deployment. Check the
'Mitigation Status'_ and 'Mitigation Recommendations'_ sections
for guidance.
L1D flush on VMENTRY
L1D Flush on VMENTRY
====================
ACRN may optionally flush L1D at VMENTRY, which ensures no
@ -175,7 +175,7 @@ is always enabled on all platforms.
ACRN hypervisor doesn't set reserved bits in any EPT entry.
Put Secret Data into Uncached Memory
Put Secret Data Into Uncached Memory
====================================
It is hard to decide which data in ACRN hypervisor is secret or valuable
@ -204,7 +204,7 @@ useful to be attacked.
However, if such 100% identification is not possible, the user should
consider other mitigation options to protect the hypervisor.
L1D flush on World Switch
L1D Flush on World Switch
=========================
For L1D-affected platforms, ACRN writes to aforementioned MSR
@ -218,7 +218,7 @@ normal world is less privileged entity to secure world.
This mitigation is always enabled.
Core-based scheduling
Core-Based Scheduling
=====================
If Hyper-threading is enabled, it's important to avoid running
@ -35,7 +35,7 @@ Trusty Architecture
.. _trusty-hypercalls:
Trusty specific Hypercalls
Trusty Specific Hypercalls
**************************
There are a few :ref:`hypercall_apis` that are related to Trusty.
@ -44,7 +44,7 @@ There are a few :ref:`hypercall_apis` that are related to Trusty.
:project: Project ACRN
:content-only:
Trusty Boot flow
Trusty Boot Flow
****************
By design, the User OS bootloader (``UOS_Loader``) will trigger the Trusty boot process. The complete boot flow is illustrated below.
@ -9,9 +9,8 @@ Here are some frequently asked questions about the ACRN project.
:local:
:backlinks: entry
------
What hardware does ACRN support?
What Hardware Does ACRN Support?
********************************
ACRN runs on Intel boards, as documented in
@ -19,7 +18,7 @@ our :ref:`hardware` documentation.
.. _config_32GB_memory:
How do I configure ACRN's memory size?
How Do I Configure ACRN's Memory Size?
**************************************
It's important that the ACRN configuration settings are aligned with the
@ -53,7 +52,7 @@ the ACRN Service VM with the 32G memory size.
#. Then continue building the ACRN Service VM as usual.
How to modify the default display output for a User VM?
How to Modify the Default Display Output for a User VM?
*******************************************************
Apollo Lake HW has three pipes and each pipe can have three or four planes which
@ -98,7 +97,7 @@ these parameters:
intentional, and the driver will enforce this if the parameters do not
do this.
Why does ACRN need to know how much RAM the system has?
Why Does ACRN Need to Know How Much RAM the System Has?
*******************************************************
Configuring ACRN at compile time with the system RAM size is a tradeoff between
@ -1,6 +1,6 @@
.. _getting-started-building:
Build ACRN from Source
Build ACRN From Source
######################
Following a general embedded-system programming model, the ACRN
@ -45,7 +45,7 @@ these steps.
.. rst-class:: numbered-step
Install build tools and dependencies
Install Build Tools and Dependencies
************************************
ACRN development is supported on popular Linux distributions, each with
@ -99,7 +99,7 @@ Install the necessary tools for the following systems:
.. rst-class:: numbered-step
Get the ACRN hypervisor source code
Get the ACRN Hypervisor Source Code
***********************************
The `acrn-hypervisor <https://github.com/projectacrn/acrn-hypervisor/>`_
@ -120,7 +120,7 @@ Enter the following to get the acrn-hypervisor source code:
.. rst-class:: numbered-step
Build with the ACRN scenario
Build With the ACRN Scenario
****************************
Currently, the ACRN hypervisor defines these typical usage scenarios:
@ -187,10 +187,10 @@ for each scenario.
.. rst-class:: numbered-step
Build the hypervisor configuration
Build the Hypervisor Configuration
**********************************
Modify the hypervisor configuration
Modify the Hypervisor Configuration
===================================
The ACRN hypervisor leverages Kconfig to manage configurations; it is
@ -239,7 +239,7 @@ Refer to the help on menuconfig for a detailed guide on the interface:
.. rst-class:: numbered-step
Build the hypervisor, device model, and tools
Build the Hypervisor, Device Model, and Tools
*********************************************
Now you can build all these components at once as follows:

View File

@ -1,11 +1,11 @@
Getting Started Guide for ACRN Industry Scenario with ROScube-I
Getting Started Guide for ACRN Industry Scenario With ROScube-I
###############################################################
.. contents::
:local:
:depth: 1
Verified version
Verified Version
****************
- Ubuntu version: **18.04**
@ -68,10 +68,10 @@ Prerequisites
.. rst-class:: numbered-step
Install ACRN hypervisor
Install ACRN Hypervisor
***********************
Set up Environment
Set Up Environment
==================
#. Open ``/etc/default/grub/`` and add ``idle=nomwait intel_pstate=disable``
@ -203,10 +203,10 @@ Configure Hypervisor
.. rst-class:: numbered-step
Install Service VM kernel
Install Service VM Kernel
*************************
Build Service VM kernel
Build Service VM Kernel
=======================
#. Get code from GitHub
@ -302,7 +302,7 @@ Update Grub
Install User VM
***************
Before create User VM
Before Creating the User VM
===========================
#. Download Ubuntu image (Here we use `Ubuntu 18.04 LTS
@ -316,7 +316,7 @@ Before create User VM
bridge-utils virt-manager ovmf
sudo reboot
Create User VM image
Create User VM Image
====================
.. note:: Reboot into the **native Linux kernel** (not the ACRN kernel)
@ -451,10 +451,10 @@ the User VM.
.. rst-class:: numbered-step
Install real-time VM
Install Real-Time VM
********************
Copy real-time VM image
Copy Real-Time VM Image
=======================
.. note:: Reboot into the **native Linux kernel** (not the ACRN kernel)
@ -468,7 +468,7 @@ Copy real-time VM image
.. figure:: images/rqi-acrn-rtos-ready.png
Set up real-time VM
Set Up Real-Time VM
===================
.. note:: The section will show you how to install Xenomai on ROScube-I.
@ -548,7 +548,7 @@ Set up real-time VM
sudo poweroff
Run real-time VM
Run Real-Time VM
================
Now back to the native machine and we'll set up the environment for
@ -582,7 +582,7 @@ launching the real-time VM.
In ACRN design, rebooting the real-time VM will also reboot the whole
system.
Customizing the launch file
Customizing the Launch File
***************************
The launch file in this tutorial has the following hardware resource allocation.

View File

@ -1,13 +1,13 @@
.. _rt_industry_ubuntu_setup:
Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM
Getting Started Guide for ACRN Industry Scenario With Ubuntu Service VM
#######################################################################
.. contents::
:local:
:depth: 1
Verified version
Verified Version
****************
- Ubuntu version: **18.04**
@ -51,7 +51,7 @@ Connect the WHL Maxtang with the appropriate external devices.
.. _install-ubuntu-rtvm-sata:
Install the Ubuntu User VM (RTVM) on the SATA disk
Install the Ubuntu User VM (RTVM) on the SATA Disk
**************************************************
.. note:: The WHL Maxtang machine contains both an NVMe and SATA disk.
@ -84,7 +84,7 @@ to turn it into a real-time User VM (RTVM).
.. _install-ubuntu-Service VM-NVMe:
Install the Ubuntu Service VM on the NVMe disk
Install the Ubuntu Service VM on the NVMe Disk
**********************************************
.. note:: Before you install the Ubuntu Service VM on the NVMe disk, either
@ -209,7 +209,7 @@ Build the ACRN Hypervisor on Ubuntu
$ sudo mkdir -p /boot/acrn
$ sudo cp build/hypervisor/acrn.bin /boot/acrn/
Build and install the ACRN kernel
Build and Install the ACRN Kernel
=================================
#. Build the Service VM kernel from the ACRN repo:
@ -224,12 +224,12 @@ Build and install the ACRN kernel
.. code-block:: none
$ git checkout v2.3
$ git checkout v2.3
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all
Install the Service VM kernel and modules
Install the Service VM Kernel and Modules
=========================================
.. code-block:: none
@ -289,7 +289,7 @@ Update Grub for the Ubuntu Service VM
$ sudo update-grub
Enable network sharing for the User VM
Enable Network Sharing for the User VM
======================================
In the Ubuntu Service VM, enable network sharing for the User VM:
@ -300,7 +300,7 @@ In the Ubuntu Service VM, enable network sharing for the User VM:
$ sudo systemctl start systemd-networkd
Reboot the system
Reboot the System
=================
Reboot the system. You should see the Grub menu with the new **ACRN
@ -317,10 +317,10 @@ typical output of a successful installation resembles the following:
[ 0.862942] ACRN HVLog: acrn_hvlog_init
Additional settings in the Service VM
Additional Settings in the Service VM
=====================================
BIOS settings of GVT-d for WaaG
BIOS Settings of GVT-d for WaaG
-------------------------------
.. note::
@ -333,11 +333,11 @@ Set **DVMT Pre-Allocated** to **64MB**:
.. figure:: images/DVMT-reallocated-64mb.png
Set **PM Support** to **Enabled**:
Set **PM Support** to **Enabled**:
.. figure:: images/PM-support-enabled.png
Use OVMF to launch the User VM
Use OVMF to Launch the User VM
------------------------------
The User VM will be launched by OVMF, so copy it to the specific folder:
@ -347,7 +347,7 @@ The User VM will be launched by OVMF, so copy it to the specific folder:
$ sudo mkdir -p /usr/share/acrn/bios
$ sudo cp /home/acrn/work/acrn-hypervisor/devicemodel/bios/OVMF.fd /usr/share/acrn/bios
Build and Install the RT kernel for the Ubuntu User VM
Build and Install the RT Kernel for the Ubuntu User VM
------------------------------------------------------
Follow these instructions to build the RT kernel.
@ -398,7 +398,7 @@ Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT
kernel that was just built and installed on the rootfs. Follow these steps to
perform this operation.
Update the Grub file
Update the Grub File
====================
#. Reboot into the Ubuntu User VM located on the SATA drive and log on.
@ -461,7 +461,7 @@ Launch the RTVM
$ sudo cp /home/acrn/work/acrn-hyperviso/misc/vm_configs/sample_launch_scripts/nuc/launch_hard_rt_vm.sh /usr/share/acrn/
$ sudo /usr/share/acrn/launch_hard_rt_vm.sh
Recommended BIOS settings for RTVM
Recommended BIOS Settings for RTVM
----------------------------------
.. csv-table::
@ -491,7 +491,7 @@ Recommended BIOS settings for RTVM
.. note:: BIOS settings depend on the platform and BIOS version; some may
not be applicable.
Recommended kernel cmdline for RTVM
Recommended Kernel Cmdline for RTVM
-----------------------------------
.. code-block:: none
@ -513,7 +513,7 @@ automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration`
for details on RDT configuration and :ref:`hv_rdt` for details on RDT
high-level design.
Set up the core allocation for the RTVM
Set Up the Core Allocation for the RTVM
---------------------------------------
In our recommended configuration, two cores are allocated to the RTVM:
@ -563,7 +563,7 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
.. note:: Ignore the error messages that might appear while the script is
running.
Run cyclictest
Run Cyclictest
--------------
#. Refer to the :ref:`troubleshooting section <enabling the network on the RTVM>`
@ -621,7 +621,7 @@ Troubleshooting
.. _enabling the network on the RTVM:
Enabling the network on the RTVM
Enabling the Network on the RTVM
================================
If you need to access the internet, you must add the following command line
@ -644,7 +644,7 @@ to the ``launch_hard_rt_vm.sh`` script before launching it:
.. _passthru to rtvm:
Passthrough a hard disk to RTVM
Passthrough a Hard Disk to RTVM
===============================
#. Use the ``lspci`` command to ensure that the correct SATA device IDs will

View File

@ -1,6 +1,6 @@
.. _acrn_home:
Project ACRN documentation
Project ACRN Documentation
##########################
Welcome to the Project ACRN (version |version|) documentation. ACRN is

View File

@ -1,6 +1,6 @@
.. _introduction:
What is ACRN
What Is ACRN
############
Introduction to Project ACRN
@ -281,7 +281,7 @@ application scenario needs.
Here are block diagrams for each of these four scenarios.
SDC scenario
SDC Scenario
============
In this SDC scenario, an instrument cluster (IC) system runs with the
@ -295,7 +295,7 @@ VM.
SDC scenario with two VMs
Industry scenario
Industry Scenario
=================
In this Industry scenario, the Service VM provides device sharing capability for
@ -312,7 +312,7 @@ vision, etc.
Industry scenario
Hybrid scenario
Hybrid Scenario
===============
In this Hybrid scenario, a pre-launched Safety/RTVM is started by the
@ -326,7 +326,7 @@ non-real-time tasks.
Hybrid scenario
Hybrid real-time (RT) scenario
Hybrid Real-Time (RT) Scenario
==============================
In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
@ -340,7 +340,7 @@ non-real-time tasks.
Hybrid RT scenario
Logical Partition scenario
Logical Partition Scenario
==========================
This scenario is a simplified VM configuration for VM logical
@ -619,7 +619,7 @@ ACRN Device model incorporates these three aspects:
.. _pass-through:
Device passthrough
Device Passthrough
******************
At the highest level, device passthrough is about providing isolation
@ -651,7 +651,7 @@ don't support passthrough for a legacy serial port, (for example
0x3f8).
Hardware support for device passthrough
Hardware Support for Device Passthrough
=======================================
Intel's current processor architectures provides support for device
@ -673,7 +673,7 @@ fabrics to scale to many devices. MSI is ideal for I/O virtualization,
as it allows isolation of interrupt sources (as opposed to physical pins
that must be multiplexed or routed through software).
Hypervisor support for device passthrough
Hypervisor Support for Device Passthrough
=========================================
By using the latest virtualization-enhanced processor architectures,
@ -688,7 +688,7 @@ assigned to the same guest OS. PCIe does not have this restriction.
.. _ACRN-io-mediator:
ACRN I/O mediator
ACRN I/O Mediator
*****************
:numref:`io-emulation-path` shows the flow of an example I/O emulation path.
@ -736,7 +736,7 @@ The MMIO path is very similar, except the VM exit reason is different. MMIO
access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in
the hypervisor.
Virtio framework architecture
Virtio Framework Architecture
*****************************
.. _Virtio spec:

View File

@ -2,7 +2,7 @@
.. _learn_acrn:
What is ACRN
What Is ACRN
############
ACRN is supported on Apollo Lake and Kaby Lake Intel platforms,

View File

@ -7,7 +7,7 @@
lingering references to these docs out in the wild and in the Google
index. Give the reader a reference to the /2.1/ document instead.
This document was removed
This Document Was Removed
#########################
.. raw:: html

View File

@ -13,7 +13,7 @@ ACRN-based application. This document describes these option settings.
:local:
:depth: 2
Common option value types
Common Option Value Types
*************************
Within this option documentation, we refer to some common type

View File

@ -1,6 +1,6 @@
.. _release_notes_0.1:
ACRN v0.1 (July 2018)
ACRN v0.1 (July 2018)
#####################
We are pleased to announce the release of Project ACRN version 0.1.
@ -14,7 +14,7 @@ The project ACRN reference code can be found on GitHub in
https://github.com/projectacrn. It includes the ACRN hypervisor, the
ACRN device model, and documentation.
Version 0.1 new features
Version 0.1 New Features
************************
Hardware Support
@ -35,7 +35,7 @@ Virtual Graphics support added:
assigned to different display. The display ports supports eDP and HDMI.
- See :ref:`APL_GVT-G-hld` documentation for more information.
Virtio standard is supported
Virtio Standard Is Supported
============================
Virtio is a virtualization standard for
@ -45,7 +45,7 @@ the hypervisor. The SOS and UOS can share physical LAN network
and physical eMMC storage device. (See :ref:`virtio-hld` for more
information.)
Device pass-through support
Device Pass-Through Support
===========================
Device pass-through to UOS support for:
@ -54,13 +54,13 @@ Device pass-through to UOS support for:
- SD card (mount, read, and write directly in the UOS)
- Converged Security Engine (CSE)
Hypervisor configuration
Hypervisor Configuration
========================
Developers can configure hypervisor via Kconfig parameters. (See
documentation for configuration options.)
New ACRN tools
New ACRN Tools
==============
We've added a collection of support tools including acrnctl, acrntrace,

View File

@ -1,6 +1,6 @@
.. _release_notes_0.2:
ACRN v0.2 (Sep 2018)
ACRN v0.2 (Sep 2018)
####################
We are pleased to announce the release of Project ACRN version 0.2.
@ -31,7 +31,7 @@ https://projectacrn.github.io/0.2/. Documentation for the latest
(master) branch is found at https://projectacrn.github.io/latest/.
Version 0.2 new features
Version 0.2 New Features
************************
VT-x, VT-d
@ -86,7 +86,7 @@ hotspot for 3rd party devices, provides 3rd party device applications
access to the vehicle, and provides access of 3rd party devices to the
TCU provided connectivity.
IPU (MIPI-CS2, HDMI-in)
IPU (MIPI CSI-2, HDMI-In)
=========================
ACRN hypervisor supports passthrough IPU assignment to Service OS or
guest OS, without sharing.
@ -104,7 +104,7 @@ This is done to ensure performance of the most critical workload can be
achieved. Three different schedulers for the GPU are involved: i915 UOS
scheduler, Mediator GVT scheduler, and i915 SOS scheduler.
GPU - display surface sharing via Hyper DMA
GPU - Display Surface Sharing via Hyper DMA
============================================
Surface sharing is one typical automotive use case which requires
that the SOS accesses an individual surface or a set of surfaces

View File

@ -1,6 +1,6 @@
.. _release_notes_0.3:
ACRN v0.3 (Nov 2018)
ACRN v0.3 (Nov 2018)
####################
We are pleased to announce the release of Project ACRN version 0.3.
@ -31,7 +31,7 @@ https://projectacrn.github.io/0.3/. Documentation for the latest
(master) branch is found at https://projectacrn.github.io/latest/.
Version 0.3 new features
Version 0.3 New Features
************************

View File

@ -1,6 +1,6 @@
.. _release_notes_0.4:
ACRN v0.4 (Dec 2018)
ACRN v0.4 (Dec 2018)
####################
We are pleased to announce the release of Project ACRN version 0.4.
@ -31,7 +31,7 @@ https://projectacrn.github.io/0.4/. Documentation for the latest
(master) branch is found at https://projectacrn.github.io/latest/.
Version 0.4 new features
Version 0.4 New Features
************************
- :acrn-issue:`1824` - implement "wbinvd" emulation

View File

@ -1,6 +1,6 @@
.. _release_notes_0.5:
ACRN v0.5 (Jan 2019)
ACRN v0.5 (Jan 2019)
####################
We are pleased to announce the release of Project ACRN version 0.5.
@ -31,7 +31,7 @@ https://projectacrn.github.io/0.5/. Documentation for the latest
(master) branch is found at https://projectacrn.github.io/latest/.
Version 0.5 new features
Version 0.5 New Features
************************
**OVMF support initial patches merged in ACRN**:

View File

@ -1,6 +1,6 @@
.. _release_notes_0.6:
ACRN v0.6 (Feb 2019)
ACRN v0.6 (Feb 2019)
####################
We are pleased to announce the release of Project ACRN version 0.6.
@ -32,7 +32,7 @@ https://projectacrn.github.io/0.6/. Documentation for the latest
ACRN v0.6 requires Clear Linux OS version 27600.
Version 0.6 new features
Version 0.6 New Features
************************
**Enable Privileged VM support for real-time UOS in ACRN**:

View File

@ -1,6 +1,6 @@
.. _release_notes_0.7:
ACRN v0.7 (Mar 2019)
ACRN v0.7 (Mar 2019)
####################
We are pleased to announce the release of Project ACRN version 0.7.
@ -32,10 +32,10 @@ https://projectacrn.github.io/0.7/. Documentation for the latest
ACRN v0.7 requires Clear Linux OS version 28260.
Version 0.7 new features
Version 0.7 New Features
************************
Enable cache QOS with CAT
Enable Cache QOS With CAT
=========================
Cache Allocation Technology (CAT) is enabled on Apollo Lake (APL)
@ -46,12 +46,12 @@ build time. For debugging and performance tuning, the CAT can also be
enabled and configured at runtime by writing proper values to certain
MSRs using the ``wrmsr`` command on ACRN shell.
Support ACPI power key mediator
Support ACPI Power Key Mediator
===============================
ACRN supports ACPI power/sleep key on the APL and KBL NUC platforms,
triggering S3/S5 flow, following the ACPI spec.
Document updates
Document Updates
================
Several new documents have been added in this release, including:
@ -121,7 +121,6 @@ Known Issues
**Workaround:** Unplug and plug-in the unrecognized device after booting.
-----
:acrn-issue:`1991` - Input not accepted in UART Console for corner case
Input is useless in UART Console for a corner case, demonstrated with these steps:
@ -136,7 +135,6 @@ Known Issues
**Workaround:** Enter other keys before typing :kbd:`Enter`.
-----
:acrn-issue:`1996` - There is an error log when using ``acrnd&`` to boot UOS
An error log is printed when starting ``acrnd`` as a background job
@ -150,7 +148,6 @@ Known Issues
**Workaround:** None.
-----
:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
After launching UOS on APL UP2, 4k monitor cannot be detected.
@ -159,7 +156,6 @@ Known Issues
**Workaround:** Use a monitor with less than 4k resolution.
-----
:acrn-issue:`2278` - [KBLNUC] Cx/Px is not supported on KBLNUC
C states and P states are not supported on KBL NUC.
@ -169,7 +165,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2279` - [APLNUC] After exiting UOS with mediator
Usb_KeyBoard and Mouse, SOS cannot use the USB keyboard and mouse.
@ -189,7 +184,6 @@ Known Issues
**Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.
-----
:acrn-issue:`2522` - [NUC7i7BNH] After starting IAS in SOS, there is no display
On NUC7i7BNH, after starting IAS in SOS, there is no display if the monitor is
@ -199,7 +193,6 @@ Known Issues
**Workaround:** None.
-----
:acrn-issue:`2523` - UOS monitor does not display when using IAS
There is no UOS display after starting IAS weston.
@ -220,7 +213,6 @@ Known Issues
The issue will be fixed in the next release.
-----
:acrn-issue:`2524` - [UP2][SBL] Launching UOS hangs while weston is running in SOS
When using weston in SOS, it will hang during the UOS launch.
@ -239,7 +231,6 @@ Known Issues
The issue will be fixed in the next release.
-----
:acrn-issue:`2527` - [KBLNUC][HV]System will crash when run ``crashme`` (SOS/UOS)
System will crash after a few minutes running stress test ``crashme`` tool in SOS/UOS.
@ -248,7 +239,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2526` - Hypervisor crash when booting UOS with acrnlog running with mem loglevel=6
If we use ``loglevel 3 6`` to change the mem loglevel to 6, we may hit a page fault in HV.
@ -257,7 +247,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
UOS cannot resume after suspend by pressing power key

View File

@ -1,6 +1,6 @@
.. _release_notes_0.8:
ACRN v0.8 (Apr 2019)
ACRN v0.8 (Apr 2019)
####################
We are pleased to announce the release of Project ACRN version 0.8.
@ -32,10 +32,10 @@ https://projectacrn.github.io/0.8/. Documentation for the latest
ACRN v0.8 requires Clear Linux OS version 28600.
Version 0.8 new features
Version 0.8 New Features
************************
GPIO virtualization
GPIO Virtualization
=========================
GPIO virtualization is supported as para-virtualization based on the
@ -45,19 +45,19 @@ configuration via one virtual GPIO controller. In the Back-end, the GPIO
command line in the launch script can be modified to map native GPIO to
UOS.
Enable QoS based on runC container
Enable QoS Based on runC Container
==================================
ACRN supports Device-Model QoS based on runC container to control the SOS
resources (CPU, Storage, MEM, NET) by modifying the runC configuration file.
S5 support for RTVM
S5 Support for RTVM
===============================
ACRN supports a Real-time VM (RTVM) shutting itself down. An RTVM is a
kind of VM that the SOS can't interfere with at runtime, and as such, can
only power itself off internally. All poweroff requests external to the
RTVM will be rejected to avoid any interference.
Document updates
Document Updates
================
Several new documents have been added in this release, including:
@ -114,7 +114,6 @@ Known Issues
**Workaround:** Unplug and plug-in the unrecognized device after booting.
-----
:acrn-issue:`1991` - Input not accepted in UART Console for corner case
Input is useless in UART Console for a corner case, demonstrated with these steps:
@ -129,7 +128,6 @@ Known Issues
**Workaround:** Enter other keys before typing :kbd:`Enter`.
-----
:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
After launching UOS on APL UP2, 4k monitor cannot be detected.
@ -138,7 +136,6 @@ Known Issues
**Workaround:** Use a monitor with less than 4k resolution.
-----
:acrn-issue:`2278` - [KBLNUC] Cx/Px is not supported on KBLNUC
C states and P states are not supported on KBL NUC.
@ -148,7 +145,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2279` - [APLNUC] After exiting UOS, SOS can't use USB keyboard and mouse
After exiting UOS with mediator
@ -169,7 +165,6 @@ Known Issues
**Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.
-----
:acrn-issue:`2527` - System will crash after a few minutes running stress test ``crashme`` tool in SOS/UOS.
System stress test may cause a system crash.
@ -178,7 +173,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2526` - Hypervisor crash when booting UOS with acrnlog running with mem loglevel=6
If we use ``loglevel 3 6`` to change the mem loglevel to 6, we may hit a page fault in HV.
@ -187,7 +181,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
UOS cannot resume after suspend by pressing power key

View File

@ -1,6 +1,6 @@
.. _release_notes_1.0.1:
ACRN v1.0.1 (July 2019)
ACRN v1.0.1 (July 2019)
#######################
We are pleased to announce the release of ACRN version 1.0.1. This is a
@ -27,7 +27,7 @@ There were no documentation changes in this update, so you can still
refer to the v1.0-specific documentation found at
https://projectacrn.github.io/1.0/.
Change Log in version 1.0.1 since version 1.0
Change Log in Version 1.0.1 Since Version 1.0
*********************************************
Primary changes are to fix several security and stability issues found

View File

@ -1,6 +1,6 @@
.. _release_notes_1.0.2:
ACRN v1.0.2 (Nov 2019)
ACRN v1.0.2 (Nov 2019)
######################
We are pleased to announce the release of ACRN version 1.0.2. This is a
@ -27,7 +27,7 @@ There were no documentation changes in this update, so you can still
refer to the v1.0-specific documentation found at
https://projectacrn.github.io/1.0/.
Change Log in v1.0.2 since v1.0.1
Change Log in v1.0.2 Since v1.0.1
*********************************
Primary changes are to fix several security and stability issues found

View File

@ -1,6 +1,6 @@
.. _release_notes_1.0:
ACRN v1.0 (May 2019)
ACRN v1.0 (May 2019)
####################
We are pleased to announce the release of ACRN version 1.0, a key
@ -33,7 +33,7 @@ with a specific release: generated v1.0 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.0 requires Clear Linux* OS version 29070.
Version 1.0 major features
Version 1.0 Major Features
**************************
Hardware Support
@ -42,7 +42,7 @@ ACRN supports multiple x86 platforms and has been tested with Apollo
Lake and Kaby Lake NUCs, and the UP Squared board.
(See :ref:`hardware` for supported platform details.)
APL UP2 board with SBL firmware
APL UP2 Board With SBL Firmware
===============================
ACRN supports APL UP2 board with Slim Bootloader (SBL) firmware.
Slim Bootloader is a modern, flexible, light-weight,
@ -51,13 +51,13 @@ customizable, and secure. An end-to-end reference build has been verified
on UP2/SBL board using ACRN hypervisor, Clear Linux OS as SOS, and Clear
Linux OS as UOS.
Enable post-launched RTVM support for real-time UOS in ACRN
Enable Post-Launched RTVM Support for Real-Time UOS in ACRN
===========================================================
This release provides initial patches enabling a User OS (UOS) running as a
virtual machine (VM) with real-time characteristics,
also called a "post-launched RTVM". More patches for ACRN real time support will continue.
Enable cache QOS with CAT
Enable Cache QOS With CAT
=========================
Cache Allocation Technology (CAT) is available on Apollo Lake (APL) platforms,
providing cache isolation between VMs mainly for real-time performance quality
@ -66,27 +66,27 @@ the VM configuration determined at build time. For debugging and performance
tuning, the CAT can also be enabled and configured at runtime by writing proper
values to certain MSRs using the ``wrmsr`` command on ACRN shell.
Enable QoS based on runC container
Enable QoS Based on runC Container
==================================
ACRN supports Device-Model QoS based on runC container to control
the SOS resources (CPU, Storage, MEM, NET) by modifying the runC configuration file,
configuration guide will be published in next release.
S5 support for RTVM
S5 Support for RTVM
===================
ACRN supports a Real-time VM (RTVM) shutting itself down. An RTVM is a kind
of VM that the SOS can't interfere with at runtime, and as such, only the
RTVM can power itself off internally. All power-off requests external to the
RTVM will be rejected to avoid any interference.
OVMF support initial patches merged in ACRN
OVMF Support Initial Patches Merged in ACRN
===========================================
To support booting Windows as a Guest OS, we are using
Open source Virtual Machine Firmware (OVMF). Initial
patches to support OVMF have been merged in ACRN hypervisor. More patches for
ACRN and patches upstreaming to OVMF work will be continuing.
Support ACPI power key mediator
Support ACPI Power Key Mediator
===============================
ACRN supports ACPI power/sleep key on the APL and KBL NUC platforms, triggering
S3/S5 flow, following the ACPI spec.
@ -135,7 +135,7 @@ a Guest VM (UOS), enables control of the Wi-Fi as an in-vehicle hotspot for thir
devices, provides third-party device applications access to the vehicle, and
provides access of third-party devices to the TCU (if applicable) provided connectivity.
IPU (MIPI CSI-2, HDMI-in)
IPU (MIPI CSI-2, HDMI-In)
=========================
ACRN hypervisor provides an IPU mediator to share with Guest OS. Alternatively, IPU
can also be configured as pass-through to Guest OS without sharing.
@ -161,7 +161,7 @@ to ensure performance of the most critical workload can be achieved. Three
different schedulers for the GPU are involved: i915 UOS scheduler, Mediator
GVT scheduler, and i915 SOS scheduler.
GPU - display surface sharing via Hyper DMA
GPU - Display Surface Sharing via Hyper DMA
===========================================
Surface sharing is one typical automotive use case which requires that the
SOS accesses an individual surface or a set of surfaces from the UOS without
@ -169,7 +169,7 @@ having to access the entire frame buffer of the UOS. It leverages hyper_DMABUF,
a Linux kernel driver running on multiple VMs and expands DMA-BUFFER sharing
capability to inter-VM.
Virtio standard is supported
Virtio Standard Is Supported
============================
Virtio framework is widely used in ACRN, allowing devices beyond network and
storage to be shared to UOS in a standard way. Many mediators in ACRN follow
@ -179,11 +179,11 @@ the guest's device driver "knows" it is running in a virtual environment, and
cooperates with the hypervisor. The SOS and UOS can share physical LAN network
and physical eMMC storage device. (See :ref:`virtio-hld` for more information.)
Device pass-through support
Device Pass-Through Support
===========================
Device pass-through to UOS supported with help of VT-d.
GPIO virtualization
GPIO Virtualization
===================
GPIO virtualization is supported as para-virtualization based on the Virtual
I/O Device (VIRTIO) specification. The GPIO consumers of the Front-end are able
@ -191,12 +191,12 @@ to set or get GPIO values, directions, and configuration via one virtual GPIO
controller. In the Back-end, the GPIO command line in the launch script can be
modified to map native GPIO to UOS. (See :ref:`virtio-hld` for more information.)
New ACRN tools
New ACRN Tools
==============
We've added a collection of support tools including ``acrnctl``, ``acrntrace``, ``acrnlog``,
``acrn-crashlog``, ``acrnprobe``. (See the `Tools` section under **User Guides** for details.)
Document updates
Document Updates
================
We have many reference documents `available
<https://projectacrn.github.io>`_, including:
@ -390,7 +390,6 @@ Known Issues
**Workaround:** Unplug and plug-in the unrecognized device after booting.
-----
:acrn-issue:`1991` - Input not accepted in UART Console for corner case
Input is useless in UART Console for a corner case, demonstrated with these steps:
@ -405,7 +404,6 @@ Known Issues
**Workaround:** Enter other keys before typing :kbd:`Enter`.
-----
:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
After launching UOS on APL UP2, 4k monitor cannot be detected.
@ -414,7 +412,6 @@ Known Issues
**Workaround:** Use a monitor with less than 4k resolution.
-----
:acrn-issue:`2278` - [KBLNUC] Cx/Px is not supported on KBLNUC
C states and P states are not supported on KBL NUC.
@ -424,7 +421,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2279` - [APLNUC] After exiting UOS, SOS can't use USB keyboard and mouse
After exiting UOS with mediator
@ -445,7 +441,6 @@ Known Issues
**Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.
-----
:acrn-issue:`2527` - System will crash after a few minutes running stress test ``crashme`` tool in SOS/UOS.
System stress test may cause a system crash.
@ -454,7 +449,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2526` - Hypervisor crash when booting UOS with acrnlog running with mem loglevel=6
If we use ``loglevel 3 6`` to change the mem loglevel to 6, we may hit a page fault in HV.
@ -463,7 +457,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
UOS cannot resume after suspend by pressing power key
@ -472,7 +465,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2974` - Launching Zephyr RTOS as a real-time UOS takes too long
Launching Zephyr RTOS as a real-time UOS takes too long
@ -488,7 +480,6 @@ Known Issues
**Workaround:** None
-----
Change Log
**********

View File

@ -1,6 +1,6 @@
.. _release_notes_1.1:
ACRN v1.1 (Jun 2019)
ACRN v1.1 (Jun 2019)
####################
We are pleased to announce the release of ACRN version 1.1.
@ -24,7 +24,7 @@ with a specific release: generated v1.1 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.1 requires Clear Linux* OS version 29970.
Version 1.1 major features
Version 1.1 Major Features
**************************
Hybrid Mode Introduced
@ -33,14 +33,14 @@ In hybrid mode, a Zephyr OS is launched by the hypervisor even before the Servic
launched (pre-launched), with dedicated resources to achieve highest level of isolation.
This is designed to meet the needs of a FuSa certifiable safety OS.
Support for new guest Operating Systems
Support for New Guest Operating Systems
=======================================
* The `Zephyr RTOS <https://zephyrproject.org>`_ can be a pre-launched Safety OS in hybrid mode.
It can also be a post-launched (launched by Service OS, not the hypervisor) as a guest OS.
* VxWorks as a post-launched RTOS for industrial usages.
* Windows as a post-launched OS
Document updates
Document Updates
================
We have many `reference documents available <https://projectacrn.github.io>`_, including:
@ -132,7 +132,6 @@ Known Issues
**Workaround:** Unplug and plug-in the unrecognized device after booting.
-----
:acrn-issue:`1991` - Input not accepted in UART Console for corner case
Input is useless in UART Console for a corner case, demonstrated with these steps:
@ -147,7 +146,6 @@ Known Issues
**Workaround:** Enter other keys before typing :kbd:`Enter`.
-----
:acrn-issue:`2267` - [APLUP2][LaaG] LaaG can't detect 4k monitor
After launching UOS on APL UP2, the 4k monitor cannot be detected.
@ -156,7 +154,6 @@ Known Issues
**Workaround:** Use a monitor with less than 4k resolution.
-----
:acrn-issue:`2279` - [APLNUC] After exiting UOS, SOS can't use USB keyboard and mouse
After exiting UOS with mediator Usb_KeyBoard and Mouse, SOS cannot use the USB keyboard and mouse.
@ -176,7 +173,6 @@ Known Issues
**Workaround:** Unplug and plug-in the USB keyboard and mouse after exiting UOS.
-----
:acrn-issue:`2753` - UOS cannot resume after suspend by pressing power key
UOS cannot resume after suspend by pressing power key
@ -185,7 +181,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`2974` - Launching Zephyr RTOS as a real-time UOS takes too long
Launching Zephyr RTOS as a real-time UOS takes too long
@ -204,7 +199,6 @@ Known Issues
**Workaround:** A different version of GRUB is known to work correctly
-----
:acrn-issue:`3268` - dm: add virtio-rnd device to command line
LaaG's network is unreachable with UOS kernel
@ -222,7 +216,6 @@ Known Issues
**Workaround:** Add ``-s 7,virtio-rnd \`` to the launch_uos.sh script
-----
:acrn-issue:`3280` - AcrnGT holding forcewake lock causes high CPU usage in gvt workload thread.
The i915 forcewake mechanism is to keep the GPU from its low power state, in
@ -233,7 +226,6 @@ Known Issues
**Workaround:** None
-----
:acrn-issue:`3279` - AcrnGT causes display flicker in some situations.
In current scaler ownership assignment logic, there's an issue that when SOS disables a plane,
@ -244,7 +236,6 @@ Known Issues
**Workaround:** None
-----
Change Log
**********


@ -1,6 +1,6 @@
.. _release_notes_1.2:
ACRN v1.2 (Aug 2019)
ACRN V1.2 (Aug 2019)
####################
We are pleased to announce the release of ACRN version 1.2.
@ -24,10 +24,10 @@ with a specific release: generated v1.2 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.2 requires Clear Linux* OS version 30690.
Version 1.2 major features
Version 1.2 Major Features
**************************
What's New in v1.2
What's New in V1.2
==================
* Support OVMF as a virtual boot loader for the Service VM to launch Clear Linux, VxWorks,
  or Windows; Secure Boot is supported
@ -36,7 +36,7 @@ What's New in v1.2
* Virtualization supports Always Running Timer (ART)
* Various bug fixes and enhancements
Document updates
Document Updates
================
We have many `reference documents available <https://projectacrn.github.io>`_, including:
@ -105,7 +105,6 @@ Known Issues
**Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``
-----
:acrn-issue:`3520` - bundle of "VGPU unconformance guest" messages observed for "gvt" in SOS console while using UOS
After the need_force_wake is not removed in course of submitting VGPU workload,
@ -118,7 +117,6 @@ Known Issues
**Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.
-----
:acrn-issue:`3533` - NUC hang while repeating the cold boot
NUC will hang while repeating cold boot operation.
@ -134,7 +132,6 @@ Known Issues
**Workaround:** Need to rebuild and apply the latest Service VM kernel from the ``acrn-kernel`` source code.
-----
:acrn-issue:`3576` - Expand default memory from 2G to 4G for WaaG
@ -142,25 +139,21 @@ Known Issues
**Workaround:** Issue resolved on ACRN tag: ``acrn-2019w33.1-140000p``
-----
:acrn-issue:`3609` - Sometimes fail to boot os while repeating the cold boot operation
**Workaround:** Refer to the PR information in this Git issue
-----
:acrn-issue:`3610` - LaaG hang while run some workloads loop with zephyr idle
**Workaround:** Revert commit ``bbb891728d82834ec450f6a61792f715f4ec3013`` from the kernel
-----
:acrn-issue:`3611` - OVMF launch UOS fail for Hybrid and industry scenario
**Workaround:** Refer to the PR information in this Git issue
-----
Change Log


@ -1,6 +1,6 @@
.. _release_notes_1.3:
ACRN v1.3 (Sep 2019)
ACRN V1.3 (Sep 2019)
####################
We are pleased to announce the release of ACRN version 1.3.
@ -24,10 +24,10 @@ with a specific release: generated v1.3 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.3 requires Clear Linux* OS version 31080.
Version 1.3 major features
Version 1.3 Major Features
**************************
What's New in v1.3
What's New in V1.3
==================
* OVMF supports Graphics Output Protocol (GOP), allowing Windows logo at guest
VM boot time.
@ -38,7 +38,7 @@ What's New in v1.3
* Ethernet mediator now supports prioritization per VM.
* Features for real-time determinism, e.g. Cache Allocation Technology (CAT, only supported on Apollo Lake).
Document updates
Document Updates
================
We have many new `reference documents available <https://projectacrn.github.io>`_, including:


@ -1,6 +1,6 @@
.. _release_notes_1.4:
ACRN v1.4 (Oct 2019)
ACRN V1.4 (Oct 2019)
####################
We are pleased to announce the release of ACRN version 1.4.
@ -24,17 +24,17 @@ with a specific release: generated v1.4 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.4 requires Clear Linux* OS version 31670.
Version 1.4 major features
Version 1.4 Major Features
**************************
What's New in v1.4
What's New in V1.4
==================
* ACRN now conforms to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
* Re-architected ACRN scheduler framework capabilities have been added.
* WaaG (Windows as a guest) stability and performance has been improved.
* Realtime performance of the RTVM (preempt-RT kernel-based) has been improved.
Document updates
Document Updates
================
Many new `reference documents <https://projectacrn.github.io>`_ are available, including:


@ -1,6 +1,6 @@
.. _release_notes_1.5:
ACRN v1.5 (Jan 2020)
ACRN V1.5 (Jan 2020)
####################
We are pleased to announce the release of ACRN version 1.5.
@ -24,17 +24,17 @@ with a specific release: generated v1.5 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.5 requires Clear Linux* OS version 32030.
Version 1.5 major features
Version 1.5 Major Features
**************************
What's New in v1.5
What's New in V1.5
==================
* Basic CPU sharing: Fairness Round-Robin CPU Scheduling has been added to support basic CPU sharing (the Service VM and WaaG share one CPU core).
* 8th Gen Intel® Core™ Processors (code name Whiskey Lake) are now supported and validated.
* Overall stability and performance has been improved.
* An offline configuration tool has been created to help developers port ACRN to different hardware boards.
Document updates
Document Updates
================
Many new `reference documents <https://projectacrn.github.io>`_ are available, including:


@ -1,6 +1,6 @@
.. _release_notes_1.6.1:
ACRN v1.6.1 (May 2020)
ACRN V1.6.1 (May 2020)
######################
We are pleased to announce the release of ACRN version 1.6.1.
@ -25,10 +25,10 @@ https://projectacrn.github.io/1.6.1/.
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.6.1 requires Clear Linux OS version 33050.
Version 1.6.1 major features
Version 1.6.1 Major Features
****************************
What's New in v1.6.1
What's New in V1.6.1
====================
* ACRN ensures libvirt supports VM orchestration based on OpenStack
@ -49,7 +49,7 @@ What's New in v1.6.1
* Supported VT-d Posted Interrupts
Document updates
Document Updates
================
Many new and updated `reference documents <https://projectacrn.github.io>`_ are available, including:


@ -1,6 +1,6 @@
.. _release_notes_1.6:
ACRN v1.6 (Mar 2020)
ACRN V1.6 (Mar 2020)
####################
We are pleased to announce the release of ACRN version 1.6.
@ -24,10 +24,10 @@ with a specific release: generated v1.6 documents can be found at https://projec
Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/.
ACRN v1.6 requires Clear Linux OS version 32680.
Version 1.6 major features
Version 1.6 Major Features
**************************
What's New in v1.6
What's New in V1.6
==================
* Graphics passthrough support
@ -51,7 +51,7 @@ What's New in v1.6
* PCI bridge emulation in hypervisor
Document updates
Document Updates
================
Many new and updated `reference documents <https://projectacrn.github.io>`_ are available, including:


@ -1,6 +1,6 @@
.. _release_notes_2.0:
ACRN v2.0 (Jun 2020)
ACRN V2.0 (Jun 2020)
####################
We are pleased to announce the second major release of the Project ACRN
@ -55,7 +55,7 @@ started with ACRN.
We recommend that all developers upgrade to ACRN release v2.0.
Version 2.0 Key Features (comparing with v1.0)
Version 2.0 Key Features (Comparing With V1.0)
**********************************************
.. contents::
@ -101,7 +101,7 @@ New Hardware Platform Support
This release adds support for 8th Gen Intel® Core™ Processors (code
name: Whiskey Lake). (See :ref:`hardware` for platform details.)
Pre-launched Safety VM Support
Pre-Launched Safety VM Support
==============================
ACRN supports a pre-launched partitioned safety VM, isolated from the
@ -111,21 +111,21 @@ For example, in the hybrid mode, a real-time Zephyr RTOS VM can be
and with its own dedicated resources to achieve a high level of
isolation. This is designed to meet the needs of a Functional Safety OS.
Post-launched VM support via OVMF
Post-Launched VM Support via OVMF
=================================
ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot
loader for the Service VM to launch post-launched VMs such as Windows,
Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported.
Post-launched real-time VM Support
Post-Launched Real-Time VM Support
==================================
ACRN supports a post-launched RTVM, which also uses partitioned hardware
resources to ensure adequate real-time performance, as required for
industrial use cases.
Real-time VM Performance Optimizations
Real-Time VM Performance Optimizations
======================================
ACRN 2.0 improves RTVM performance with these optimizations:
@ -161,7 +161,7 @@ scheduler in the hypervisor to make sure the physical CPU can be shared
between VMs and support for yielding an idle vCPU when it's running a
'HLT' or 'PAUSE' instruction.
Large selection of OSs for User VMs
Large Selection of OSs for User VMs
===================================
ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*,
@ -170,7 +170,7 @@ to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and
performance.
GRUB bootloader
GRUB Bootloader
===============
The ACRN hypervisor can boot from the popular GRUB bootloader using
@ -189,14 +189,14 @@ In this example, the ACRN Service VM supports a SR-IOV ethernet device
through the Physical Function (PF) driver, and ensures that the SR-IOV
Virtual Function (VF) device can passthrough to a post-launched VM.
Graphics passthrough support
Graphics Passthrough Support
============================
ACRN supports GPU passthrough to dedicated User VM based on Intel GVT-d
technology used to virtualize the GPU for multiple guest VMs,
effectively providing near-native graphics performance in the VM.
Shared memory based Inter-VM communication
Shared Memory Based Inter-VM Communication
==========================================
ACRN supports Inter-VM communication based on shared memory for
@ -213,7 +213,7 @@ Kata Containers Support
ACRN can launch a Kata container, a secure container runtime, as a User VM.
VM orchestration
VM Orchestration
================
Libvirt is an open-source API, daemon, and management tool as a layer to
@ -221,7 +221,7 @@ decouple orchestrators and hypervisors. By adding a "ACRN driver", ACRN
supports libvirt-based tools and orchestrators to configure a User VM's CPU
configuration during VM creation.
Document updates
Document Updates
================
Many new and updated `reference documents <https://projectacrn.github.io>`_ are available, including:


@ -1,6 +1,6 @@
.. _release_notes_2.1:
ACRN v2.1 (Aug 2020)
ACRN V2.1 (Aug 2020)
####################
We are pleased to announce the release of the Project ACRN
@ -33,7 +33,7 @@ ACRN v2.1 requires Ubuntu 18.04. Follow the instructions in the
We recommend that all developers upgrade to ACRN release v2.1.
What's new in v2.1
What's New in V2.1
******************
* Preempt-RT Linux has been validated as a pre-launched realtime VM. See


@ -1,6 +1,6 @@
.. _release_notes_2.2:
ACRN v2.2 (Sep 2020)
ACRN V2.2 (Sep 2020)
####################
We are pleased to announce the release of the Project ACRN
@ -32,7 +32,7 @@ ACRN v2.2 requires Ubuntu 18.04. Follow the instructions in the
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
What's New in v2.2
What's New in V2.2
******************
Elkhart Lake and Tiger Lake processor support.
@ -73,7 +73,7 @@ Staged removal of deprivileged boot mode support.
Clear Linux though, so we have chosen Ubuntu (and Yocto Project) as the
preferred Service VM OSs moving forward.
Document updates
Document Updates
****************
New and updated reference documents are available, including:


@ -1,6 +1,6 @@
.. _release_notes_2.3:
ACRN v2.3 (Dec 2020)
ACRN V2.3 (Dec 2020)
####################
We are pleased to announce the release of the Project ACRN
@ -32,7 +32,7 @@ ACRN v2.3 requires Ubuntu 18.04. Follow the instructions in the
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
What's New in v2.3
What's New in V2.3
******************
Enhanced GPU passthrough (GVT-d)
@ -65,7 +65,7 @@ Removed deprivileged boot mode support
Clear Linux so we have chosen Ubuntu (and Yocto Project) as the
preferred Service VM OSs moving forward.
Document updates
Document Updates
****************
New and updated reference documents are available, including:


@ -1,6 +1,6 @@
.. _how-to-enable-acrn-secure-boot-with-grub:
Enable ACRN Secure Boot with GRUB
Enable ACRN Secure Boot With GRUB
#################################
This document shows how to enable ACRN secure boot with GRUB including:
@ -243,14 +243,14 @@ Creating UEFI Secure Boot Key
The keys to be enrolled in UEFI firmware: :file:`PK.der`, :file:`KEK.der`, :file:`db.der`.
The keys to sign the bootloader image (:file:`grubx64.efi`): :file:`db.key`, :file:`db.crt`.
Sign GRUB Image With ``db`` Key
================================
Sign GRUB Image With ``db`` Key
===============================
sbsign --key db.key --cert db.crt path/to/grubx64.efi
:file:`grubx64.efi.signed` will be created; it will be your bootloader.
Enroll UEFI Keys To UEFI Firmware
Enroll UEFI Keys to UEFI Firmware
=================================
Enroll ``PK`` (:file:`PK.der`), ``KEK`` (:file:`KEK.der`) and ``db``


@ -15,7 +15,7 @@ Introduction
ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
is discussed in the following sections.
Hypervisor configuration
Hypervisor Configuration
========================
The hypervisor configuration defines a working scenario and target
@ -29,7 +29,7 @@ A board-specific ``defconfig`` file, for example
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/$(BOARD).config``
is loaded first; it is the default ``Kconfig`` for the specified board.
Board configuration
Board Configuration
===================
The board configuration stores board-specific settings referenced by the
@ -40,7 +40,7 @@ and BDF information. The reference board configuration is organized as
``*.c/*.h`` files located in the
``misc/vm_configs/boards/$(BOARD)/`` folder.
VM configuration
VM Configuration
=================
VM configuration includes **scenario-based** VM configuration
@ -58,7 +58,7 @@ The board-specific configurations on this scenario are stored in the
User VM launch script samples are located in the
``misc/vm_configs/sample_launch_scripts/`` folder.
ACRN configuration XMLs
ACRN Configuration XMLs
***********************
The ACRN configuration includes three kinds of XML files for acrn-config
@ -75,7 +75,7 @@ configurations by importing customized XMLs or by saving the
configurations by exporting XMLs.
Board XML format
Board XML Format
================
The board XMLs are located in the
@ -89,7 +89,7 @@ The board XML has an ``acrn-config`` root element and a ``board`` attribute:
As an input for the ``acrn-config`` tool, end users do not need to care
about the format of board XML and should not modify it.
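As a rough illustration of that structure, the root element and its ``board`` attribute can be read with Python's standard library (the board name below is hypothetical, not taken from a real generated file):

```python
import xml.etree.ElementTree as ET

# Minimal stand-in for a board XML as described above: an ``acrn-config``
# root element carrying a ``board`` attribute.
board_xml = '<acrn-config board="whl-ipc-i5"></acrn-config>'

root = ET.fromstring(board_xml)
print(root.tag)           # acrn-config
print(root.get("board"))  # whl-ipc-i5
```

The scenario and launch XMLs follow the same pattern, with additional attributes such as ``scenario`` on the root element.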
Scenario XML format
Scenario XML Format
===================
The scenario XMLs are located in the
``misc/vm_configs/xmls/config-xmls/`` folder. The
@ -103,7 +103,7 @@ and ``scenario`` attributes:
See :ref:`scenario-config-options` for a full explanation of available scenario XML elements.
Launch XML format
Launch XML Format
=================
The launch XMLs are located in the
``misc/vm_configs/xmls/config-xmls/`` folder.
@ -188,10 +188,10 @@ current scenario has:
interface. When ``configurable="0"``, the item does not appear on the
interface.
Configuration tool workflow
Configuration Tool Workflow
***************************
Hypervisor configuration workflow
Hypervisor Configuration Workflow
==================================
The hypervisor configuration is based on the ``Kconfig``
@ -219,7 +219,7 @@ configuration steps.
.. _vm_config_workflow:
Board and VM configuration workflow
Board and VM Configuration Workflow
===================================
Python offline tools are provided to configure Board and VM configurations.
@ -300,7 +300,7 @@ Here is the offline configuration tool workflow:
.. _acrn_config_tool_ui:
Use the ACRN configuration app
Use the ACRN Configuration App
******************************
The ACRN configuration app is a web user interface application that performs the following:


@ -1,6 +1,6 @@
.. _acrn_on_qemu:
Enable ACRN over QEMU/KVM
Enable ACRN Over QEMU/KVM
#########################
The goal of this document is to bring up ACRN as a nested hypervisor on top of QEMU/KVM
@ -195,7 +195,7 @@ Install ACRN Hypervisor
$ virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name.
Service VM Networking updates for User VM
Service VM Networking Updates for User VM
*****************************************
Follow these steps to enable networking for the User VM (L2 guest):
@ -232,7 +232,7 @@ Follow these steps to enable networking for the User VM (L2 guest):
4. Restart the ACRNSOS guest (L1 guest) to complete the setup, then proceed with bringing up the User VM
Bring-up User VM (L2 Guest)
Bring-Up User VM (L2 Guest)
***************************
1. Build the device-model, using ``make devicemodel`` and copy acrn-dm to ACRNSOS guest (L1 guest) directory ``/usr/bin/acrn-dm``


@ -37,7 +37,7 @@ Scheduling initialization is invoked in the hardware management layer.
.. figure:: images/cpu_sharing_api.png
:align: center
CPU affinity
CPU Affinity
*************
Currently, we do not support vCPU migration; the assignment of vCPU mapping to
@ -64,7 +64,7 @@ Here is an example for affinity:
.. figure:: images/cpu_sharing_affinity.png
:align: center
Thread object state
Thread Object State
*******************
The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.
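A minimal sketch of these three states and two illustrative transitions (the function names here are hypothetical, for illustration only, not ACRN's internal API):

```python
from enum import Enum, auto

class ThreadState(Enum):
    RUNNING = auto()   # currently on a pCPU
    RUNNABLE = auto()  # ready to run, waiting to be picked
    BLOCKED = auto()   # waiting on an event, not schedulable

def wake(state):
    # A blocked thread becomes eligible to run again.
    return ThreadState.RUNNABLE if state is ThreadState.BLOCKED else state

def preempt(state):
    # A running thread loses the pCPU but remains ready.
    return ThreadState.RUNNABLE if state is ThreadState.RUNNING else state

print(wake(ThreadState.BLOCKED).name)    # RUNNABLE
print(preempt(ThreadState.RUNNING).name) # RUNNABLE
```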
@ -128,7 +128,6 @@ and BVT (Borrowed Virtual Time) scheduler.
Scheduler configuration
***********************
* The option in Kconfig decides the only scheduler used at runtime.
``hypervisor/arch/x86/Kconfig``
@ -159,7 +158,7 @@ The default scheduler is **SCHED_BVT**.
- With ``cpu_affinity`` option in acrn-dm. This launches the user VM on
a subset of the configured cpu_affinity pCPUs.
For example, assign physical CPUs 0 and 1 to this VM::
--cpu_affinity 0,1

View File

@ -15,7 +15,7 @@ full list of commands, or see a summary of available commands by using
the ``help`` command within the ACRN shell.
An example
An Example
**********
As an example, we'll show how to obtain the interrupts of a passthrough USB device.
@ -54,7 +54,7 @@ ACRN log provides a console log and a mem log for a user to analyze.
We can use console log to debug directly, while mem log is a userland tool
used to capture an ACRN hypervisor log.
Turn on the logging info
Turn on the Logging Info
========================
ACRN enables a console log by default.
@ -65,7 +65,7 @@ To enable and start the mem log::
$ systemctl start acrnlog
Set and grab log
Set and Grab Log
================
We have six (1-6) log levels for console log and mem log. The following
@ -129,7 +129,7 @@ ACRN trace is a tool running on the Service VM to capture trace
data. We can use the existing trace information to analyze, and we can
add self-defined tracing to analyze code that we care about.
Using Existing trace event ID to analyze trace
Using Existing Trace Event ID to Analyze Trace
==============================================
As an example, we can use the existing vm_exit trace to analyze the
@ -159,7 +159,7 @@ reason and times of each vm_exit after we have done some operations.
vmexit summary information
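The tallying step behind such a summary can be sketched in a few lines; the trace line format below is a simplified stand-in, not the real ``acrntrace`` output format:

```python
from collections import Counter

# Hypothetical post-processed trace lines: "timestamp event reason".
trace_lines = [
    "1000 vmexit EXTERNAL_INTERRUPT",
    "1250 vmexit IO_INSTRUCTION",
    "1300 vmexit EXTERNAL_INTERRUPT",
]

# Count how many times each vm_exit reason occurred.
reasons = Counter(line.split()[-1] for line in trace_lines if "vmexit" in line)
for reason, count in reasons.most_common():
    print(reason, count)
```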
Using Self-defined trace event ID to analyze trace
Using Self-Defined Trace Event ID to Analyze Trace
==================================================
For some undefined trace event ID, we can define it by ourselves as


@ -1,7 +1,7 @@
.. _enable_ivshmem:
Enable Inter-VM Communication Based on ``ivshmem``
##################################################
Enable Inter-VM Communication Based on Ivshmem
##############################################
You can use inter-VM communication based on the ``ivshmem`` dm-land
solution or hv-land solution, according to the usage scenario needs.
@ -9,7 +9,7 @@ solution or hv-land solution, according to the usage scenario needs.
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
ivshmem dm-land usage
Ivshmem DM-Land Usage
*********************
Add this line as an ``acrn-dm`` boot parameter::
@ -35,7 +35,7 @@ where
.. _ivshmem-hv:
ivshmem hv-land usage
Ivshmem HV-Land Usage
*********************
The ``ivshmem`` hv-land solution is disabled by default in ACRN. You
@ -68,7 +68,7 @@ enable it using the :ref:`acrn_configuration_tool` with these steps:
- Build the XML configuration, refer to :ref:`getting-started-building`
ivshmem notification mechanism
Ivshmem Notification Mechanism
******************************
Notification (doorbell) of ivshmem device allows VMs with ivshmem
@ -94,10 +94,10 @@ to applications.
.. note:: Notification is supported only for HV-land ivshmem devices. (Future
support may include notification for DM-land ivshmem devices.)
Inter-VM Communication Examples
Inter-VM Communication Examples
*******************************
dm-land example
DM-Land Example
===============
This example uses dm-land inter-VM communication between two
@ -167,7 +167,7 @@ Linux-based post-launched VMs (VM1 and VM2).
- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
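The lookup above can also be scripted; a minimal sketch, assuming the standard sysfs layout (the BDFs come from the example, and ``ivshmem_uio_nodes`` is an illustrative helper name):

```python
import os

def ivshmem_uio_nodes(bdf, sysfs_root="/sys/bus/pci/devices"):
    # Return the UIO node names (e.g. ["uio0"]) backing the ivshmem
    # device at the given BDF, or [] if the device is not bound to uio.
    uio_dir = os.path.join(sysfs_root, bdf, "uio")
    if not os.path.isdir(uio_dir):
        return []
    return sorted(os.listdir(uio_dir))

# Usage inside VM1 from the example: ivshmem_uio_nodes("0000:00:06.0")
```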
hv-land example
HV-Land Example
===============
This example uses hv-land inter-VM communication between two


@ -57,7 +57,7 @@ to the User VM through a channel. If the User VM receives the command, it will s
to the Device Model. It is the Service VM's responsibility to check if the User VMs
shut down successfully or not, and decides when to power off itself.
User VM "lifecycle manager"
User VM "Lifecycle Manager"
===========================
As part of the current S5 reference design, a lifecycle manager daemon (life_mngr) runs in the
@ -159,7 +159,7 @@ The procedure for enabling S5 is specific to the particular OS:
.. note:: S5 state is not automatically triggered by a Service VM shutdown; this needs
to be run before powering off the Service VM.
How to test
How to Test
***********
As described in :ref:`vuart_config`, two vUARTs are defined in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and

View File

@ -18,7 +18,7 @@ It allows for direct assignment of an entire GPU's prowess to a single
user, passing the native driver capabilities through to the hypervisor
without any limitations.
Verified version
Verified Version
*****************
- ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
@ -31,7 +31,7 @@ Prerequisites
Follow :ref:`these instructions <rt_industry_ubuntu_setup>` to set up
Ubuntu as the ACRN Service VM.
Supported hardware platform
Supported Hardware Platform
***************************
Currently, ACRN has enabled GVT-d on the following platforms:
@ -40,16 +40,16 @@ Currently, ACRN has enabled GVT-d on the following platforms:
* Whiskey Lake
* Elkhart Lake
BIOS settings
BIOS Settings
*************
Kaby Lake platform
Kaby Lake Platform
==================
* Set **IGD Minimum Memory** to **64MB** in **Devices** →
  **Video** → **IGD Minimum Memory**.
Whiskey Lake platform
Whiskey Lake Platform
=====================
* Set **PM Support** to **Enabled** in **Chipset** → **System
@ -59,7 +59,7 @@ Whiskey Lake platform
**System Agent (SA) Configuration**
→ **Graphics Configuration** → **DVMT Pre-Allocated**.
Elkhart Lake platform
Elkhart Lake Platform
=====================
* Set **DMVT Pre-Allocated** to **64MB** in **Intel Advanced Menu**
@ -93,7 +93,7 @@ Passthrough the GPU to Guest
4. Run ``launch_win.sh``.
Enable the GVT-d GOP driver
Enable the GVT-d GOP Driver
***************************
When enabling GVT-d, the Guest OS cannot light up the physical screen

View File

@ -1,6 +1,6 @@
.. _pre_launched_rt:
Pre-Launched Preempt-RT Linux Mode in ACRN
Pre-Launched Preempt-RT Linux Mode in ACRN
##########################################
The Pre-Launched Preempt-RT Linux Mode of ACRN, abbreviated as
@ -34,7 +34,7 @@ two Ethernet ports. We will passthrough the SATA and Ethernet 03:00.0
devices into the Pre-Launched RT VM, and give the rest of the devices to
the Service VM.
Install SOS with Grub on NVMe
Install SOS With GRUB on NVMe
=============================
As with the Hybrid and Logical Partition scenarios, the Pre-Launched RT
@ -64,7 +64,7 @@ the SATA to the NVMe drive:
# mount /dev/sda1 /mnt
# cp /mnt/bzImage /boot/EFI/BOOT/bzImage_RT
Build ACRN with Pre-Launched RT Mode
Build ACRN With Pre-Launched RT Mode
====================================
The ACRN VM configuration framework can easily configure resources for


@ -35,7 +35,7 @@ Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
.. _rdt_detection_capabilities:
RDT detection and resource capabilities
RDT Detection and Resource Capabilities
***************************************
From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
resource capabilities. Use the platform's serial port for the HV shell.
@ -98,7 +98,7 @@ MBA bit encoding:
resources by using a common subset CLOS. This is done in order to minimize
misconfiguration errors.
Tuning RDT resources in HV debug shell
Tuning RDT Resources in HV Debug Shell
**************************************
This section explains how to configure the RDT resources from the HV debug
shell.
@ -141,7 +141,7 @@ shell.
.. _rdt_vm_configuration:
Configure RDT for VM using VM Configuration
Configure RDT for VM Using VM Configuration
*******************************************
#. RDT hardware feature is enabled by default on supported platforms. This
@ -166,11 +166,11 @@ Configure RDT for VM using VM Configuration
</RDT>
#. Once RDT is enabled in the scenario XML file, the next step is to program
   the desired cache mask and/or the MBA delay value as needed in the
   scenario file. Each cache mask or MBA delay configuration corresponds
   to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
   cache mask settings need to be in place where each setting corresponds
   to a CLOS ID starting from 0. To set the cache masks for 4 CLOS IDs and
   use the default delay value for MBA, it can be done as shown in the example below.
.. code-block:: none


@ -1,6 +1,6 @@
.. _rt_performance_tuning:
ACRN Real-time (RT) Performance Analysis
ACRN Real-Time (RT) Performance Analysis
########################################
This document describes the methods to collect trace/data for ACRN real-time VM (RTVM)
@ -9,8 +9,8 @@ real-time performance analysis. Two parts are included:
- Method to trace ``vmexit`` occurrences for analysis.
- Method to collect Performance Monitoring Counters information for tuning based on Performance Monitoring Unit, or PMU.
``vmexit`` analysis for ACRN RT performance
*******************************************
Vmexit Analysis for ACRN RT Performance
***************************************
``vmexit`` are triggered in response to certain instructions and events and are
a key source of performance degradation in virtual machines. During the runtime
@ -30,7 +30,7 @@ the duration of time where we do not want to see any ``vmexit`` occur.
Different RT tasks use different critical sections. This document uses
the cyclictest benchmark as an example of how to do ``vmexit`` analysis.
The critical sections
The Critical Sections
=====================
Here is example pseudocode of a cyclictest implementation.
@ -53,14 +53,14 @@ the cyclictest to be awakened and scheduled. Here we can get the latency by
So, we define the starting point of the critical section as ``next`` and
the ending point as ``now``.
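The latency measurement over that critical section amounts to a single subtraction; a minimal sketch (the timestamps below are illustrative values, not measurements):

```python
def wakeup_latency_ns(next_ns, now_ns):
    # The critical section starts at the programmed wakeup time ``next``
    # and ends when the task actually runs again at ``now``; the latency
    # is simply how far past the programmed wakeup the task woke.
    return now_ns - next_ns

# Example: programmed to wake at t=1_000_000 ns, actually scheduled at
# t=1_012_500 ns.
print(wakeup_latency_ns(1_000_000, 1_012_500))  # 12500
```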
Log and trace data collection
Log and Trace Data Collection
=============================
#. Add time stamps (in TSC) at ``next`` and ``now``.
#. Capture the log with the above time stamps in the RTVM.
#. Capture the ``acrntrace`` log in the Service VM at the same time.
Offline analysis
Offline Analysis
================
#. Convert the raw trace data to human readable format.
@ -71,10 +71,10 @@ Offline analysis
:align: center
:name: vm_exits_log
Collecting Performance Monitoring Counters data
Collecting Performance Monitoring Counters Data
***********************************************
Enable Performance Monitoring Unit (PMU) support in VM
Enable Performance Monitoring Unit (PMU) Support in VM
======================================================
By default, the ACRN hypervisor doesn't expose the PMU-related CPUID and
@@ -149,7 +149,7 @@ Note that Precise Event Based Sampling (PEBS) is not yet enabled in the VM.
value64 = hva2hpa(vcpu->arch.msr_bitmap);
exec_vmwrite64(VMX_MSR_BITMAP_FULL, value64);
Perf/PMU tools in performance analysis
Perf/PMU Tools in Performance Analysis
======================================
After exposing PMU-related CPUID/MSRs to the VM, performance analysis tools
@@ -170,7 +170,7 @@ following links for perf usage:
Refer to https://github.com/andikleen/pmu-tools for PMU usage.
Top-down Microarchitecture Analysis Method (TMAM)
Top-Down Microarchitecture Analysis Method (TMAM)
==================================================
The top-down microarchitecture analysis method (TMAM), based on top-down
View File
@@ -1,6 +1,6 @@
.. _rt_perf_tips_rtvm:
ACRN Real-time VM Performance Tips
ACRN Real-Time VM Performance Tips
##################################
Background
@@ -34,7 +34,7 @@ RTVM performance:
This document summarizes tips from issues encountered and
resolved during real-time development and performance tuning.
Mandatory options for an RTVM
Mandatory Options for an RTVM
*****************************
An RTVM is a post-launched VM with LAPIC passthrough. Pay attention to
@@ -55,7 +55,7 @@ Tip: Use virtio polling mode
and enables polling mode to avoid a VM-exit at the frontend. Enable
virtio polling mode via the option ``--virtio_poll [polling interval]``.
Avoid VM-exit latency
Avoid VM-exit Latency
*********************
VM-exit has a significant negative impact on virtualization performance.
@@ -137,7 +137,7 @@ Tip: Create and initialize the RT tasks at the beginning to avoid runtime access
to CR3 and CR8 does not cause a VM-exit. However, writes to CR0 and CR4 may cause a
VM-exit, which would happen at the spawning and initialization of a new task.
Isolating the impact of neighbor VMs
Isolating the Impact of Neighbor VMs
************************************
ACRN makes use of several technologies and hardware features to avoid
View File
@@ -1,6 +1,6 @@
.. _rtvm_workload_guideline:
Real-time VM Application Design Guidelines
Real-Time VM Application Design Guidelines
##########################################
An RTOS developer must be aware of the differences between running applications on a native
@@ -11,7 +11,7 @@ incremental runtime overhead.
This document provides some application design guidelines when using an RTVM within the ACRN hypervisor.
Run RTVM with dedicated resources/devices
Run RTVM With Dedicated Resources/Devices
*****************************************
For best practice, ACRN allocates dedicated CPU, memory resources, and cache resources (using Intel
@@ -22,14 +22,14 @@ of I/O devices, we recommend using dedicated (passthrough) PCIe devices to avoid
The configuration space for passthrough PCI devices is still emulated and accessing it will
trigger a VM-Exit.
RTVM with virtio PMD (Polling Mode Driver) for I/O sharing
RTVM With Virtio PMD (Polling Mode Driver) for I/O Sharing
**********************************************************
If the RTVM must use shared devices, we recommend using PMD drivers that can eliminate the
unpredictable latency caused by guest I/O trap-and-emulate access. The RTVM application must be
aware that the packets in the PMD driver may arrive or be sent later than expected.
RTVM with HV Emulated Device
RTVM With HV Emulated Device
****************************
ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
@@ -39,7 +39,7 @@ behavior, the RT application using the vUART shall reserve a margin of CPU cycle
for the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles
per register access).
DM emulated device (Except PMD)
DM Emulated Device (Except PMD)
*******************************
We recommend **not** using DM-emulated devices in an RTVM.
View File
@@ -177,7 +177,7 @@ outputs:
Debug = false
UseVSock = false
Run a Kata Container with ACRN
Run a Kata Container With ACRN
******************************
The system is now ready to run a Kata Container on ACRN. Note that a reboot
View File
@@ -146,7 +146,7 @@ Install ACRN on the Debian VM
[ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
[ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp
Enable the network sharing to give network access to User VM
Enable the Network Sharing to Give Network Access to User VM
************************************************************
.. code-block:: bash
View File
@@ -190,7 +190,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.
The Ubuntu desktop on the secondary monitor
Enable the Ubuntu Console instead of the User Interface
Enable the Ubuntu Console Instead of the User Interface
*******************************************************
After the Ubuntu VM reboots, follow the steps below to enable the Ubuntu
View File
@@ -1,6 +1,6 @@
.. _setup_openstack_libvirt:
Configure ACRN using OpenStack and libvirt
Configure ACRN Using OpenStack and Libvirt
##########################################
Introduction
@@ -41,7 +41,7 @@ Install ACRN
create it using the instructions in
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
Set up and launch LXC/LXD
Set Up and Launch LXC/LXD
*************************
1. Set up the LXC/LXD Linux container engine::
@@ -148,7 +148,7 @@ The ``openstack`` container is now properly configured for OpenStack.
Use the ``lxc list`` command to verify that both **eth0** and **eth1**
appear in the container.
Set up ACRN prerequisites inside the container
Set Up ACRN Prerequisites Inside the Container
**********************************************
1. Log in to the ``openstack`` container as the **stack** user::
@@ -177,7 +177,7 @@ Set up ACRN prerequisites inside the container
.. note:: Use the tag that matches the version of the ACRN hypervisor (``acrn.bin``)
that runs on your system.
Set up libvirt
Set Up Libvirt
**************
1. Install the required packages::
@@ -218,7 +218,7 @@ Set up libvirt
$ sudo systemctl daemon-reload
Set up OpenStack
Set Up OpenStack
****************
Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://docs.openstack.org/devstack/>`_.
@@ -303,7 +303,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
$ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104
Configure and create OpenStack Instance
Configure and Create OpenStack Instance
***************************************
We'll be using the Ubuntu 20.04 (Focal) Cloud image as the OS image (qcow2
View File
@@ -30,7 +30,7 @@ The image below shows the high-level design of SGX virtualization in ACRN.
SGX Virtualization in ACRN
Enable SGX support for Guest
Enable SGX Support for Guest
****************************
Presumptions
@@ -232,13 +232,13 @@ ENCLS[ECREATE]
Other VMExit Control
********************
RDRAND exiting
RDRAND Exiting
==============
* ACRN allows Guest to use RDRAND/RDSEED instruction but does not set "RDRAND
exiting" to 1.
PAUSE exiting
PAUSE Exiting
=============
* ACRN does not set "PAUSE exiting" to 1.
@@ -248,7 +248,7 @@ Future Development
Following are some currently unplanned areas of interest for future
ACRN development around SGX virtualization.
Launch Configuration support
Launch Configuration Support
============================
When the following two conditions are both satisfied:
View File
@@ -128,7 +128,7 @@ SR-IOV Architecture in ACRN
standard BAR registers. The MSI-X mapping base address is also from the
PF's SR-IOV capabilities, not PCI standard BAR registers.
SR-IOV Passthrough VF Architecture In ACRN
SR-IOV Passthrough VF Architecture in ACRN
------------------------------------------
.. figure:: images/sriov-image4.png
@@ -219,7 +219,7 @@ SR-IOV VF Assignment Policy
a passthrough to high privilege VMs because the PF device may impact
the assigned VFs' functionality and stability.
SR-IOV Usage Guide In ACRN
SR-IOV Usage Guide in ACRN
--------------------------
We use the Intel 82576 NIC as an example in the following instructions. We
@@ -280,7 +280,7 @@ only support LaaG (Linux as a Guest).
c. Boot the User VM
SR-IOV Limitations In ACRN
SR-IOV Limitations in ACRN
--------------------------
1. The SR-IOV migration feature is not supported.
View File
@@ -256,7 +256,7 @@ section, we'll focus on two major components:
See :ref:`trusty_tee` for additional details of Trusty implementation in
ACRN.
One-VM, Two-Worlds
One-VM, Two-Worlds
==================
As previously mentioned, Trusty Secure Monitor could be any
View File
@@ -1,6 +1,6 @@
.. _using_grub:
Using GRUB to boot ACRN
Using GRUB to Boot ACRN
#######################
`GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot bootloader
@@ -45,7 +45,7 @@ ELF format when :option:`hv.FEATURES.RELOC` is not set, or RAW format when
.. _pre-installed-grub:
Using pre-installed GRUB
Using Pre-Installed GRUB
************************
Most Linux distributions use GRUB version 2 by default. If its version
@@ -137,7 +137,7 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
start the VMs automatically.
Installing self-built GRUB
Installing Self-Built GRUB
**************************
Some files were not shown because too many files have changed in this diff.