doc: update release_2.0 branch with doc changes
Update the working release_2.0 branch with doc updates made since the
code feature freeze two weeks ago. (This is an update of all docs changed
in master since then, instead of doing cherry-picks of the individual doc
PRs/commits.)

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
@@ -5,7 +5,7 @@
 Page Not Found
 ##############

-.. rst-class:: rst-columns
+.. rst-class:: rst-columns2

 .. image:: images/ACRN-fall-from-tree-small.png
    :align: left
@@ -83,8 +83,8 @@ responses to user space modules, notified by vIRQ injections.

 .. _MPT_interface:

-AcrnGT mediated pass-through (MPT) interface
-**************************************************
+AcrnGT mediated passthrough (MPT) interface
+*******************************************

 AcrnGT receives requests from the GVT module through the MPT interface.
 Refer to the :ref:`Graphic_mediation` page.
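The MPT interface itself is a table of hypervisor-service hooks that the GVT
module calls. As a hedged sketch only: the structure below models what such a
hook table can look like, borrowing the style of the i915/GVT MPT hooks; the
type and member names here are illustrative assumptions, not ACRN's actual
symbols.

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* Illustrative MPT-style hook table (assumed names). Each hypervisor
    * supplies its own implementation of these services to the GVT module. */
   struct gvt_mpt_ops {
       int  (*attach_vgpu)(void *vgpu, unsigned long *handle);
       void (*detach_vgpu)(unsigned long handle);
       int  (*inject_msi)(unsigned long handle, uint32_t addr, uint16_t data);
       int  (*set_trap_area)(unsigned long handle, uint64_t start,
                             uint64_t end, bool trap);
       int  (*map_gfn_to_mfn)(unsigned long handle, unsigned long gfn,
                              unsigned long mfn, unsigned int nr, bool map);
   };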
@@ -240,6 +240,10 @@ html_show_sourcelink = False
 # using the given strftime format.
 html_last_updated_fmt = '%b %d, %Y'

+# The name of a javascript file (relative to the configuration directory) that
+# implements a search results scorer. If empty, the default will be used.
+html_search_scorer = 'scorer.js'
+
 # -- Options for HTMLHelp output ------------------------------------------

 # Output file base name for HTML help builder.
@@ -15,6 +15,7 @@ Configuration and Tools

+   tutorials/acrn_configuration_tool
    reference/kconfig/index
    user-guides/hv-parameters
    user-guides/kernel-parameters
    user-guides/acrn-shell
    user-guides/acrn-dm-parameters

@@ -76,6 +77,7 @@ Enable ACRN Features
    tutorials/rtvm_workload_design_guideline
    tutorials/setup_openstack_libvirt
    tutorials/acrn_on_qemu
+   tutorials/using_grub

 Debug
 *****
@@ -17,7 +17,7 @@ SoCs.
 This document describes:

 - The different GPU virtualization techniques
-- GVT-g mediated pass-through
+- GVT-g mediated passthrough
 - High level design
 - Key components
 - GVT-g new architecture differentiation
@@ -47,7 +47,7 @@ Background

 Intel GVT-g is an enabling technology in emerging graphics
 virtualization scenarios. It adopts a full GPU virtualization approach
-based on mediated pass-through technology, to achieve good performance,
+based on mediated passthrough technology, to achieve good performance,
 scalability and secure isolation among Virtual Machines (VMs). A virtual
 GPU (vGPU), with full GPU features, is presented to each VM so that a
 native graphics driver can run directly inside a VM.
@@ -161,10 +161,10 @@ also suffers from the following intrinsic limitations:
 exhibit quite different performance, which gives rise to a need for a
 fine-grained graphics tuning effort.

-Direct Pass-Through
+Direct Passthrough
 -------------------

-"Direct pass-through" dedicates the GPU to a single VM, providing full
+"Direct passthrough" dedicates the GPU to a single VM, providing full
 features and good performance, but at the cost of device sharing
 capability among VMs. Only one VM at a time can use the hardware
 acceleration capability of the GPU, which is a major limitation of this
@@ -177,7 +177,7 @@ solution. Intel GVT-d uses this mechanism.
    :align: center
    :name: gvt-pass-through

-   Pass-Through
+   Passthrough

 SR-IOV
 ------
@@ -188,16 +188,16 @@ with each VF directly assignable to a VM.

 .. _Graphic_mediation:

-Mediated Pass-Through
+Mediated Passthrough
 *********************

 Intel GVT-g achieves full GPU virtualization using a "mediated
-pass-through" technique.
+passthrough" technique.

 Concept
 =======

-Mediated pass-through allows a VM to access performance-critical I/O
+Mediated passthrough allows a VM to access performance-critical I/O
 resources (usually partitioned) directly, without intervention from the
 hypervisor in most cases. Privileged operations from this VM are
 trapped-and-emulated to provide secure isolation among VMs.
@@ -207,7 +207,7 @@ trapped-and-emulated to provide secure isolation among VMs.
    :align: center
    :name: mediated-pass-through

-   Mediated Pass-Through
+   Mediated Passthrough

 The Hypervisor must ensure that no vulnerability is exposed when
 assigning performance-critical resource to each VM. When a
@@ -229,7 +229,7 @@ Examples of performance-critical I/O resources include the following:
    Performance-Critical I/O Resources


-The key to implementing mediated pass-through for a specific device is
+The key to implementing mediated passthrough for a specific device is
 to define the right policy for various I/O resources.

 Virtualization Policies for GPU Resources
@@ -317,7 +317,7 @@ High Level Architecture

 :numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
 ACRN hypervisor, with Service VM as the privileged VM, and multiple user
 guests. A GVT-g device model working with the ACRN hypervisor,
-implements the policies of trap and pass-through. Each guest runs the
+implements the policies of trap and passthrough. Each guest runs the
 native graphics driver and can directly access performance-critical
 resources: the Frame Buffer and Command Buffer, with resource
 partitioning (as presented later). To protect privileged resources, that
@@ -331,14 +331,14 @@ concurrently with the CPU scheduler in ACRN to share the physical GPU
 timeslot among the VMs. GVT-g uses the physical GPU to directly execute
 all the commands submitted from a VM, so it avoids the complexity of
 emulating the Render Engine, which is the most complex part of the GPU.
-In the meantime, the resource pass-through of both the Frame Buffer and
+In the meantime, the resource passthrough of both the Frame Buffer and
 Command Buffer minimizes the hypervisor's intervention of CPU accesses,
 while the GPU scheduler guarantees every VM a quantum time-slice for
 direct GPU execution. With that, GVT-g can achieve near-native
 performance for a VM workload.

 In :numref:`gvt-arch`, the yellow GVT device model works as a client on
-top of an i915 driver in the Service VM. It has a generic Mediated Pass-Through
+top of an i915 driver in the Service VM. It has a generic Mediated Passthrough
 (MPT) interface, compatible with all types of hypervisors. For ACRN,
 some extra development work is needed for such MPT interfaces. For
 example, we need some changes in ACRN-DM to make ACRN compatible with
@@ -795,7 +795,7 @@ the shadow PTE entries.

 Per-VM Shadow PPGTT
 -------------------

-To support local graphics memory access pass-through, GVT-g implements
+To support local graphics memory access passthrough, GVT-g implements
 per-VM shadow local page tables. The local graphics memory is only
 accessible from the Render Engine. The local page tables have two-level
 paging structures, as shown in :numref:`per-vm-shadow`.
@@ -1189,8 +1189,8 @@ and waits for a resume signal. When the User VM should exit from S3, DM will
 get a wakeup signal and reset the User VM to emulate the User VM exit from
 S3.

-Pass-through in Device Model
+Passthrough in Device Model
 ****************************

-You may refer to :ref:`hv-device-passthrough` for pass-through realization
+You may refer to :ref:`hv-device-passthrough` for passthrough realization
 in device model.
@@ -22,3 +22,4 @@ documented in this section.
    Hostbridge emulation <hostbridge-virt-hld>
    AT keyboard controller emulation <atkbdc-virt-hld>
    Split Device Model <split-dm>
+   Shared memory based inter-vm communication <ivshmem-hld>
@@ -61,7 +61,7 @@ for its RT VM:

 - CAT (Cache Allocation Technology)
 - MBA (Memory Bandwidth Allocation)
-- LAPIC pass-thru
+- LAPIC passthrough
 - Polling mode driver
 - ART (always running timer)
 - other TCC features like split lock detection, Pseudo locking for cache
@@ -109,7 +109,7 @@ separate instrument cluster VM is started after the User VM is booted.
 :numref:`overview-arch1.0` shows the architecture of ACRN 1.0 together with
 the IC VM and Service VM. As shown, the Service VM owns most of platform devices and
 provides I/O mediation to VMs. Some of the PCIe devices function as a
-pass-through mode to User VMs according to VM configuration. In addition,
+passthrough mode to User VMs according to VM configuration. In addition,
 the Service VM could run the IC applications and HV helper applications such
 as the Device Model, VM manager, etc. where the VM manager is responsible
 for VM start/stop/pause, virtual CPU pause/resume, etc.
@@ -136,7 +136,7 @@ compared to ACRN 1.0 is that:
   interference between different VMs

 - ACRN 2.0 supports RT VM for a post-launched User VM, with assistant features like LAPIC
-  pass-thru and PMD virtio driver
+  passthrough and PMD virtio driver

 ACRN 2.0 is still WIP, and some of its features are already merged in the master.
@@ -162,12 +162,12 @@ ACRN adopts various approaches for emulating devices for the User VM:

 - para-virtualized, requiring front-end drivers in
   the User VM to function.

-- **Pass-through device**: A device passed through to the User VM is fully
+- **Passthrough device**: A device passed through to the User VM is fully
   accessible to the User VM without interception. However, interrupts
   are first handled by the hypervisor before
   being injected to the User VM.

-- **Mediated pass-through device**: A mediated pass-through device is a
+- **Mediated passthrough device**: A mediated passthrough device is a
   hybrid of the previous two approaches. Performance-critical
   resources (mostly data-plane related) are passed-through to the User VMs and
   others (mostly control-plane related) are emulated.
@@ -275,7 +275,7 @@ used by commercial OS).

 - On top of vCPUs are three components for device emulation: one for
   emulation inside the hypervisor, another for communicating with
-  the Service VM for mediation, and the third for managing pass-through
+  the Service VM for mediation, and the third for managing passthrough
   devices.

 - The highest layer is a VM management module providing
@@ -311,7 +311,7 @@ based on command line configurations.

 Based on a VHM kernel module, DM interacts with VM manager to create the User
 VM. It then emulates devices through full virtualization on the DM user
 level, or para-virtualized based on kernel mediator (such as virtio,
-GVT), or pass-through based on kernel VHM APIs.
+GVT), or passthrough based on kernel VHM APIs.

 Refer to :ref:`hld-devicemodel` for more details.
@@ -592,6 +592,6 @@ Some details about the ACPI table for the User and Service VMs:
   knows which register the User VM writes to trigger power state
   transitions. Device Model must register an I/O handler for it.

-- The ACPI table in the Service VM is passthru. There is no ACPI parser
+- The ACPI table in the Service VM is passthrough. There is no ACPI parser
   in ACRN HV. The power management related ACPI table is
   generated offline and hardcoded in ACRN HV.
@@ -19,7 +19,7 @@ for the Device Model to build a virtual ACPI table.
 The Px/Cx data includes four
 ACPI objects: _PCT, _PPC, and _PSS for P-state management, and _CST for
 C-state management. All these ACPI data must be consistent with the
-native data because the control method is a kind of pass through.
+native data because the control method is a kind of passthrough.

 These ACPI objects data are parsed by an offline tool and hard-coded in a
 Hypervisor module named CPU state table:
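The hunk ends right at the table reference; as a hedged sketch only (assuming
the standard six-field layout of an ACPI ``_PSS`` package entry, not
necessarily ACRN's exact definition), such a hard-coded state table can look
like this:

.. code-block:: c

   #include <stdint.h>

   /* One P-state entry, mirroring the six fields of an ACPI _PSS package
    * entry; names are illustrative assumptions. */
   struct cpu_px_data {
       uint64_t core_frequency;      /* MHz */
       uint64_t power;               /* mW */
       uint64_t transition_latency;  /* microseconds */
       uint64_t bus_master_latency;  /* microseconds */
       uint64_t control;             /* value written to request this P-state */
       uint64_t status;              /* value read back to confirm the switch */
   };

   /* Example table an offline tool might emit for one board (made-up values). */
   static const struct cpu_px_data board_px_table[] = {
       { 2400U, 15000U, 10U, 10U, 0x1800U, 0x1800U },  /* P0: 2.4 GHz */
       { 1800U, 10000U, 10U, 10U, 0x1200U, 0x1200U },  /* P1: 1.8 GHz */
       {  800U,  5000U, 10U, 10U, 0x0800U, 0x0800U },  /* P2: 800 MHz */
   };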
@@ -126,6 +126,9 @@ a passthrough device to/from a post-launched VM is shown in the following figure

    ptdev de-assignment control flow

+.. _vtd-posted-interrupt:
+
+
 VT-d Interrupt-remapping
 ************************

@@ -11,10 +11,10 @@ to manage interrupts and exceptions, as shown in
 :numref:`interrupt-modules-overview`. In its native layer, it configures
 the physical PIC, IOAPIC, and LAPIC to support different interrupt
 sources from the local timer/IPI to the external INTx/MSI. In its virtual guest
-layer, it emulates virtual PIC, virtual IOAPIC, and virtual LAPIC/pass-thru
+layer, it emulates virtual PIC, virtual IOAPIC, and virtual LAPIC/passthrough
 LAPIC. It provides full APIs, allowing virtual interrupt injection from
-emulated or pass-thru devices. The contents in this section do not include
-the pass-thru LAPIC case. For the pass-thru LAPIC, refer to
+emulated or passthrough devices. The contents in this section do not include
+the passthrough LAPIC case. For the passthrough LAPIC, refer to
 :ref:`lapic_passthru`

 .. figure:: images/interrupt-image3.png
@@ -29,10 +29,10 @@ the ACRN hypervisor sets up the physical interrupt in its basic
 interrupt modules (e.g., IOAPIC/LAPIC/IDT). It dispatches the interrupt
 in the hypervisor interrupt flow control layer to the corresponding
 handlers; this could be pre-defined IPI notification, timer, or runtime
-registered pass-thru devices. The ACRN hypervisor then uses its VM
+registered passthrough devices. The ACRN hypervisor then uses its VM
 interfaces based on vPIC, vIOAPIC, and vMSI modules, to inject the
 necessary virtual interrupt into the specific VM, or directly deliver
-interrupt to the specific RT VM with pass-thru LAPIC.
+interrupt to the specific RT VM with passthrough LAPIC.

 .. figure:: images/interrupt-image2.png
    :align: center
@@ -100,7 +100,7 @@ Physical Interrupt Initialization
 After ACRN hypervisor gets control from the bootloader, it
 initializes all physical interrupt-related modules for all the CPUs. ACRN
 hypervisor creates a framework to manage the physical interrupt for
-hypervisor local devices, pass-thru devices, and IPI between CPUs, as
+hypervisor local devices, passthrough devices, and IPI between CPUs, as
 shown in :numref:`hv-interrupt-init`:

 .. figure:: images/interrupt-image66.png
@@ -323,7 +323,7 @@ there are three different handling flows according to flags:

 - ``IRQF_LEVEL && IRQF_PT``

-  For pass-thru devices, to avoid continuous interrupt triggers, it masks
+  For passthrough devices, to avoid continuous interrupt triggers, it masks
   the IOAPIC pin and leaves it unmasked until corresponding vIOAPIC
   pin gets an explicit EOI ACK from guest.

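The mask-until-EOI policy described in this hunk is easy to misread, so here
is a minimal control-flow sketch, with stub helpers standing in for hypervisor
primitives (the function names are assumptions, not ACRN's actual symbols):

.. code-block:: c

   #include <stdint.h>

   /* Stubs standing in for hypervisor primitives (assumed names). */
   static void ioapic_mask_pin(uint32_t gsi)           { (void)gsi; }
   static void ioapic_unmask_pin(uint32_t gsi)         { (void)gsi; }
   static void inject_virtual_ioapic_irq(uint32_t gsi) { (void)gsi; }

   /* Mask-until-EOI policy for a level-triggered passthrough interrupt. */
   void handle_level_pt_irq(uint32_t gsi)
   {
       ioapic_mask_pin(gsi);            /* stop continuous retriggering */
       inject_virtual_ioapic_irq(gsi);  /* forward to the owning VM's vIOAPIC */
   }

   /* Called when the guest writes an explicit EOI to the vIOAPIC pin. */
   void on_guest_vioapic_eoi(uint32_t gsi)
   {
       ioapic_unmask_pin(gsi);          /* now safe to take the next interrupt */
   }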
@@ -141,26 +141,26 @@ Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART
 host-bridge at BDF (Bus Device Function) 0.0:0 to each guest. Access to
 256 bytes of config space for virtual host bridge is emulated.

-I/O - Pass-thru devices
-=======================
+I/O - Passthrough devices
+=========================

 ACRN, in partition mode, supports passing thru PCI devices on the
-platform. All the pass-thru devices are exposed as child devices under
+platform. All the passthrough devices are exposed as child devices under
 the virtual host bridge. ACRN does not support either passing thru
-bridges or emulating virtual bridges. Pass-thru devices should be
+bridges or emulating virtual bridges. Passthrough devices should be
 statically allocated to each guest using the guest configuration. ACRN
 expects the developer to provide the virtual BDF to BDF of the
-physical device mapping for all the pass-thru devices as part of each guest
+physical device mapping for all the passthrough devices as part of each guest
 configuration.

 Runtime ACRN support for guests
 *******************************

-ACRN, in partition mode, supports an option to pass-thru LAPIC of the
+ACRN, in partition mode, supports an option to passthrough LAPIC of the
 physical CPUs to the guest. ACRN expects developers to specify if the
-guest needs LAPIC pass-thru using guest configuration. When guest
+guest needs LAPIC passthrough using guest configuration. When guest
 configures vLAPIC as x2APIC, and if the guest configuration has LAPIC
-pass-thru enabled, ACRN passes the LAPIC to the guest. Guest can access
+passthrough enabled, ACRN passes the LAPIC to the guest. Guest can access
 the LAPIC hardware directly without hypervisor interception. During
 runtime of the guest, this option differentiates how ACRN supports
 inter-processor interrupt handling and device interrupt handling. This
@@ -181,18 +181,18 @@ the Service VM startup in sharing mode.
 Inter-processor Interrupt (IPI) Handling
 ========================================

-Guests w/o LAPIC pass-thru
---------------------------
+Guests w/o LAPIC passthrough
+----------------------------

-For guests without LAPIC pass-thru, IPIs between guest CPUs are handled in
+For guests without LAPIC passthrough, IPIs between guest CPUs are handled in
 the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld`
 for more details.

-Guests w/ LAPIC pass-thru
--------------------------
+Guests w/ LAPIC passthrough
+---------------------------

-ACRN supports pass-thru if and only if the guest is using x2APIC mode
-for the vLAPIC. In LAPIC pass-thru mode, writes to the Interrupt Command
+ACRN supports passthrough if and only if the guest is using x2APIC mode
+for the vLAPIC. In LAPIC passthrough mode, writes to the Interrupt Command
 Register (ICR) x2APIC MSR is intercepted. Guest writes the IPI info,
 including vector, and destination APIC IDs to the ICR. Upon an IPI request
 from the guest, ACRN does a sanity check on the destination processors
@@ -204,8 +204,8 @@ corresponding to the destination processor info in the ICR.
    :align: center


-Pass-thru device support
-========================
+Passthrough device support
+==========================

 Configuration space access
 --------------------------
@@ -217,7 +217,7 @@ Address registers (BAR), offsets starting from 0x10H to 0x24H, provide
 the information about the resources (I/O and MMIO) used by the PCI
 device. ACRN virtualizes the BAR registers and for the rest of the
 config space, forwards reads and writes to the physical config space of
-pass-thru devices. Refer to the `I/O`_ section below for more details.
+passthrough devices. Refer to the `I/O`_ section below for more details.

 .. figure:: images/partition-image1.png
    :align: center

|
||||
DMA
|
||||
---
|
||||
|
||||
ACRN developers need to statically define the pass-thru devices for each
|
||||
ACRN developers need to statically define the passthrough devices for each
|
||||
guest using the guest configuration. For devices to DMA to/from guest
|
||||
memory directly, ACRN parses the list of pass-thru devices for each
|
||||
memory directly, ACRN parses the list of passthrough devices for each
|
||||
guest and creates context entries in the VT-d remapping hardware. EPT
|
||||
page tables created for the guest are used for VT-d page tables.
|
||||
|
||||
I/O
|
||||
---
|
||||
|
||||
ACRN supports I/O for pass-thru devices with two restrictions.
|
||||
ACRN supports I/O for passthrough devices with two restrictions.
|
||||
|
||||
1) Supports only MMIO. Thus, this requires developers to expose I/O BARs as
|
||||
not present in the guest configuration.
|
||||
@ -244,7 +244,7 @@ ACRN supports I/O for pass-thru devices with two restrictions.
|
||||
|
||||
As the guest PCI sub-system scans the PCI bus and assigns a Guest Physical
|
||||
Address (GPA) to the MMIO BAR, ACRN maps the GPA to the address in the
|
||||
physical BAR of the pass-thru device using EPT. The following timeline chart
|
||||
physical BAR of the passthrough device using EPT. The following timeline chart
|
||||
explains how PCI devices are assigned to guest and BARs are mapped upon
|
||||
guest initialization.
|
||||
|
||||
@@ -255,14 +255,14 @@ guest initialization.
 Interrupt Configuration
 -----------------------

-ACRN supports both legacy (INTx) and MSI interrupts for pass-thru
+ACRN supports both legacy (INTx) and MSI interrupts for passthrough
 devices.

 INTx support
 ~~~~~~~~~~~~

 ACRN expects developers to identify the interrupt line info (0x3CH) from
-the physical BAR of the pass-thru device and build an interrupt entry in
+the physical BAR of the passthrough device and build an interrupt entry in
 the mptable for the corresponding guest. As guest configures the vIOAPIC
 for the interrupt RTE, ACRN writes the info from the guest RTE into the
 physical IOAPIC RTE. Upon the guest kernel request to mask the interrupt,
@@ -275,8 +275,8 @@ MSI support
 ~~~~~~~~~~~

 Guest reads/writes to PCI configuration space for configuring MSI
-interrupts using an address. Data and control registers are pass-thru to
-the physical BAR of the pass-thru device. Refer to `Configuration
+interrupts using an address. Data and control registers are passthrough to
+the physical BAR of the passthrough device. Refer to `Configuration
 space access`_ for details on how the PCI configuration space is emulated.

 Virtual device support
@@ -291,8 +291,8 @@ writes are discarded.
 Interrupt delivery
 ==================

-Guests w/o LAPIC pass-thru
---------------------------
+Guests w/o LAPIC passthrough
+----------------------------

 In partition mode of ACRN, interrupts stay disabled after a vmexit. The
 processor does not take interrupts when it is executing in VMX root
@@ -307,10 +307,10 @@ for device interrupts.
    :align: center


-Guests w/ LAPIC pass-thru
--------------------------
+Guests w/ LAPIC passthrough
+---------------------------

-For guests with LAPIC pass-thru, ACRN does not configure vmexit upon
+For guests with LAPIC passthrough, ACRN does not configure vmexit upon
 external interrupts. There is no vmexit upon device interrupts and they are
 handled by the guest IDT.

@@ -320,15 +320,15 @@ Hypervisor IPI service
 ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual
 device interrupts (e.g. vUART to vCPUs), and others.

-Guests w/o LAPIC pass-thru
---------------------------
+Guests w/o LAPIC passthrough
+----------------------------

 Hypervisor IPIs work the same way as in sharing mode.

-Guests w/ LAPIC pass-thru
--------------------------
+Guests w/ LAPIC passthrough
+---------------------------

-Since external interrupts are pass-thru to the guest IDT, IPIs do not
+Since external interrupts are passthrough to the guest IDT, IPIs do not
 trigger vmexit. ACRN uses NMI delivery mode and the NMI exiting is
 chosen for vCPUs. At the time of NMI interrupt on the target processor,
 if the processor is in non-root mode, vmexit happens on the processor
@@ -344,8 +344,8 @@ For a guest console in partition mode, ACRN provides an option to pass
 ``vmid`` as an argument to ``vm_console``. vmid is the same as the one
 developers use in the guest configuration.

-Guests w/o LAPIC pass-thru
---------------------------
+Guests w/o LAPIC passthrough
+----------------------------

 Works the same way as sharing mode.

@@ -354,7 +354,7 @@ Hypervisor Console

 ACRN uses the TSC deadline timer to provide a timer service. The hypervisor
 console uses a timer on CPU0 to poll characters on the serial device. To
-support LAPIC pass-thru, the TSC deadline MSR is pass-thru and the local
+support LAPIC passthrough, the TSC deadline MSR is passthrough and the local
 timer interrupt is also delivered to the guest IDT. Instead of the TSC
 deadline timer, ACRN uses the VMX preemption timer to poll the serial device.

@@ -8,16 +8,16 @@ management, which includes:

 - VCPU request for virtual interrupt kick off,
 - vPIC/vIOAPIC/vLAPIC for virtual interrupt injection interfaces,
-- physical-to-virtual interrupt mapping for a pass-thru device, and
+- physical-to-virtual interrupt mapping for a passthrough device, and
 - the process of VMX interrupt/exception injection.

 A standard VM never owns any physical interrupts; all interrupts received by the
 Guest OS come from a virtual interrupt injected by vLAPIC, vIOAPIC, or
-vPIC. Such virtual interrupts are triggered either from a pass-through
+vPIC. Such virtual interrupts are triggered either from a passthrough
 device or from I/O mediators in the Service VM via hypercalls. The
 :ref:`interrupt-remapping` section discusses how the hypervisor manages
-the mapping between physical and virtual interrupts for pass-through
-devices. However, a hard RT VM with LAPIC pass-through does own the physical
+the mapping between physical and virtual interrupts for passthrough
+devices. However, a hard RT VM with LAPIC passthrough does own the physical
 maskable external interrupts. On its physical CPUs, interrupts are disabled
 in VMX root mode, while in VMX non-root mode, physical interrupts will be
 delivered to RT VM directly.
@@ -64,7 +64,7 @@ will send an IPI to kick it out, which leads to an external-interrupt
 VM-Exit. In some cases, there is no need to send IPI when making a request,
 because the CPU making the request itself is the target VCPU. For
 example, the #GP exception request always happens on the current CPU when it
-finds an invalid emulation has happened. An external interrupt for a pass-thru
+finds an invalid emulation has happened. An external interrupt for a passthrough
 device always happens on the VCPUs of the VM which this device is belonged to,
 so after it triggers an external-interrupt VM-Exit, the current CPU is the very
 target VCPU.
@@ -93,7 +93,7 @@ APIs are invoked when an interrupt source from vLAPIC needs to inject
 an interrupt, for example:

 - from LVT like LAPIC timer
-- from vIOAPIC for a pass-thru device interrupt
+- from vIOAPIC for a passthrough device interrupt
 - from an emulated device for a MSI

 These APIs will finish by making a vCPU request.
@@ -134,7 +134,7 @@ LAPIC passthrough based on vLAPIC

 LAPIC passthrough is supported based on vLAPIC, the guest OS first boots with
 vLAPIC in xAPIC mode and then switches to x2APIC mode to enable the LAPIC
-pass-through.
+passthrough.

 In case of LAPIC passthrough based on vLAPIC, the system will have the
 following characteristics.
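Since the switch to x2APIC mode is the trigger point for enabling LAPIC
passthrough, the check itself reduces to one architectural bit. A minimal
sketch, based on the Intel SDM definition of ``IA32_APIC_BASE`` (the helper
name is an assumption):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   #define MSR_IA32_APIC_BASE  0x1BU
   #define APIC_BASE_EXTD      (1ULL << 10)  /* x2APIC mode enable bit */

   /* True once the guest has switched its vLAPIC from xAPIC to x2APIC,
    * the point at which LAPIC passthrough can be turned on. */
   static bool guest_x2apic_enabled(uint64_t guest_apic_base_msr)
   {
       return (guest_apic_base_msr & APIC_BASE_EXTD) != 0U;
   }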
BIN  doc/developer-guides/hld/images/ivshmem-architecture.png (new file, 47 KiB)

doc/developer-guides/hld/ivshmem-hld.rst (new file, 183 lines)
@@ -0,0 +1,183 @@
.. _ivshmem-hld:

ACRN Shared Memory Based Inter-VM Communication
###############################################

ACRN supports inter-virtual machine communication based on a shared
memory mechanism. The ACRN device model or hypervisor emulates a virtual
PCI device (called an ``ivshmem`` device) to expose the base address and
size of this shared memory.

Inter-VM Communication Overview
*******************************

.. figure:: images/ivshmem-architecture.png
   :align: center
   :name: ivshmem-architecture-overview

   ACRN shared memory based inter-vm communication architecture

The ``ivshmem`` device is emulated in the ACRN device model (dm-land)
and its shared memory region is allocated from the Service VM's memory
space. This solution only supports communication between post-launched
VMs.

.. note:: In a future implementation, the ``ivshmem`` device could
   instead be emulated in the hypervisor (hypervisor-land) and the shared
   memory regions reserved in the hypervisor's memory space. This solution
   would work for both pre-launched and post-launched VMs.

ivshmem hv:
   The **ivshmem hv** implements register virtualization
   and shared memory mapping in the ACRN hypervisor.
   It will support a notification/interrupt mechanism in the future.

ivshmem dm:
   The **ivshmem dm** implements register virtualization
   and shared memory mapping in the ACRN Device Model (``acrn-dm``).
   It will support a notification/interrupt mechanism in the future.

ivshmem server:
   A daemon for inter-VM notification capability that will work with **ivshmem
   dm**. This is currently **not implemented**, so the inter-VM communication
   doesn't support a notification mechanism.

Ivshmem Device Introduction
***************************

The ``ivshmem`` device is a virtual standard PCI device consisting of
two Base Address Registers (BARs): BAR0 is used for emulating
interrupt-related registers, and BAR2 is used for exposing the shared
memory region. The ``ivshmem`` device doesn't support any extra
capabilities.

Configuration Space Definition

+---------------+----------+----------+
| Register      | Offset   | Value    |
+===============+==========+==========+
| Vendor ID     | 0x00     | 0x1AF4   |
+---------------+----------+----------+
| Device ID     | 0x02     | 0x1110   |
+---------------+----------+----------+
| Revision ID   | 0x08     | 0x1      |
+---------------+----------+----------+
| Class Code    | 0x09     | 0x5      |
+---------------+----------+----------+


MMIO Registers Definition

.. list-table::
   :widths: auto
   :header-rows: 1

   * - Register
     - Offset
     - Read/Write
     - Description
   * - IVSHMEM\_IRQ\_MASK\_REG
     - 0x0
     - R/W
     - Interrupt Mask register is used for legacy interrupt.
       ivshmem doesn't support interrupts, so this is reserved.
   * - IVSHMEM\_IRQ\_STA\_REG
     - 0x4
     - R/W
     - Interrupt Status register is used for legacy interrupt.
       ivshmem doesn't support interrupts, so this is reserved.
   * - IVSHMEM\_IV\_POS\_REG
     - 0x8
     - RO
     - Inter-VM Position register is used to identify the VM ID.
       Currently its value is zero.
   * - IVSHMEM\_DOORBELL\_REG
     - 0xC
     - WO
     - Doorbell register is used to trigger an interrupt to the peer VM.
       ivshmem doesn't support interrupts.
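The two tables above map directly onto a handful of constants a guest-side
driver would use. The following header-style sketch simply restates them in C;
the macro and enum names are assumed for illustration:

.. code-block:: c

   #include <stdint.h>

   /* PCI identity of the ivshmem device (configuration space table). */
   #define IVSHMEM_VENDOR_ID  0x1AF4U  /* Red Hat, Inc. */
   #define IVSHMEM_DEVICE_ID  0x1110U  /* Inter-VM shared memory */

   /* BAR0 register offsets (MMIO register table). */
   enum ivshmem_bar0_reg {
       IVSHMEM_IRQ_MASK_REG = 0x0,  /* R/W, reserved: no interrupt support */
       IVSHMEM_IRQ_STA_REG  = 0x4,  /* R/W, reserved: no interrupt support */
       IVSHMEM_IV_POS_REG   = 0x8,  /* RO, VM ID; currently reads as zero  */
       IVSHMEM_DOORBELL_REG = 0xC,  /* WO, peer notify; not implemented    */
   };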
Usage
*****

To support two post-launched VMs communicating via an ``ivshmem`` device,
add this line as an ``acrn-dm`` boot parameter::

  -s slot,ivshmem,shm_name,shm_size

where

- ``-s slot`` - Specify the virtual PCI slot number

- ``ivshmem`` - Virtual PCI device name

- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the
  same ``shm_name`` share a shared memory region.

- ``shm_size`` - Specify a shared memory size. The two communicating
  VMs must define the same size.

.. note:: This device can be used with a Real-Time VM (RTVM) as well.

Inter-VM Communication Example
******************************

The following example uses inter-vm communication between two Linux-based
post-launched VMs (VM1 and VM2).

.. note:: An ``ivshmem`` Windows driver exists and can be found
   `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_.

1. Add a new virtual PCI device for both VMs: the device type is
   ``ivshmem``, shared memory name is ``test``, and shared memory size is
   4096 bytes. Both VMs must have the same shared memory name and size:

   - VM1 Launch Script Sample

     .. code-block:: none
        :emphasize-lines: 7

        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
          -s 2,pci-gvt -G "$2" \
          -s 5,virtio-console,@stdio:stdio_port \
          -s 8,virtio-hyper_dmabuf \
          -s 3,virtio-blk,/home/clear/uos/uos1.img \
          -s 4,virtio-net,tap0 \
          -s 6,ivshmem,test,4096 \
          -s 7,virtio-rnd \
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          $vm_name

   - VM2 Launch Script Sample

     .. code-block:: none
        :emphasize-lines: 5

        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
          -s 2,pci-gvt -G "$2" \
          -s 3,virtio-blk,/home/clear/uos/uos2.img \
          -s 4,virtio-net,tap0 \
          -s 5,ivshmem,test,4096 \
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          $vm_name

2. Boot the two VMs and use ``lspci | grep "shared memory"`` to verify that
   the virtual device is ready for each VM.

   - For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
   - For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``

3. Use these commands to probe the device::

     $ sudo modprobe uio
     $ sudo modprobe uio_pci_generic
     $ sudo sh -c 'echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id'

4. Finally, a user application can get the shared memory base address from
   the ``ivshmem`` device BAR resource
   (``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
   the ``ivshmem`` device config resource
   (``/sys/class/uio/uioX/device/config``).

   The ``X`` in ``uioX`` above is a number that can be retrieved using the
   ``ls`` command:

   - For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
   - For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
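As a concluding sketch of step 4, the program below maps the 4096-byte shared
memory region created in this example and writes a string into it for the peer
VM to read. It opens BAR2 through the PCI device's sysfs ``resource2`` file;
the BDF (``00:06.0``, VM1) and the size are the values used above and are
assumptions that change with your ``-s`` slot and ``shm_size``.

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       /* BAR2 of the ivshmem device as bound in VM1 above (00:06.0). */
       const char *path = "/sys/bus/pci/devices/0000:00:06.0/resource2";
       const size_t shm_size = 4096;  /* must match the shm_size parameter */

       int fd = open(path, O_RDWR | O_SYNC);
       if (fd < 0) {
           perror("open");
           return 1;
       }

       void *shm = mmap(NULL, shm_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
       if (shm == MAP_FAILED) {
           perror("mmap");
           close(fd);
           return 1;
       }

       /* Data written here is visible to the peer VM that mapped the
        * same shm_name ("test") region. */
       strcpy((char *)shm, "hello from VM1");

       munmap(shm, shm_size);
       close(fd);
       return 0;
   }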
@@ -92,7 +92,7 @@ The components are listed as follows.
 * **Device Emulation** This component implements devices that are emulated in
   the hypervisor itself, such as the virtual programmable interrupt controllers
   including vPIC, vLAPIC and vIOAPIC.
-* **Passthru Management** This component manages devices that are passed-through
+* **Passthrough Management** This component manages devices that are passed-through
   to specific VMs.
 * **Extended Device Emulation** This component implements an I/O request
   mechanism that allow the hypervisor to forward I/O accesses from a User
@@ -581,7 +581,7 @@ The following table shows some use cases of module level configuration design:

    * - Configuration data provided by BSP
      - This module is used to virtualize LAPIC, and the configuration data is
        provided by BSP.
-       For example, some VMs use LAPIC pass-through and the other VMs use
+       For example, some VMs use LAPIC passthrough and the other VMs use
        vLAPIC.
      - If a function pointer is used, the prerequisite is
        "hv_operation_mode == OPERATIONAL".
@@ -8,9 +8,9 @@ Verified version

 - Ubuntu version: **18.04**
 - GCC version: **9.0**
-- ACRN-hypervisor tag: **v1.6.1 (acrn-2020w18.4-140000p)**
-- ACRN-Kernel (Service VM kernel): **4.19.120-108.iot-lts2018-sos**
-- RT kernel for Ubuntu User OS:
+- ACRN-hypervisor branch: **release_2.0 (acrn-2020w23.6-180000p)**
+- ACRN-Kernel (Service VM kernel): **release_2.0 (5.4.43-PKT-200203T060100Z)**
+- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
 - HW: Maxtang Intel WHL-U i7-8665U (`AX8665U-A2 <http://www.maxtangpc.com/fanlessembeddedcomputers/140.html>`_)

 Prerequisites
@@ -45,14 +45,17 @@ Connect the WHL Maxtang with the appropriate external devices.

 Install the Ubuntu User VM (RTVM) on the SATA disk
 ==================================================

-Install the Native Ubuntu OS on the SATA disk
----------------------------------------------
+Install Ubuntu on the SATA disk
+-------------------------------

 .. note:: The WHL Maxtang machine contains both an NVMe and SATA disk.
    Before you install the Ubuntu User VM on the SATA disk, either
    remove the NVMe disk or delete its blocks.

 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
-   device. Select **UEFI: SanDisk**. Note that the label depends on the brand/make of the USB stick.
+   device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
+   label depends on the brand/make of the USB stick.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.
@@ -66,10 +69,16 @@ Install the Native Ubuntu OS on the SATA disk

    b. Select ``/dev/sda`` **ATA KINGSTON RBUSNS4** as the device for the
       bootloader installation. Note that the label depends on the SATA disk used.

-#. Continue with the Ubuntu Service VM installation in ``/dev/sda``.
+#. Complete the Ubuntu installation on ``/dev/sda``.
+
+This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_)
+to turn it into a Real-Time User VM (RTVM).

 Install the Ubuntu Service VM on the NVMe disk
-----------------------------------------------
+==============================================
+
+Install Ubuntu on the NVMe disk
+-------------------------------

 .. note:: Before you install the Ubuntu Service VM on the NVMe disk, either
    remove the SATA disk or disable it in the BIOS. Disable it by going to:
@@ -77,7 +86,8 @@ Install the Ubuntu Service VM on the NVMe disk

 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
-   device. Select **UEFI: SanDisk**. Note that the label depends on the brand/make of the USB stick.
+   device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
+   label depends on the brand/make of the USB stick.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.
@@ -139,8 +149,9 @@ Build the ACRN Hypervisor on Ubuntu
         e2fslibs-dev \
         pkg-config \
         libnuma-dev \
-        liblz4-tool
+        liblz4-tool \
+        flex \
+        bison

      $ sudo pip3 install kconfiglib
@@ -150,7 +161,7 @@ Build the ACRN Hypervisor on Ubuntu

      $ cd /home/acrn/work
      $ git clone https://github.com/projectacrn/acrn-hypervisor
-     $ cd acrn-hypvervisor
+     $ cd acrn-hypervisor

 #. Switch to the v2.0 version:

@@ -163,12 +174,14 @@ Build the ACRN Hypervisor on Ubuntu

    .. code-block:: none

       $ make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0

       $ sudo make install
+      $ sudo cp build/hypervisor/acrn.bin /boot/acrn/

 Enable network sharing for the User VM
 ======================================

 In the Ubuntu Service VM, enable network sharing for the User VM:

 .. code-block:: none

    $ sudo systemctl enable systemd-networkd
@@ -183,6 +196,7 @@ Build and install the ACRN kernel

      $ cd /home/acrn/work/
      $ git clone https://github.com/projectacrn/acrn-kernel
+     $ cd acrn-kernel

 #. Switch to the 5.4 kernel:

@@ -192,17 +206,14 @@ Build and install the ACRN kernel

      $ cp kernel_config_uefi_sos .config
      $ make olddefconfig
      $ make all
-     $ sudo make modules_install

 Install the Service VM kernel and modules
 =========================================

 .. code-block:: none

-   $ sudo mkdir /boot/acrn/
-   $ sudo cp ~/sos-kernel-build/usr/lib/kernel/lts2018-sos.4.19.78-98 /boot/bzImage
-
-Copy the Service VM kernel files located at ``arch/x86/boot/bzImage`` to the ``/boot/`` folder.
+   $ sudo make modules_install
+   $ sudo cp arch/x86/boot/bzImage /boot/bzImage

 Update Grub for the Ubuntu Service VM
 =====================================
@@ -214,12 +225,9 @@ Update Grub for the Ubuntu Service VM
    a single line and not as multiple lines. Otherwise, the kernel will
    fail to boot.

-   **menuentry 'ACRN Multiboot Ubuntu Service VM' --id ubuntu-service-vm**
-
    .. code-block:: none

-      {
-
+      menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm {
         load_video
         insmod gzio
         insmod part_gpt
@@ -227,15 +235,14 @@ Update Grub for the Ubuntu Service VM

         search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
         echo 'loading ACRN...'
-        multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
+        multiboot2 /boot/acrn/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
         module2 /boot/bzImage Linux_bzImage

      }

    .. note::

-      Adjust this to your UUID and PARTUUID for the root= parameter using
-      the ``blkid`` command (or use the device node directly).
+      Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
+      (or use the device node directly) of the root partition (e.g.
+      ``/dev/nvme0n1p2``). Hint: use ``sudo blkid /dev/sda*``.

       Update the kernel name if you used a different name as the source
       for your Service VM kernel.
@@ -259,9 +266,12 @@ Update Grub for the Ubuntu Service VM
 Reboot the system
 =================

-Reboot the system. You should see the Grub menu with the new **ACRN ubuntu-service-vm** entry. Select it and proceed to booting the platform. The system will start Ubuntu and you can now log in (as before).
+Reboot the system. You should see the Grub menu with the new **ACRN
+ubuntu-service-vm** entry. Select it and proceed to booting the platform. The
+system will start Ubuntu and you can now log in (as before).

-To verify that the hypervisor is effectively running, check ``dmesg``. The typical output of a successful installation resembles the following:
+To verify that the hypervisor is effectively running, check ``dmesg``. The
+typical output of a successful installation resembles the following:

 .. code-block:: none

@@ -309,7 +319,7 @@ following steps:

 .. code-block:: none

-   $ sudo -E apt-get install iasl bison flex
+   $ sudo -E apt-get install iasl
    $ cd /home/acrn/work
    $ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
    $ tar zxvf acpica-unix-20191018.tar.gz
@@ -324,16 +334,28 @@ Follow these instructions to build the RT kernel.

 #. Clone the RT kernel source code:

+   .. note::
+      This guide assumes you are doing this within the Service VM. This
+      **acrn-kernel** repository was already cloned under ``/home/acrn/work``
+      earlier on so you can just ``cd`` into it and perform the ``git checkout``
+      directly.
+
    .. code-block:: none

       $ git clone https://github.com/projectacrn/acrn-kernel
       $ cd acrn-kernel
       $ git checkout 4.19/preempt-rt
+      $ make mrproper
+
+   .. note::
+      The ``make mrproper`` is to make sure there is no ``.config`` file
+      left from any previous build (e.g. the one for the Service VM kernel).

 #. Build the kernel:

    .. code-block:: none

       $ cp x86-64_defconfig .config
       $ make olddefconfig
       $ make targz-pkg

@@ -341,20 +363,24 @@ Follow these instructions to build the RT kernel.

    .. code-block:: none

-      $ sudo mount /dev/sda1 /mnt
-      $ sudo cp bzImage /mnt/EFI/
-      $ sudo umount /mnt
       $ sudo mount /dev/sda2 /mnt
-      $ sudo cp kernel.tar.gz -P /mnt/usr/lib/modules/ && cd /mnt/usr/lib/modules/
-      $ sudo tar zxvf kernel.tar.gz
-      $ sudo cd ~ && umount /mnt && sync
+      $ sudo cp arch/x86/boot/bzImage /mnt/boot/
+      $ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/lib/modules/
+      $ sudo cp -r /mnt/lib/modules/lib/modules/4.19.72-rt25 /mnt/lib/modules/
+      $ sudo cd ~ && sudo umount /mnt && sync

 Launch the RTVM
 ***************

+Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT
+kernel that was just built and installed on the rootfs. Follow these steps to
+perform this operation.
+
 Update the Grub file
 ====================

 #. Reboot into the Ubuntu User VM located on the SATA drive and log on.

 #. Update the ``/etc/grub.d/40_custom`` file as shown below.

@@ -362,29 +388,24 @@ Update the Grub file
    .. note::
       a single line and not as multiple lines. Otherwise, the kernel will
      fail to boot.

-   **menuentry 'ACRN Ubuntu User VM' --id ubuntu-user-vm**
-
    .. code-block:: none

-      {
-
+      menuentry "ACRN Ubuntu User VM" --id ubuntu-user-vm {
         load_video
         insmod gzio
         insmod part_gpt
         insmod ext2
         set root=hd0,gpt2

         search --no-floppy --fs-uuid --set b2ae4879-c0b6-4144-9d28-d916b578f2eb
         echo 'loading ACRN...'

         linux /boot/bzImage root=PARTUUID=<UUID of rootfs partition> rw rootwait nohpet console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll irqaffinity=0

      }

    .. note::

-      Update this to use your UUID and PARTUUID for the root= parameter (or
-      use the device node directly).
+      Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
+      (or use the device node directly) of the root partition (e.g. ``/dev/sda2``).
+      Hint: use ``sudo blkid /dev/sda*``.

       Update the kernel name if you used a different name as the source
       for your Service VM kernel.
@@ -405,6 +426,15 @@ Update the Grub file

       $ sudo update-grub

 #. Reboot into the Ubuntu Service VM

+Launch the RTVM
+===============
+
+.. code-block:: none
+
+   $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
 Recommended BIOS settings for RTVM
 ----------------------------------

@@ -464,11 +494,16 @@ In our recommended configuration, two cores are allocated to the RTVM:
 core 0 for housekeeping and core 1 for RT tasks. In order to achieve
 this, follow the below steps to allocate all housekeeping tasks to core 0:

+#. Prepare the RTVM launch script
+
+   Follow the `Passthrough a hard disk to RTVM`_ section to make adjustments to
+   the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` launch script.
+
 #. Launch the RTVM:

    .. code-block:: none

-      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh

 #. Log in to the RTVM as root and run the script as below:

@@ -505,13 +540,16 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
 Run cyclictest
 --------------

-#. Refer to the :ref:`troubleshooting section <enabling the network on the RTVM>` below that discusses how to enable the network connection for RTVM.
+#. Refer to the :ref:`troubleshooting section <enabling the network on the RTVM>`
+   below that discusses how to enable the network connection for RTVM.

 #. Launch the RTVM and log in as root.

-#. Install the ``rt-tests`` tool::
+#. Install the ``rt-tests`` tool:

-      $ sudo apt install rt-tests
+   .. code-block:: none
+
+      # apt install rt-tests

 #. Use the following command to start cyclictest:

@@ -634,4 +672,4 @@ Passthrough a hard disk to RTVM

 .. code-block:: none

-   # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+   $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
@@ -122,9 +122,9 @@ Glossary of Terms
    OSPM
       Operating System Power Management

-   Pass-Through Device
+   Passthrough Device
       Physical devices (typically PCI) exclusively assigned to a guest. In
-      the Project ACRN architecture, pass-through devices are owned by the
+      the Project ACRN architecture, passthrough devices are owned by the
       foreground OS.

    Partition Mode
BIN  doc/introduction/images/ACRN-Hybrid.png (new file, 113 KiB)
BIN  doc/introduction/images/ACRN-Industry.png (new file, 163 KiB)
BIN  doc/introduction/images/ACRN-Logical-Partition.png (new file, 95 KiB)
BIN  three modified images (40 KiB -> 106 KiB, 75 KiB -> 175 KiB, 47 KiB -> 163 KiB)
@@ -26,17 +26,18 @@ user VM sharing optimizations for IoT and embedded devices.
 ACRN Open Source Roadmap 2020
 *****************************

-Stay informed on what's ahead for ACRN in 2020 by visiting the `ACRN 2020 Roadmap <https://projectacrn.org/wp-content/uploads/sites/59/2020/03/ACRN-Roadmap-External-2020.pdf>`_.
+Stay informed on what's ahead for ACRN in 2020 by visiting the
+`ACRN 2020 Roadmap <https://projectacrn.org/wp-content/uploads/sites/59/2020/03/ACRN-Roadmap-External-2020.pdf>`_.

 For up-to-date happenings, visit the `ACRN blog <https://projectacrn.org/blog/>`_.

 ACRN High-Level Architecture
 ****************************

-The ACRN architecture has evolved since it's initial v0.1 release in
+The ACRN architecture has evolved since its initial v0.1 release in
 July 2018. Beginning with the v1.1 release, the ACRN architecture has
 flexibility to support partition mode, sharing mode, and a mixed hybrid
-mode.  As shown in :numref:`V2-hl-arch`, hardware resources can be
+mode. As shown in :numref:`V2-hl-arch`, hardware resources can be
 partitioned into two parts:

 .. figure:: images/ACRN-V2-high-level-arch.png
@@ -65,10 +66,10 @@ VM. The service VM can access hardware resources directly by running
 native drivers and it provides device sharing services to the user VMs
 through the Device Model. Currently, the service VM is based on Linux,
 but it can also use other operating systems as long as the ACRN Device
-Model is ported into it. A user VM can be Clear Linux*, Android*,
+Model is ported into it. A user VM can be Clear Linux*, Ubuntu*, Android*,
 Windows* or VxWorks*. There is one special user VM, called a
 post-launched Real-Time VM (RTVM), designed to run a hard real-time OS,
-such as VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
+such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
 can be used for soft programmable logic controller (PLC), inter-process
 communication (IPC), or Robotics applications.
@@ -94,7 +95,7 @@ for building Automotive Software Defined Cockpit (SDC) and In-Vehicle
 Experience (IVE) solutions.

 .. figure:: images/ACRN-V2-SDC-scenario.png
-   :width: 400px
+   :width: 600px
    :align: center
    :name: V2-SDC-scenario

||||
@ -103,10 +104,10 @@ Experience (IVE) solutions.
|
||||
As a reference implementation, ACRN provides the basis for embedded
|
||||
hypervisor vendors to build solutions with a reference I/O mediation
|
||||
solution. In this scenario, an automotive SDC system consists of the
|
||||
Instrument Cluster (IC) system in VM1, the In-Vehicle Infotainment (IVI)
|
||||
system in VM2, and one or more Rear Seat Entertainment (RSE) systems in
|
||||
VM3. Each system is running as an isolated Virtual Machine (VM) for
|
||||
overall system safety considerations.
|
||||
Instrument Cluster (IC) system running in the Service VM and the In-Vehicle
|
||||
Infotainment (IVI) system is running the post-launched User VM. Additionally,
|
||||
one could modify the SDC scenario to add more post-launched User VMs that can
|
||||
host Rear Seat Entertainment (RSE) systems (not shown on the picture).
|
||||
|
||||
An **Instrument Cluster (IC)** system is used to show the driver operational
|
||||
information about the vehicle, such as:
|
||||
@@ -140,15 +141,8 @@ reference stack to run their own VMs, together with IC, IVI, and RSE
 VMs. The Service VM runs in the background and the User VMs run as
 Post-Launched VMs.

-.. figure:: images/ACRN-V2-SDC-Usage-Architecture-Overview.png
-   :width: 700px
-   :align: center
-   :name: V2-SDC-usage-arch
-
-   ACRN SDC usage architecture overview
-
 A block diagram of ACRN's SDC usage scenario is shown in
-:numref:`V2-SDC-usage-arch` above.
+:numref:`V2-SDC-scenario` above.

 - The ACRN hypervisor sits right on top of the bootloader for fast booting
   capabilities.
@@ -156,24 +150,24 @@ A block diagram of ACRN's SDC usage scenario is shown in
   non-safety-critical domains are able to coexist on one platform.
 - Rich I/O mediators allows sharing of various I/O devices across VMs,
   delivering a comprehensive user experience.
-- Multiple operating systems are supported by one SoC through efficient virtualization.
+- Multiple operating systems are supported by one SoC through efficient
+  virtualization.

 Industrial Workload Consolidation
 =================================

 .. figure:: images/ACRN-V2-industrial-scenario.png
-   :width: 400px
+   :width: 600px
    :align: center
    :name: V2-industrial-scenario

    ACRN Industrial Workload Consolidation scenario

 Supporting Workload consolidation for industrial applications is even
-more challenging. The ACRN hypervisor needs to run both safety-critical
-and non-safety workloads with no interference, increase security
-functions that safeguard the system, run hard real-time sensitive
-workloads together with general computing workloads, and conduct data
-analytics for timely actions and predictive maintenance.
+more challenging. The ACRN hypervisor needs to run different workloads with no
+interference, increase security functions that safeguard the system, run hard
+real-time sensitive workloads together with general computing workloads, and
+conduct data analytics for timely actions and predictive maintenance.
|
||||
Virtualization is especially important in industrial environments
|
||||
because of device and application longevity. Virtualization enables
|
||||
@ -181,37 +175,34 @@ factories to modernize their control system hardware by using VMs to run
older control systems and operating systems far beyond their intended
retirement dates.

As shown in :numref:`V2-industry-usage-arch`, the Safety VM has
functional safety applications running inside it to monitor the overall
system health status. This Safety VM is partitioned from other VMs and
is pre-launched before the Service VM. Service VM provides devices
sharing capability across user VMs and can launch additional user VMs.
In this usage example, VM2 provides Human Machine Interface (HMI)
capability, and VM3 is optimized to support industrial workload
real-time OS needs, such as VxWorks* or RT-Linux*.
As shown in :numref:`V2-industrial-scenario`, the Service VM can start a number
of post-launched User VMs and can provide device sharing capabilities to these.
In total, up to 7 post-launched User VMs can be started:

.. figure:: images/ACRN-V2-Industrial-Usage-Architecture-Overview.png
   :width: 700px
   :align: center
   :name: V2-industry-usage-arch
- 5 regular User VMs,
- One `Kata Containers <https://katacontainers.io>`_ User VM (see
  :ref:`run-kata-containers` for more details), and
- One Real-Time VM (RTVM).

   ACRN Industrial Usage Architecture Overview
In this example, one post-launched User VM provides Human Machine Interface
(HMI) capability, another provides Artificial Intelligence (AI) capability, a
compute function runs in the Kata Container, and the RTVM runs the soft
Programmable Logic Controller (PLC) that requires hard real-time
characteristics.

:numref:`V2-industry-usage-arch` shows ACRN's block diagram for an
:numref:`V2-industrial-scenario` shows ACRN's block diagram for an
Industrial usage scenario:
- ACRN boots from the SoC platform, and supports firmware such as the
|
||||
UEFI BIOS.
|
||||
- The ACRN hypervisor can create four VMs to run four different OSes:
|
||||
- The ACRN hypervisor can create VMs that run different OSes:
|
||||
|
||||
- A safety VM such as Zephyr*,
|
||||
- a service VM such as Clear Linux*,
|
||||
- a Human Machine Interface (HMI) application OS such as Windows*, and
|
||||
- a real-time control OS such as VxWorks or RT-Linux*.
|
||||
- a Service VM such as Ubuntu*,
|
||||
- a Human Machine Interface (HMI) application OS such as Windows*,
|
||||
- an Artificial Intelligence (AI) application on Linux*,
|
||||
- a Kata Container application, and
|
||||
- a real-time control OS such as Zephyr*, VxWorks* or RT-Linux*.
|
||||
|
||||
- The Safety VM (VM0) is launched by ACRN before any other VM. The
|
||||
functional safety code inside VM0 checks the overall system health
|
||||
status.
|
||||
- The Service VM, provides device sharing functionalities, such as
|
||||
disk and network mediation, to other virtual machines.
|
||||
It can also run an orchestration agent allowing User VM orchestration
|
||||
@ -227,8 +218,7 @@ Best Known Configurations

The ACRN Github codebase defines four best known configurations (BKC)
targeting SDC and Industry usage scenarios. Developers can start with
one of these pre-defined configurations and customize it to their own
application scenario needs. (These configurations assume there is at
most one Safety VM and it is pre-launched.)
application scenario needs.

.. list-table:: Scenario-based Best Known Configurations
   :header-rows: 1
@ -240,33 +230,26 @@ most one Safety VM and it is pre-launched.)
     - VM2
     - VM3

   * - Software Defined Cockpit 1
   * - Software Defined Cockpit
     - SDC
     - Service VM
     - Post-launched VM (Android)
     -
     - Post-launched VM
     - One Kata Containers VM
     -

   * - Software Defined Cockpit 2
     - SDC
     - Service VM
     - Post-launched VM (Android)
     - Post-launched VM (Android)
     - Post-launched VM (Android)
   * - Industry Usage Config 1
   * - Industry Usage Config
     - Industry
     - Service VM
     - Post-launched VM (HMI)
     - Post-launched VM (Hard RTVM)
     - Post-launched VM (Soft RTVM)
     - Up to 5 Post-launched VMs
     - One Kata Containers VM
     - Post-launched RTVM (Soft or Hard real-time)

   * - Industry Usage Config 2
     - Industry
   * - Hybrid Usage Config
     - Hybrid
     - Pre-launched VM (Safety VM)
     - Service VM
     - Post-launched VM (HMI)
     - Post-launched VM (Hard/Soft RTVM)
     - Post-launched VM
     -

   * - Logical Partition
     - Logical Partition
@ -275,73 +258,61 @@ most one Safety VM and it is pre-launched.)
     -
     -
Here are block diagrams for each of these five scenarios.
Here are block diagrams for each of these four scenarios.

SDC scenario with two VMs
=========================
SDC scenario
============

In this SDC scenario, an Instrument Cluster (IC) system runs with the
Service VM and an In-Vehicle Infotainment (IVI) system runs in a user
VM.

.. figure:: images/SDC-2VM.png
.. figure:: images/ACRN-V2-SDC-scenario.png
   :width: 600px
   :align: center
   :name: SDC-2VM
   :name: ACRN-SDC

   SDC scenario with two VMs
SDC scenario with four VMs
==========================

In this SDC scenario, an Instrument Cluster (IC) system runs with the
Service VM. An In-Vehicle Infotainment (IVI) is User VM1 and two Rear
Seat Entertainment (RSE) systems run in User VM2 and User VM3.

.. figure:: images/SDC-4VM.png
   :width: 600px
   :align: center
   :name: SDC-4VM

   SDC scenario with four VMs
Industry scenario without a safety VM
======================================
Industry scenario
=================

In this Industry scenario, the Service VM provides device sharing capability for
a Windows-based HMI User VM. The other two post-launched User VMs
support either hard or soft Real-time OS applications.
a Windows-based HMI User VM. One post-launched User VM can run a Kata Container
application. Another User VM supports either hard or soft Real-time OS
applications. Up to five additional post-launched User VMs support functions
such as Human Machine Interface (HMI), Artificial Intelligence (AI), Computer
Vision, etc.

.. figure:: images/Industry-wo-safetyVM.png
.. figure:: images/ACRN-Industry.png
   :width: 600px
   :align: center
   :name: Industry-wo-safety
   :name: Industry

   Industry scenario without a safety VM
   Industry scenario
Industry scenario with a safety VM
==================================
Hybrid scenario
===============

In this Industry scenario, a Pre-launched VM is included as a Safety VM.
The Service VM provides device sharing capability for the HMI User VM. The
remaining User VM can support either a hard or soft Real-time OS
application.
In this Hybrid scenario, a pre-launched Safety/RTVM is started by the
hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
non-real-time tasks.

.. figure:: images/Industry-w-safetyVM.png
.. figure:: images/ACRN-Hybrid.png
   :width: 600px
   :align: center
   :name: Industry-w-safety
   :name: ACRN-Hybrid

   Industry scenario with a safety VM
   Hybrid scenario
Logical Partitioning scenario
=============================
Logical Partition scenario
==========================

This scenario is a simplified VM configuration for VM logical
partitioning: one is the Safety VM and the other is a Linux-based User
VM.

.. figure:: images/Logical-partition.png
.. figure:: images/ACRN-Logical-Partition.png
   :width: 600px
   :align: center
   :name: logical-partition
@ -390,6 +361,7 @@ Boot Sequence

.. _systemd-boot: https://www.freedesktop.org/software/systemd/man/systemd-boot.html
.. _grub: https://www.gnu.org/software/grub/manual/grub/
.. _Slim Bootloader: https://www.intel.com/content/www/us/en/design/products-and-solutions/technologies/slim-bootloader/overview.html

ACRN supports two kinds of boots: **De-privilege boot mode** and **Direct
boot mode**.
@ -427,23 +399,10 @@ bootloader used by the Operating System (OS).

* In the case of Clear Linux, the EFI bootloader is `systemd-boot`_ and the Linux
  kernel command-line parameters are defined in the ``.conf`` files.
* Another popular EFI bootloader used by Linux distributions is `grub`_.
  Distributions like Ubuntu/Debian, Fedora/CentOS use `grub`_.

.. note::

   The `Slim Bootloader
   <https://www.intel.com/content/www/us/en/design/products-and-solutions/technologies/slim-bootloader/overview.html>`__
   is an alternative boot firmware that can be used to boot ACRN. The `Boot
   ACRN Hypervisor
   <https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
   provides more information on how to use SBL with ACRN.

.. note::

   A virtual `Slim Bootloader
   <https://www.intel.com/content/www/us/en/design/products-and-solutions/technologies/slim-bootloader/overview.html>`__,
   called ``vSBL``, can also be used to start User VMs. The
   A virtual `Slim Bootloader`_, called ``vSBL``, can also be used to start User VMs. The
   :ref:`acrn-dm_parameters` provides more information on how to boot a
   User VM using ``vSBL``. Note that in this case, the kernel command-line
   parameters are defined by the combination of the ``cmdline.txt`` passed
@ -453,6 +412,12 @@ bootloader used by the Operating System (OS).
Direct boot mode
================

The ACRN hypervisor can be booted from a third-party bootloader
directly, called **Direct boot mode**. A popular bootloader is `grub`_ and is
also widely used by Linux distributions.

:ref:`using_grub` has an introduction on how to boot the ACRN hypervisor with GRUB.

In :numref:`boot-flow-2`, we show the **Direct boot mode** sequence:

.. graphviz:: images/boot-flow-2.dot
@ -471,8 +436,21 @@ The Boot process proceeds as follows:
   the ACRN Device Model and Virtual bootloader through ``dm-verity``.
#. The virtual bootloader starts the User-side verified boot process.

In this boot mode, the boot options are defined via the ``VM{x}_CONFIG_OS_BOOTARGS``
macro in the source code (replace ``{x}`` with the VM number).
In this boot mode, the boot options of the pre-launched VM and the Service VM
are defined in the ``bootargs`` variable of the struct
``vm_configs[vm id].os_config`` in the source code
``hypervisor/$(SCENARIO)/vm_configurations.c`` by default.
These boot options can be overridden by the GRUB menu. See :ref:`using_grub`
for details. The boot options of a post-launched VM are not covered by the
hypervisor source code or the GRUB menu; they are defined in the guest image
file or specified by launch scripts.
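For example, with GRUB in **Direct boot mode**, a menu entry loads the
hypervisor image and the Service VM kernel as multiboot modules; kernel
parameters appended after the module tag override the default ``bootargs``.
This is a minimal sketch only; the file names, partition, and parameters are
illustrative and depend on your installation:

.. code-block:: none

   menuentry 'ACRN hypervisor' {
      insmod part_gpt
      insmod ext2
      # Load the ACRN hypervisor image using the multiboot2 protocol
      multiboot2 /boot/acrn.bin
      # Load the Service VM kernel; parameters appended here override
      # the default bootargs from vm_configurations.c
      module2 /boot/bzImage Linux_bzImage root=/dev/sda3 rw console=ttyS0
   }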
.. note::

   `Slim Bootloader`_ is an alternative boot firmware that can be used to
   boot ACRN in **Direct boot mode**. The `Boot ACRN Hypervisor
   <https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
   provides more information on how to use SBL with ACRN.
ACRN Hypervisor Architecture
****************************
@ -481,20 +459,28 @@ ACRN hypervisor is a Type 1 hypervisor, running directly on bare-metal
hardware. It implements a hybrid VMM architecture, using a privileged
Service VM that manages the I/O devices and
provides I/O mediation. Multiple User VMs are supported, with each of
them running Linux\* or Android\* OS as the User VM.
them running different OSs.

Running systems in separate VMs provides isolation between VMs and
their applications, reducing potential attack surfaces and minimizing
safety interference. However, running the systems in separate VMs may
introduce additional latency for applications.

:numref:`ACRN-architecture` shows the ACRN hypervisor architecture, with
the automotive example IC VM and service VM together. The Service VM
owns most of the devices including the platform devices, and
provides I/O mediation. Some of the PCIe devices may be passed through
to the User OSes via the VM configuration. The Service VM runs the IC
applications and hypervisor-specific applications together, such as the
ACRN device model, and ACRN VM manager.
:numref:`V2-hl-arch` shows the ACRN hypervisor architecture, with
all types of Virtual Machines (VMs) represented:

- Pre-launched User VM (Safety/RTVM)
- Pre-launched Service VM
- Post-launched User VM
- Kata Container VM (post-launched)
- Real-Time VM (RTVM)

The Service VM owns most of the devices including the platform devices, and
provides I/O mediation. The notable exceptions are the devices assigned to the
pre-launched User VM. Some of the PCIe devices may be passed through
to the post-launched User OSes via the VM configuration. The Service VM runs
hypervisor-specific applications, such as the ACRN device model and the
ACRN VM manager.

ACRN hypervisor also runs the ACRN VM manager to collect running
information of the User OS, and controls the User VM such as starting,
@ -599,7 +585,7 @@ hypervisor, or in user space within an independent VM, overhead exists.
This overhead is worthwhile as long as the devices need to be shared by
multiple guest operating systems. If sharing is not necessary, then
there are more efficient methods for accessing devices, for example
"pass-through".
"passthrough".

ACRN device model is a placeholder of the User VM. It allocates memory for
the User OS, configures and initializes the devices used by the User VM,
@ -643,16 +629,15 @@ ACRN Device model incorporates these three aspects:
   notifying it that the IOREQ has completed.

.. note::
   Userland: dm as ACRN Device Model.

   Kernel space: VBS-K, MPT Service, VHM itself
   * Userland: dm as ACRN Device Model.
   * Kernel space: VBS-K, MPT Service, VHM itself
.. _pass-through:

Device pass through
*******************
Device passthrough
******************

At the highest level, device pass-through is about providing isolation
At the highest level, device passthrough is about providing isolation
of a device to a given guest operating system so that the device can be
used exclusively by that guest.

@ -676,8 +661,8 @@ Finally, there may be specialized PCI devices that only one guest domain
uses, so they should be passed through to the guest. Individual USB
ports could be isolated to a given domain too, or a serial port (which
is itself not shareable) could be isolated to a particular guest. In
ACRN hypervisor, we support USB controller Pass through only and we
don't support pass through for a legacy serial port, (for example
ACRN hypervisor, we support USB controller passthrough only; we
don't support passthrough for a legacy serial port (for example,
0x3f8).
@ -685,7 +670,7 @@ Hardware support for device passthrough
=======================================

Intel's current processor architectures provide support for device
pass-through with VT-d. VT-d maps guest physical address to machine
passthrough with VT-d. VT-d maps guest physical address to machine
physical address, so devices can use guest physical addresses directly.
When this mapping occurs, the hardware takes care of access (and
protection), and the guest operating system can use the device as if it
@ -708,9 +693,9 @@ Hypervisor support for device passthrough

By using the latest virtualization-enhanced processor architectures,
hypervisors and virtualization solutions can support device
pass-through (using VT-d), including Xen, KVM, and ACRN hypervisor.
passthrough (using VT-d), including Xen, KVM, and ACRN hypervisor.
In most cases, the guest operating system (User
OS) must be compiled to support pass-through, by using
OS) must be compiled to support passthrough, by using
kernel build-time options. Hiding the devices from the host VM may also
be required (as is done with Xen using pciback). Some restrictions apply
in PCI, for example, PCI devices behind a PCIe-to-PCI bridge must be
doc/release_notes/release_notes_2.0.rst
@ -0,0 +1,329 @@
.. _release_notes_2.0:

ACRN v2.0 (June 2020)
#####################

We are pleased to announce the second major release of the Project ACRN
hypervisor.

ACRN v2.0 offers new and improved scenario definitions, with a focus on
industrial IoT and edge device use cases. ACRN supports these uses with
their demanding and varying workloads including Functional Safety
certification, real-time characteristics, device and CPU sharing, and
general computing power needs, while honoring required isolation and
resource partitioning. A wide range of User VM OSs (such as Windows 10,
Ubuntu, Android, and VxWorks) can run on ACRN, running different
workloads and applications on the same hardware platform.

A new hybrid-mode architecture adds flexibility to simultaneously
support both traditional resource sharing among VMs and complete VM
resource partitioning required for functional safety requirements.

Workload management and orchestration, rather standard and mature in
cloud environments, are enabled now in ACRN, allowing open source
orchestrators such as OpenStack to manage ACRN virtual machines. Kata
Containers, a secure container runtime, has also been enabled on ACRN
and can be orchestrated via Docker or Kubernetes.

Rounding things out, we've also made significant improvements in
configuration tools, added many new tutorial documents, and enabled ACRN
on the QEMU machine emulator making it easier to try out and develop with
ACRN.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open source platform. Check out
:ref:`introduction` for more information. All project ACRN source code
is maintained in the https://github.com/projectacrn/acrn-hypervisor
repository and includes folders for the ACRN hypervisor, the ACRN device
model, tools, and documentation. You can either download this source
code as a zip or tar.gz file (see the `ACRN v2.0 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.0>`_)
or use Git clone and checkout commands::

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v2.0

The project's online technical documentation is also tagged to
correspond with a specific release: generated v2.0 documents can be
found at https://projectacrn.github.io/2.0/. Documentation for the
latest (master) branch is found at
https://projectacrn.github.io/latest/.

Follow the instructions in the :ref:`rt_industry_ubuntu_setup` to get
started with ACRN.

We recommend that all developers upgrade to ACRN release v2.0.
Version 2.0 Key Features (comparing with v1.0)
**********************************************

.. contents::
   :local:
   :backlinks: entry

ACRN Architecture Upgrade to Support Hybrid Mode
================================================

The ACRN architecture has evolved after its initial major 1.0 release in
May 2019. The new hybrid mode architecture has the flexibility to
support both partition mode and sharing mode simultaneously, as shown in
this architecture diagram:

.. figure:: ../introduction/images/ACRN-V2-high-level-arch.png
   :width: 700px
   :align: center

   ACRN V2 high-level architecture
On the left, resources are partitioned and used by a pre-launched User
Virtual Machine (VM), started by the hypervisor before the Service VM
has been launched. It runs independently of other virtual machines, and
can own its own dedicated hardware resources, such as a CPU core,
memory, and I/O devices. Because other VMs may not even be aware of its
existence, this pre-launched VM can be used as a safety VM where, for
example, platform hardware failure detection code can run and take
emergency actions if a system critical failure occurs.

On the right, the remaining hardware resources are shared by the Service
VM and User VMs. The Service VM can access hardware resources directly
(by running native drivers) and offer device sharing services to other
User VMs by the Device Model.

Also on the right, a special post-launched real-time VM (RTVM) can run a
hard real-time OS, such as VxWorks*, Zephyr*, or Xenomai*. Because of
its real-time capabilities, the RTVM can be used for soft PLC, IPC, or
Robotics applications.
New Hardware Platform Support
=============================

This release adds support for 8th Gen Intel® Core™ Processors (code
name: Whiskey Lake). (See :ref:`hardware` for platform details.)

Pre-launched Safety VM Support
==============================

ACRN supports a pre-launched partitioned safety VM, isolated from the
Service VM and other post-launched VMs by using partitioned HW resources.
For example, in the hybrid mode, a real-time Zephyr RTOS VM can be
*pre-launched* by the hypervisor even before the Service VM is launched,
and with its own dedicated resources to achieve a high level of
isolation. This is designed to meet the needs of a Functional Safety OS.

Post-launched VM support via OVMF
=================================

ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot
loader for the Service VM to launch post-launched VMs such as Windows,
Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported.

Post-launched Real-Time VM Support
==================================

ACRN supports a post-launched RTVM, which also uses partitioned hardware
resources to ensure adequate real-time performance, as required for
industrial use cases.
Real-Time VM Performance Optimizations
======================================

ACRN 2.0 improves RTVM performance with these optimizations:

* **Eliminate use of VM-Exit and its performance overhead:**
  Use Local APIC (LAPIC) passthrough, Virtio Polling Mode Drivers (PMD),
  and NMI interrupt notification technologies.

* **Isolate the RTVM from the Service VM:**
  The ACRN hypervisor uses RDT (Resource Director Technology)
  allocation features such as CAT (Cache Allocation Technology), CDP (Code
  Data Prioritization), and MBA (Memory Bandwidth Allocation) to provide
  better isolation and prioritize critical resources, such as cache and
  memory bandwidth, for RTVMs over other VMs.

* **PCI Configuration space access emulation for passthrough devices in the hypervisor:**
  The hypervisor provides the necessary emulation (such as config space)
  of the passthrough PCI device during runtime for a DM-launched VM from
  the Service VM.

* **More hypervisor-emulated devices:**
  This includes vPCI and vPCI bridge emulation, and vUART.

* **ART (Always Running Timer Virtualization):**
  Ensure time is synchronized between Ptdev and vART.
CPU Sharing Support
===================

ACRN supports CPU sharing to fully utilize the physical CPU resource
across more virtual machines. ACRN enables a borrowed virtual time CPU
scheduler in the hypervisor to make sure the physical CPU can be shared
between VMs, and supports yielding an idle vCPU when it is running a
'HLT' or 'PAUSE' instruction.
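The scheduler is selected through the scenario configuration. As a minimal
sketch, assuming the scenario XML elements described later in this update
(``SCHEDULER`` is a child of ``FEATURES``), enabling the borrowed virtual time
scheduler looks like this:

.. code-block:: none

   <FEATURES>
       <!-- Select the borrowed virtual time scheduler for CPU sharing -->
       <SCHEDULER>SCHED_BVT</SCHEDULER>
   </FEATURES>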
Large selection of OSs for User VMs
===================================

ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*,
Real-Time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and
performance.

GRUB bootloader
===============

The ACRN hypervisor can boot from the popular GRUB bootloader using
either the multiboot or multiboot2 protocol (the latter adding UEFI
support). GRUB provides developers with booting flexibility.
SR-IOV Support
==============

SR-IOV (Single Root Input/Output Virtualization) can isolate PCIe
devices to offer performance similar to bare-metal levels. For a
network adapter, for example, this enables network traffic to bypass the
software switch layer in the virtualization stack and achieve network
performance that is nearly the same as in a nonvirtualized environment.
In this example, the ACRN Service VM supports an SR-IOV ethernet device
through the Physical Function (PF) driver, and ensures that the SR-IOV
Virtual Function (VF) device can be passed through to a post-launched VM.
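For illustration, VFs are typically created on Linux through the standard
sysfs interface; this is a generic sketch (the BDF ``0000:03:00.0`` and the VF
count are placeholders), not an ACRN-specific command:

.. code-block:: none

   # Create two Virtual Functions on the PF at BDF 0000:03:00.0
   echo 2 > /sys/bus/pci/devices/0000:03:00.0/sriov_numvfs
   # Each VF now appears as its own PCI device
   lspci | grep "Virtual Function"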
Graphics passthrough support
============================

ACRN supports GPU passthrough to a dedicated User VM based on Intel GVT-d
technology, which assigns the entire GPU to that VM,
effectively providing near-native graphics performance in the VM.

Shared memory based Inter-VM communication
==========================================

ACRN supports Inter-VM communication based on shared memory for
post-launched VMs communicating via a Userspace I/O (UIO) interface.
Configuration Tool Support
==========================

A new offline configuration tool helps developers deploy ACRN to
different hardware systems with their own customizations.

Kata Containers Support
=======================

ACRN can launch a Kata container, a secure container runtime, as a User VM.

VM orchestration
================

Libvirt is an open-source API, daemon, and management tool as a layer to
decouple orchestrators and hypervisors. By adding an "ACRN driver", ACRN
supports libvirt-based tools and orchestrators to configure a User VM's CPU
configuration during VM creation.
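For illustration only, here is a fragment of a libvirt domain definition
requesting two vCPUs for a User VM; the ``acrn`` domain type and the element
values are assumptions based on the ACRN libvirt driver, not verified syntax:

.. code-block:: none

   <domain type='acrn'>
     <name>demo-uvm</name>
     <!-- Request two vCPUs for this User VM -->
     <vcpu>2</vcpu>
   </domain>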
Document updates
================

Many new and updated `reference documents <https://projectacrn.github.io>`_ are available, including:

* General

  * :ref:`introduction`
  * :ref:`hardware`
  * :ref:`asa`

* Getting Started

  * :ref:`rt_industry_ubuntu_setup`
  * :ref:`using_partition_mode_on_nuc`

* Configuration and Tools

  * :ref:`acrn_configuration_tool`

* Service VM Tutorials

  * :ref:`running_deb_as_serv_vm`

* User VM Tutorials

  .. rst-class:: rst-columns2

  * :ref:`using_zephyr_as_uos`
  * :ref:`running_deb_as_user_vm`
  * :ref:`using_celadon_as_uos`
  * :ref:`using_windows_as_uos`
  * :ref:`using_vxworks_as_uos`
  * :ref:`using_xenomai_as_uos`

* Enable ACRN Features

  .. rst-class:: rst-columns2

  * :ref:`open_vswitch`
  * :ref:`rdt_configuration`
  * :ref:`sriov_virtualization`
  * :ref:`cpu_sharing`
  * :ref:`run-kata-containers`
  * :ref:`how-to-enable-secure-boot-for-windows`
  * :ref:`enable-s5`
  * :ref:`vuart_config`
  * :ref:`sgx_virt`
  * :ref:`acrn-dm_qos`
  * :ref:`setup_openstack_libvirt`
  * :ref:`acrn_on_qemu`
  * :ref:`gpu-passthrough`
  * :ref:`using_grub`

* Debug

  * :ref:`rt_performance_tuning`
  * :ref:`rt_perf_tips_rtvm`

* High-Level Design Guides

  * :ref:`virtio-i2c`
  * :ref:`split-device-model`
  * :ref:`hv-device-passthrough`
  * :ref:`vtd-posted-interrupt`
Fixed Issues Details
********************

- :acrn-issue:`3715` - Add support for multiple RDT resource allocation and fix L3 CAT config overwrite by L2
- :acrn-issue:`3770` - Warning when building the ACRN hypervisor \`SDC (defined at arch/x86/Kconfig:7) set more than once`
- :acrn-issue:`3773` - suspicious logic in vhost.c
- :acrn-issue:`3918` - Change active_hp_work position for code cleaning and add a module parameter to disable hp work.
- :acrn-issue:`3939` - zero-copy non-functional with vhost
- :acrn-issue:`3946` - Cannot boot VxWorks as UOS on KabyLake
- :acrn-issue:`4017` - hv: rename vuart operations
- :acrn-issue:`4046` - Error info popoup when run 3DMARK11 on Waag
- :acrn-issue:`4072` - hv: add printf "not support the value of vuart index parameter" in function vuart_register_io_handler
- :acrn-issue:`4191` - acrnboot: the end address of _DYNAME region is not calculated correct
- :acrn-issue:`4250` - acrnboot: parse hv cmdline incorrectly when containing any trailing white-spaces
- :acrn-issue:`4283` - devicemodel: refactor CMD_OPT_LAPIC_PT case branch
- :acrn-issue:`4314` - RTVM boot up fail due to init_hugetlb failed during S5 testing
- :acrn-issue:`4365` - Enable GOP driver work in GVT-d scenario
- :acrn-issue:`4520` - efi-stub could get wrong bootloader name
- :acrn-issue:`4628` - HV: guest: fix bug in get_vcpu_paging_mode
- :acrn-issue:`4630` - The \`board_parser.py` tool contains a few grammatical mistakes and typos
- :acrn-issue:`4664` - Wake up vCPU for interrupts from vPIC
- :acrn-issue:`4666` - Fix offline tool to generate info in pci_dev file for logical partition scenario
- :acrn-issue:`4680` - Fix potential dead loop if VT-d QI request timeout
- :acrn-issue:`4688` - RELEASE=n does not take effect while using xml to make hypervisor
- :acrn-issue:`4703` - Failed to launch WaaG at a high probablity if enable CPU sharing in GVT-d.
- :acrn-issue:`4711` - WaaG reboot will core dump with USB mediator
- :acrn-issue:`4797` - [acrn-configuration-tool] The VM name is always 1 when using web app to generate the launch script
- :acrn-issue:`4799` - [acrn-configuration-tool]wrong parameter for Soft RT/Hard RT vm in launch script
- :acrn-issue:`4827` - Missing explicit initialization of pci_device_lock
- :acrn-issue:`4868` - [acrn-configuation-tool]efi bootloader image file of Yocto industry build not match with default xmls
- :acrn-issue:`4889` - [WHL][QEMU][HV] With latest master branch HV, build ACRN for Qemu fail
Known Issues
************

- :acrn-issue:`4047` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it
- :acrn-issue:`4313` - [WHL][VxWorks] Failed to ping when VxWorks passthru network
- :acrn-issue:`4557` - [WHL][Performance][WaaG] Failed to run 3D directX9 during Passmark9.0 performance test with 7212 gfx driver
- :acrn-issue:`4558` - [WHL][Performance][WaaG] WaaG reboot automatically during run 3D directX12 with 7212 gfx driver
- :acrn-issue:`4982` - [WHL]ivshmemTest transfer file failed after UOS shutdown or reboot
- :acrn-issue:`4983` - [WHL][RTVM]without any virtio device, with only pass-through devices, RTVM can't boot from SATA
doc/scorer.js
@ -0,0 +1,48 @@
/**
 * Simple search result scoring code.
 *
 * Copyright 2007-2018 by the Sphinx team
 * Copyright (c) 2019, Intel
 * SPDX-License-Identifier: Apache-2.0
 */

var Scorer = {
  // Implement the following function to further tweak the score for
  // each result. The function takes a result array [filename, title,
  // anchor, descr, score] and returns the new score.

  // For ACRN search results, push display down for release_notes and
  // api docs so "regular" docs will show up before them

  score: function(result) {

    if (result[0].search("release_notes/")>=0) {
      return -6;
    }
    else if (result[0].search("api/")>=0) {
      return -5;
    }
    else if (result[0].search("kconfig/")>=0) {
      return -5;
    }
    else {
      return result[4];
    }
  },

  // query matches the full name of an object
  objNameMatch: 11,
  // or matches in the last dotted part of the object name
  objPartialMatch: 6,
  // Additive scores depending on the priority of the object
  objPrio: {0: 15,  // used to be importantResults
            1: 5,   // used to be objectResults
            2: -5}, // used to be unimportantResults
  // Used when the priority is not in the mapping.
  objPrioDefault: 0,

  // query found in title
  title: 15,
  // query found in terms
  term: 5
};
doc/static/acrn-custom.css
@ -73,8 +73,8 @@ div.non-compliant-code div.highlight {
    color: rgba(255,255,255,1);
}

/* squish the space between a paragraph before a list */
div > p + ul, div > p + ol {
/* squish the space between a paragraph before a list but not in a note */
div:not(.admonition) > p + ul, div:not(.admonition) > p + ol {
    margin-top: -20px;
}

@ -299,3 +299,10 @@ div.numbered-step h2::before {
.rst-content .toctree-l1 > a {
    font-weight: bold;
}

/* add icon on external links */
a.reference.external::after {
    font-family: 'FontAwesome';
    font-size: 80%;
    content: " \f08e";
}
doc/static/acrn-custom.js
@ -1,7 +1,11 @@
/* tweak logo link */
/* Extra acrn-specific javascript */

$(document).ready(function(){
    $( ".icon-home" ).attr("href", "https://projectacrn.org/");
    /* tweak logo link to the marketing site instead of doc site */
    $( ".icon-home" ).attr({href: "https://projectacrn.org/", target: "_blank"});

    /* open external links in a new tab */
    $('a[class*=external]').attr({target: '_blank', rel: 'noopener'});
});

/* Global site tag (gtag.js) - Google Analytics */
@ -125,27 +125,42 @@ Additional scenario XML elements:
  Specify the capacity of the log buffer for each physical CPU.

``RELOC`` (a child node of ``FEATURES``):
  Specify whether hypervisor image relocation is enabled on booting.
  Specify whether the hypervisor image relocation is enabled on booting.

``SCHEDULER`` (a child node of ``FEATURES``):
  Specify the CPU scheduler used by the hypervisor.
  Supported schedulers are: ``SCHED_NOOP``, ``SCHED_BVT`` and ``SCHED_IORR``.

``MULTIBOOT2`` (a child node of ``FEATURES``):
  Specify whether ACRN hypervisor image can be booted using multiboot2 protocol.
  If not set, GRUB's multiboot2 is not available as a boot option.
  Specify whether the ACRN hypervisor image can be booted using the
  multiboot2 protocol. If not set, GRUB's multiboot2 is not available as a
  boot option.

``RDT_ENABLED`` (a child node of ``FEATURES/RDT``):
  Specify whether to enable the Resource Director Technology (RDT)
  allocation feature. Set to 'y' to enable the feature or 'n' to disable it.
  The 'y' will be ignored when hardware does not support RDT.

``CDP_ENABLED`` (a child node of ``FEATURES/RDT``):
  Specify whether to enable Code and Data Prioritization (CDP). CDP is an
  extension of CAT. Set to 'y' to enable the feature or 'n' to disable it.
  The 'y' will be ignored when hardware does not support CDP.

``CLOS_MASK`` (a child node of ``FEATURES/RDT``):
  Specify the cache capacity bitmask for the CLOS; only continuous '1' bits
  are allowed. The value will be ignored when hardware does not support RDT.
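Put together, a minimal sketch of the ``RDT`` block under ``FEATURES`` could
look like the following; the values are illustrative, and the valid
``CLOS_MASK`` width depends on the platform's cache:

.. code-block:: none

   <RDT>
       <RDT_ENABLED>y</RDT_ENABLED>
       <CDP_ENABLED>n</CDP_ENABLED>
       <!-- Only contiguous '1' bits are allowed in the bitmask -->
       <CLOS_MASK>0xff</CLOS_MASK>
   </RDT>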
``HYPERV_ENABLED`` (a child node of ``FEATURES``):
  Specify whether Hyper-V is enabled.

``IOMMU_ENFORCE_SNP`` (a child node of ``FEATURES``):
  Specify whether IOMMU enforces snoop behavior of DMA operation.
  Specify whether IOMMU enforces snoop behavior of the DMA operation.

``ACPI_PARSE_ENABLED`` (a child node of ``FEATURES``):
  Specify whether ACPI runtime parsing is enabled..
  Specify whether ACPI runtime parsing is enabled.

``L1D_VMENTRY_ENABLED`` (a child node of ``FEATURES``):
  Specify whether L1 cache flush before VM entry is enabled.
  Specify whether the L1 cache flush before VM entry is enabled.

``MCE_ON_PSC_DISABLED`` (a child node of ``FEATURE``):
  Specify whether to force-disable the software workaround for Machine Check
@ -159,13 +174,13 @@ Additional scenario XML elements:
  Specify the size of the RAM region used by the hypervisor.

``LOW_RAM_SIZE`` (a child node of ``MEMORY``):
  Specify size of RAM region below address 0x10000, starting from address 0x0.
  Specify the size of the RAM region below address 0x10000, starting from address 0x0.

``SOS_RAM_SIZE`` (a child node of ``MEMORY``):
  Specify the size of Service OS VM RAM region.
  Specify the size of the Service OS VM RAM region.

``UOS_RAM_SIZE`` (a child node of ``MEMORY``):
  Specify the size of User OS VM RAM region.
  Specify the size of the User OS VM RAM region.

``PLATFORM_RAM_SIZE`` (a child node of ``MEMORY``):
  Specify the size of the physical platform RAM region.
@ -219,7 +234,8 @@ Additional scenario XML elements:
``guest_flags``:
  Select all applicable flags for the VM:

  - ``GUEST_FLAG_SECURE_WORLD_ENABLED`` specify whether secure world is enabled
  - ``GUEST_FLAG_SECURE_WORLD_ENABLED`` specify whether the secure world is
    enabled
  - ``GUEST_FLAG_LAPIC_PASSTHROUGH`` specify whether LAPIC is passed through
  - ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs
    IO polling to completion
@ -270,11 +286,11 @@ Additional scenario XML elements:
  The entry address in host memory for the VM kernel.

``vuart``:
  Specify the vuart (A.K.A COM) with the vUART ID by its "id" attribute.
  Specify the vuart (aka COM) with the vUART ID by its "id" attribute.
  Refer to :ref:`vuart_config` for detailed vUART settings.

``type`` (a child node of ``vuart``):
  vUART (A.K.A COM) type, currently only supports the legacy PIO mode.
  vUART (aka COM) type; currently only supports the legacy PIO mode.

``base`` (a child node of ``vuart``):
  vUART (A.K.A COM) enabling switch. Enable by exposing its COM_BASE
@ -288,7 +304,7 @@ Additional scenario XML elements:
  target VM the current VM connects to.

``target_uart_id`` (a child node of ``vuart1``):
  Target vUART ID that vCOM2 connects to.
  Target vUART ID to which the vCOM2 connects.
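As a sketch, a ``vuart`` element combining the children described above might
look like this; the values are illustrative, and :ref:`vuart_config` remains
the authoritative reference:

.. code-block:: none

   <vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>COM2_BASE</base>
       <irq>COM2_IRQ</irq>
       <!-- Connect this vCOM2 to vUART 1 of VM 1 -->
       <target_vm_id>1</target_vm_id>
       <target_uart_id>1</target_uart_id>
   </vuart>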
``pci_dev_num``:
  The number of PCI devices of the VM; it is hard-coded for each scenario so it
@ -326,7 +342,8 @@ current scenario has:

``uos_type``:
  Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``,
  ``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``ZEPHYR`` or ``VXWORKS``.
  ``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``YOCTO``, ``UBUNTU``,
  ``ZEPHYR`` or ``VXWORKS``.
``rtos_type``:
  Specify the User VM Realtime capability: Soft RT, Hard RT, or none of them.
@ -358,7 +375,7 @@ current scenario has:
  Refer to :ref:`usb_virtualization` for details.

``passthrough_devices``:
  Select the passthrough device from the lspci list; currently we support:
  Select the passthrough device from the lspci list. Currently we support:
  usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card,
  ethernet, sata, and nvme.
@ -380,7 +397,7 @@ current scenario has:
  The ``configurable`` and ``readonly`` attributes are used to mark
  whether the item is configurable for users. When ``configurable="0"``
  and ``readonly="true"``, the item is not configurable from the web
  interface. When ``configurable="0"``. the item does not appear on the
  interface. When ``configurable="0"``, the item does not appear on the
  interface.
Configuration tool workflow
@ -428,8 +445,8 @@ Here is the offline configuration tool workflow:
#. Copy the ``target`` folder into the target file system and then run the
   ``sudo python3 board_parser.py $(BOARD)`` command.
#. A $(BOARD).xml that includes all needed hardware-specific information
   is generated in the ``./out/`` folder. (Here ``$(BOARD)`` is the
   specified board name)
   is generated in the ``./out/`` folder. Here, ``$(BOARD)`` is the
   specified board name. (See the example after this list.)
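For example, with a hypothetical board name ``my-board``, the two steps above
reduce to the following (the board name is illustrative):

.. code-block:: none

   cd target
   sudo python3 board_parser.py my-board
   # The board XML is generated in the out/ folder
   ls ./out/my-board.xml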
| **Native Linux requirement:**
| **Release:** Ubuntu 18.04+ or Clear Linux 30210+
@ -664,7 +681,7 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:

- Select one launch setting xml from the menu.

- Importing the local launch setting xml by clicking **Import XML**.
- Import the local launch setting xml by clicking **Import XML**.

#. Select one scenario for the current launch setting from the **Select Scenario** drop down box.
@ -30,7 +30,7 @@ Pre-Requisites
      INFO: /dev/kvm exists
      KVM acceleration can be used

2. Ensure the Ubuntu18.04 Host kernel version is **atleast 5.3.0** and above.
2. Ensure the Ubuntu 18.04 Host kernel version is **at least 5.3.0**.

3. Make sure KVM and the following utilities are installed.
@ -118,7 +118,7 @@ The demo setup uses these software components and versions:
   * - AGL
     - Funky Flounder (6.02)
     - `intel-corei7-x64 image
       <https://download.automotivelinux.org/AGL/release/flounder/6.0.2/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64-20181112133144.rootfs.wic.xz>`_
       <https://mirrors.edge.kernel.org/AGL/release/flounder/6.0.2/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64-20200318141526.rootfs.wic.xz>`_
   * - acrn-kernel
     - revision acrn-2019w39.1-140000p
     - `acrn-kernel <https://github.com/projectacrn/acrn-kernel>`_
@ -192,7 +192,7 @@ Use the ACRN industry out-of-the-box image

.. code-block:: none

   # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w18.4-140000p/sos-industry-33050.img.xz
   # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v1.6.1/sos-industry-33050.img.xz

.. note:: You may also follow :ref:`set_up_ootb_service_vm` to build the image by yourself.

@ -239,7 +239,7 @@ build the ACRN kernel for the Service VM, and then :ref:`passthrough the SATA di

.. code-block:: none

   # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w18.4-140000p/preempt-rt-33050.img.xz
   # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v1.6.1/preempt-rt-33050.img.xz

.. note:: You may also follow :ref:`set_up_ootb_rtvm` to build the Preempt-RT VM image by yourself.
@ -18,7 +18,7 @@ the ``help`` command within the ACRN shell.
An example
**********

As an example, we'll show how to obtain the interrupts of a pass-through USB device.
As an example, we'll show how to obtain the interrupts of a passthrough USB device.

First, we can get the USB controller BDF number (0:15.0) through the
following command in the Service VM console::
@ -13,9 +13,11 @@ Introduction
************

Intel GVT-d is a graphics virtualization approach that is also known as
the Intel-Graphics-Device passthrough feature. Based on Intel VT-d technology, it offers useful special graphics-related configurations.
It allows for direct assignment of an entire GPU's prowess to a single user,
passing the native driver capabilities through to the hypervisor without any limitations.
the Intel-Graphics-Device passthrough feature. Based on Intel VT-d
technology, it offers useful special graphics-related configurations.
It allows for direct assignment of an entire GPU's prowess to a single
user, passing the native driver capabilities through to the hypervisor
without any limitations.

Verified version
****************
@ -44,18 +46,25 @@ BIOS settings
Kaby Lake platform
==================

* Set **IGD Minimum Memory** to **64MB** in **Devices** → **Video** → **IGD Minimum Memory**.
* Set **IGD Minimum Memory** to **64MB** in **Devices** →
  **Video** → **IGD Minimum Memory**.

Whiskey Lake platform
=====================

* Set **PM Support** to **Enabled** in **Chipset** → **System Agent (SA) Configuration** → **Graphics Configuration** → **PM support**.
* Set **DVMT Pre-Allocated** to **64MB** in **Chipset** → **System Agent (SA) Configuration** → **Graphics Configuration** → **DVMT Pre-Allocated**.
* Set **PM Support** to **Enabled** in **Chipset** → **System
  Agent (SA) Configuration** → **Graphics Configuration** →
  **PM support**.
* Set **DVMT Pre-Allocated** to **64MB** in **Chipset** →
  **System Agent (SA) Configuration**
  → **Graphics Configuration** → **DVMT Pre-Allocated**.

Elkhart Lake platform
=====================

* Set **DMVT Pre-Allocated** to **64MB** in **Intel Advanced Menu** → **System Agent(SA) Configuration** → **Graphics Configuration** → **DMVT Pre-Allocated**.
* Set **DMVT Pre-Allocated** to **64MB** in **Intel Advanced Menu**
  → **System Agent(SA) Configuration** →
  **Graphics Configuration** → **DMVT Pre-Allocated**.
Passthrough the GPU to Guest
****************************

@ -66,11 +75,11 @@ Passthrough the GPU to Guest

      cp /usr/share/acrn/samples/nuc/launch_win.sh ~/install_win.sh

#. Modify the ``install_win.sh`` script to specify the Windows image you use.
2. Modify the ``install_win.sh`` script to specify the Windows image you use.

#. Modify the ``install_win.sh`` script to enable GVT-d.
3. Modify the ``install_win.sh`` script to enable GVT-d:

   #. Add the following commands before ``acrn-dm -A -m $mem_size -s 0:0,hostbridge \``
   Add the following commands before ``acrn-dm -A -m $mem_size -s 0:0,hostbridge \``

   ::

@ -80,9 +89,9 @@ Passthrough the GPU to Guest
      echo "0000:00:02.0" > /sys/bus/pci/devices/0000:00:02.0/driver/unbind
      echo "0000:00:02.0" > /sys/bus/pci/drivers/pci-stub/bind

   #. Replace ``-s 2,pci-gvt -G "$2" \`` with ``-s 2,passthru,0/2/0,gpu \``
   Replace ``-s 2,pci-gvt -G "$2" \`` with ``-s 2,passthru,0/2/0,gpu \``

#. Run ``launch_win.sh``.
4. Run ``launch_win.sh``.

.. note:: If you want to passthrough the GPU to a Clear Linux User VM, the
   steps are the same as above except for your launch script.
@ -90,8 +99,11 @@
Enable the GVT-d GOP driver
***************************

When enabling GVT-d, the Guest OS cannot light up the physical screen before
the OS driver loads. As a result, the Guest BIOS and the Grub UI is not visible on the physical screen. This occurs because the physical display is initialized by the GOP driver or VBIOS before the OS driver loads, and the Guest BIOS doesn't have them.
When enabling GVT-d, the Guest OS cannot light up the physical screen
before the OS driver loads. As a result, the Guest BIOS and the Grub UI
are not visible on the physical screen. This occurs because the physical
display is initialized by the GOP driver or VBIOS before the OS driver
loads, and the Guest BIOS doesn't have them.

The solution is to integrate the GOP driver binary into the OVMF as a DXE
driver. Then the Guest OVMF can see the GOP driver and run it in the graphic
@ -109,7 +121,8 @@ Steps

#. Fetch the vbt and gop drivers.

   Fetch the **vbt** and **gop** drivers from the board manufacturer according to your CPU model name.
   Fetch the **vbt** and **gop** drivers from the board manufacturer
   according to your CPU model name.

#. Add the **vbt** and **gop** drivers to the OVMF:

@ -154,5 +167,4 @@ Keep in mind the following:
- This will generate the binary at
  ``Build/OvmfX64/DEBUG_GCC5/FV/OVMF.fd``. Transfer the binary to
  your target machine.

- Modify the launch script to specify the OVMF you built just now.
@ -16,10 +16,10 @@ Run RTVM with dedicated resources/devices

For best practice, ACRN allocates dedicated CPU, memory resources, and cache resources (using Intel
Resource Director Technology (RDT) allocation features such as CAT and MBA) to RTVMs. For best real-time performance
of I/O devices, we recommend using dedicated (pass-thru) PCIe devices to avoid VM-Exit at run time.
of I/O devices, we recommend using dedicated (passthrough) PCIe devices to avoid VM-Exit at run time.

.. note::
   The configuration space for pass-thru PCI devices is still emulated and accessing it will
   The configuration space for passthrough PCI devices is still emulated and accessing it will
   trigger a VM-Exit.

RTVM with virtio PMD (Polling Mode Driver) for I/O sharing
@ -8,7 +8,7 @@ This tutorial describes how to install, configure, and run `Kata Containers
hypervisor. In this configuration,
Kata Containers leverage the ACRN hypervisor instead of QEMU which is used by
default. Refer to the `Kata Containers with ACRN
<https://drive.google.com/file/d/1ZrqM5ouWUJA0FeIWhU_aitEJe8781rpe/view?usp=sharing>`_
<https://www.slideshare.net/ProjectACRN/acrn-kata-container-on-acrn>`_
presentation from a previous ACRN Project Technical Community Meeting for
more details on Kata Containers and how the integration with ACRN has been
done.
@ -19,9 +19,15 @@ Prerequisites

#. Refer to the :ref:`ACRN supported hardware <hardware>`.
#. For a default prebuilt ACRN binary in the E2E package, you must have 4
   CPU cores or enable "CPU Hyper-Threading" in order to have 4 CPU threads for 2 CPU cores.
#. Follow :ref:`these instructions <Ubuntu Service OS>` to set up the ACRN Service VM
   based on Ubuntu. Please note that only ACRN hypervisors compiled for
   SDC scenario support Kata Containers currently.
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
   based on Ubuntu.
#. This tutorial is validated on the following configurations:

   - ACRN v2.0 (branch: ``release_2.0``)
   - Ubuntu 20.04

#. Kata Containers are supported for ACRN hypervisors configured for
   the Industry or SDC scenarios.
Install Docker
@ -53,41 +59,88 @@ Install Kata Containers
***********************

Kata Containers provide a variety of installation methods; this guide uses
:command:`kata-manager` to automate the Kata Containers installation procedure.
`kata-deploy <https://github.com/kata-containers/packaging/tree/master/kata-deploy>`_
to automate the Kata Containers installation procedure.
#. Install Kata Containers packages:
#. Install Kata Containers:

   .. code-block:: none

      $ bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/tests/master/cmd/kata-manager/kata-manager.sh) install-packages"
      $ sudo docker run -v /opt/kata:/opt/kata -v /var/run/dbus:/var/run/dbus -v /run/systemd:/run/systemd -v /etc/docker:/etc/docker -it katadocker/kata-deploy kata-deploy-docker install
#. Add the following settings to :file:`/etc/docker/daemon.json` to configure
   Docker to use Kata Containers by default. You may need to create the
   file if it doesn't exist.
#. Install the ``acrnctl`` tool:

   .. code-block:: none

      $ cd /home/acrn/work/acrn-hypervisor
      $ sudo cp build/misc/tools/acrnctl /usr/bin/

   .. note:: This assumes you have built ACRN on this machine following the
      instructions in the :ref:`rt_industry_ubuntu_setup`.
#. Modify the :ref:`daemon.json` file in order to:
|
||||
|
||||
a. Add a ``kata-acrn`` runtime (``runtimes`` section).
|
||||
|
||||
.. note:: In order to run Kata with ACRN, the container stack must provide
|
||||
block-based storage, such as :file:`device-mapper`. Since Docker may be
|
||||
configured to use :file:`overlay2` storage driver, the above
|
||||
configuration also instructs Docker to use :file:`device-mapper`
|
||||
storage driver.
|
||||
|
||||
#. Use the ``device-mapper`` storage driver.
|
||||
|
||||
#. Make Docker use Kata Containers by default.
|
||||
|
||||
These changes are highlighted below.

   .. code-block:: none
      :emphasize-lines: 2,3,21-24
      :name: daemon.json
      :caption: /etc/docker/daemon.json

      {
        "storage-driver": "devicemapper",
        "default-runtime": "kata-runtime",
        "default-runtime": "kata-acrn",
        "runtimes": {
          "kata-runtime": {
            "path": "/usr/bin/kata-runtime"
          }
          "kata-qemu": {
            "path": "/opt/kata/bin/kata-runtime",
            "runtimeArgs": [ "--kata-config", "/opt/kata/share/defaults/kata-containers/configuration-qemu.toml" ]
          },
          "kata-qemu-virtiofs": {
            "path": "/opt/kata/bin/kata-runtime",
            "runtimeArgs": [ "--kata-config", "/opt/kata/share/defaults/kata-containers/configuration-qemu-virtiofs.toml" ]
          },
          "kata-fc": {
            "path": "/opt/kata/bin/kata-runtime",
            "runtimeArgs": [ "--kata-config", "/opt/kata/share/defaults/kata-containers/configuration-fc.toml" ]
          },
          "kata-clh": {
            "path": "/opt/kata/bin/kata-runtime",
            "runtimeArgs": [ "--kata-config", "/opt/kata/share/defaults/kata-containers/configuration-clh.toml" ]
          },
          "kata-acrn": {
            "path": "/opt/kata/bin/kata-runtime",
            "runtimeArgs": [ "--kata-config", "/opt/kata/share/defaults/kata-containers/configuration-acrn.toml" ]
          }
        }
      }

   In order to run Kata with ACRN, the container stack must provide block-based
   storage, such as :file:`device-mapper`. Since Docker may be configured to
   use the :file:`overlay2` storage driver, the above configuration also instructs
   Docker to use the :file:`device-mapper` storage driver.
#. Configure Kata to use ACRN.

   .. code-block:: none
   Modify the ``[hypervisor.acrn]`` section in the ``/opt/kata/share/defaults/kata-containers/configuration-acrn.toml``
   file.

      $ sudo mkdir -p /etc/kata-containers
      $ sudo cp /usr/share/defaults/kata-containers/configuration-acrn.toml /etc/kata-containers/configuration.toml
   .. code-block:: none
      :emphasize-lines: 2,3
      :name: configuration-acrn.toml
      :caption: /opt/kata/share/defaults/kata-containers/configuration-acrn.toml

      [hypervisor.acrn]
      path = "/usr/bin/acrn-dm"
      ctlpath = "/usr/bin/acrnctl"
      kernel = "/opt/kata/share/kata-containers/vmlinuz.container"
      image = "/opt/kata/share/kata-containers/kata-containers.img"

#. Restart the Docker service.
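
   On a systemd-based distribution such as Ubuntu, this would typically be
   done as follows (a suggested command, not part of the original steps):

   .. code-block:: none

      $ sudo systemctl restart docker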

@ -98,30 +151,32 @@ Kata Containers provide a variety of installation methods, this guide uses
Verify that these configurations are effective by checking the following
outputs:

.. code-block:: none
.. code-block:: console

   $ sudo docker info | grep runtime
   WARNING: the devicemapper storage-driver is deprecated, and will be
   removed in a future release.
   WARNING: devicemapper: usage of loopback devices is strongly discouraged
   for production use.
   $ sudo docker info | grep -i runtime
   WARNING: the devicemapper storage-driver is deprecated, and will be removed in a future release.
   WARNING: devicemapper: usage of loopback devices is strongly discouraged for production use.
   Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
   Runtimes: kata-runtime runc
   Runtimes: kata-clh kata-fc kata-qemu kata-qemu-virtiofs runc kata-acrn
   Default Runtime: kata-acrn

.. code-block:: none
.. code-block:: console

   $ kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/'
   $ /opt/kata/bin/kata-runtime --kata-config /opt/kata/share/defaults/kata-containers/configuration-acrn.toml kata-env | awk -v RS= '/\[Hypervisor\]/'
   [Hypervisor]
   MachineType = ""
   Version = "DM version is: 1.5-unstable-"2020w02.5.140000p_261" (daily tag:"2020w02.5.140000p"), build by mockbuild@2020-01-12 08:44:52"
   Version = "DM version is: 2.0-unstable-7c7bf767-dirty (daily tag:acrn-2020w23.5-180000p), build by acrn@2020-06-11 17:11:17"
   Path = "/usr/bin/acrn-dm"
   BlockDeviceDriver = "virtio-blk"
   EntropySource = "/dev/urandom"
   SharedFS = ""
   VirtioFSDaemon = ""
   Msize9p = 0
   MemorySlots = 10
   PCIeRootPort = 0
   HotplugVFIOOnRootBus = false
   Debug = false
   UseVSock = false
   SharedFS = ""

Run a Kata Container with ACRN
******************************
@ -141,8 +196,9 @@ Start a Kata Container on ACRN:

.. code-block:: none

   $ sudo docker run -ti --runtime=kata-runtime busybox sh
   $ sudo docker run -ti busybox sh
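
To confirm that the container really runs inside a Kata VM rather than as a
plain ``runc`` container, a quick heuristic we suggest is to compare the
kernel version reported inside the container against the Service VM's own:

.. code-block:: console

   $ uname -r                                # Service VM kernel
   $ sudo docker run --rm busybox uname -r   # kernel reported from inside the container

Different kernel versions indicate the container is running on its own guest
kernel inside an ACRN VM.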

If you run into problems, contact us on the ACRN mailing list and provide as
If you run into problems, contact us on the `ACRN mailing list
<https://lists.projectacrn.org/g/acrn-dev>`_ and provide as
much detail as possible about the issue. The output of ``sudo docker info``
and ``kata-runtime kata-env`` is useful.

@ -176,7 +176,9 @@ Set up libvirt
1. Install the required packages::

   $ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \
       automake autoconf autopoint libtool xsltproc libxml2-utils gettext
       automake autoconf autopoint libtool xsltproc libxml2-utils gettext \
       libxml2-dev libpciaccess-dev


2. Download libvirt/ACRN::
doc/tutorials/using_grub.rst
@ -0,0 +1,193 @@

.. _using_grub:

Using GRUB to boot ACRN
#######################

`GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot boot loader
used by many popular Linux distributions. It also supports booting the
ACRN hypervisor.
See `<http://www.gnu.org/software/grub/grub-download.html>`_
to get the latest GRUB source code and `<https://www.gnu.org/software/grub/grub-documentation.html>`_
for detailed documentation.

The ACRN hypervisor can boot from the `multiboot protocol <http://www.gnu.org/software/grub/manual/multiboot/multiboot.html>`_
or the `multiboot2 protocol <http://www.gnu.org/software/grub/manual/multiboot2/multiboot.html>`_.
Compared with the multiboot protocol, the multiboot2 protocol adds UEFI support.

The multiboot protocol is supported by the ACRN hypervisor natively.
The multiboot2 protocol is supported when ``CONFIG_MULTIBOOT2`` is
enabled in Kconfig; ``CONFIG_MULTIBOOT2`` is enabled by default.
Which boot protocol is used depends on whether the hypervisor is loaded by
GRUB's ``multiboot`` command or its ``multiboot2`` command. The guest kernel
or ramdisk must be loaded by the GRUB ``module`` or ``module2``
command accordingly, matching the boot protocol in use.

The ACRN hypervisor binary is built in two formats: ``acrn.32.out`` in
ELF format and ``acrn.bin`` in RAW format. The GRUB ``multiboot``
command supports the ELF format only and does not support binary relocation,
even if ``CONFIG_RELOC`` is set. The GRUB ``multiboot2`` command supports
the ELF format when ``CONFIG_RELOC`` is not set, or the RAW format when
``CONFIG_RELOC`` is set.
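
You can verify which format a given hypervisor binary is in with the
standard ``file`` utility (the build output paths below are assumptions
based on a default build):

.. code-block:: none

   $ file build/hypervisor/acrn.32.out
   $ file build/hypervisor/acrn.bin

The first command should report an ELF executable, while the RAW image is
typically reported as plain data.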

.. note::
   * ``CONFIG_RELOC`` is set by default, so use ``acrn.32.out`` with the multiboot
     protocol and ``acrn.bin`` with the multiboot2 protocol.

   * Per the ACPI specification, the RSDP pointer is described in the EFI System
     Table instead of the legacy ACPI RSDP area on a UEFI enabled platform. To make
     sure the ACRN hypervisor gets the correct ACPI RSDP info, we recommend using
     ``acrn.bin`` with the multiboot2 protocol to load the hypervisor on a UEFI platform.

.. _pre-installed-grub:

Using pre-installed GRUB
************************

Most Linux distributions use GRUB version 2 by default. We can re-use the
pre-installed GRUB to load the ACRN hypervisor if it is version 2.02 or
higher.
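
To check which GRUB version is installed (the command name is an assumption;
on Ubuntu the binary is usually ``grub-install``):

.. code-block:: none

   $ grub-install --version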

Here's an example using Ubuntu to load ACRN on a scenario with two
pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):

#. Copy the ACRN hypervisor binary ``acrn.32.out`` (or ``acrn.bin``) and the pre-launched VM kernel images to ``/boot/``;

#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu visible when booting:

   .. code-block:: none

      # GRUB_HIDDEN_TIMEOUT=0
      GRUB_HIDDEN_TIMEOUT_QUIET=false

#. Append the following configuration to the ``/etc/grub.d/40_custom`` file:

   Configuration template for the multiboot protocol:

   .. code-block:: none

      menuentry 'Boot ACRN hypervisor from multiboot' {
         insmod part_gpt
         insmod ext2
         echo 'Loading ACRN hypervisor ...'
         multiboot --quirk-modules-after-kernel /boot/acrn.32.out $(HV bootargs) $(Service VM bootargs)
         module /boot/kernel4vm0 xxxxxx $(VM0 bootargs)
         module /boot/kernel4vm1 yyyyyy $(VM1 bootargs)
      }

   Configuration template for the multiboot2 protocol:

   .. code-block:: none

      menuentry 'Boot ACRN hypervisor from multiboot2' {
         insmod part_gpt
         insmod ext2
         echo 'Loading ACRN hypervisor ...'
         multiboot2 /boot/acrn.bin $(HV bootargs) $(Service VM bootargs)
         module2 /boot/kernel4vm0 xxxxxx $(VM0 bootargs)
         module2 /boot/kernel4vm1 yyyyyy $(VM1 bootargs)
      }

   .. note::
      The module ``/boot/kernel4vm0`` is the VM0 kernel file. The param
      ``xxxxxx`` is VM0's kernel file tag and must exactly match the
      ``kernel_mod_tag`` of VM0 configured in the
      ``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file. The
      multiboot module ``/boot/kernel4vm1`` is the VM1 kernel file, and the
      param ``yyyyyy`` is its tag and must exactly match the
      ``kernel_mod_tag`` of VM1 in the
      ``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file.

      The guest kernel command line arguments are configured in the
      hypervisor source code by default if no ``$(VMx bootargs)`` is present.
      If ``$(VMx bootargs)`` is present, the default command line arguments
      are overridden by the ``$(VMx bootargs)`` parameters.

      The ``$(Service VM bootargs)`` parameter in the multiboot command
      is appended to the end of the Service VM kernel command line. This is
      useful for overriding some Service VM kernel cmdline parameters, because
      the last instance wins when the same parameter appears more than once in
      the Linux kernel cmdline. For example, adding ``root=/dev/sda3`` will
      override the original root device with ``/dev/sda3`` for the Service VM
      kernel.

      All parameters after a ``#`` character are ignored since GRUB
      treats them as comments.

      ``\``, ``$``, and ``#`` are special characters in GRUB. An escape
      character ``\`` must be added before these special characters if they
      are included in ``$(HV bootargs)`` or ``$(VM bootargs)``. For example,
      ``memmap=0x200000$0xE00000`` for a guest kernel cmdline must be written as
      ``memmap=0x200000\$0xE00000``. A concrete filled-in entry is sketched
      below.
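
   As an illustration, a fully filled-in multiboot2 entry might look like the
   following (the hypervisor parameter, root device, kernel file name, and
   ``Linux_bzImage`` tag are examples only and must match your own scenario
   configuration):

   .. code-block:: none

      menuentry 'Boot ACRN hypervisor (example)' {
         insmod part_gpt
         insmod ext2
         echo 'Loading ACRN hypervisor ...'
         multiboot2 /boot/acrn.bin uart=bdf@0:18.1 root=/dev/sda3
         module2 /boot/bzImage Linux_bzImage
      }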

#. Update GRUB::

   $ sudo update-grub

#. Reboot the platform. On the platform's console, select the
   **Boot ACRN hypervisor xxx** entry to boot the ACRN hypervisor.
   The GRUB loader will boot the hypervisor, and the hypervisor will
   start the VMs automatically.
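
   Once the Service VM is up, a quick way to confirm it is running as a guest
   on top of ACRN is to check the kernel log (a suggested check; the exact
   message may vary with the kernel version):

   .. code-block:: console

      $ dmesg | grep -i acrn

   You should see a line similar to ``Hypervisor detected: ACRN``.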

Installing self-built GRUB
**************************

If the GRUB version on your platform is outdated or has issues booting
the ACRN hypervisor, you can try a self-built GRUB binary. Get
the latest GRUB code and follow the `GRUB Manual
<https://www.gnu.org/software/grub/manual/grub/grub.html#Installing-GRUB-using-grub_002dinstall>`_
to build and install your own GRUB, and then follow the steps described
earlier in `pre-installed-grub`_.

Here we provide another simple method to build GRUB in EFI application format:

#. Build the GRUB EFI application:

   .. code-block:: none

      $ git clone https://git.savannah.gnu.org/git/grub.git
      $ cd grub
      $ ./bootstrap
      $ ./configure --with-platform=efi --target=x86_64
      $ make
      $ ./grub-mkimage -p /EFI/BOOT -d ./grub-core/ -O x86_64-efi -o grub_x86_64.efi \
          boot efifwsetup efi_gop efinet efi_uga lsefimmap lsefi lsefisystab \
          exfat fat multiboot2 multiboot terminal part_msdos part_gpt normal \
          all_video aout configfile echo file fixvideo fshelp gfxterm gfxmenu \
          gfxterm_background gfxterm_menu legacycfg video_bochs video_cirrus \
          video_colors video_fb videoinfo video net tftp

   This will build a ``grub_x86_64.efi`` binary in the current directory.
   Copy it to the ``/EFI/boot/`` folder on the EFI partition (typically
   mounted at ``/boot/efi/`` on the rootfs).
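
   For example (assuming the EFI System Partition is already mounted at
   ``/boot/efi``; adjust the paths to your disk layout):

   .. code-block:: none

      $ sudo mkdir -p /boot/efi/EFI/boot
      $ sudo cp grub_x86_64.efi /boot/efi/EFI/boot/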

#. Create a ``/EFI/boot/grub.cfg`` file containing the following:

   .. code-block:: none

      set default=0
      set timeout=5
      # set the correct root device, which stores the acrn binary and kernel images
      set root='hd0,gpt3'

      menuentry 'Boot ACRN hypervisor from multiboot' {
         insmod part_gpt
         insmod ext2
         echo 'Loading ACRN hypervisor ...'
         multiboot --quirk-modules-after-kernel /boot/acrn.32.out $(HV bootargs) $(Service VM bootargs)
         module /boot/kernel4vm0 xxxxxx $(VM0 bootargs)
         module /boot/kernel4vm1 yyyyyy $(VM1 bootargs)
      }

      menuentry 'Boot ACRN hypervisor from multiboot2' {
         insmod part_gpt
         insmod ext2
         echo 'Loading ACRN hypervisor ...'
         multiboot2 /boot/acrn.bin $(HV bootargs) $(Service VM bootargs)
         module2 /boot/kernel4vm0 xxxxxx $(VM0 bootargs)
         module2 /boot/kernel4vm1 yyyyyy $(VM1 bootargs)
      }

#. Copy the ACRN binary and guest kernel images to the GRUB-configured folder, e.g. the ``/boot/`` folder on ``/dev/sda3``, as illustrated below;
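
   For instance (the device and source paths are illustrative; ``/dev/sda3``
   matches the ``root='hd0,gpt3'`` setting above):

   .. code-block:: none

      $ sudo mount /dev/sda3 /mnt
      $ sudo cp build/hypervisor/acrn.bin /mnt/boot/
      $ sudo cp <your VM0 kernel> /mnt/boot/kernel4vm0
      $ sudo cp <your VM1 kernel> /mnt/boot/kernel4vm1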

#. Run ``/EFI/boot/grub_x86_64.efi`` in the EFI shell.
@ -33,7 +33,7 @@ The ACRN hypervisor shell supports the following commands:
   * - int
     - List interrupt information per CPU
   * - pt
     - Show pass-through device information
     - Show passthrough device information
   * - vioapic <vm_id>
     - Show virtual IOAPIC (vIOAPIC) information for a specific VM
   * - dump_ioapic
@ -184,7 +184,7 @@ IRQ vector number, etc.
pt
==

``pt`` provides pass-through detailed information, such as the virtual
``pt`` provides detailed passthrough information, such as the virtual
machine number, interrupt type, interrupt request, interrupt vector,
trigger mode, etc.
doc/user-guides/hv-parameters.rst
@ -0,0 +1,65 @@

.. _hv-parameters:

ACRN Hypervisor Parameters
##########################

Generic hypervisor parameters
*****************************

The ACRN hypervisor supports the following parameter:

+-----------------+-----------------------------+------------------------------------------------------------------+
| Parameter       | Value                       | Description                                                      |
+=================+=============================+==================================================================+
|                 | disabled                    | This disables the serial port completely.                        |
|                 +-----------------------------+------------------------------------------------------------------+
| uart=           | bdf@<BDF value>             | This sets the PCI serial port based on its BDF, e.g. bdf@0:18.1. |
|                 +-----------------------------+------------------------------------------------------------------+
|                 | port@<port address>         | This sets the serial port address.                               |
+-----------------+-----------------------------+------------------------------------------------------------------+

The generic hypervisor parameters are specified in the GRUB multiboot/multiboot2 command.
For example:

.. code-block:: none
   :emphasize-lines: 5

   menuentry 'Boot ACRN hypervisor from multiboot' {
      insmod part_gpt
      insmod ext2
      echo 'Loading ACRN hypervisor ...'
      multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1
      module /boot/bzImage Linux_bzImage
      module /boot/bzImage2 Linux_bzImage2
   }
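
The ``port@`` form works the same way. For example, a legacy COM1 serial
port at the standard I/O address ``0x3F8`` (substitute your platform's
actual port address) would be passed as:

.. code-block:: none

   multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=port@0x3F8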

For de-privilege mode, the parameters are specified in the ``efibootmgr -u`` command:

.. code-block:: none
   :emphasize-lines: 2

   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
         -u "uart=disabled"


De-privilege mode hypervisor parameters
***************************************

The de-privilege mode hypervisor parameters can only be specified in the ``efibootmgr`` command.
Currently, we support the ``bootloader=`` parameter:

+-----------------+-------------------------------------------+----------------------------------------------------------------------+
| Parameter       | Value                                     | Description                                                          |
+=================+===========================================+======================================================================+
| bootloader=     | ``\EFI\org.clearlinux\bootloaderx64.efi`` | This sets the EFI executable to be loaded once the hypervisor is up  |
|                 |                                           | and running. This is typically the bootloader of the Service OS,     |
|                 |                                           | e.g. ``\EFI\org.clearlinux\bootloaderx64.efi``.                      |
+-----------------+-------------------------------------------+----------------------------------------------------------------------+

For example:

.. code-block:: none
   :emphasize-lines: 2

   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
         -u "bootloader=\EFI\boot\bootloaderx64.efi"

@ -83,7 +83,7 @@ Use the ``list`` command to display VMs and their state:

   # acrnctl list
   vm1-14:59:30                untracked
   vm-yocto                    stopped
   vm-ubuntu                   stopped
   vm-android                  stopped

Start VM
@ -94,7 +94,7 @@ command:

.. code-block:: none

   # acrnctl start vm-yocto
   # acrnctl start vm-ubuntu

Stop VM
=======
@ -103,7 +103,7 @@ Use the ``stop`` command to stop one or more running VM:

.. code-block:: none

   # acrnctl stop vm-yocto vm1-14:59:30 vm-android
   # acrnctl stop vm-ubuntu vm1-14:59:30 vm-android

Use the optional ``-f`` or ``--force`` argument to force the stop operation.
This will trigger an immediate shutdown of the User VM by the ACRN Device Model
@ -112,7 +112,7 @@ gracefully by itself.

.. code-block:: none

   # acrnctl stop -f vm-yocto
   # acrnctl stop -f vm-ubuntu

RESCAN BLOCK DEVICE
===================