doc: update release_2.1 with new docs

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder 2020-08-07 17:23:43 -07:00 committed by David Kinder
parent c3800aea66
commit d8ee2f3303
28 changed files with 537 additions and 406 deletions

View File

@ -3,6 +3,23 @@
Security Advisory
#################
Addressed in ACRN v2.1
************************
We recommend that all developers upgrade to this v2.1 release (or later), which
addresses the following security issue that was discovered in previous releases:
------
- Missing access control restrictions in the Hypervisor component
A malicious entity with root access in the Service VM
userspace could abuse the PCIe assign/de-assign Hypercalls via crafted
ioctls and payloads. This attack can result in a corrupt state and Denial
of Service (DoS) at runtime for PCIe devices previously assigned to the
Service VM.
**Affected Release:** v2.0 and v1.6.1.
Addressed in ACRN v1.6.1
************************

View File

@ -79,6 +79,7 @@ Enable ACRN Features
tutorials/setup_openstack_libvirt
tutorials/acrn_on_qemu
tutorials/using_grub
tutorials/pre-launched-rt
Debug
*****

View File

@ -56,6 +56,7 @@ options:
[-l lpc] [-m mem] [-p vcpu:hostcpu] [-r ramdisk_image_path]
[-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
[--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting]
[--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
[--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
[--ptdev_no_reset] [--debugexit]
[--lapic_pt] <vm>
@ -86,6 +87,8 @@ options:
--intr_monitor: enable interrupt storm monitor
its params: threshold/s,probe-period(s),delay_time(ms),delay_duration(ms),
--virtio_poll: enable virtio poll mode with poll interval with ns
--acpidev_pt: acpi device ID args: HID in ACPI Table
--mmiodev_pt: MMIO resources args: physical MMIO regions
--vtpm2: Virtual TPM2 args: sock_path=$PATH_OF_SWTPM_SOCKET
--lapic_pt: enable local apic passthrough
--rtvm: indicate that the guest is rtvm
@ -104,6 +107,7 @@ Here's an example showing how to run a VM with:
- GPU device on PCI 00:02.0
- Virtio-block device on PCI 00:03.0
- Virtio-net device on PCI 00:04.0
- TPM2 MSFT0101
.. code-block:: bash
@ -113,6 +117,7 @@ Here's an example showing how to run a VM with:
-s 5,virtio-console,@pty:pty_port \
-s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
-s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
--acpidev_pt MSFT0101 \
--intr_monitor 10000,10,1,100 \
-B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
@ -1193,4 +1198,5 @@ Passthrough in Device Model
****************************
You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model.
in the device model, and to :ref:`mmio-device-passthrough` for MMIO passthrough realization
in the device model and the ACRN hypervisor.

View File

@ -18,6 +18,7 @@ Hypervisor high-level design
Virtual Interrupt <hv-virt-interrupt>
VT-d <hv-vt-d>
Device Passthrough <hv-dev-passthrough>
mmio-dev-passthrough
hv-partitionmode
Power Management <hv-pm>
Console, Shell, and vUART <hv-console>

View File

@ -70,7 +70,7 @@ Specifically:
the hypervisor shell. Inputs to the physical UART will be
redirected to the vUART starting from the next timer event.
- The vUART is deactivated after a :kbd:`Ctrl + Space` hotkey is received
- The vUART is deactivated after a :kbd:`Ctrl` + :kbd:`Space` hotkey is received
from the physical UART. Inputs to the physical UART will be
handled by the hypervisor shell starting from the next timer
event.

View File

@ -38,58 +38,36 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. ACRN uses
VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to
enforce the settings.
CLOS ID, to select a cache mask to take effect. These configurations can be
made in the scenario XML file under the ``FEATURES`` section, as shown in the example below.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.
.. code-block:: none
:emphasize-lines: 3,7,11,15
:emphasize-lines: 2,4
struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 0,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 1,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 2,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 3,
},
};
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
Once the cache mask is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section. If you want
to use the CDP feature, set ``CDP_ENABLED`` to ``y``.
.. code-block:: none
:emphasize-lines: 6
:emphasize-lines: 2
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
supported by L3 is 16 and L2 is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to
8. ACRN recommends consistent capabilities across all RDT
resources by using the common subset CLOS. This is done in order to
minimize misconfiguration errors.
resources as the maximum supported CLOS ID. For example, if the max CLOS
supported by L3 is 16 and by MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS in order to minimize
misconfiguration errors.
Objective of MBA
@ -128,53 +106,31 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
users can check the MBA capabilities such as mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root
modes to enforce the settings.
These configurations can be made in the scenario XML file under the ``FEATURES`` section,
as shown in the example below. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
for non-root and root modes to enforce the settings.
.. code-block:: none
:emphasize-lines: 3,7,11,15
:emphasize-lines: 2,5
struct platform_clos_info platform_mba_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 0,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 1,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 2,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 3,
},
};
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
Once the cache mask is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section.
.. code-block:: none
:emphasize-lines: 6
:emphasize-lines: 2
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
resources as the maximum supported CLOS ID. For example, if the max CLOS
supported by L3 is 16 and by MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS in order to minimize

View File

@ -186,7 +186,7 @@ Inter-VM Communication Security hardening (BKMs)
************************************************
As previously highlighted, ACRN 2.0 provides the capability to create shared
memory regions between Post-Launch user VMs known as “Inter-VM Communication”.
memory regions between Post-Launch user VMs known as "Inter-VM Communication".
This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the
shared regions (in Service VM's memory for this release). This feature adopts a
community-approved design for shared memory between VMs, following same
@ -194,7 +194,7 @@ specification for KVM/QEMU (`Link <https://git.qemu.org/?p=qemu.git;a=blob_plain
Following the ACRN threat model, the policy definition for allocation and
assignment of these regions is controlled by the Service VM, which is part of
ACRNs Trusted Computing Base (TCB). However, to secure inter-VM communication
ACRN's Trusted Computing Base (TCB). However, to secure inter-VM communication
between any userspace applications that harness this channel, applications will
face more requirements for the confidentiality, integrity, and authenticity of
shared or transferred data. It is the application development team's
@ -218,17 +218,17 @@ architecture and threat model for your application.
- Add restrictions based on behavior or subject and object rules around information flow and accesses.
- In the Service VM, consider the ``/dev/shm`` device node a critical interface with special access requirements. Those requirements can be fulfilled using any of the existing open source MAC technologies or even ACLs, depending on OS compatibility (Ubuntu, Windows, etc.) and integration complexity.
- In the User VM, the shared memory region can be accessed using ``mmap()`` of UIO device node. Other complementary info can be found under:
- ``/sys/class/uio/uioX/device/resource2`` --> shared memory base address
- ``/sys/class/uio/uioX/device/config`` --> shared memory Size.
- For Linux-based User VMs, we recommend using the standard ``UIO`` and ``UIO_PCI_GENERIC`` drivers through the device node (for example, ``/dev/uioX``).
- Reference: `AppArmor <https://wiki.ubuntuusers.de/AppArmor/>`_, `SELinux <https://selinuxproject.org/page/Main_Page>`_, `UIO driver-API <https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html>`_
3. **Crypto Support and Secure Applied Crypto**
- According to the applications threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms.Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for security engine (for example CSME services.
- According to the application's threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms. Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for a security engine (for example, CSME services).
- Don't implement your own crypto functions. Use available compliant crypto libraries as applicable, such as `Intel IPP <https://github.com/intel/ipp-crypto>`_ or `TinyCrypt <https://01.org/tinycrypt>`_.
- Utilize the platform/kernel infrastructure and services (e.g., :ref:`hld-security` , `Kernel Crypto backend/APIs <https://www.kernel.org/doc/html/v5.4/crypto/index.html>`_ , `keyring subsystem <https://www.man7.org/linux/man-pages/man7/keyrings.7.html>`_, etc..).
- Implement the necessary flows for key lifecycle management, including wrapping, revocation, and migration, depending on the crypto key type used and whether there are requirements for key persistence across system and power management events.

View File

@ -0,0 +1,40 @@
.. _mmio-device-passthrough:
MMIO Device Passthrough
########################
The ACRN hypervisor supports both PCI and MMIO device passthrough.
However, there are some constraints on, and hypervisor assumptions about,
MMIO devices: there can be no DMA access to the MMIO device, and the MMIO
device may not use IRQs.
Here is how ACRN supports MMIO device passthrough:
* For a pre-launched VM, the VM configuration tells the ACRN hypervisor
the addresses of the physical MMIO device's regions and where they are
mapped to in the pre-launched VM. The hypervisor then removes these
MMIO regions from the Service VM and fills the vACPI table for this MMIO
device based on the device's physical ACPI table.
* For a post-launched VM, the same actions are done as for a
pre-launched VM, plus the acrn-dm command line specifies which MMIO
device to pass through to the post-launched VM.
If the MMIO device has an ACPI table, use ``--acpidev_pt HID``;
if not, use ``--mmiodev_pt MMIO_regions`` (see the sketch below).
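As an illustration only (slot numbers, image paths, firmware path, and VM name below are placeholders, not an authoritative command line), a post-launched VM could pass through a TPM2 by adding ``--acpidev_pt MSFT0101`` to its acrn-dm invocation, as in the launch example shown earlier in this commit:
.. code-block:: none
acrn-dm -m 2048M -s 0:0,hostbridge \
-s 3,virtio-blk,/data/clearlinux/clearlinux.img \
--acpidev_pt MSFT0101 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
<vm_name>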
.. note::
Currently, the vTPM and PT TPM in the ACRN-DM have the same HID so we
can't support them both at the same time. The VM will fail to boot if
both are used.
These items remain to be implemented:
* Save the MMIO regions in a field of the VM structure in order to
release the resources when the post-launched VM shuts down abnormally.
* Allocate the guest MMIO regions for the MMIO device in a guest-reserved
MMIO region instead of hard-coding them. With this, we could add more
passthrough MMIO devices.
* De-assign the MMIO device from the Service VM before passing it
through to the post-launched VM, rather than only removing the MMIO
regions from the Service VM.

View File

@ -70,8 +70,8 @@ ACRN Device Model and virtio-net Backend Driver:
the virtio-net backend driver to process the request. The backend driver
receives the data in a shared virtqueue and sends it to the TAP device.
Bridge and Tap Device:
Bridge and Tap are standard virtual network infrastructures. They play
Bridge and TAP Device:
Bridge and TAP are standard virtual network infrastructures. They play
an important role in communication among the Service VM, the User VM, and the
outside world.
@ -108,7 +108,7 @@ Initialization in Device Model
- Present frontend for a virtual PCI based NIC
- Setup control plan callbacks
- Setup data plan callbacks, including TX, RX
- Setup tap backend
- Setup TAP backend
Initialization in virtio-net Frontend Driver
============================================
@ -365,7 +365,7 @@ cases.)
.. code-block:: c
vring_interrupt --> // virtio-net frontend driver interrupt handler
skb_recv_done --> //registed by virtnet_probe-->init_vqs-->virtnet_find_vqs
skb_recv_done --> // registered by virtnet_probe-->init_vqs-->virtnet_find_vqs
virtqueue_napi_schedule -->
__napi_schedule -->
virtnet_poll -->
@ -406,13 +406,13 @@ cases.)
sk->sk_data_ready --> // application will get notified
How to Use
==========
How to Use TAP Interface
========================
The network infrastructure shown in :numref:`net-virt-infra` needs to be
prepared in the Service VM before we start. We need to create a bridge and at
least one tap device (two tap devices are needed to create a dual
virtual NIC) and attach a physical NIC and tap device to the bridge.
least one TAP device (two TAP devices are needed to create a dual
virtual NIC) and attach a physical NIC and TAP device to the bridge.
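For reference, here is a minimal sketch of such a setup using standard ``iproute2`` commands (the bridge, TAP, and NIC names are illustrative; ACRN's own network setup scripts may use different names):
.. code-block:: none
sudo ip link add name acrn-br0 type bridge
sudo ip tuntap add dev tap0 mode tap
sudo ip link set tap0 master acrn-br0
sudo ip link set eth0 master acrn-br0
sudo ip link set acrn-br0 up
sudo ip link set tap0 up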
.. figure:: images/network-virt-sos-infrastruct.png
:align: center
@ -509,6 +509,32 @@ is the virtual NIC created by acrn-dm:
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
How to Use MacVTap Interface
============================
In addition to the TAP interface, ACRN also supports the MacVTap interface.
MacVTap replaces the combination of the TAP and bridge drivers with
a single module based on the MacVLan driver. With MacVTap, each
virtual network interface is assigned its own MAC and IP address
and is attached directly to the physical interface of the host machine,
improving throughput and latency.
Create a MacVTap interface in the Service VM as shown here:
.. code-block:: none
sudo ip link add link eth0 name macvtap0 type macvtap
where ``eth0`` is the name of the physical network interface, and
``macvtap0`` is the name of the MacVTap interface being created. (Make
sure the MacVTap interface name includes the keyword ``tap``.)
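Before launching the User VM, you may also want to bring the new interface up and note the MAC address it was assigned (a quick sketch; the interface name follows the example above):
.. code-block:: none
sudo ip link set macvtap0 up
ip link show macvtap0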
Once the MacVTap interface is created, the User VM can be launched by adding
a PCI slot to the device model acrn-dm as shown below.
.. code-block:: none
-s 4,virtio-net,<macvtap_name>,[mac=<XX:XX:XX:XX:XX:XX>]
Performance Estimation
======================

View File

@ -96,7 +96,7 @@ Usage
- For console vUART
To enable the console port for a VM, change the
port_base and IRQ in ``acrn-hypervisor/hypervisor/scenarios/<scenario
port_base and IRQ in ``misc/vm_configs/scenarios/<scenario
name>/vm_configurations.c``. If the IRQ number is already used in your
system (check with ``cat /proc/interrupts``), choose another IRQ number. If you
set ``.irq = 0``, the vUART will work in polling mode.
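For example, a quick (not ACRN-specific) way to check which IRQ numbers are already in use before picking one; the IRQ number ``6`` below is only an illustration:
.. code-block:: none
cat /proc/interrupts
grep '^ *6:' /proc/interrupts || echo "IRQ 6 appears to be free"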

View File

@ -15,13 +15,6 @@ The hypervisor binary is generated based on Kconfig configuration
settings. Instructions about these settings can be found in
:ref:`getting-started-hypervisor-configuration`.
.. note::
A generic configuration named ``hypervisor/arch/x86/configs/generic.config``
is provided to help developers try out ACRN more easily.
This configuration works for most x86-based platforms; it is supported
with limited features. It can be enabled by specifying ``BOARD=generic``
in the ``make`` command line.
One binary for all platforms and all usage scenarios is currently not
supported, primarily because dynamic configuration parsing is restricted in
the ACRN hypervisor for the following reasons:
@ -61,7 +54,9 @@ distribution. Refer to the :ref:`building-acrn-in-docker` user guide for
instructions on how to build ACRN using a container.
.. note::
ACRN uses ``menuconfig``, a python3 text-based user interface (TUI) for configuring hypervisor options and using python's ``kconfiglib`` library.
ACRN uses ``menuconfig``, a python3 text-based user interface (TUI),
for configuring hypervisor options; it uses python's ``kconfiglib``
library.
Install the necessary tools for the following systems:
@ -121,6 +116,8 @@ Enter the following to get the acrn-hypervisor source code:
$ git clone https://github.com/projectacrn/acrn-hypervisor
.. _build-with-acrn-scenario:
.. rst-class:: numbered-step
Build with the ACRN scenario
@ -144,7 +141,12 @@ INDUSTRY:
HYBRID:
This scenario defines a hybrid use case with three VMs: one
pre-launched VM, one pre-launched Service VM, and one post-launched
pre-launched Safety VM, one pre-launched Service VM, and one post-launched
Standard VM.
HYBRID_RT:
This scenario defines a hybrid use case with three VMs: one
pre-launched RTVM, one pre-launched Service VM, and one post-launched
Standard VM.
Assuming that you are at the top level of the acrn-hypervisor directory, perform the following:
@ -164,11 +166,19 @@ Assuming that you are at the top level of the acrn-hypervisor directory, perform
$ make all BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=0
* Build the ``HYBRID_RT`` scenario on the ``whl-ipc-i7``:
.. code-block:: none
$ make all BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=0
* Build the ``SDC`` scenario on the ``nuc6cayh``:
.. code-block:: none
$ make all BOARD=nuc6cayh SCENARIO=sdc RELEASE=0
$ make all BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc6cayh.xml \
SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc6cayh/sdc.xml
See the :ref:`hardware` document for information about platform needs
for each scenario.
@ -198,14 +208,14 @@ top level of the acrn-hypervisor directory. The configuration file, named
.. code-block:: none
$ cd hypervisor
$ make defconfig BOARD=nuc6cayh
$ make defconfig BOARD=nuc7i7dnb SCENARIO=industry
The BOARD specified is used to select a ``defconfig`` under
``arch/x86/configs/``. The other command line-based options (e.g.
``misc/vm_configs/scenarios/``. The other command line-based options (e.g.
``RELEASE``) take no effect when generating a defconfig.
To modify the hypervisor configurations, you can either edit ``.config``
manually, or you can invoke a TUI-based menuconfig--powered by kconfiglib--by
manually, or you can invoke a TUI-based menuconfig (powered by kconfiglib) by
executing ``make menuconfig``. As an example, the following commands
(assuming that you are at the top level of the acrn-hypervisor directory)
generate a default configuration file for UEFI, allowing you to modify some
@ -215,8 +225,9 @@ configurations and build the hypervisor using the updated ``.config``:
# Modify the configurations per your needs
$ cd ../ # Enter top-level folder of acrn-hypervisor source
$ make menuconfig -C hypervisor BOARD=kbl-nuc-i7 <input scenario name>
$ make menuconfig -C hypervisor
# Select the "ACRN Scenario" and "Target board" you want to build
# in the pop-up menu
Note that ``menuconfig`` is python3 only.
@ -239,7 +250,7 @@ Now you can build all these components at once as follows:
The build results are found in the ``build`` directory. You can specify
a different Output folder by setting the ``O`` ``make`` parameter,
for example: ``make O=build-nuc BOARD=nuc6cayh``.
for example: ``make O=build-nuc BOARD=nuc7i7dnb``.
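A sketch combining a scenario build with a custom output folder (the board and scenario names are taken from the examples above; adjust them as needed):
.. code-block:: none
$ make all O=build-nuc BOARD=nuc7i7dnb SCENARIO=industry RELEASE=0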
If you only need the hypervisor, use this command:
@ -259,8 +270,8 @@ of the acrn-hypervisor directory):
.. code-block:: none
$ make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/nuc7i7dnb.xml \
SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx
$ make BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml \
SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx
.. note::
@ -268,7 +279,10 @@ of the acrn-hypervisor directory):
information is retrieved from the corresponding ``BOARD_FILE`` and
``SCENARIO_FILE`` XML configuration files. The ``TARGET_DIR`` parameter
specifies what directory is used to store configuration files imported
from XML files. If the ``TARGED_DIR`` is not specified, the original
from XML files. If the ``TARGET_DIR`` is not specified, the original
configuration files of acrn-hypervisor would be overridden.
In the 2.1 release, there is a known issue (:acrn-issue:`5157`) that
``TARGET_DIR=xxx`` does not work.
Follow the same instructions to boot and test the images you created from your build.

View File

@ -439,7 +439,7 @@ The Boot process proceeds as follows:
In this boot mode, the boot options of pre-launched VM and service VM are defined
in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config``
in the source code ``hypervisor/$(SCENARIO)/vm_configurations.c`` by default.
in the source code ``misc/vm_configs/$(SCENARIO)/vm_configurations.c`` by default.
Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
details. The boot options of a post-launched VM are not covered by the hypervisor
source code or GRUB menu; they are defined in the guest image file or specified by

View File

@ -0,0 +1,111 @@
.. _release_notes_2.1:
ACRN v2.1 (August 2020)
#######################
We are pleased to announce the release of the Project ACRN
hypervisor version 2.1.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open source platform. Check out
:ref:`introduction` introduction for more information. All project ACRN
source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can either download this source code as a zip or
tar.gz file (see the `ACRN v2.1 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.1>`_) or
use Git clone and checkout commands::
git clone https://github.com/projectacrn/acrn-hypervisor
cd acrn-hypervisor
git checkout v2.1
The project's online technical documentation is also tagged to
correspond with a specific release: generated v2.1 documents can be
found at https://projectacrn.github.io/2.1/. Documentation for the
latest under-development branch is found at
https://projectacrn.github.io/latest/.
ACRN v2.1 requires Ubuntu 18.04. Follow the instructions in the
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
We recommend that all developers upgrade to ACRN release v2.1.
What's new in v2.1
******************
* Preempt-RT Linux has been validated as a pre-launched realtime VM. See
:ref:`pre_launched_rt` for more details.
* A Trusted Platform Module (TPM) MMIO device can be passed through to a
pre-launched VM (with some limitations discussed in
:ref:`mmio-device-passthrough`). Previously, passthrough was only
supported for PCI devices.
* Open Virtual Machine Firmware (OVMF) now uses the Local Advanced
Programmable Interrupt Controller (LAPIC) timer as its timer source
instead of the High Precision Event Timer (HPET). This provides a
working timer service for the real-time virtual machine (RTVM) boot
process.
* GRUB is the recommended bootloader for ACRN. For more information,
see :ref:`using_grub`.
Improvements, updates, and corrections have been made throughout our documentation,
including these:
* :ref:`contribute_guidelines`
* :ref:`hv_rdt`
* :ref:`ivshmem-hld`
* :ref:`mmio-device-passthrough`
* :ref:`virtio-net`
* :ref:`getting-started-building`
* :ref:`acrn_configuration_tool`
* :ref:`pre_launched_rt`
* :ref:`rdt_configuration`
* :ref:`using_hybrid_mode_on_nuc`
* :ref:`using_partition_mode_on_nuc`
* :ref:`using_windows_as_uos`
* :ref:`debian_packaging`
Fixed Issues Details
********************
- :acrn-issue:`4047` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it
- :acrn-issue:`4691` - [WHL][Function][RTVM]without any virtio device, with only pass-through devices, RTVM can't boot from SATA
- :acrn-issue:`4711` - [WHL][Stabilty][WaaG]Failed to boot up WaaG with core dumped in WaaG reboot test in GVT-d & CPU sharing env.
- :acrn-issue:`4897` - [WHL][Yocto][GVT-d]WaaG reboot failed due to USB mediator trouble in WaaG reboot stability test.
- :acrn-issue:`4937` - [EHL][Yocto] Fail to boot ACRN on Yocto
- :acrn-issue:`4958` - cleanup spin lock in hypervisor
- :acrn-issue:`4989` - [WHL][Yocto][acrn-configuration-tool] Fail to generate board xml on Yocto build
- :acrn-issue:`4991` - [WHL][acrn-configuration-tool] vuart1 of VM1 does not change correctly
- :acrn-issue:`4994` - Default max MSIx table is too small
- :acrn-issue:`5013` - [TGL][Yocto][YaaG] Can't enter console #1 via HV console
- :acrn-issue:`5015` - [EHL][TGL][acrn-configuration-tool] default industry xml is only support 2 user vms
- :acrn-issue:`5016` - [EHL][acrn-configuration-tool] Need update pci devices for ehl industry launch xmls
- :acrn-issue:`5029` - [TGL][Yocto][GVT] can not boot and login waag with GVT-D
- :acrn-issue:`5039` - [acrn-configuration-tool]minor fix for launch config tool
- :acrn-issue:`5041` - Pre-Launched VM boot not successful if SR-IOV PF is passed to
- :acrn-issue:`5049` - [WHL][Yocto][YaaG] Display stay on openembedded screen when launch YaaG with GVT-G
- :acrn-issue:`5056` - [EHL][Yocto]Can't enable SRIOV on EHL SOS
- :acrn-issue:`5062` - [EHL] WaaG cannot boot on EHL when CPU sharing is enabled
- :acrn-issue:`5066` - [WHL][Function] Fail to launch YaaG with usb mediator enabled
- :acrn-issue:`5067` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it
- :acrn-issue:`5085` - [EHL][Function]Can't enable SRIOV when add memmap=64M$0xc0000000 in cmdline on EHL SOS
- :acrn-issue:`5091` - [TGL][acrn-configuration-tool] generate tgl launch script fail
- :acrn-issue:`5092` - [EHL][acrn-config-tool]After WebUI Enable CDP_ENABLED=y ,build hypervisor fail
- :acrn-issue:`5094` - [TGL][acrn-configuration-tool] Board xml does not contain SATA information
- :acrn-issue:`5095` - [TGL][acrn-configuration-tool] Missing some default launch script xmls
- :acrn-issue:`5107` - Fix size issue used for memset in create_vm
- :acrn-issue:`5115` - [REG][WHL][WAAG] Shutdown waag fails under CPU sharing status
- :acrn-issue:`5122` - [WHL][Stabilty][WaaG][GVT-g & GVT-d]Failed to boot up SOS in cold boot test.
Known Issues
************
- :acrn-issue:`4313` - [WHL][VxWorks] Failed to ping when VxWorks passthru network
- :acrn-issue:`5150` - [REG][WHL][[Yocto][Passthru] Launch RTVM fails with usb passthru
- :acrn-issue:`5151` - [WHL][VxWorks] Launch VxWorks fails due to no suitable video mode found
- :acrn-issue:`5152` - [WHL][Yocto][Hybrid] in hybrid mode ACRN HV env, can not shutdown pre-lanuched RTVM
- :acrn-issue:`5154` - [TGL][Yocto][PM] 148213_PM_SystemS5 with life_mngr fail
- :acrn-issue:`5157` - [build from source] during build HV with XML, “TARGET_DIR=xxx” does not work

View File

@ -1,4 +1,6 @@
#! /usr/bin/env python3
# Copyright (c) 2017, Intel Corporation
# SPDX-License-Identifier: Apache-2.0
"""
Filters a file, classifying output in errors, warnings and discarding
the rest.

View File

@ -1,3 +1,5 @@
# Copyright (c) 2017, Intel Corporation
# SPDX-License-Identifier: Apache-2.0
# Generates a Kconfig symbol reference in RST format, with a separate
# CONFIG_FOO.rst file for each symbol, and an alphabetical index with links in
# index.rst.

View File

@ -26,7 +26,7 @@ The hypervisor configuration uses the ``Kconfig`` mechanism. The configuration
file is located at ``acrn-hypervisor/hypervisor/arch/x86/Kconfig``.
A board-specific ``defconfig`` file, for example
``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD).config``
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/$(BOARD).config``
is loaded first; it is the default ``Kconfig`` for the specified board.
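As a hedged sketch (the board and scenario names are examples), this defconfig can be loaded with the same ``make defconfig`` invocation described in the build guide updated by this commit:
.. code-block:: none
$ cd hypervisor
$ make defconfig BOARD=nuc7i7dnb SCENARIO=industry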
Board configuration
@ -38,7 +38,7 @@ board settings, root device selection, and the kernel cmdline. It also includes
**scenario-irrelevant** hardware-specific information such as ACPI/PCI
and BDF information. The reference board configuration is organized as
``*.c/*.h`` files located in the
``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD)/`` folder.
``misc/vm_configs/boards/$(BOARD)/`` folder.
VM configuration
=================
@ -51,10 +51,12 @@ to launch post-launched User VMs.
Scenario based VM configurations are organized as ``*.c/*.h`` files. The
reference scenarios are located in the
``acrn-hypervisor/hypervisor/scenarios/$(SCENARIO)/`` folder.
``misc/vm_configs/scenarios/$(SCENARIO)/`` folder.
The board-specific configurations for this scenario are stored in the
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder.
User VM launch script samples are located in the
``acrn-hypervisor/devicemodel/samples/`` folder.
``misc/vm_configs/sample_launch_scripts/`` folder.
ACRN configuration XMLs
***********************
@ -77,7 +79,7 @@ Board XML format
================
The board XMLs are located in the
``acrn-hypervisor/misc/acrn-config/xmls/board-xmls/`` folder.
``misc/vm_configs/xmls/board-xmls/`` folder.
The board XML has an ``acrn-config`` root element and a ``board`` attribute:
.. code-block:: xml
@ -90,7 +92,7 @@ about the format of board XML and should not modify it.
Scenario XML format
===================
The scenario XMLs are located in the
``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/`` folder. The
``misc/vm_configs/xmls/config-xmls/`` folder. The
scenario XML has an ``acrn-config`` root element as well as ``board``
and ``scenario`` attributes:
@ -326,7 +328,7 @@ Additional scenario XML elements:
Launch XML format
=================
The launch XMLs are located in the
``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/`` folder.
``misc/vm_configs/xmls/config-xmls/`` folder.
The launch XML has an ``acrn-config`` root element as well as
``board``, ``scenario`` and ``uos_launcher`` attributes:
@ -435,7 +437,7 @@ Board and VM configuration workflow
===================================
Python offline tools are provided to configure Board and VM configurations.
The tool source folder is ``acrn-hypervisor/misc/acrn-config/``.
The tool source folder is ``misc/acrn-config/``.
Here is the offline configuration tool workflow:
@ -599,7 +601,7 @@ Instructions
scenario setting for the current board.
The default scenario configuration xmls are located at
``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/``.
``misc/vm_configs/xmls/config-xmls/[board]/``.
We can edit the scenario name when creating or loading a scenario. If the
current scenario name is duplicated with an existing scenario setting
name, rename the current scenario name or overwrite the existing one
@ -644,7 +646,7 @@ Instructions
.. note::
All customized scenario xmls will be in user-defined groups which are
located in ``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/user_defined/``.
located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.
Before saving the scenario xml, the configuration app validates the
configurable items. If errors exist, the configuration app lists all
@ -665,9 +667,9 @@ Instructions
otherwise, the source code is generated into default folders and
overwrites the old ones. The board-related configuration source
code is located at
``acrn-hypervisor/hypervisor/arch/x86/configs/[board]/`` and the
``misc/vm_configs/boards/[board]/`` and the
scenario-based VM configuration source code is located at
``acrn-hypervisor/hypervisor/scenarios/[scenario]/``.
``misc/vm_configs/scenarios/[scenario]/``.
The **Launch Setting** is quite similar to the **Scenario Setting**:

View File

@ -151,7 +151,7 @@ reason and times of each vm_exit after we have done some operations.
# acrnalyze.py -i /home/trace/acrntrace/20190219-001529/1 -o vmexit --vm_exit
.. note:: The acrnalyze.py script is in the
``acrn-hypervisor/misc/tools/acrntrace/scripts`` folder. The location
``misc/tools/acrntrace/scripts`` folder. The location
of the trace files produced by ``acrntrace`` may be different in your system.
.. figure:: images/debug_image28.png
@ -174,7 +174,7 @@ shown in the following example:
trace event id
2. Add the following format to
``acrn-hypervisor/misc/tools/acrntrace/scripts/formats``:
``misc/tools/acrntrace/scripts/formats``:
.. figure:: images/debug_image1.png
:align: center
@ -224,7 +224,7 @@ shown in the following example:
formats /home/trace/acrntrace/20190219-001529/1 | grep "trace test"
.. note:: The acrnalyze.py script is in the
``acrn-hypervisor/misc/tools/acrntrace/scripts`` folder. The location
``misc/tools/acrntrace/scripts`` folder. The location
of the trace files produced by ``acrntrace`` may be different in your system.
and we will get the following log:

View File

@ -25,8 +25,8 @@ The project's documentation contains the following items:
* ReStructuredText source files used to generate documentation found at the
http://projectacrn.github.io website. All of the reStructuredText sources
are found in the acrn-hypervisor/doc folder, or pulled in from sibling
folders (such as /misc/) by the build scripts.
are found in the ``acrn-hypervisor/doc`` folder, or pulled in from sibling
folders (such as ``misc/``) by the build scripts.
* Doxygen-generated material used to create all API-specific documents
found at http://projectacrn.github.io/latest/api/. The doc build
@ -67,6 +67,7 @@ folder setup for documentation contributions and generation:
devicemodel/
doc/
hypervisor/
misc/
acrn-kernel/
The parent projectacrn folder is there because we'll also be creating a

Binary file not shown.


View File

@ -0,0 +1,118 @@
.. _pre_launched_rt:
Pre-Launched Preempt-RT Linux Mode in ACRN
##########################################
The Pre-Launched Preempt-RT Linux Mode of ACRN, abbreviated as
Pre-Launched RT mode, is an ACRN configuration scenario. Pre-Launched RT
mode allows you to boot ACRN with a preempt-rt Linux running in VM0, and
the Service VM running in VM1. VM0 and VM1 are both pre-launched VMs,
and their resources are partitioned from those on the physical platform.
.. figure:: images/pre_launched_rt.png
:align: center
Prerequisites
*************
Because the Pre-Launched RT VM and Service VM are physically isolated
from each other, they must have their own devices to run a common OS,
such as Linux. Also, the platform must support booting ACRN with
multiple kernel images. So, your platform must have:
- Two hard disk drives, one for the Pre-Launched RT and one for the Service
VM
- Two network devices
- GRUB multiboot support
Example of Pre-Launched RT
**************************
Take the Whiskey Lake WHL-IPC-I5 board (as described in :ref:`hardware`) for
example. This platform can connect both an NVMe and a SATA drive and has
two Ethernet ports. We will pass through the SATA and Ethernet 03:00.0
devices to the Pre-Launched RT VM and give the rest of the devices to
the Service VM.
Install SOS with Grub on NVMe
=============================
As with the Hybrid and Logical Partition scenarios, the Pre-Launched RT
mode must boot using GRUB. The ACRN hypervisor is loaded as a GRUB
multiboot kernel, while the Pre-Launched RT kernel and Service VM
kernels are loaded as multiboot modules. The ACRN hypervisor, Service
VM, and Pre-Launched RT kernel images are all located on the NVMe drive.
We recommend installing Ubuntu on the NVMe drive as the Service VM OS,
which also has the required GRUB image to launch Pre-Launched RT mode.
Refer to :ref:`Run Ubuntu as the Service VM <Ubuntu Service OS>` to
install Ubuntu on the NVMe drive and use GRUB to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================
The Pre-Launched Preempt-RT Linux uses Clear Linux as its rootfs. Refer to
:ref:`Burn the Preempt-RT VM image onto the SATA disk <install_rtvm>` to
download the RTVM image and burn it to the SATA drive. The kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
to the NVMe drive. Once you have successfully installed and booted
Ubuntu from the NVMe drive, you'll then need to copy the RT kernel from
the SATA to the NVMe drive:
.. code-block:: none
# mount /dev/nvme0n1p1 /boot
# mount /dev/sda1 /mnt
# cp /mnt/bzImage /boot/EFI/BOOT/bzImage_RT
Build ACRN with Pre-Launched RT Mode
====================================
The ACRN VM configuration framework can easily configure resources for
Pre-Launched VMs. On the Whiskey Lake WHL-IPC-I5, to pass through the SATA and
Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
.. code-block:: none
make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml RELEASE=0
After the build completes, update the ACRN image on the NVMe drive. It is
``/boot/EFI/BOOT/acrn.bin`` if ``/dev/nvme0n1p1`` is mounted at ``/boot``.
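A minimal sketch of that update step, assuming the build was run from the top of the acrn-hypervisor tree (so the image is in ``./build/hypervisor/``) and ``/dev/nvme0n1p1`` is mounted at ``/boot``:
.. code-block:: none
# mount /dev/nvme0n1p1 /boot
# cp build/hypervisor/acrn.bin /boot/EFI/BOOT/acrn.bin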
Add Pre-Launched RT Kernel Image to GRUB Config
===============================================
The last step is to modify the GRUB configuration file to load the Pre-Launched
kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM
<rt_industry_ubuntu_setup>`.) The grub config file will look something
like this:
.. code-block:: none
menuentry 'ACRN multiboot2 hybrid'{
echo 'loading multiboot2 hybrid...'
multiboot2 /EFI/BOOT/acrn.bin
module2 /EFI/BOOT/bzImage_RT RT_bzImage
module2 /EFI/BOOT/bzImage Linux_bzImage
}
Reboot the system, and it will boot into Pre-Launched RT Mode:
.. code-block:: none
ACRN:\>vm_list
VM_UUID VM_ID VM_NAME VM_STATE
================================ ===== ================================ ========
26c5e0d88f8a47d88109f201ebd61a5e 0 ACRN PRE-LAUNCHED VM0 Running
dbbbd4347a574216a12c2201f1ab0240 1 ACRN SOS VM Running
ACRN:\>
Connect to the console of VM0 via the ``vm_console`` ACRN shell command. (Press
:kbd:`Ctrl` + :kbd:`Space` to return to the ACRN shell.)
.. code-block:: none
ACRN:\>vm_console 0
----- Entering VM 0 Shell -----
root@clr-85a5e9fbac604fbbb92644991f6315df ~ #

View File

@ -89,6 +89,13 @@ MBA bit encoding:
ACRN:\>cpuid 0x10 **0x3**
cpuid leaf: 0x10, subleaf: 0x3, 0x59:0x0:0x4:0x7
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources as the maximum supported CLOS ID. For example, if the max CLOS
supported by L3 is 16 and by MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS in order to minimize
misconfiguration errors.
Tuning RDT resources in HV debug shell
**************************************
@ -136,46 +143,51 @@ shell.
Configure RDT for VM using VM Configuration
*******************************************
#. RDT on ACRN is enabled by default on supported platforms. This
#. The RDT hardware feature is enabled by default on supported platforms. This
information can be found using an offline tool that generates a
platform-specific xml file that helps ACRN identify RDT-supported
platforms. This feature can be also be toggled using the
CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first
step is to clone the ACRN source code (if you haven't already done so):
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file, as shown in the example below. For
details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
.. code-block:: none
:emphasize-lines: 6
$ git clone https://github.com/projectacrn/acrn-hypervisor.git
$ cd acrn-hypervisor/
<FEATURES>
<RELOC desc="Enable hypervisor relocation">y</RELOC>
<SCHEDULER desc="The CPU scheduler to be used by the hypervisor.">SCHED_BVT</SCHEDULER>
<MULTIBOOT2 desc="Support boot ACRN from multiboot2 protocol.">y</MULTIBOOT2>
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">*y*</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
<MBA_DELAY desc="Memory Bandwidth Allocation delay value"></MBA_DELAY>
</RDT>
.. figure:: images/menuconfig-rdt.png
:align: center
#. The predefined cache masks can be found at
``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards.
For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
#. Once RDT is enabled in the scenario XML file, the next step is to program
the desired cache mask and/or the MBA delay value as needed in the
scenario file. Each cache mask or MBA delay configuration corresponds
to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
cache mask settings need to be in place, where each setting corresponds
to a CLOS ID starting from 0. To set the cache masks for 4 CLOS IDs and
use the default delay value for MBA, configure them as shown in the example below.
.. code-block:: none
:emphasize-lines: 3,7,11,15
:emphasize-lines: 8,9,10,11,12
struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 0,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 1,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 2,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 3,
},
};
<FEATURES>
<RELOC desc="Enable hypervisor relocation">y</RELOC>
<SCHEDULER desc="The CPU scheduler to be used by the hypervisor.">SCHED_BVT</SCHEDULER>
<MULTIBOOT2 desc="Support boot ACRN from multiboot2 protocol.">y</MULTIBOOT2>
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask">*0xff*</CLOS_MASK>
<CLOS_MASK desc="Cache Capacity Bitmask">*0x3f*</CLOS_MASK>
<CLOS_MASK desc="Cache Capacity Bitmask">*0xf*</CLOS_MASK>
<CLOS_MASK desc="Cache Capacity Bitmask">*0x3*</CLOS_MASK>
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">*0*</MBA_DELAY>
</RDT>
.. note::
Users can change the mask values, but the cache mask must have
@ -183,31 +195,24 @@ Configure RDT for VM using VM Configuration
programming an MBA delay value, be sure to set the value to less than or
equal to the MAX delay value.
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_
to identify the MAX CLOS that can be used. ACRN uses the
#. Configure each CPU in the VMs with a desired CLOS ID in the ``VM`` section of the
scenario file. Follow `RDT detection and resource capabilities`_
to identify the maximum supported CLOS ID that can be used. ACRN uses the
**lowest common MAX CLOS** value among all RDT resources to avoid
resource misconfigurations. For example, configuration data for the
Service VM sharing mode can be found at
``hypervisor/arch/x86/configs/vm_config.c``
resource misconfigurations.
.. code-block:: none
:emphasize-lines: 6
:emphasize-lines: 5,6,7,8
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 1,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
<vm id="0">
<vm_type desc="Specify the VM type" readonly="true">PRE_STD_VM</vm_type>
<name desc="Specify the VM name which will be shown in hypervisor console command: vm_list.">ACRN PRE-LAUNCHED VM0</name>
<uuid configurable="0" desc="vm uuid">26c5e0d8-8f8a-47d8-8109-f201ebd61a5e</uuid>
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>*0*</vcpu_clos>
<vcpu_clos>*1*</vcpu_clos>
</clos>
</vm>
.. note::
In ACRN, a lower CLOS always means higher priority (clos 0 > clos 1 > clos 2 > ... > clos n).

View File

@ -177,7 +177,7 @@ Tip: Disable the Intel processor C-State and P-State of the RTVM.
Power management of a processor could save power, but it could also impact
the RT performance because the power state is changing. C-State and P-State
PM mechanism can be disabled by adding ``processor.max_cstate=0
intel_idle.max_cstate=0 intel_pstate=disabled`` to the kernel parameters.
intel_idle.max_cstate=0 intel_pstate=disable`` to the kernel parameters.
Tip: Exercise caution when setting ``/proc/sys/kernel/sched_rt_runtime_us``.
Setting ``/proc/sys/kernel/sched_rt_runtime_us`` to ``-1`` can be a

View File

@ -91,11 +91,11 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
The module ``/boot/kernel4vm0`` is the VM0 kernel file. The param
``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kernel_mod_tag`` of VM0 configured in the
``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file. The
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file. The
multiboot module ``/boot/kernel4vm1`` is the VM1 kernel file and the
param ``yyyyyy`` is its tag and must exactly match the
``kernel_mod_tag`` of VM1 in the
``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file.
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
The guest kernel command line arguments are configured in the
hypervisor source code by default if no ``$(VMx bootargs)`` is present.

View File

@ -3,7 +3,7 @@
Getting Started Guide for ACRN hybrid mode
##########################################
ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Clear Linux) runs in a pre-launched VM or in a post-launched VM that is
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM. The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
as shown in :numref:`hybrid_scenario_on_nuc`.
@ -19,7 +19,7 @@ Prerequisites
*************
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
- Install GRUB on your SATA device or on the NVME disk of your NUC.
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your NUC.
Update Ubuntu GRUB
******************
@ -31,7 +31,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
.. code-block:: bash
:emphasize-lines: 10,11
menuentry 'ACRN hypervisor Hybird Scenario' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
menuentry 'ACRN hypervisor Hybrid Scenario' --id ACRN_Hybrid --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
recordfail
load_video
gfxmode $linux_gfx_mode
@ -39,21 +39,21 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
insmod part_gpt
insmod ext2
echo 'Loading hypervisor Hybrid scenario ...'
multiboot --quirk-modules-after-kernel /boot/acrn.32.out
module /boot/zephyr.bin xxxxxx
module /boot/bzImage yyyyyy
multiboot2 /boot/acrn.bin
module2 /boot/zephyr.bin xxxxxx
module2 /boot/bzImage yyyyyy
}
.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kernel_mod_tag`` of VM0 which is configured in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
``kernel_mod_tag`` of VM0 which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
``kernel_mod_tag`` of VM1 in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
file. The kernel command line arguments used to boot the Service VM are
located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h``
and are configured by the `SOS_VM_BOOTARGS` macro.
#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
@ -61,6 +61,8 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
.. code-block:: bash
GRUB_DEFAULT=ACRN_Hybrid
GRUB_TIMEOUT=5
# GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=false
@ -82,11 +84,11 @@ Hybrid Scenario Startup Checking
#. Use these steps to verify all VMs are running properly:
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
#. Enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1's Service VM can boot up and you can log in.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
#. Verify that VM2 can boot up and you can log in.

View File

@ -4,7 +4,7 @@ Getting Started Guide for ACRN logical partition mode
#####################################################
The ACRN hypervisor supports a logical partition scenario in which the User
OS (such as Clear Linux) running in a pre-launched VM can bypass the ACRN
OS (such as Ubuntu OS) running in a pre-launched VM can bypass the ACRN
hypervisor and directly access isolated PCI devices. The following
guidelines provide step-by-step instructions on how to set up the ACRN
hypervisor logical partition scenario on Intel NUC while running two
@ -14,9 +14,8 @@ Validated Versions
******************
- Ubuntu version: **18.04**
- Clear Linux version: **32680**
- ACRN hypervisor tag: **v1.6**
- ACRN kernel commit: **8c9a8695966d8c5c8c7ccb296b9c48671b14aa70**
- ACRN hypervisor tag: **v2.1**
- ACRN kernel tag: **v2.1**
Prerequisites
*************
@ -28,14 +27,12 @@ Prerequisites
or SATA disk connected with a USB3.0 SATA converter).
* Disable **Intel Hyper Threading Technology** in the BIOS to avoid
interference from logical cores for the logical partition scenario.
* In the logical partition scenario, two VMs (running Clear Linux)
* In the logical partition scenario, two VMs (running Ubuntu OS)
are started by the ACRN hypervisor. Each VM has its own root
filesystem. Set up each VM by following the `Install Clear Linux
OS on bare metal with live server
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_ instructions
and install Clear Linux OS (version: 32680) first on a SATA disk and then
again on a storage device with a USB interface. The two pre-launched
VMs will mount the root file systems via the SATA controller and
filesystem. Set up each VM by following the `Ubuntu desktop installation
<https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_ instructions
first on a SATA disk and then again on a storage device with a USB interface.
The two pre-launched VMs will mount the root file systems via the SATA controller and
the USB controller respectively.
Update kernel image and modules of pre-launched VM
@ -84,11 +81,11 @@ Update kernel image and modules of pre-launched VM
.. code-block:: none
# Mount the Clear Linux OS root filesystem on the SATA disk
# Mount the Ubuntu OS root filesystem on the SATA disk
$ sudo mount /dev/sda3 /mnt
$ sudo cp -r <kernel-modules-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
# Mount the Clear Linux OS root filesystem on the USB flash disk
# Mount the Ubuntu OS root filesystem on the USB flash disk
$ sudo mount /dev/sdb3 /mnt
$ sudo cp -r <path-to-kernel-module-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
@ -139,13 +136,13 @@ Update ACRN hypervisor image
Refer to :ref:`getting-started-building` to set up the ACRN build
environment on your development workstation.
Clone the ACRN source code and check out to the tag v1.6:
Clone the ACRN source code and check out the v2.1 tag:
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-hypervisor.git
$ cd acrn-hypervisor
$ git checkout v1.6
$ git checkout v2.1
Build the ACRN hypervisor with default xmls:
@ -154,7 +151,7 @@ Update ACRN hypervisor image
$ make hypervisor BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/logical_partition.xml RELEASE=0
.. note::
The ``acrn.32.out`` will be generated to ``./build/hypervisor/acrn.32.out``.
The ``acrn.bin`` will be generated to ``./build/hypervisor/acrn.bin``.
#. Check the Ubuntu boot loader name.
@ -171,13 +168,13 @@ Update ACRN hypervisor image
#. Check or update the BDF information of the PCI devices for each
pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``.
#. Copy the artifact ``acrn.32.out`` to the ``/boot`` directory:
#. Copy the artifact ``acrn.bin`` to the ``/boot`` directory (a sketch of these steps follows the list):
#. Copy ``acrn.32.out`` to a removable disk.
#. Copy ``acrn.bin`` to a removable disk.
#. Plug the removable disk into the NUC's USB port.
#. Copy the ``acrn.32.out`` from the removable disk to ``/boot``
#. Copy the ``acrn.bin`` from the removable disk to ``/boot``
directory.
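A minimal sketch of these copy steps (the removable disk device node ``/dev/sdc1`` and mount points are illustrative; adjust them to your system):
.. code-block:: none
# On the build machine
$ cp build/hypervisor/acrn.bin /media/usb/
# On the NUC
$ sudo mount /dev/sdc1 /mnt
$ sudo cp /mnt/acrn.bin /boot/
$ sudo umount /mnt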
Update Ubuntu GRUB to boot hypervisor and load kernel image
@ -187,7 +184,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
.. code-block:: none
menuentry 'ACRN hypervisor Logical Partition Scenario' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
menuentry 'ACRN hypervisor Logical Partition Scenario' --id ACRN_Logical_Partition --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
recordfail
load_video
gfxmode $linux_gfx_mode
@ -195,25 +192,27 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
insmod part_gpt
insmod ext2
search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
echo 'Loading hypervisor logical partition scenario ...'
multiboot --quirk-modules-after-kernel /boot/acrn.32.out
module /boot/bzImage XXXXXX
multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
module2 /boot/bzImage XXXXXX
}
.. note::
Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
(or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
The kernel command line arguments used to boot the pre-launched VMs are
located in the ``hypervisor/scenarios/logical_partition/vm_configurations.h`` header file and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM id
number and ``*`` are arguments). The multiboot module param ``XXXXXX``
is the bzImage tag and must exactly match the ``kernel_mod_tag``
configured in the
``hypervisor/scenarios/logical_partition/vm_configurations.c`` file.
located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file
and are configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM ID number and ``*`` are arguments).
The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` file.
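A minimal way to look up the two identifiers mentioned in the note (the device name
below is only an example; replace it with your root partition):

.. code-block:: none

   # Print the UUID and PARTUUID of the root partition
   $ sudo blkid /dev/sda3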
#. Modify the `/etc/default/grub` file as follows to make the GRUB menu
#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
visible when booting:
.. code-block:: none
GRUB_DEFAULT=3
GRUB_DEFAULT=ACRN_Logical_Partition
GRUB_TIMEOUT=10
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
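On Ubuntu, the edited defaults typically take effect only after the GRUB configuration
is regenerated, for example:

.. code-block:: none

   # Regenerate /boot/grub/grub.cfg so the new default entry and timeout are applied
   $ sudo update-grub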
@ -243,7 +242,7 @@ Logical partition scenario startup checking
#. Use the ``vm_console 0`` command to switch to VM0's console.
#. VM0's Ubuntu OS should boot up and reach the login prompt.
#. Use a ``Ctrl-Spacebar`` to return to the Acrn hypervisor shell.
#. Press :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to VM1's console.
#. VM1's Ubuntu OS should boot up and reach the login prompt.
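A condensed sketch of this console flow (the ``ACRN:\>`` prompt is illustrative and
output is omitted):

.. code-block:: none

   ACRN:\>vm_console 0
   (VM0's console is now active; press Ctrl+Space to return to the ACRN shell)
   ACRN:\>vm_console 1
   (VM1's console is now active; press Ctrl+Space to return to the ACRN shell)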
View File
@ -8,7 +8,8 @@ Introduction
The virtual universal asynchronous receiver-transmitter (vUART) supports two functions: console and communication. A vUART can provide only one of these functions at a time.
Currently, only two vUART configurations are added to the ``hypervisor/scenarios/<xxx>/vm_configuration.c`` file, but you can change the value in it.
Currently, only two vUART configurations are added to the
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can change the values there.
.. code-block:: none
View File
@ -28,7 +28,7 @@ The ACRN hypervisor shell supports the following commands:
- Dump a User VM (guest) memory region based on the VM ID (``vm_id``, in decimal),
the start of the memory region ``gva`` (in hexadecimal) and its length ``length`` (in bytes, decimal number).
* - vm_console <vm_id>
- Switch to the VM's console. Use :kbd:`Ctrl+Spacebar` to return to the ACRN
- Switch to the VM's console. Use :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN
shell console
* - int
- List interrupt information per CPU
@ -156,7 +156,7 @@ vm_console
===========
The ``vm_console`` command switches the ACRN console to the specified VM's console.
Use a :kbd:`Ctrl-Spacebar` to return to the ACRN shell console.
Press :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN shell console.
vioapic
=======
View File
@ -342,14 +342,6 @@ section below has more details on a few select parameters.
i915.enable_gvt=1
* - i915.enable_pvmmio
- Service VM, User VM
- Control Para-Virtualized MMIO (PVMMIO). It batches sequential MMIO writes
into a shared buffer between the Service VM and User VM
- ::
i915.enable_pvmmio=0x1F
* - i915.gvt_workload_priority
- Service VM
- Define the priority level of User VM graphics workloads
@ -373,20 +365,6 @@ section below has more details on a few select parameters.
i915.nuclear_pageflip=1
* - i915.avail_planes_per_pipe
- Service VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.avail_planes_per_pipe=0x01010F
* - i915.domain_plane_owners
- Service VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.domain_plane_owners=0x011111110000
* - i915.domain_scaler_owner
- Service VM
- See `i915.domain_scaler_owner`_
@ -401,13 +379,6 @@ section below has more details on a few select parameters.
i915.enable_guc=0x02
* - i915.avail_planes_per_pipe
- User VM
- See :ref:`i915-avail-planes-owners`.
- ::
i915.avail_planes_per_pipe=0x070F00
* - i915.enable_guc
- User VM
- Disable GuC
@ -445,38 +416,6 @@ support in the host. By default, it's not enabled, so we need to add
``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service
VM-only parameter and cannot be enabled in the User VM.
i915.enable_pvmmio
------------------
We introduce the **Para-Virtualized MMIO** (PVMMIO) feature
to improve the graphics performance of GVT-g guests.
This feature batches sequential MMIO writes into a
shared buffer between the Service VM and User VM, and then submits a
para-virtualized command to notify GVT-g in the Service VM. This
effectively reduces the number of MMIO traps and improves
overall graphics performance.
The ``i915.enable_pvmmio`` option controls
the optimization levels of the PVMMIO feature: each bit represents a
sub-feature of the optimization. By default, all
sub-features of PVMMIO are enabled. They can also be selectively
enabled or disabled.
The PVMMIO optimization levels are:
* PVMMIO_ELSP_SUBMIT = 0x1 - Batch submission of the guest graphics
workloads
* PVMMIO_PLANE_UPDATE = 0x2 - Batch plane register update operations
* PVMMIO_PLANE_WM_UPDATE = 0x4 - Batch watermark registers update operations
* PVMMIO_MASTER_IRQ = 0x8 - Batch IRQ related registers
* PVMMIO_PPGTT_UPDATE = 0x10 - Use PVMMIO method to update the PPGTT table
of guest.
.. note:: This parameter works in both the Service VM and User VM, but
changes to one will affect the other. For example, if either Service VM or User VM
disables the PVMMIO_PPGTT_UPDATE feature, this optimization will be
disabled for both.
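As an illustration of how these bits combine on the kernel command line (values
derived from the list above):

.. code-block:: none

   # Enable every PVMMIO sub-feature (0x1 | 0x2 | 0x4 | 0x8 | 0x10 = 0x1F); this is the default
   i915.enable_pvmmio=0x1F

   # Enable everything except PVMMIO_PPGTT_UPDATE (0x1 | 0x2 | 0x4 | 0x8 = 0xF)
   i915.enable_pvmmio=0xF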
i915.gvt_workload_priority
--------------------------
@ -522,118 +461,6 @@ In the current configuration, we will set
This parameter is not used on UEFI platforms.
.. _i915-avail-planes-owners:
i915.avail_planes_per_pipe and i915.domain_plane_owners
-------------------------------------------------------
Both the Service VM and User VM are provided a set of HW planes where they
can display their contents. Since each domain drives its own planes directly,
there is no need for any extra composition to be done through the Service VM.
``i915.avail_planes_per_pipe`` and ``i915.domain_plane_owners`` work
together to provide the plane restriction (or plane-based domain
ownership) feature.
* i915.domain_plane_owners
On Intel's display hardware, each pipeline contains several planes, which are
blended together by their Z-order and rendered to the display monitors. In
AcrnGT, we can control each plane's ownership so that the domains can
display contents on the planes they own.
The ``i915.domain_plane_owners`` parameter controls the ownership of all
the planes in the system, as shown in :numref:`i915-planes-pipes`. Each
4-bit nibble identifies the domain id owner for that plane and a group
of 4 nibbles represents a pipe. This is a Service VM only configuration
and cannot be modified at runtime. Domain ID 0x0 is for the Service VM;
User VMs use domain IDs from 0x1 to 0xF.
.. figure:: images/i915-image1.png
:width: 900px
:align: center
:name: i915-planes-pipes
i915.domain_plane_owners
For example, if we set ``i915.domain_plane_owners=0x010001101110``, the
plane ownership will be as shown in :numref:`i915-planes-example1` - Service VM
(green) owns planes 1A, 1B, 4B, 1C, and 2C, and User VM #1 owns planes 2A, 3A,
4A, 2B, 3B, and 3C.
.. figure:: images/i915-image2.png
:width: 900px
:align: center
:name: i915-planes-example1
i915.domain_plane_owners example
Some other examples:
* i915.domain_plane_owners=0x022211110000 - Service VM (0x0) owns planes on pipe A;
User VM #1 (0x1) owns all planes on pipe B; and User VM #2 (0x2) owns all
planes on pipe C (since, in the representation in
:numref:`i915-planes-pipes` above, there are only 3 planes attached to
pipe C).
* i915.domain_plane_owners=0x000001110000 - Service VM owns all planes on pipe A
and pipe C; User VM #1 owns planes 1, 2, and 3 on pipe B. Plane 4 on pipe B
is owned by the Service VM so that, if it needs to display a notice message, it
can display it on top of the User VM.
* i915.avail_planes_per_pipe
Option ``i915.avail_planes_per_pipe`` is a bitmask (shown in
:numref:`i915-avail-planes`) that tells the i915
driver which planes are available and can be exposed to the compositor.
This parameter must be set in each domain. If
``i915.avail_planes_per_pipe=0``, the plane restriction feature is disabled.
.. figure:: images/i915-image3.png
:width: 600px
:align: center
:name: i915-avail-planes
i915.avail_planes_per_pipe
For example, if we set ``i915.avail_planes_per_pipe=0x030901`` in Service VM
and ``i915.avail_planes_per_pipe=0x04060E`` in User VM, the planes will be as
shown in :numref:`i915-avail-planes-example1` and
:numref:`i915-avail-planes-example2`:
.. figure:: images/i915-image4.png
:width: 500px
:align: center
:name: i915-avail-planes-example1
Service VM i915.avail_planes_per_pipe
.. figure:: images/i915-image5.png
:width: 500px
:align: center
:name: i915-avail-planes-example2
User VM i915.avail_planes_per_pipe
``i915.avail_planes_per_pipe`` controls the view of planes from i915 drivers
inside of every domain, and ``i915.domain_plane_owners`` is the global
arbiter controlling which domain can present its content onto the
real hardware. Generally, they are aligned. For example, we can set
``i915.domain_plane_owners=0x011111110000`` and
``i915.avail_planes_per_pipe=0x00000F`` in the Service VM, and
``i915.avail_planes_per_pipe=0x070F00`` in domain 1, so every domain will
only flip on the planes it owns (see the sketch below).
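Laid out as kernel command line fragments, that aligned configuration looks like this
(a sketch of the same values described above):

.. code-block:: none

   # Service VM kernel command line: owns and exposes all planes on pipe A
   i915.domain_plane_owners=0x011111110000 i915.avail_planes_per_pipe=0x00000F

   # User VM (domain 1) kernel command line: exposes the planes it owns on pipes B and C
   i915.avail_planes_per_pipe=0x070F00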
However, we don't force alignment: ``avail_planes_per_pipe`` might
not be aligned with the
setting of ``domain_plane_owners``. Consider this example:
``i915.domain_plane_owners=0x011111110000``,
``i915.avail_planes_per_pipe=0x01010F`` in Service VM and
``i915.avail_planes_per_pipe=0x070F00`` in domain 1.
With this configuration, the Service VM can render on planes 1B and
1C; however, the content of planes 1B and 1C will not be
flipped onto the real hardware.
i915.domain_scaler_owner
========================