doc: update release_2.2 branch documentation

Update documentation in the release_2.2 branch with changes made after
the branch was tagged for code freeze.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Author: David B. Kinder <david.b.kinder@intel.com>, 2020-09-29 17:55:50 -07:00 (committed by David Kinder)
commit 7e676dbb1c
parent 3b6b5fb662
61 changed files with 907 additions and 511 deletions

View File

@@ -33,6 +33,7 @@ Service VM Tutorials
    :maxdepth: 1

    tutorials/running_deb_as_serv_vm
+   tutorials/using_yp

 User VM Tutorials
 *****************
@@ -72,6 +73,7 @@ Enable ACRN Features
    tutorials/acrn_on_qemu
    tutorials/using_grub
    tutorials/pre-launched-rt
+   tutorials/enable_ivshmem

 Debug
 *****

View File

@@ -22,4 +22,4 @@ documented in this section.
    Hostbridge emulation <hostbridge-virt-hld>
    AT keyboard controller emulation <atkbdc-virt-hld>
    Split Device Model <split-dm>
-   Shared memory based inter-vm communication <ivshmem-hld>
+   Shared memory based inter-VM communication <ivshmem-hld>

View File

@@ -5,7 +5,7 @@ ACRN high-level design overview
 ACRN is an open source reference hypervisor (HV) that runs on top of Intel
 platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
-Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
+Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
 I/O mediation solution with a permissive license and provides auto makers and
 industry users a reference software stack for corresponding use.
@@ -124,7 +124,7 @@ ACRN 2.0
 ========

 ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
-and Real-Time (RT) VM.
+and real-time (RT) VM.

 :numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
 compared to ACRN 1.0 is that:

View File

@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
 peripherals.

 .. note::
-   NUC and UP2 platforms do not support IOC hardware, and as such, IOC
+   Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
    virtualization is not supported on these platforms.

 The main purpose of IOC virtualization is to transfer data between

View File

@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
 .. figure:: images/partition-image18.png
    :align: center

-ACRN set-up for guests
-**********************
+ACRN setup for guests
+*********************

 Cores
 =====

View File

@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
 CLOS ID, to select a cache mask to take effect. These configurations can be
-done in scenario xml file under ``FEATURES`` section as shown in the below example.
+done in scenario XML file under ``FEATURES`` section as shown in the below example.
 ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
 to enforce the settings.
@@ -52,7 +52,7 @@ to enforce the settings.
       <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section. If user desires
+needs to be set in the scenario XML file under ``VM`` section. If user desires
 to use CDP feature, CDP_ENABLED should be set to ``y``.

 .. code-block:: none
@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and
 max supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-These configurations can be done in scenario xml file under ``FEATURES`` section
+These configurations can be done in scenario XML file under ``FEATURES`` section
 as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
 for non-root and root modes to enforce the settings.
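Before committing CLOS masks in the scenario XML, it can help to confirm that the target processor actually advertises these allocation features (an illustrative check from a Linux environment on the physical platform; the flag names shown are those reported by the Linux kernel):

.. code-block:: none

   $ grep -wo -E 'cat_l3|cdp_l3|cat_l2|cdp_l2|mba' /proc/cpuinfo | sort -u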
@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
       <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section.
+needs to be set in the scenario XML file under ``VM`` section.

 .. code-block:: none
    :emphasize-lines: 2

View File

@@ -15,17 +15,24 @@ Inter-VM Communication Overview
    :align: center
    :name: ivshmem-architecture-overview

-   ACRN shared memory based inter-vm communication architecture
+   ACRN shared memory based inter-VM communication architecture

-The ``ivshmem`` device is emulated in the ACRN device model (dm-land)
-and its shared memory region is allocated from the Service VM's memory
-space. This solution only supports communication between post-launched
-VMs.
-
-.. note:: In a future implementation, the ``ivshmem`` device could
-   instead be emulated in the hypervisor (hypervisor-land) and the shared
-   memory regions reserved in the hypervisor's memory space. This solution
-   would work for both pre-launched and post-launched VMs.
+There are two ways ACRN can emulate the ``ivshmem`` device:
+
+``ivshmem`` dm-land
+   The ``ivshmem`` device is emulated in the ACRN device model,
+   and the shared memory regions are reserved in the Service VM's
+   memory space. This solution only supports communication between
+   post-launched VMs.
+
+``ivshmem`` hv-land
+   The ``ivshmem`` device is emulated in the hypervisor, and the
+   shared memory regions are reserved in the hypervisor's
+   memory space. This solution works for both pre-launched and
+   post-launched VMs.
+
+While both solutions can be used at the same time, Inter-VM communication
+may only be done between VMs using the same solution.

 ivshmem hv:
    The **ivshmem hv** implements register virtualization
@@ -98,89 +105,7 @@ MMIO Registers Definition
 Usage
 *****

-To support two post-launched VMs communicating via an ``ivshmem`` device,
-add this line as an ``acrn-dm`` boot parameter::
-
-   -s slot,ivshmem,shm_name,shm_size
-
-where
-
-- ``-s slot`` - Specify the virtual PCI slot number
-
-- ``ivshmem`` - Virtual PCI device name
-
-- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the
-  same ``shm_name`` share a shared memory region.
-
-- ``shm_size`` - Specify a shared memory size. The two communicating
-  VMs must define the same size.
-
-.. note:: This device can be used with Real-Time VM (RTVM) as well.
-
-Inter-VM Communication Example
-******************************
-
-The following example uses inter-vm communication between two Linux-based
-post-launched VMs (VM1 and VM2).
-
-.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_
-
-1. Add a new virtual PCI device for both VMs: the device type is
-   ``ivshmem``, shared memory name is ``test``, and shared memory size is
-   4096 bytes. Both VMs must have the same shared memory name and size:
-
-   - VM1 Launch Script Sample
-
-     .. code-block:: none
-        :emphasize-lines: 7
-
-        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-          -s 2,pci-gvt -G "$2" \
-          -s 5,virtio-console,@stdio:stdio_port \
-          -s 6,virtio-hyper_dmabuf \
-          -s 3,virtio-blk,/home/acrn/uos1.img \
-          -s 4,virtio-net,tap0 \
-          -s 6,ivshmem,test,4096 \
-          -s 7,virtio-rnd \
-          --ovmf /usr/share/acrn/bios/OVMF.fd \
-          $vm_name
-
-   - VM2 Launch Script Sample
-
-     .. code-block:: none
-        :emphasize-lines: 5
-
-        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-          -s 2,pci-gvt -G "$2" \
-          -s 3,virtio-blk,/home/acrn/uos2.img \
-          -s 4,virtio-net,tap0 \
-          -s 5,ivshmem,test,4096 \
-          --ovmf /usr/share/acrn/bios/OVMF.fd \
-          $vm_name
-
-2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.
-
-   - For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
-   - For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
-
-3. Use these commands to probe the device::
-
-     $ sudo modprobe uio
-     $ sudo modprobe uio_pci_generic
-     $ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id
-
-4. Finally, a user application can get the shared memory base address from
-   the ``ivshmem`` device BAR resource
-   (``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
-   the ``ivshmem`` device config resource
-   (``/sys/class/uio/uioX/device/config``).
-
-   The ``X`` in ``uioX`` above, is a number that can be retrieved using the
-   ``ls`` command:
-
-   - For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
-   - For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
+For usage information, see :ref:`enable_ivshmem`
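For quick reference, the dm-land flavor summarized above is requested with an ``acrn-dm`` parameter of the form shown below (the slot number, the ``test`` shared-memory name, and the 4096-byte size are example values; both communicating post-launched VMs must use the same name and size):

.. code-block:: none

   acrn-dm ... -s 6,ivshmem,test,4096 ...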
 Inter-VM Communication Security hardening (BKMs)
 ************************************************

View File

@@ -86,7 +86,7 @@ I/O ports definition::

 RTC emulation
 =============

-ACRN supports RTC (Real-Time Clock) that can only be accessed through
+ACRN supports RTC (real-time clock) that can only be accessed through
 I/O ports (0x70 and 0x71).

 0x70 is used to access CMOS address register and 0x71 is used to access

View File

@@ -61,7 +61,7 @@ Add the following parameters into the command line::
   controller_name, you can use it as controller_name directly. You can
   also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
   be used to match /dev/XXX, then use XXX as the controller_name. On MRB
-  and NUC platforms, the controller_name are gpiochip0, gpiochip1,
+  and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
   gpiochip2.gpiochip3.

 - **offset|name**: you can use gpio offset or its name to locate one

View File

@@ -35,7 +35,7 @@ actions, and returns. In ACRN, the commands are from User VM
 watchdog driver.

 User VM watchdog workflow
-**************************
+*************************

 When the User VM does a read or write operation on the watchdog device's
 registers or memory space (Port IO or Memory map I/O), it will trap into

View File

@@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
 host PFNs, a malicious guest may use those EPT PTEs to construct an attack.

 A special aspect of L1TF in the context of virtualization is symmetric
-multi threading (SMT), e.g. Intel |reg| Hyper-Threading Technology.
+multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
 Logical processors on the affected physical cores share the L1 Data Cache
 (L1D). This fact could make more variants of L1TF-based attack, e.g.
 a malicious guest running on one logical processor can attack the data which
@@ -88,11 +88,11 @@ Guest -> guest Attack
 =====================

 The possibility of guest -> guest attack varies on specific configuration,
-e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.
+e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.

 If CPU partitioning is enabled (default policy in ACRN), there is
 1:1 mapping between vCPUs and pCPUs i.e. no sharing of pCPU. There
-may be an attack possibility when Hyper-Threading is on, where
+may be an attack possibility when Hyper-threading is on, where
 logical processors of same physical core may be allocated to two
 different guests. Then one guest may be able to attack the other guest
 on sibling thread due to shared L1D.
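Whether this exposure applies to a given setup can be checked quickly from the Service VM (an illustrative check on a recent Linux kernel; both entries are standard sysfs files):

.. code-block:: none

   $ cat /sys/devices/system/cpu/smt/active
   $ cat /sys/devices/system/cpu/vulnerabilities/l1tf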
@@ -221,7 +221,7 @@ This mitigation is always enabled.
 Core-based scheduling
 =====================

-If Hyper-Threading is enabled, it's important to avoid running
+If Hyper-threading is enabled, it's important to avoid running
 sensitive context (if containing security data which a given VM
 has no permission to access) on the same physical core that runs
 said VM. It requires scheduler enhancement to enable core-based
@@ -265,9 +265,9 @@ requirements:
 - Doing 5) is not feasible, or
 - CPU sharing is enabled (in the future)

-If Hyper-Threading is enabled, there is no available mitigation
+If Hyper-threading is enabled, there is no available mitigation
 option before core scheduling is planned. User should understand
-the security implication and only turn on Hyper-Threading
+the security implication and only turn on Hyper-threading
 when the potential risk is acceptable to their usage.

 Mitigation Status

View File

@@ -566,7 +566,7 @@ The following table shows some use cases of module level configuration design:
   - This module is used to virtualize part of LAPIC functionalities.
     It can be done via APICv or software emulation depending on CPU
     capabilities.
-    For example, KBL NUC doesn't support virtual-interrupt delivery, while
+    For example, KBL Intel NUC doesn't support virtual-interrupt delivery, while
     other platforms support it.
   - If a function pointer is used, the prerequisite is
     "hv_operation_mode == OPERATIONAL".

View File

@@ -31,8 +31,8 @@ details:
 * :option:`CONFIG_UOS_RAM_SIZE`
 * :option:`CONFIG_HV_RAM_SIZE`

-For example, if the NUC's physical memory size is 32G, you may follow these steps
-to make the new uefi ACRN hypervisor, and then deploy it onto the NUC board to boot
+For example, if the Intel NUC's physical memory size is 32G, you may follow these steps
+to make the new UEFI ACRN hypervisor, and then deploy it onto the Intel NUC to boot
 the ACRN Service VM with the 32G memory size.

 #. Use ``make menuconfig`` to change the ``RAM_SIZE``::

View File

@@ -54,7 +54,7 @@ distribution.
 .. note::
    ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
-   for configuring hypervisor options and using python's ``kconfiglib``
+   for configuring hypervisor options and using Python's ``kconfiglib``
    library.

 Install the necessary tools for the following systems:
@@ -79,8 +79,17 @@ Install the necessary tools for the following systems:
         libblkid-dev \
         e2fslibs-dev \
         pkg-config \
-        libnuma-dev
+        libnuma-dev \
+        liblz4-tool \
+        flex \
+        bison

      $ sudo pip3 install kconfiglib
+
+     $ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
+     $ tar zxvf acpica-unix-20191018.tar.gz
+     $ cd acpica-unix-20191018
+     $ make clean && make iasl
+     $ sudo cp ./generate/unix/bin/iasl /usr/sbin/

 .. note::
    ACRN requires ``gcc`` version 7.3.* (or higher) and ``binutils`` version
@@ -274,7 +283,4 @@ of the acrn-hypervisor directory):
    from XML files. If the ``TARGET_DIR`` is not specified, the original
    configuration files of acrn-hypervisor would be overridden.

-In the 2.1 release, there is a known issue (:acrn-issue:`5157`) that
-``TARGET_DIR=xxx`` does not work.
-
 Follow the same instructions to boot and test the images you created from your build.

View File

@@ -7,9 +7,9 @@ Verified version
 ****************

 - Ubuntu version: **18.04**
-- GCC version: **9.0**
-- ACRN-hypervisor branch: **release_2.0 (acrn-2020w23.6-180000p)**
-- ACRN-Kernel (Service VM kernel): **release_2.0 (5.4.43-PKT-200203T060100Z)**
+- GCC version: **7.4**
+- ACRN-hypervisor branch: **release_2.2 (acrn-2020w40.1-180000p)**
+- ACRN-Kernel (Service VM kernel): **release_2.2 (5.4.43-PKT-200203T060100Z)**
 - RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
 - HW: Maxtang Intel WHL-U i7-8665U (`AX8665U-A2 <http://www.maxtangpc.com/fanlessembeddedcomputers/140.html>`_)
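To confirm the build host matches the verified toolchain listed above (a quick, illustrative check; a newer GCC may also work):

.. code-block:: none

   $ gcc --version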
@@ -34,7 +34,7 @@ Hardware Connection
 Connect the WHL Maxtang with the appropriate external devices.

 #. Connect the WHL Maxtang board to a monitor via an HDMI cable.
-#. Connect the mouse, keyboard, ethernet cable, and power supply cable to
+#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
    the WHL Maxtang board.
 #. Insert the Ubuntu 18.04 USB boot disk into the USB port.
@@ -55,7 +55,7 @@ Install Ubuntu on the SATA disk
 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
    device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
-   label depends on the brand/make of the USB stick.
+   label depends on the brand/make of the USB drive.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.
@@ -72,7 +72,7 @@ Install Ubuntu on the SATA disk
 #. Complete the Ubuntu installation on ``/dev/sda``.

 This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_)
-to turn it into a Real-Time User VM (RTVM).
+to turn it into a real-time User VM (RTVM).

 Install the Ubuntu Service VM on the NVMe disk
 ==============================================
@@ -87,7 +87,7 @@ Install Ubuntu on the NVMe disk
 #. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
 #. Power on the machine, then press F11 to select the USB disk as the boot
    device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
-   label depends on the brand/make of the USB stick.
+   label depends on the brand/make of the USB drive.
 #. Install the Ubuntu OS.
 #. Select **Something else** to create the partition.
@@ -103,7 +103,7 @@ Install Ubuntu on the NVMe disk
 #. Complete the Ubuntu installation and reboot the system.

-.. note:: Set **acrn** as the username for the Ubuntu Service VM.
+.. note:: Set ``acrn`` as the username for the Ubuntu Service VM.

 Build and Install ACRN on Ubuntu
@@ -155,6 +155,28 @@ Build the ACRN Hypervisor on Ubuntu

      $ sudo pip3 install kconfiglib

+#. Starting with the ACRN v2.2 release, we use the ``iasl`` tool to
+   compile an offline ACPI binary for pre-launched VMs while building ACRN,
+   so we need to install the ``iasl`` tool in the ACRN build environment.
+   Follow these steps to install ``iasl`` (and its dependencies) and
+   then update the ``iasl`` binary with a newer version not available
+   in Ubuntu 18.04:
+
+   .. code-block:: none
+
+      $ sudo -E apt-get install iasl
+      $ cd /home/acrn/work
+      $ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
+      $ tar zxvf acpica-unix-20191018.tar.gz
+      $ cd acpica-unix-20191018
+      $ make clean && make iasl
+      $ sudo cp ./generate/unix/bin/iasl /usr/sbin/
+
+   .. note:: While there are newer versions of software available from
+      the `ACPICA downloads site <https://acpica.org/downloads>`_, this
+      20191018 version has been verified to work.
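Once copied into place, the installed compiler can be double-checked (an illustrative verification; ``iasl -v`` is assumed to print the ACPICA version banner, which should report 20191018 for the source used above):

.. code-block:: none

   $ which iasl
   $ iasl -v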
 #. Get the ACRN source code:

    .. code-block:: none
@@ -163,30 +185,20 @@ Build the ACRN Hypervisor on Ubuntu
       $ git clone https://github.com/projectacrn/acrn-hypervisor
       $ cd acrn-hypervisor

-#. Switch to the v2.0 version:
+#. Switch to the v2.2 version:

    .. code-block:: none

-      $ git checkout -b v2.0 remotes/origin/release_2.0
+      $ git checkout -b v2.2 remotes/origin/release_2.2

 #. Build ACRN:

    .. code-block:: none

-      $ make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0
+      $ make all BOARD_FILE=misc/vm-configs/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/vm-configs/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0
       $ sudo make install
       $ sudo cp build/hypervisor/acrn.bin /boot/acrn/

-Enable network sharing for the User VM
-======================================
-
-In the Ubuntu Service VM, enable network sharing for the User VM:
-
-.. code-block:: none
-
-   $ sudo systemctl enable systemd-networkd
-   $ sudo systemctl start systemd-networkd
-
 Build and install the ACRN kernel
 =================================
@@ -202,7 +214,7 @@ Build and install the ACRN kernel
    .. code-block:: none

-      $ git checkout -b v2.0 remotes/origin/release_2.0
+      $ git checkout -b v2.2 remotes/origin/release_2.2
       $ cp kernel_config_uefi_sos .config
       $ make olddefconfig
       $ make all
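If the build completes cleanly, the resulting Service VM kernel image can be sanity-checked before installing it (an illustrative step; the path is the standard output location of an x86 kernel build and is not specific to ACRN):

.. code-block:: none

   $ ls -lh arch/x86/boot/bzImage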
@@ -256,6 +268,7 @@ Update Grub for the Ubuntu Service VM
       GRUB_DEFAULT=ubuntu-service-vm
       #GRUB_TIMEOUT_STYLE=hidden
       GRUB_TIMEOUT=5
+      GRUB_CMDLINE_LINUX="text"

 #. Update Grub on your system:
@@ -263,6 +276,17 @@ Update Grub for the Ubuntu Service VM
       $ sudo update-grub

+Enable network sharing for the User VM
+======================================
+
+In the Ubuntu Service VM, enable network sharing for the User VM:
+
+.. code-block:: none
+
+   $ sudo systemctl enable systemd-networkd
+   $ sudo systemctl start systemd-networkd
+
 Reboot the system
 =================
@@ -287,7 +311,7 @@ BIOS settings of GVT-d for WaaG
 -------------------------------

 .. note::
-   Skip this step if you are using a Kaby Lake (KBL) NUC.
+   Skip this step if you are using a Kaby Lake (KBL) Intel NUC.

 Go to **Chipset** -> **System Agent (SA) Configuration** -> **Graphics
 Configuration** and make the following settings:
@@ -313,9 +337,13 @@ The User VM will be launched by OVMF, so copy it to the specific folder:
 Install IASL in Ubuntu for User VM launch
 -----------------------------------------

-ACRN uses ``iasl`` to parse **User VM ACPI** information. The original ``iasl``
-in Ubuntu 18.04 is too old to match with ``acrn-dm``; update it using the
-following steps:
+Starting with the ACRN v2.2 release, we use the ``iasl`` tool to
+compile an offline ACPI binary for pre-launched VMs while building ACRN,
+so we need to install the ``iasl`` tool in the ACRN build environment.
+Follow these steps to install ``iasl`` (and its dependencies) and
+then update the ``iasl`` binary with a newer version not available
+in Ubuntu 18.04:

 .. code-block:: none
@@ -327,6 +355,11 @@ following steps:
    $ make clean && make iasl
    $ sudo cp ./generate/unix/bin/iasl /usr/sbin/

+.. note:: While there are newer versions of software available from
+   the `ACPICA downloads site <https://acpica.org/downloads>`_, this
+   20191018 version has been verified to work.
+
 Build and Install the RT kernel for the Ubuntu User VM
 ------------------------------------------------------
@@ -441,7 +474,7 @@ Recommended BIOS settings for RTVM
 .. csv-table::
    :widths: 15, 30, 10

-   "Hyper-Threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
+   "Hyper-threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
    "Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
    "Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
    "Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
@@ -458,7 +491,7 @@ Recommended BIOS settings for RTVM
    "Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
    "DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
    "Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
-   "USB Periodic Smi", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
+   "USB Periodic SMI", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
    "ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
    "Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"
@@ -522,13 +555,13 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
    # Move all rcu tasks to core 0.
    for i in `pgrep rcu`; do taskset -pc 0 $i; done

-   # Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
+   # Change real-time attribute of all rcu tasks to SCHED_OTHER and priority 0
    for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done

-   # Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
+   # Change real-time attribute of all tasks on core 1 to SCHED_OTHER and priority 0
    for i in `pgrep /1`; do chrt -v -o -p 0 $i; done

-   # Change realtime attribute of all tasks to SCHED_OTHER and priority 0
+   # Change real-time attribute of all tasks to SCHED_OTHER and priority 0
    for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done

    echo disabling timer migration
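One way to spot-check the result of this housekeeping script (a hedged, illustrative command; ``psr`` only reports the CPU each task last ran on, not its affinity mask):

.. code-block:: none

   # list tasks whose most recent CPU is not core 0
   $ ps -eo pid,psr,comm --no-headers | awk '$2 != 0'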
@@ -668,7 +701,8 @@ Passthrough a hard disk to RTVM
          --ovmf /usr/share/acrn/bios/OVMF.fd \
          hard_rtvm

-#. Upon deployment completion, launch the RTVM directly onto your WHL NUC:
+#. Upon deployment completion, launch the RTVM directly onto your WHL
+   Intel NUC:

    .. code-block:: none

View File

@@ -6,12 +6,12 @@ What is ACRN
 Introduction to Project ACRN
 ****************************

-ACRN |trade| is a, flexible, lightweight reference hypervisor, built with
+ACRN |trade| is a flexible, lightweight reference hypervisor, built with
 real-time and safety-criticality in mind, and optimized to streamline
 embedded development through an open source platform. ACRN defines a
 device hypervisor reference stack and an architecture for running
 multiple software subsystems, managed securely, on a consolidated system
-by means of a virtual machine manager (VMM). It also defines a reference
+using a virtual machine manager (VMM). It also defines a reference
 framework implementation for virtual device emulation, called the "ACRN
 Device Model".
@@ -69,7 +69,7 @@ through the Device Model. Currently, the service VM is based on Linux,
 but it can also use other operating systems as long as the ACRN Device
 Model is ported into it. A user VM can be Ubuntu*, Android*,
 Windows* or VxWorks*. There is one special user VM, called a
-post-launched Real-Time VM (RTVM), designed to run a hard real-time OS,
+post-launched real-time VM (RTVM), designed to run a hard real-time OS,
 such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
 can be used for soft programmable logic controller (PLC), inter-process
 communication (IPC), or Robotics applications.
@@ -87,7 +87,7 @@ platform to run both safety-critical applications and non-safety
 applications, together with security functions that safeguard the
 system.

-There are a number of pre-defined scenarios included in ACRN's source code. They
+There are a number of predefined scenarios included in ACRN's source code. They
 all build upon the three fundamental modes of operation that have been explained
 above, i.e. the *logical partitioning*, *sharing*, and *hybrid* modes. They
 further specify the number of VMs that can be run, their attributes and the
@@ -130,7 +130,7 @@ In total, up to 7 post-launched User VMs can be started:
 - 5 regular User VMs,
 - One `Kata Containers <https://katacontainers.io>`_ User VM (see
   :ref:`run-kata-containers` for more details), and
-- One Real-Time VM (RTVM).
+- One real-time VM (RTVM).

 In this example, one post-launched User VM provides Human Machine Interface
 (HMI) capability, another provides Artificial Intelligence (AI) capability, some
@@ -157,15 +157,15 @@ Industrial usage scenario:
   with tools such as Kubernetes*.
 - The HMI Application OS can be Windows* or Linux*. Windows is dominant
   in Industrial HMI environments.
-- ACRN can support a soft Real-time OS such as preempt-rt Linux for
-  soft-PLC control, or a hard Real-time OS that offers less jitter.
+- ACRN can support a soft real-time OS such as preempt-rt Linux for
+  soft-PLC control, or a hard real-time OS that offers less jitter.

 Automotive Application Scenarios
 ================================

 As shown in :numref:`V2-SDC-scenario`, the ACRN hypervisor can be used
-for building Automotive Software Defined Cockpit (SDC) and In-Vehicle
-Experience (IVE) solutions.
+for building Automotive Software Defined Cockpit (SDC) and in-vehicle
+experience (IVE) solutions.

 .. figure:: images/ACRN-V2-SDC-scenario.png
    :width: 600px
@@ -177,12 +177,12 @@ Experience (IVE) solutions.
 As a reference implementation, ACRN provides the basis for embedded
 hypervisor vendors to build solutions with a reference I/O mediation
 solution. In this scenario, an automotive SDC system consists of the
-Instrument Cluster (IC) system running in the Service VM and the In-Vehicle
-Infotainment (IVI) system is running the post-launched User VM. Additionally,
+instrument cluster (IC) system running in the Service VM and the in-vehicle
+infotainment (IVI) system is running the post-launched User VM. Additionally,
 one could modify the SDC scenario to add more post-launched User VMs that can
-host Rear Seat Entertainment (RSE) systems (not shown on the picture).
+host rear seat entertainment (RSE) systems (not shown on the picture).

-An **Instrument Cluster (IC)** system is used to show the driver operational
+An **instrument cluster (IC)** system is used to show the driver operational
 information about the vehicle, such as:

 - the speed, fuel level, trip mileage, and other driving information of
@@ -191,14 +191,14 @@ information about the vehicle, such as:
   fuel or tire pressure;
 - showing rear-view and surround-view cameras for parking assistance.

-An **In-Vehicle Infotainment (IVI)** system's capabilities can include:
+An **in-vehicle infotainment (IVI)** system's capabilities can include:

 - navigation systems, radios, and other entertainment systems;
 - connection to mobile devices for phone calls, music, and applications
   via voice recognition;
 - control interaction by gesture recognition or touch.

-A **Rear Seat Entertainment (RSE)** system could run:
+A **rear seat entertainment (RSE)** system could run:

 - entertainment system;
 - virtual office;
@@ -221,7 +221,7 @@ A block diagram of ACRN's SDC usage scenario is shown in
   capabilities.
 - Resources are partitioned to ensure safety-critical and
   non-safety-critical domains are able to coexist on one platform.
-- Rich I/O mediators allows sharing of various I/O devices across VMs,
+- Rich I/O mediators allow sharing of various I/O devices across VMs,
   delivering a comprehensive user experience.
 - Multiple operating systems are supported by one SoC through efficient
   virtualization.
@@ -229,15 +229,15 @@ A block diagram of ACRN's SDC usage scenario is shown in
 Best Known Configurations
 *************************

-The ACRN Github codebase defines five best known configurations (BKC)
+The ACRN GitHub codebase defines five best known configurations (BKC)
 targeting SDC and Industry usage scenarios. Developers can start with
-one of these pre-defined configurations and customize it to their own
+one of these predefined configurations and customize it to their own
 application scenario needs.

 .. list-table:: Scenario-based Best Known Configurations
    :header-rows: 1

-   * - Pre-defined BKC
+   * - Predefined BKC
      - Usage Scenario
      - VM0
      - VM1
@@ -256,7 +256,7 @@ application scenario needs.
      - Service VM
      - Up to 5 Post-launched VMs
      - One Kata Containers VM
-     - Post-launched RTVM (Soft or Hard realtime)
+     - Post-launched RTVM (Soft or Hard real-time)

    * - Hybrid Usage Config
      - Hybrid
@@ -265,9 +265,9 @@ application scenario needs.
      - Post-launched VM
      -

-   * - Hybrid Real-Time Usage Config
+   * - Hybrid real-time Usage Config
      - Hybrid RT
-     - Pre-launched VM (Real-Time VM)
+     - Pre-launched VM (real-time VM)
      - Service VM
      - Post-launched VM
      -
@@ -284,8 +284,8 @@ Here are block diagrams for each of these four scenarios.
 SDC scenario
 ============

-In this SDC scenario, an Instrument Cluster (IC) system runs with the
-Service VM and an In-Vehicle Infotainment (IVI) system runs in a user
+In this SDC scenario, an instrument cluster (IC) system runs with the
+Service VM and an in-vehicle infotainment (IVI) system runs in a user
 VM.

 .. figure:: images/ACRN-V2-SDC-scenario.png
@@ -300,10 +300,10 @@ Industry scenario

 In this Industry scenario, the Service VM provides device sharing capability for
 a Windows-based HMI User VM. One post-launched User VM can run a Kata Container
-application. Another User VM supports either hard or soft Real-time OS
+application. Another User VM supports either hard or soft real-time OS
 applications. Up to five additional post-launched User VMs support functions
-such as Human Machine Interface (HMI), Artificial Intelligence (AI), Computer
-Vision, etc.
+such as human/machine interface (HMI), artificial intelligence (AI), computer
+vision, etc.

 .. figure:: images/ACRN-Industry.png
    :width: 600px
@@ -326,10 +326,10 @@ non-real-time tasks.
    Hybrid scenario

-Hybrid Real-Time (RT) scenario
+Hybrid real-time (RT) scenario
 ==============================

-In this Hybrid Real-Time (RT) scenario, a pre-launched RTVM is started by the
+In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
 hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
 non-real-time tasks.
@@ -401,7 +401,7 @@ The ACRN hypervisor can be booted from a third-party bootloader
 directly. A popular bootloader is `grub`_ and is
 also widely used by Linux distributions.

-:ref:`using_grub` has a introduction on how to boot ACRN hypervisor with GRUB.
+:ref:`using_grub` has an introduction on how to boot ACRN hypervisor with GRUB.

 In :numref:`boot-flow-2`, we show the boot sequence:
@@ -425,8 +425,8 @@ In this boot mode, the boot options of pre-launched VM and service VM are define
 in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config``
 in the source code ``misc/vm_configs/$(SCENARIO)/vm_configurations.c`` by default.
 Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
-details. The boot options of post-launched VM is not covered by hypervisor
-source code or GRUB menu, it is defined in guest image file or specified by
+details. The boot options of a post-launched VM are not covered by hypervisor
+source code or a GRUB menu; they are defined in a guest image file or specified by
 launch scripts.

 .. note::
@@ -458,11 +458,11 @@ all types of Virtual Machines (VMs) represented:
 - Pre-launched Service VM
 - Post-launched User VM
 - Kata Container VM (post-launched)
-- Real-Time VM (RTVM)
+- real-time VM (RTVM)

 The Service VM owns most of the devices including the platform devices, and
 provides I/O mediation. The notable exceptions are the devices assigned to the
-pre-launched User VM. Some of the PCIe devices may be passed through
+pre-launched User VM. Some PCIe devices may be passed through
 to the post-launched User OSes via the VM configuration. The Service VM runs
 hypervisor-specific applications together, such as the ACRN device model, and
 ACRN VM manager.
@@ -500,10 +500,10 @@ usually not used by commercial OSes).
 As shown in :numref:`VMX-brief`, VMM mode and guest mode are switched
 through VM Exit and VM Entry. When the bootloader hands off control to
 the ACRN hypervisor, the processor hasn't enabled VMX operation yet. The
-ACRN hypervisor needs to enable VMX operation thru a VMXON instruction
+ACRN hypervisor needs to enable VMX operation through a VMXON instruction
 first. Initially, the processor stays in VMM mode when the VMX operation
-is enabled. It enters guest mode thru a VM resume instruction (or first
-time VM launch), and returns back to VMM mode thru a VM exit event. VM
+is enabled. It enters guest mode through a VM resume instruction (or
+first-time VM launch), and returns to VMM mode through a VM exit event. VM
 exit occurs in response to certain instructions and events.

 The behavior of processor execution in guest mode is controlled by a
@@ -522,7 +522,7 @@ reason (for example if a guest memory page is not mapped yet) and resume
 the guest to re-execute the instruction.

 Note that the address space used in VMM mode is different from that in
-guest mode. The guest mode and VMM mode use different memory mapping
+guest mode. The guest mode and VMM mode use different memory-mapping
 tables, and therefore the ACRN hypervisor is protected from guest
 access. The ACRN hypervisor uses EPT to map the guest address, using the
 guest page table to map from guest linear address to guest physical
@@ -537,7 +537,7 @@ used to give VM applications (and OSes) access to these shared devices.
 Traditionally there are three architectural approaches to device
 emulation:

-* The first architecture is **device emulation within the hypervisor** which
+* The first architecture is **device emulation within the hypervisor**, which
   is a common method implemented within the VMware\* workstation product
   (an operating system-based hypervisor). In this method, the hypervisor
   includes emulations of common devices that the various guest operating
@@ -548,7 +548,7 @@ emulation:
   name implies, rather than the device emulation being embedded within
   the hypervisor, it is instead implemented in a separate user space
   application. QEMU, for example, provides this kind of device emulation
-  also used by a large number of independent hypervisors. This model is
+  also used by many independent hypervisors. This model is
   advantageous, because the device emulation is independent of the
   hypervisor and can therefore be shared for other hypervisors. It also
   permits arbitrary device emulation without having to burden the
@@ -557,11 +557,11 @@ emulation:
 * The third variation on hypervisor-based device emulation is
   **paravirtualized (PV) drivers**. In this model introduced by the `XEN
-  project`_ the hypervisor includes the physical drivers, and each guest
+  Project`_, the hypervisor includes the physical drivers, and each guest
   operating system includes a hypervisor-aware driver that works in
   concert with the hypervisor drivers.

-.. _XEN project:
+.. _XEN Project:
    https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum

 In the device emulation models discussed above, there's a price to pay
@@ -600,14 +600,14 @@ ACRN Device model incorporates these three aspects:
 **VHM**:
    The Virtio and Hypervisor Service Module is a kernel module in the
    Service VM acting as a middle layer to support the device model. The VHM
-   and its client handling flow is described below:
+   client handling flow is described below:

    #. ACRN hypervisor IOREQ is forwarded to the VHM by an upcall
       notification to the Service VM.
    #. VHM will mark the IOREQ as "in process" so that the same IOREQ will
       not pick up again. The IOREQ will be sent to the client for handling.
      Meanwhile, the VHM is ready for another IOREQ.
-   #. IOREQ clients are either an Service VM Userland application or a Service VM
+   #. IOREQ clients are either a Service VM Userland application or a Service VM
      Kernel space module. Once the IOREQ is processed and completed, the
      Client will issue an IOCTL call to the VHM to notify an IOREQ state
      change. The VHM then checks and hypercalls to ACRN hypervisor
@@ -646,7 +646,7 @@ Finally, there may be specialized PCI devices that only one guest domain
 uses, so they should be passed through to the guest. Individual USB
 ports could be isolated to a given domain too, or a serial port (which
 is itself not shareable) could be isolated to a particular guest. In
-ACRN hypervisor, we support USB controller passthrough only and we
+ACRN hypervisor, we support USB controller passthrough only, and we
 don't support passthrough for a legacy serial port, (for example
 0x3f8).
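As an illustration of PCI device passthrough for a post-launched VM (a hedged sketch; the slot number and the ``0/14/0`` bus/device/function value are example placeholders, and the device must first be unbound from its Service VM driver):

.. code-block:: none

   # pass the host PCI device at BDF 00:14.0 through to the guest in virtual slot 7
   acrn-dm ... -s 7,passthru,0/14/0 ...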
@ -701,8 +701,8 @@ ACRN I/O mediator
Following along with the numbered items in :numref:`io-emulation-path`: Following along with the numbered items in :numref:`io-emulation-path`:
1. When a guest execute an I/O instruction (PIO or MMIO), a VM exit happens. 1. When a guest executes an I/O instruction (PIO or MMIO), a VM exit happens.
ACRN hypervisor takes control, and analyzes the the VM ACRN hypervisor takes control, and analyzes the VM
exit reason, which is a VMX_EXIT_REASON_IO_INSTRUCTION for PIO access. exit reason, which is a VMX_EXIT_REASON_IO_INSTRUCTION for PIO access.
2. ACRN hypervisor fetches and analyzes the guest instruction, and 2. ACRN hypervisor fetches and analyzes the guest instruction, and
notices it is a PIO instruction (``in AL, 20h`` in this example), and put notices it is a PIO instruction (``in AL, 20h`` in this example), and put
@ -717,7 +717,7 @@ Following along with the numbered items in :numref:`io-emulation-path`:
module is activated to execute its processing APIs. Otherwise, the VHM module is activated to execute its processing APIs. Otherwise, the VHM
module leaves the IO request in the shared page and wakes up the module leaves the IO request in the shared page and wakes up the
device model thread to process. device model thread to process.
5. The ACRN device model follow the same mechanism as the VHM. The I/O 5. The ACRN device model follows the same mechanism as the VHM. The I/O
processing thread of device model queries the IO request ring to get the processing thread of device model queries the IO request ring to get the
PIO instruction details and checks to see if any (guest) device emulation PIO instruction details and checks to see if any (guest) device emulation
module claims ownership of the IO port: if a module claimed it, module claims ownership of the IO port: if a module claimed it,
@ -726,14 +726,14 @@ Following along with the numbered items in :numref:`io-emulation-path`:
in this example), (say uDev1 here), uDev1 puts the result into the in this example), (say uDev1 here), uDev1 puts the result into the
shared page (in register AL in this example). shared page (in register AL in this example).
7. ACRN device model then returns control to ACRN hypervisor to indicate the 7. ACRN device model then returns control to ACRN hypervisor to indicate the
completion of an IO instruction emulation, typically thru VHM/hypercall. completion of an IO instruction emulation, typically through VHM/hypercall.
8. The ACRN hypervisor then knows IO emulation is complete, and copies 8. The ACRN hypervisor then knows IO emulation is complete, and copies
the result to the guest register context. the result to the guest register context.
9. The ACRN hypervisor finally advances the guest IP to 9. The ACRN hypervisor finally advances the guest IP to
indicate completion of instruction execution, and resumes the guest. indicate completion of instruction execution, and resumes the guest.
The MMIO path is very similar, except the VM exit reason is different. MMIO
access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in
the hypervisor.
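To make the dispatch flow above more concrete, here is a much-simplified
C sketch of how a trapped PIO access could be decoded into a request and
either handled in the hypervisor or forwarded to the device model. All
names and data layouts below are hypothetical illustrations, not the
actual ACRN hypervisor, VHM, or device model symbols.

.. code-block:: c

   /* Hypothetical, simplified model of the PIO dispatch flow described
    * above. Names are illustrative only -- they are NOT the real
    * ACRN hypervisor/VHM/device-model symbols. */
   #include <stdbool.h>
   #include <stddef.h>
   #include <stdint.h>
   #include <stdio.h>

   struct pio_request {
       uint16_t port;     /* I/O port, e.g. 0x20 */
       uint8_t  size;     /* access width in bytes */
       bool     is_read;  /* 'in' vs. 'out' */
       uint32_t value;    /* result for reads, data for writes */
   };

   /* A handler returns true when it claims the port (steps 4-6). */
   typedef bool (*pio_handler_fn)(struct pio_request *req);

   /* Stand-in for an emulation module that owns port 0x20. */
   static bool pic_emulation(struct pio_request *req)
   {
       if (req->port != 0x20)
           return false;
       if (req->is_read)
           req->value = 0x0;  /* pretend the register reads as zero */
       return true;
   }

   static pio_handler_fn handlers[] = { pic_emulation };

   /* Steps 8-9: copy the result back to the guest register context,
    * advance the guest IP, and resume; here we only print the outcome. */
   static void complete_pio(const struct pio_request *req)
   {
       printf("complete: port 0x%x -> value 0x%x\n",
              (unsigned)req->port, (unsigned)req->value);
   }

   /* Steps 1-2 produce a decoded request; this routine models steps 3-9. */
   static void handle_pio_exit(struct pio_request *req)
   {
       for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
           if (handlers[i](req)) {
               complete_pio(req);
               return;
           }
       }
       /* No owner found: in ACRN the request would be left in the shared
        * page and the device model thread woken up to process it. */
       printf("forwarded to device model: port 0x%x\n", (unsigned)req->port);
   }

   int main(void)
   {
       /* Models the 'in AL, 20h' example: a 1-byte read of port 0x20. */
       struct pio_request req = { .port = 0x20, .size = 1, .is_read = true };
       handle_pio_exit(&req);
       return 0;
   }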
Virtio framework architecture
@ -750,7 +750,7 @@ should have a straightforward, efficient, standard and extensible
mechanism for virtual devices, rather than boutique per-environment or
per-OS mechanisms.
Virtio provides a common frontend driver framework that not only
standardizes device interfaces, but also increases code reuse across
different virtualization platforms.
@ -786,16 +786,16 @@ here:
and BE drivers to interact with each other. For example, FE driver could
read/write registers of the device, and the virtual device could
interrupt FE driver, on behalf of the BE driver, in case something is
happening. Currently, Virtio supports PCI/PCIe bus and MMIO bus. In
the ACRN project, only PCI/PCIe bus is supported, and all the Virtio devices
share the same vendor ID 0x1AF4.
**Efficient**: batching operation is encouraged
Batching operation and deferred notification are important to achieve
high-performance I/O, since notification between FE and BE driver
usually involves an expensive exit of the guest. Therefore, batching
operation and notification suppression are highly encouraged if
possible. This will give an efficient implementation for performance
critical devices.
**Standard: virtqueue**
@ -811,9 +811,10 @@ here:
The virtqueues are created in guest physical memory by the FE drivers.
The BE drivers only need to parse the virtqueue structures to obtain
the requests and get the requests done. Virtqueue organization is
specific to the User OS. In the implementation of Virtio in Linux, the
virtqueue is implemented as a ring buffer structure called
``vring``.
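For reference, the split virtqueue (``vring``) layout defined by the
Virtio 0.9/1.0 specifications looks roughly like the C structures below.
This is a simplified sketch for illustration only; the authoritative
definitions live in the Virtio specification and in the Linux
``virtio_ring.h`` header.

.. code-block:: c

   #include <stdint.h>

   /* One buffer descriptor; descriptors can be chained via 'next'. */
   struct vring_desc {
       uint64_t addr;   /* guest-physical address of the buffer */
       uint32_t len;    /* buffer length in bytes */
       uint16_t flags;  /* e.g. NEXT (chained), WRITE (device-writable) */
       uint16_t next;   /* index of the next descriptor in the chain */
   };

   /* Ring of descriptor indexes the FE driver makes available. */
   struct vring_avail {
       uint16_t flags;
       uint16_t idx;    /* where the driver places the next entry */
       uint16_t ring[]; /* queue-size entries */
   };

   /* Entry the BE driver uses to return a completed descriptor chain. */
   struct vring_used_elem {
       uint32_t id;     /* head index of the completed chain */
       uint32_t len;    /* number of bytes written by the device */
   };

   struct vring_used {
       uint16_t flags;
       uint16_t idx;    /* where the device places the next entry */
       struct vring_used_elem ring[];
   };

The FE driver fills descriptors and publishes their indexes through the
available ring; the BE driver consumes them and reports completions
through the used ring.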
In ACRN, the virtqueue APIs can be leveraged
directly so users don't need to worry about the details of the
@ -823,7 +824,7 @@ here:
**Extensible: feature bits**
A simple extensible feature negotiation mechanism exists for each virtual
device and its driver. Each virtual device could claim its
device-specific features while the corresponding driver could respond to
the device with the subset of features the driver understands. The
feature mechanism enables forward and backward compatibility for the
virtual device and driver.
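As a minimal illustration of this negotiation (not the actual Virtio or
ACRN driver code), the accepted feature set is simply the intersection of
what the device offers and what the driver understands; the feature bit
names below are made up for the example.

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical feature bits, for illustration only. */
   #define EXAMPLE_F_EVENT_IDX  (1ULL << 29)
   #define EXAMPLE_F_VERSION_1  (1ULL << 32)

   /* The driver acknowledges only the subset of offered features that
    * it understands; unknown bits are silently dropped. */
   uint64_t negotiate_features(uint64_t device_features,
                               uint64_t driver_supported)
   {
       return device_features & driver_supported;
   }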
@ -839,11 +840,11 @@ space as shown in :numref:`virtio-framework-userland`:
Virtio Framework - User Land
In the Virtio user-land framework, the implementation is compatible with
Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model,
and communicates with the Device Model through the PCIe interface: PIO/MMIO
or MSI/MSIx. VBS-U accesses Virtio APIs through the user space ``vring`` service
API helpers. User space ``vring`` service API helpers access shared ring
through a remote memory map (mmap). VHM maps User VM memory with the help of
ACRN Hypervisor.
.. figure:: images/virtio-framework-kernel.png
@ -856,10 +857,10 @@ ACRN Hypervisor.
VBS-U offloads data plane processing to VBS-K. VBS-U initializes VBS-K
at the right time; for example, the FE driver sets
VIRTIO_CONFIG_S_DRIVER_OK to avoid unnecessary device configuration
changes while running. VBS-K can access shared rings through the VBS-K
virtqueue APIs. VBS-K virtqueue APIs are similar to VBS-U virtqueue
APIs. VBS-K registers as a VHM client to handle a continuous range of
registers.
There may be one or more VHM-clients for each VBS-K, and there can be a
single VHM-client for all VBS-Ks as well. VBS-K notifies FE through VHM
View File
@ -12,7 +12,7 @@ Minimum System Requirements for Installing ACRN
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Hardware | Minimum Requirements | Recommended |
+========================+===================================+=================================================================================+
| Processor | Compatible x86 64-bit processor | 2 core with Intel Hyper-threading Technology enabled in the BIOS or more cores |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| System memory | 4GB RAM | 8GB or more (< 32G) |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
@ -29,13 +29,25 @@ Platforms with multiple PCI segments
ACRN assumes the following conditions are satisfied from the Platform BIOS:
* All the PCI device BARs should be assigned resources, including SR-IOV VF BARs if a device supports them.
* Bridge windows for PCI bridge devices, and the resources for the root bus, should be programmed with values
that enclose resources used by all the downstream devices.
* There should be no resource conflicts among the PCI devices, or between PCI devices and other platform devices.
New Processor Families
**********************
Here are the announced Intel processor architectures that are supported by ACRN v2.2 but don't yet have a recommended platform available:
* `Tiger Lake <https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html#@Embedded>`_
(Q3'2020 Launch Date)
* `Elkhart Lake <https://ark.intel.com/content/www/us/en/ark/products/codename/128825/elkhart-lake.html#@Embedded>`_
(Q1'2021 Launch Date)
Verified Platforms According to ACRN Usage
******************************************
@ -69,7 +81,7 @@ For general instructions setting up ACRN on supported hardware platforms, visit
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
| Platform (Intel x86) | Product/Kit Name | Usage Scenario - BKC Examples |
| | +-----------+-----------+-------------+------------+
| | | SDC with | IU without| IU with | Logical |
| | | 2 VMs | Safety VM | Safety VM | Partition |
@ -127,7 +139,7 @@ Verified Hardware Specifications Detail
| | **Apollo Lake** | | UP2 - N3350 | Processor | - Intel® Celeron™ N3350 (2C2T, up to 2.4 GHz) |
| | | UP2 - N4200 | | - Intel® Pentium™ N4200 (4C4T, up to 2.5 GHz) |
| | | UP2 - x5-E3940 | | - Intel® Atom™ x5-E3940 (4C4T) |
| | | | (up to 1.8GHz)/x7-E3950 (4C4T, up to 2.0GHz) |
| | +------------------------+-----------------------------------------------------------+
| | | Graphics | - 2GB (single channel) LPDDR4 |
| | | | - 4GB/8GB (dual channel) LPDDR4 |
@ -142,14 +154,14 @@ Verified Hardware Specifications Detail
| | **Kaby Lake** | | NUC7i5BNH | Processor | - Intel® Core™ i5-7260U CPU @ 2.20GHz (2C4T) |
| | (Code name: Baby Canyon) | | (Board: NUC7i5BNB) | | |
| | +------------------------+-----------------------------------------------------------+
| | | Graphics | - Intel® Iris® Plus Graphics 640 |
| | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
| | | | - Thunderbolt™ 3 port with support for USB\* 3.1 |
| | | | Gen 2, DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
| | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | | | (up to 9.5 mm thickness) |
@ -159,14 +171,14 @@ Verified Hardware Specifications Detail
| | **Kaby Lake** | | NUC7i7BNH | Processor | - Intel® Core™ i7-7567U CPU @ 3.50GHz (2C4T) |
| | (Code name: Baby Canyon) | | (Board: NUC7i7BNB) | | |
| | +------------------------+-----------------------------------------------------------+
| | | Graphics | - Intel® Iris® Plus Graphics 650 |
| | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
| | | | - Thunderbolt™ 3 port with support for USB\* 3.1 Gen 2, |
| | | | DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
| | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
| | | | (up to 9.5 mm thickness) |
@ -197,7 +209,7 @@ Verified Hardware Specifications Detail
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
| | | | - One M.2 connector for 3G/4G module, supporting |
| | | | LTE Category 6 and above |
| | | | - One M.2 connector for 2242 SSD |
@ -213,7 +225,7 @@ Verified Hardware Specifications Detail
| | +------------------------+-----------------------------------------------------------+
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
| | +------------------------+-----------------------------------------------------------+
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
| | | | - One M.2 connector for 3G/4G module, supporting |
| | | | LTE Category 6 and above |
| | | | - One M.2 connector for 2242 SSD |
View File
@ -6,7 +6,7 @@ ACRN v1.0 (May 2019)
We are pleased to announce the release of ACRN version 1.0, a key
Project ACRN milestone focused on automotive Software-Defined Cockpit
(SDC) use cases and introducing additional architecture enhancements for
more IoT usages, such as Industrial.
This v1.0 release is a production-ready reference solution for SDC
usages that require multiple VMs and rich I/O mediation for device
View File
@ -44,7 +44,7 @@ We have many new `reference documents available <https://projectacrn.github.io>`
* Getting Started Guide for Industry scenario
* :ref:`ACRN Configuration Tool Manual <acrn_configuration_tool>`
* :ref:`Trace and Data Collection for ACRN real-time (RT) Performance Tuning <rt_performance_tuning>`
* Building ACRN in Docker
* :ref:`Running Ubuntu as the User VM <running_ubun_as_user_vm>`
* :ref:`Running Debian as the User VM <running_deb_as_user_vm>`
View File
@ -39,7 +39,7 @@ What's New in v1.6
- The ACRN hypervisor allows an SRIOV-capable PCI device's Virtual Functions (VFs) to be allocated to any VM.
- The ACRN Service VM supports the SRIOV Ethernet device (through the PF driver), and ensures that the SRIOV VF device is able to be assigned (passthrough) to a post-launched VM (launched by ACRN-DM).
* CPU sharing enhancement - Halt/Pause emulation
View File
@ -118,14 +118,14 @@ ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot
loader for the Service VM to launch post-launched VMs such as Windows,
Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported.
Post-launched real-time VM Support
==================================
ACRN supports a post-launched RTVM, which also uses partitioned hardware
resources to ensure adequate real-time performance, as required for
industrial use cases.
Real-time VM Performance Optimizations
======================================
ACRN 2.0 improves RTVM performance with these optimizations:
@ -165,7 +165,7 @@ Large selection of OSs for User VMs
===================================
ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*,
real-time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and
performance.
View File
@ -0,0 +1,145 @@
.. _release_notes_2.2:
ACRN v2.2 (Sep 2020)
####################
We are pleased to announce the release of the Project ACRN
hypervisor version 2.2.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open source platform. Check out the
:ref:`introduction` introduction for more information. All project ACRN
source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can either download this source code as a zip or
tar.gz file (see the `ACRN v2.2 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.2>`_) or
use Git clone and checkout commands::
git clone https://github.com/projectacrn/acrn-hypervisor
cd acrn-hypervisor
git checkout v2.2
The project's online technical documentation is also tagged to
correspond with a specific release: generated v2.2 documents can be
found at https://projectacrn.github.io/2.2/. Documentation for the
latest under-development branch is found at
https://projectacrn.github.io/latest/.
ACRN v2.2 requires Ubuntu 18.04. Follow the instructions in the
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
What's New in v2.2
******************
Elkhart Lake and Tiger Lake processor support.
At `Intel Industrial iSummit 2020
<https://newsroom.intel.com/press-kits/intel-industrial-summit-2020>`_,
Intel announced the latest additions to their
enhanced-for-IoT Edge portfolio: the Intel® Atom® x6000E Series, Intel®
Pentium® and Intel® Celeron® N and J Series (all code named Elkhart Lake),
and 11th Gen Intel® Core™ processors (code named Tiger Lake-UP3). The ACRN
team is pleased to announce that this ACRN v2.2 release already supports
these processors.
* Support for time-deterministic applications with new features, e.g.,
Time Coordinated Computing and Time-Sensitive Networking
* Support for functional safety with new features, e.g., Intel Safety Island
**On Elkhart Lake, ACRN can boot using `Slim Bootloader <https://slimbootloader.github.io/>`_ (an alternative bootloader to UEFI BIOS).**
Shared memory based inter-VM communication (ivshmem) is extended
ivshmem now supports all kinds of VMs including pre-launched VM, Service VM, and
other User VMs. (See :ref:`ivshmem-hld`)
**CPU sharing supports pre-launched VM.**
**RTLinux with the preempt-RT Linux kernel 5.4 is validated both as a pre-launched and post-launched VM.**
**ACRN hypervisor can emulate MSI-X based on physical MSI with multiple vectors.**
Staged removal of deprivileged boot mode support.
ACRN has supported deprivileged boot mode to ease the integration of
Linux distributions such as Clear Linux. Unfortunately, deprivileged boot
mode limits ACRN's scalability and is unsuitable for ACRN's hybrid
hypervisor mode. In ACRN v2.2, deprivileged boot mode is no longer the default
and will be completely removed in ACRN v2.3. We're focusing instead
on using multiboot2 boot (via Grub). Multiboot2 is not supported in
Clear Linux though, so we have chosen Ubuntu (and Yocto Project) as the
preferred Service VM OSs moving forward.
Document updates
****************
New and updated reference documents are available, including:
.. rst-class:: rst-columns
* :ref:`develop_acrn`
* :ref:`asm_coding_guidelines`
* :ref:`c_coding_guidelines`
* :ref:`contribute_guidelines`
* :ref:`hv-cpu-virt`
* :ref:`IOC_virtualization_hld`
* :ref:`hv-startup`
* :ref:`hv-vm-management`
* :ref:`ivshmem-hld`
* :ref:`virtio-i2c`
* :ref:`sw_design_guidelines`
* :ref:`faq`
* :ref:`getting-started-building`
* :ref:`introduction`
* :ref:`acrn_configuration_tool`
* :ref:`enable_ivshmem`
* :ref:`setup_openstack_libvirt`
* :ref:`using_grub`
* :ref:`using_partition_mode_on_nuc`
* :ref:`connect_serial_port`
* :ref:`using_yp`
* :ref:`acrn-dm_parameters`
* :ref:`hv-parameters`
* :ref:`acrnctl`
Because we're dropping deprivileged boot mode support in the next v2.3
release, we're also switching our Service VM of choice away from Clear
Linux. We've begun this transition in the v2.2 documentation and removed
some Clear Linux-specific tutorials. Deleted documents are still
available in the `version-specific v2.1 documentation
<https://projectacrn.github.io/v2.1/>`_.
Fixed Issues Details
********************
- :acrn-issue:`5008` - Slowdown in UOS (Zephyr)
- :acrn-issue:`5033` - SOS decode instruction failed in hybrid mode
- :acrn-issue:`5038` - [WHL][Yocto] SOS occasionally hangs/crashes with a kernel panic
- :acrn-issue:`5048` - iTCO_wdt issue: can't request region for resource
- :acrn-issue:`5102` - Can't access shared memory base address in ivshmem
- :acrn-issue:`5118` - GPT ERROR when write preempt img to SATA on NUC7i5BNB
- :acrn-issue:`5148` - dm: support to provide ACPI SSDT for UOS
- :acrn-issue:`5157` - [build from source] during build HV with XML, "TARGET_DIR=xxx" does not work
- :acrn-issue:`5165` - [WHL][Yocto][YaaG] No UI display when launch Yaag gvt-g with acrn kernel
- :acrn-issue:`5215` - [UPsquared N3350 board] Solution to Bootloader issue
- :acrn-issue:`5233` - Boot Acrn failed on Dell-OptiPlex 5040 with Intel i5-6500T
- :acrn-issue:`5238` - acrn-config: add hybrid_rt scenario xml config for ehl-crb-b
- :acrn-issue:`5240` - passthrough DHRD-ignored device
- :acrn-issue:`5242` - acrn-config: add pse-gpio to vmsix_on_msi devices list
- :acrn-issue:`4691` - hv: add vgpio device model support
- :acrn-issue:`5245` - hv: add INTx mapping for pre-launched VMs
- :acrn-issue:`5426` - hv: add vgpio device model support
- :acrn-issue:`5257` - hv: support PIO access to platform hidden devices
- :acrn-issue:`5278` - [EHL][acrn-configuration-tool]: create a new hybrid_rt based scenario for P2SB MMIO pass-thru use case
- :acrn-issue:`5304` - Cannot cross-compile - Build process assumes build system always hosts the ACRN hypervisor
Known Issues
************
- :acrn-issue:`5150` - [REG][WHL][[Yocto][Passthru] Launch RTVM fails with usb passthru
- :acrn-issue:`5151` - [WHL][VxWorks] Launch VxWorks fails due to no suitable video mode found
- :acrn-issue:`5154` - [TGL][Yocto][PM] 148213_PM_SystemS5 with life_mngr fail
- :acrn-issue:`5368` - [TGL][Yocto][Passthru] Audio does not work on TGL
- :acrn-issue:`5369` - [TGL][qemu] Cannot launch qemu on TGL
- :acrn-issue:`5370` - [TGL][RTVM][PTCM] Launch RTVM failed with mem size smaller than 2G and PTCM enabled
- :acrn-issue:`5371` - [TGL][Industry][Xenomai]Xenomai post launch fail
View File
@ -13,7 +13,7 @@ Introduction
************
ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
is discussed in the following sections.
Hypervisor configuration
========================
@ -52,7 +52,7 @@ to launch post-launched User VMs.
Scenario-based VM configurations are organized as ``*.c/*.h`` files. The
reference scenarios are located in the
``misc/vm_configs/scenarios/$(SCENARIO)/`` folder.
The board-specific configurations for this scenario are stored in the
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder.
User VM launch script samples are located in the
@ -168,6 +168,24 @@ Additional scenario XML elements:
Specify whether to forcibly disable the software workaround for Machine
Check Error on Page Size Change.
``IVSHMEM`` (a child node of ``FEATURE``):
Specify the inter-VM shared memory configuration.
``IVSHMEM_ENABLED`` (a child node of ``FEATURE/IVSHMEM``):
Specify if the inter-VM shared memory feature is enabled.
``IVSHMEM_REGION`` (a child node of ``FEATURE/IVSHMEM``):
Specify a comma-separated list of the inter-VM shared memory region name,
size, and VM IDs that may communicate using this shared region.
* Prefix the region ``name`` with ``hv:/`` (for an hv-land solution).
(See :ref:`ivshmem-hld` for details.)
* Specify the region ``size`` in MB; it must be a power of 2 (e.g., 2, 4, 8, 16)
up to 512.
* Specify all VM IDs that may use this shared memory area,
separated by a ``:``, for example, ``0:2`` (to share this area between
VMs 0 and 2), or ``0:1:2`` (to let VMs 0, 1, and 2 share this area).
``STACK_SIZE`` (a child node of ``MEMORY``):
Specify the size of stacks used by physical cores. Each core uses one stack
for normal operations and another three for specific exceptions.
@ -224,7 +242,7 @@ Additional scenario XML elements:
- ``PRE_STD_VM`` pre-launched Standard VM
- ``SOS_VM`` pre-launched Service VM
- ``POST_STD_VM`` post-launched Standard VM
- ``POST_RT_VM`` post-launched real-time capable VM
- ``KATA_VM`` post-launched Kata Container VM
``name`` (a child node of ``vm``):
@ -239,7 +257,7 @@ Additional scenario XML elements:
- ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs
IO polling to completion
- ``GUEST_FLAG_HIDE_MTRR`` specify whether to hide MTRR from the VM
- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (real-time)
``cpu_affinity``:
List of pCPU: the guest VM is allowed to create vCPU from all or a subset of this list.
@ -271,7 +289,7 @@ Additional scenario XML elements:
exactly match the module tag in the GRUB multiboot cmdline.
``ramdisk_mod`` (a child node of ``os_config``):
The tag for the ramdisk image, which acts as a multiboot module; it
must exactly match the module tag in the GRUB multiboot cmdline.
``bootargs`` (a child node of ``os_config``):
@ -313,6 +331,18 @@ Additional scenario XML elements:
PCI devices list of the VM; it is hard-coded for each scenario so it
is not configurable for now.
``mmio_resources``:
MMIO resources to passthrough.
``TPM2`` (a child node of ``mmio_resources``):
TPM2 device to passthrough.
``p2sb`` (a child node of ``mmio_resources``):
Expose the P2SB (Primary-to-Sideband) bridge to the pre-launched VM.
``pt_intx``:
Forward specific IOAPIC interrupts (with interrupt line remapping) to the pre-launched VM.
``board_private``:
Stores scenario-relevant board configuration.
@ -345,7 +375,7 @@ current scenario has:
``ZEPHYR`` or ``VXWORKS``.
``rtos_type``:
Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them.
``mem_size``:
Specify the User VM memory size in Mbyte.
@ -373,10 +403,17 @@ current scenario has:
``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
Refer to :ref:`usb_virtualization` for details.
``shm_regions``:
List of shared memory regions for inter-VM communication.
``shm_region`` (a child node of ``shm_regions``):
Configure the shared memory regions for the current VM. The input format is
``hv:/<shm name>, <shm size in MB>``. Refer to :ref:`ivshmem-hld` for details.
``passthrough_devices``:
Select the passthrough device from the lspci list. Currently we support:
usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card,
Ethernet, wifi, sata, and nvme.
``network`` (a child node of ``virtio_devices``):
The virtio network device setting.
@ -394,7 +431,7 @@ current scenario has:
.. note::
The ``configurable`` and ``readonly`` attributes are used to mark
whether the item is configurable for users. When ``configurable="0"``
and ``readonly="true"``, the item is not configurable from the web
interface. When ``configurable="0"``, the item does not appear on the
interface.
@ -562,7 +599,7 @@ Instructions
because the app needs to download some JavaScript files.
.. note:: The ACRN configuration app is supported on Chrome, Firefox,
and Microsoft Edge. Do not use Internet Explorer.
The website is shown below:
@ -587,7 +624,7 @@ Instructions
#. Load or create the scenario setting by selecting among the following:
- Choose a scenario from the **Scenario Setting** menu that lists all
user-defined scenarios for the board you selected in the previous step.
- Click **Create a new scenario** from the **Scenario Setting**
@ -607,9 +644,9 @@ Instructions
.. figure:: images/choose_scenario.png
:align: center
Note that you can also use a customized scenario XML by clicking **Import
XML**. The configuration app automatically directs to the new scenario
XML once the import is complete.
#. The configurable items display after one scenario is created/loaded/
selected. Following is an industry scenario:
@ -618,9 +655,9 @@ Instructions
:align: center
- You can edit these items directly in the text boxes, or you can choose
single or even multiple items from the drop-down list.
- Read-only items are marked as gray.
- Hover the mouse pointer over the item to display the description.
@ -642,7 +679,7 @@ Instructions
pop-up model.
.. note::
All customized scenario XMLs will be in user-defined groups, which are
located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.
Before saving the scenario XML, the configuration app validates the
@ -661,8 +698,8 @@ Instructions
If **Source Path** in the pop-up model is edited, the source code is
generated into the edited Source Path relative to ``acrn-hypervisor``;
otherwise, source code is generated into default folders and
overwrites the old ones. The board-related configuration source
code is located at
``misc/vm_configs/boards/[board]/`` and the
scenario-based VM configuration source code is located at
@ -678,11 +715,11 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:
- Click **Load a default launch script** from the **Launch Setting** menu.
- Select one launch setting XML file from the menu.
- Import the local launch setting XML file by clicking **Import XML**.
#. Select one scenario for the current launch setting from the **Select Scenario** drop-down box.
#. Configure the items for the current launch setting.
@ -694,7 +731,7 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:
- Remove a UOS launch script by clicking **Remove this VM** for the
current launch setting.
#. Save the current launch setting to the user-defined XML files by
clicking **Export XML**. The configuration app validates the current
configuration and lists all incorrect configurable items and shows errors.
View File
@ -18,8 +18,8 @@ This setup was tested with the following configuration,
- Platforms Tested: ApolloLake, KabyLake, CoffeeLake
Prerequisites
*************
1. Make sure the platform supports Intel VMX as well as VT-d technologies. On Ubuntu 18.04, this
can be checked by installing the ``cpu-checker`` tool and running ``kvm-ok``. If the output displays **KVM acceleration can be used**,
the platform supports it.
View File
@ -186,7 +186,7 @@ shown in the following example:
Formats:
0x00000005: event id for trace test
%(cpu)d: corresponding CPU index with 'decimal' format
%(event)016x: corresponding event id with 'hex' format
View File
@ -151,7 +151,7 @@ Depending on your Linux version, install the needed tools:
sudo dnf install doxygen python3-pip python3-wheel make graphviz
And for any of these Linux environments, install the remaining Python-based
tools:
.. code-block:: bash
@ -160,7 +160,7 @@ tools:
pip3 install --user -r scripts/requirements.txt
Add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
find expected versions of Python utilities such as ``sphinx-build`` and
``breathe``:
.. code-block:: bash
@ -304,7 +304,7 @@ Sphinx/Breathe, we've added a post-processing filter on the output of
the documentation build process to check for "expected" messages from the
generation process output.
The output from the Sphinx build is processed by the Python script
``scripts/filter-known-issues.py`` together with a set of filter
configuration files in the ``.known-issues/doc`` folder. (This
filtering is done as part of the ``Makefile``.)
View File
@ -0,0 +1,172 @@
.. _enable_ivshmem:
Enable Inter-VM Communication Based on ``ivshmem``
##################################################
You can use inter-VM communication based on the ``ivshmem`` dm-land
solution or hv-land solution, according to the usage scenario needs.
(See :ref:`ivshmem-hld` for a high-level description of these solutions.)
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
ivshmem dm-land usage
*********************
Add this line as an ``acrn-dm`` boot parameter::
-s slot,ivshmem,shm_name,shm_size
where
- ``-s slot`` - Specify the virtual PCI slot number
- ``ivshmem`` - Virtual PCI device name
- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the same
``shm_name`` share a shared memory region. The ``shm_name`` needs to start
with ``dm:/`` prefix. For example, ``dm:/test``
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The size
ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
For example, to set up a shared memory of 2 megabytes, use ``2``
instead of ``shm_size``. The two communicating VMs must define the same size.
.. note:: This device can be used with real-time VM (RTVM) as well.
ivshmem hv-land usage
*********************
The ``ivshmem`` hv-land solution is disabled by default in ACRN. You
enable it using the :ref:`acrn_configuration_tool` with these steps:
- Enable ``ivshmem`` hv-land in ACRN XML configuration file. For example, the
XML configuration file for the hybrid_rt scenario on a whl-ipc-i5 board is found in
``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml``
- Edit ``IVSHMEM_ENABLED`` to ``y`` in ACRN scenario XML configuration
to enable ``ivshmem`` hv-land
- Edit ``IVSHMEM_REGION`` to specify the shared memory name, size and
communication VMs in ACRN scenario XML configuration. The ``IVSHMEM_REGION``
format is ``shm_name,shm_size,VM IDs``:
- ``shm_name`` - Specify a shared memory name. The name needs to start
with the ``hv:/`` prefix. For example, ``hv:/shm_region_0``
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The
size ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
For example, to set up a shared memory of 2 megabytes, use ``2``
instead of ``shm_size``.
- ``VM IDs`` - Specify the VM IDs that use the same shared memory
region, separated by ``:``. For example, for communication
between VM0 and VM2, write ``0:2``
.. note:: You can define up to eight ``ivshmem`` hv-land shared regions.
- Build the XML configuration; refer to :ref:`getting-started-building`
Inter-VM Communication Examples
*******************************
dm-land example
===============
This example uses dm-land inter-VM communication between two
Linux-based post-launched VMs (VM1 and VM2).
.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_
1. Add a new virtual PCI device for both VMs: the device type is
``ivshmem``, shared memory name is ``dm:/test``, and shared memory
size is 2MB. Both VMs must have the same shared memory name and size:
- VM1 Launch Script Sample
.. code-block:: none
:emphasize-lines: 7
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 5,virtio-console,@stdio:stdio_port \
-s 8,virtio-hyper_dmabuf \
-s 3,virtio-blk,/home/acrn/uos1.img \
-s 4,virtio-net,tap0 \
-s 6,ivshmem,dm:/test,2 \
-s 7,virtio-rnd \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name
- VM2 Launch Script Sample
.. code-block:: none
:emphasize-lines: 5
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 3,virtio-blk,/home/acrn/uos2.img \
-s 4,virtio-net,tap0 \
-s 5,ivshmem,dm:/test,2 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name
2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.
- For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
- For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
3. As recorded in the `PCI ID Repository <https://pci-ids.ucw.cz/read/PC/1af4>`_,
the ``ivshmem`` device vendor ID is ``1af4`` (RedHat) and device ID is ``1110``
(Inter-VM shared memory). Use these commands to probe the device::
$ sudo modprobe uio
$ sudo modprobe uio_pci_generic
$ echo "1af4 1110" | sudo tee /sys/bus/pci/drivers/uio_pci_generic/new_id
.. note:: These commands are applicable to Linux-based guests with ``CONFIG_UIO`` and ``CONFIG_UIO_PCI_GENERIC`` enabled.
4. Finally, a user application can get the shared memory base address from
the ``ivshmem`` device BAR resource
(``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
the ``ivshmem`` device config resource
(``/sys/class/uio/uioX/device/config``).
The ``X`` in ``uioX`` above is a number that can be retrieved using the
``ls`` command; a minimal user-space example follows this list:
- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
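The following is a minimal user-space sketch (not part of the ACRN
distribution) showing how an application might map the shared memory
through the UIO resource file found above and read or write it. The
``uio0`` path and the 2MB size are assumptions; adjust the ``uioX``
index and the size to match your configuration.

.. code-block:: c

   /* Minimal illustrative example: map the ivshmem BAR2 exposed through
    * UIO and touch the shared memory. Error handling is kept short.
    * The uio0 path and the 2 MB size are assumptions -- adjust them. */
   #include <fcntl.h>
   #include <stddef.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       const char *bar2 = "/sys/class/uio/uio0/device/resource2";
       const size_t shm_size = 2 * 1024 * 1024;  /* must match shm_size */

       int fd = open(bar2, O_RDWR | O_SYNC);
       if (fd < 0) {
           perror("open");
           return 1;
       }

       void *shm = mmap(NULL, shm_size, PROT_READ | PROT_WRITE,
                        MAP_SHARED, fd, 0);
       if (shm == MAP_FAILED) {
           perror("mmap");
           close(fd);
           return 1;
       }

       /* One VM writes a message ... */
       strcpy((char *)shm, "hello from the peer VM");
       /* ... and the peer VM, running the same mapping code, reads it. */
       printf("shared memory says: %s\n", (char *)shm);

       munmap(shm, shm_size);
       close(fd);
       return 0;
   }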
hv-land example
===============
This example uses hv-land inter-VM communication between two
Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
1. Configure shared memory for the communication between VM0 and VM2 for the hybrid_rt
scenario on the whl-ipc-i5 board. The shared memory name is ``hv:/shm_region_0``,
and the shared memory size is 2MB:
- Edit XML configuration file for hybrid_rt scenario on whl-ipc-i5 board
``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml``
to enable ``ivshmem`` and configure the shared memory region using the format
``shm_name, shm_size, VM IDs`` (as described above in the ACRN dm boot parameters).
The region name must start with ``hv:/`` for an hv-land shared region, and we'll allocate 2MB
shared between VMs 0 and 2:
.. code-block:: none
:emphasize-lines: 2,3
<IVSHMEM desc="IVSHMEM configuration">
<IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
<IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
</IVSHMEM>
2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board::
make BOARD_FILE=acrn-hypervisor/misc/vm_configs/xmls/board-xmls/whl-ipc-i5.xml \
SCENARIO_FILE=acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml TARGET_DIR=xxx
3. Continue following the dm-land steps 2-4; note that the ``ivshmem`` device BDF may differ
depending on the configuration.
View File
@ -23,8 +23,8 @@ Verified version
*****************
- ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
- ACRN-Kernel (Service VM kernel): **master** branch, commit ID **095509221660daf82584ebdd8c50ea0078da3c2d**
- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit ID **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**
Prerequisites
*************
@ -94,9 +94,6 @@ Passthrough the GPU to Guest
4. Run ``launch_win.sh``.
Enable the GVT-d GOP driver
***************************
@ -120,12 +117,12 @@ Steps
git clone https://github.com/projectacrn/acrn-edk2.git
#. Fetch the VBT and GOP drivers.
Fetch the **VBT** and **GOP** drivers from the board manufacturer
according to your CPU model name.
#. Add the **VBT** and **GOP** drivers to the OVMF:
::
View File
@ -70,7 +70,7 @@ Build ACRN with Pre-Launched RT Mode
The ACRN VM configuration framework can easily configure resources for
Pre-Launched VMs. On Whiskey Lake WHL-IPC-I5, to passthrough SATA and
Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
.. code-block:: none
View File
@ -144,9 +144,9 @@ Configure RDT for VM using VM Configuration
#. RDT hardware feature is enabled by default on supported platforms. This
information can be found using an offline tool that generates a
platform-specific XML file that helps ACRN identify RDT-supported
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
sub-section of the scenario XML file as in the below example. For
details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
.. code-block:: none
@ -163,7 +163,7 @@ Configure RDT for VM using VM Configuration
<MBA_DELAY desc="Memory Bandwidth Allocation delay value"></MBA_DELAY>
</RDT>
#. Once RDT is enabled in the scenario XML file, the next step is to program
the desired cache mask and/or the MBA delay value as needed in the
scenario file. Each cache mask or MBA delay configuration corresponds
to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
View File
@ -1,9 +1,9 @@
.. _rt_performance_tuning:
ACRN Real-time (RT) Performance Analysis
########################################
This document describes the methods to collect trace/data for ACRN real-time VM (RTVM)
real-time performance analysis. Two parts are included:
- Method to trace ``vmexit`` occurrences for analysis.
View File
@ -1,6 +1,6 @@
.. _rt_perf_tips_rtvm:
ACRN Real-time VM Performance Tips
##################################
Background
@ -50,7 +50,7 @@ Tip: Apply the acrn-dm option ``--lapic_pt``
Tip: Use virtio polling mode
Polling mode prevents the frontend from sending a notification to the
backend, avoiding the VM-exit this would cause. We recommend that you passthrough a
physical peripheral device (such as a block or an Ethernet device) to an
RTVM. If no physical device is available, ACRN supports virtio devices
and enables polling mode to avoid a VM-exit at the frontend. Enable
virtio polling mode via the option ``--virtio_poll [polling interval]``.
View File
@ -1,6 +1,6 @@
.. _rtvm_workload_guideline:
Real-time VM Application Design Guidelines
##########################################
An RTOS developer must be aware of the differences between running applications on a native
View File
@ -18,7 +18,7 @@ Prerequisites
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
#. For a default prebuilt ACRN binary in the E2E package, you must have 4
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
based on Ubuntu.
#. This tutorial is validated on the following configurations:
View File
@ -24,7 +24,7 @@ Use the following instructions to install Debian.
the bottom of the page). the bottom of the page).
- Follow the `Debian installation guide - Follow the `Debian installation guide
<https://www.debian.org/releases/stable/amd64/index.en.html>`_ to <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE) install it on your Intel NUC; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
in this tutorial. in this tutorial.
- :ref:`install-build-tools-dependencies` for ACRN. - :ref:`install-build-tools-dependencies` for ACRN.
- Update to the latest iASL (required by the ACRN Device Model): - Update to the latest iASL (required by the ACRN Device Model):
View File
@ -11,18 +11,18 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Clear Linux OS - Install a `Clear Linux OS
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_ <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
on your NUC kit. on your Intel NUC kit.
- Follow the instructions at XXX to set up the - Follow the instructions at XXX to set up the
Service VM automatically on your NUC kit. Follow steps 1 - 4. Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
.. important:: need updated instructions that aren't Clear Linux dependent .. important:: need updated instructions that aren't Clear Linux dependent
We are using Intel Kaby Lake NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial. We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
Before you start this tutorial, make sure the KVM tools are installed on the Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS development machine and set **IGD Aperture Size to 512** in the BIOS
settings (refer to :numref:`intel-bios-deb`). Connect two monitors to your settings (refer to :numref:`intel-bios-deb`). Connect two monitors to your
NUC: Intel NUC:
.. code-block:: none .. code-block:: none
@ -47,7 +47,7 @@ Hardware Configurations
| | | Graphics | - UHD Graphics 620 | | | | Graphics | - UHD Graphics 620 |
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz | | | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
| | +----------------------+----------------------------------------------+ | | +----------------------+----------------------------------------------+
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz | | | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
| | +----------------------+----------------------------------------------+ | | +----------------------+----------------------------------------------+
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z | | | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
+--------------------------+----------------------+----------------------+----------------------------------------------+ +--------------------------+----------------------+----------------------+----------------------------------------------+
@ -97,7 +97,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
#. Right-click **QEMU/KVM** and select **New**. #. Right-click **QEMU/KVM** and select **New**.
a. Choose **Local install media (ISO image or CDROM)** and then click a. Choose **Local install media (ISO image or CD-ROM)** and then click
**Forward**. A **Create a new virtual machine** box displays, as shown **Forward**. A **Create a new virtual machine** box displays, as shown
in :numref:`newVM-debian` below. in :numref:`newVM-debian` below.
@ -119,7 +119,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
#. Rename the image if you desire. You must check the **customize #. Rename the image if you desire. You must check the **customize
configuration before install** option before you finish all stages. configuration before install** option before you finish all stages.
#. Verify that you can see the Overview screen as set up, as shown in :numref:`debian10-setup` below: #. Verify that you can see the Overview screen as set up, shown in :numref:`debian10-setup` below:
.. figure:: images/debian-uservm-3.png .. figure:: images/debian-uservm-3.png
:align: center :align: center
@ -127,14 +127,14 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
Debian Setup Overview Debian Setup Overview
#. Complete the Debian installation. Verify that you have set up a vda #. Complete the Debian installation. Verify that you have set up a VDA
disk partition, as shown in :numref:`partition-vda` below: disk partition, as shown in :numref:`partition-vda` below:
.. figure:: images/debian-uservm-4.png .. figure:: images/debian-uservm-4.png
:align: center :align: center
:name: partition-vda :name: partition-vda
Virtual Disk (vda) partition Virtual Disk (VDA) partition
#. Upon installation completion, the KVM image is created in the #. Upon installation completion, the KVM image is created in the
``/var/lib/libvirt/images`` folder. Convert the ``qcow2`` format to ``img``
@ -154,7 +154,7 @@ Re-use and modify the `launch_win.sh` script in order to launch the new Debian 1
"/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an "/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an
NVMe drive. NVMe drive.
1. Copy the ``debian.img`` to your Intel NUC:
.. code-block:: none .. code-block:: none
View File
@ -11,9 +11,9 @@ Intel NUC Kit. If you have not, refer to the following instructions:
- Install a `Clear Linux OS - Install a `Clear Linux OS
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_ <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
on your NUC kit. on your Intel NUC kit.
- Follow the instructions at XXX to set up the - Follow the instructions at XXX to set up the
Service VM automatically on your NUC kit. Follow steps 1 - 4. Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
.. important:: need updated instructions that aren't Clear Linux .. important:: need updated instructions that aren't Clear Linux
dependent dependent
@ -21,7 +21,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
Before you start this tutorial, make sure the KVM tools are installed on the Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS development machine and set **IGD Aperture Size to 512** in the BIOS
settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your
NUC: Intel NUC:
.. code-block:: none .. code-block:: none
@ -46,7 +46,7 @@ Hardware Configurations
| | | Graphics | - UHD Graphics 620 | | | | Graphics | - UHD Graphics 620 |
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz | | | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
| | +----------------------+----------------------------------------------+ | | +----------------------+----------------------------------------------+
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz | | | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
| | +----------------------+----------------------------------------------+ | | +----------------------+----------------------------------------------+
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z | | | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
+--------------------------+----------------------+----------------------+----------------------------------------------+ +--------------------------+----------------------+----------------------+----------------------------------------------+
@ -147,7 +147,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.
``/dev/sda1`` mentioned below with ``/dev/nvme0n1p1`` if you are ``/dev/sda1`` mentioned below with ``/dev/nvme0n1p1`` if you are
using an SSD. using an SSD.
1. Copy the ``uos.img`` to your NUC: 1. Copy the ``uos.img`` to your Intel NUC:
.. code-block:: none .. code-block:: none
View File
@ -149,7 +149,7 @@ CPUID Leaf 12H
Extended feature (same structure as XCR0). Extended feature (same structure as XCR0).
The hypervisor may change the allow-1 setting of XFRM in ATTRIBUTES for VM. The hypervisor may change the allow-1 setting of XFRM in ATTRIBUTES for VM.
If some feature is disabled for the VM, the bit is also cleared, eg. MPX. If some feature is disabled for the VM, the bit is also cleared, e.g. MPX.
**Intel SGX EPC Enumeration** **Intel SGX EPC Enumeration**
View File
@ -102,7 +102,7 @@ malware detection.
In embedded products such as an automotive IVI system, the most important In embedded products such as an automotive IVI system, the most important
security services requested by customers are keystore and secure security services requested by customers are keystore and secure
storage. In this article we will focus on these two services. storage. In this article, we will focus on these two services.
Keystore Keystore
======== ========
@ -126,14 +126,14 @@ and are permanently bound to the key, ensuring the key cannot be used in
any other way. any other way.
In addition to the list above, there is one more service that Keymaster In addition to the list above, there is one more service that Keymaster
implementations provide, but which is not exposed as an API: Random implementations provide, but is not exposed as an API: Random
number generation. This is used internally for generation of keys, number generation. This is used internally for generation of keys,
Initialization Vectors (IVs), random padding, and other elements of Initialization Vectors (IVs), random padding, and other elements of
secure protocols that require randomness. secure protocols that require randomness.
Using Android as an example, Keystore functions are explained in greater
detail in this `Android keymaster functions document
<https://source.android.com/security/keystore/implementer-ref>`_ <https://source.android.com/security/keystore/implementer-ref>`_.
.. figure:: images/trustyacrn-image3.png .. figure:: images/trustyacrn-image3.png
:align: center :align: center
@ -178,7 +178,7 @@ key). See `Android Key and ID Attestation
for details. for details.
In Trusty, the secure storage architecture is shown in the figure below. In Trusty, the secure storage architecture is shown in the figure below.
In the secure world, there is a SS (Secure Storage) TA, which has an In the secure world, there is an SS (Secure Storage) TA, which has an
RPMB authentication key (AuthKey, an HMAC key) and uses this Authkey to RPMB authentication key (AuthKey, an HMAC key) and uses this Authkey to
talk with the RPMB controller in the eMMC device. Since the eMMC device talk with the RPMB controller in the eMMC device. Since the eMMC device
is controlled by the normal world driver, Trusty needs to send an RPMB data
@ -260,7 +260,7 @@ One-VM, Two-Worlds
================== ==================
As previously mentioned, Trusty Secure Monitor could be any As previously mentioned, Trusty Secure Monitor could be any
hypervisor. In the ACRN project the ACRN hypervisor will behave as the hypervisor. In the ACRN project, the ACRN hypervisor will behave as the
secure monitor to schedule the Trusty secure world in and out.
.. figure:: images/trustyacrn-image4.png .. figure:: images/trustyacrn-image4.png
@ -383,7 +383,7 @@ system security design. In practice, the Service VM designer and implementer
should obey the following rules (and more):
- Make sure the Service VM is a closed system and doesn't allow users to - Make sure the Service VM is a closed system and doesn't allow users to
install any unauthorized 3rd party software or components. install any unauthorized third-party software or components.
- External peripherals are constrained. - External peripherals are constrained.
- Enable kernel-based hardening techniques, e.g., dm-verity (to make - Enable kernel-based hardening techniques, e.g., dm-verity (to make
sure integrity of DM and vBIOS/vOSloaders), kernel module signing, sure integrity of DM and vBIOS/vOSloaders), kernel module signing,
View File
@ -104,9 +104,9 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
``kernel_mod_tag`` of VM1 in the ``kernel_mod_tag`` of VM1 in the
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file. ``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
The guest kernel command-line arguments are configured in the
hypervisor source code by default if no ``$(VMx bootargs)`` is present. hypervisor source code by default if no ``$(VMx bootargs)`` is present.
If ``$(VMx bootargs)`` is present, the default command line arguments If ``$(VMx bootargs)`` is present, the default command-line arguments
are overridden by the ``$(VMx bootargs)`` parameters. are overridden by the ``$(VMx bootargs)`` parameters.
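As a sketch only (the kernel filename, module tag, and boot arguments below are
placeholders, not values from this document), a GRUB module line that supplies
explicit ``$(VMx bootargs)`` looks like this:

.. code-block:: none

   module2 /boot/kernel4vm1 <kernel_mod_tag_of_VM1> root=/dev/sda3 rw rootwait console=ttyS0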
The ``$(Service VM bootargs)`` parameter in the multiboot command The ``$(Service VM bootargs)`` parameter in the multiboot command
View File
@ -19,7 +19,8 @@ Prerequisites
************* *************
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_. - Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`. - Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
- Install Ubuntu 18.04 on your SATA device or on the NVMe disk of your
Intel NUC.
Update Ubuntu GRUB Update Ubuntu GRUB
****************** ******************
@ -49,11 +50,11 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file. .. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kernel_mod_tag`` of VM0 which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` ``kernel_mod_tag`` of VM0, which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` ``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
file. The kernel command line arguments used to boot the Service VM are file. The kernel command-line arguments used to boot the Service VM are
located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h``
and are configured by the ``SOS_VM_BOOTARGS`` macro.
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr). The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr).
@ -73,8 +74,8 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
$ sudo update-grub $ sudo update-grub
#. Reboot the NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot #. Reboot the Intel NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
the ACRN hypervisor on the NUC's display. The GRUB loader will boot the the ACRN hypervisor on the Intel NUC's display. The GRUB loader will boot the
hypervisor, and the hypervisor will start the VMs automatically. hypervisor, and the hypervisor will start the VMs automatically.
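For reference, the resulting GRUB entry typically ends up looking something like
the following sketch (the filesystem UUID is a placeholder, and the ``xxxxxx`` and
``yyyyyy`` tags must match the ``kernel_mod_tag`` values described in the note
above):

.. code-block:: none

   menuentry 'ACRN hypervisor Hybrid Scenario' {
       insmod part_gpt
       insmod ext2
       search --no-floppy --fs-uuid --set <your-filesystem-uuid>
       echo 'Loading ACRN hypervisor Hybrid Scenario ...'
       multiboot2 /boot/acrn.bin
       module2 /boot/zephyr.bin xxxxxx
       module2 /boot/bzImage yyyyyy
       module2 /boot/ACPI_VM0.bin ACPI_VM0
   }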
Hybrid Scenario Startup Checking Hybrid Scenario Startup Checking
@ -86,7 +87,7 @@ Hybrid Scenario Startup Checking
#. Use these steps to verify all VMs are running properly (a sample console session follows these steps):
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**. a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display ``Hello world! acrn``.
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell. #. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console. #. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that VM1 (the Service VM) boots and that you can log in.
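A sample session for the first check might look like the following (a sketch only;
the exact prompt and output depend on your ACRN version):

.. code-block:: console

   ACRN:\>vm_console 0
   Hello world! acrn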
View File
@ -57,7 +57,8 @@ Update kernel image and modules of pre-launched VM
The last two commands build the bootable kernel image as The last two commands build the bootable kernel image as
``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/`` ``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/``
folder. Copy these files to a removable disk for installing on the NUC later. folder. Copy these files to a removable disk for installing on the
Intel NUC later.
#. The current ACRN logical partition scenario implementation requires a #. The current ACRN logical partition scenario implementation requires a
multi-boot capable bootloader to boot both the ACRN hypervisor and the multi-boot capable bootloader to boot both the ACRN hypervisor and the
@ -68,10 +69,10 @@ Update kernel image and modules of pre-launched VM
default, the GRUB bootloader is installed on the EFI System Partition default, the GRUB bootloader is installed on the EFI System Partition
(ESP) that's used to bootstrap the ACRN hypervisor. (ESP) that's used to bootstrap the ACRN hypervisor.
#. After installing the Ubuntu OS, power off the NUC. Attach the #. After installing the Ubuntu OS, power off the Intel NUC. Attach the
SATA disk and storage device with the USB interface to the NUC. Power on SATA disk and storage device with the USB interface to the Intel NUC. Power on
the NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
the removable disk with the kernel image into the NUC and then copy the the removable disk with the kernel image into the Intel NUC and then copy the
loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder
on both the mounted SATA disk and storage device with USB interface. For on both the mounted SATA disk and storage device with USB interface. For
example, assuming the SATA disk and storage device with USB interface are example, assuming the SATA disk and storage device with USB interface are
@ -101,8 +102,8 @@ Update ACRN hypervisor image
#. Before building the ACRN hypervisor, find the I/O address of the serial #. Before building the ACRN hypervisor, find the I/O address of the serial
port and the PCI BDF addresses of the SATA controller and the USB
controllers on the NUC. Enter the following command to get the controllers on the Intel NUC. Enter the following command to get the
I/O addresses of the serial port. The NUC supports one serial port, **ttyS0**. I/O addresses of the serial port. The Intel NUC supports one serial port, **ttyS0**.
Connect the serial port to the development workstation in order to access Connect the serial port to the development workstation in order to access
the ACRN serial console to switch between pre-launched VMs: the ACRN serial console to switch between pre-launched VMs:
@ -169,13 +170,13 @@ Update ACRN hypervisor image
#. Check or update the BDF information of the PCI devices for each #. Check or update the BDF information of the PCI devices for each
pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``. pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``.
#. Copy the artifacts ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` to the ``/boot`` directory:
#. Copy ``acrn.bin``, ``ACPI_VM1.bin``, and ``ACPI_VM0.bin`` to a removable disk.
#. Plug the removable disk into the NUC's USB port. #. Plug the removable disk into the Intel NUC's USB port.
#. Copy the ``acrn.bin``, ``ACPI_VM0.bin`` and ``ACPI_VM1.bin`` from the removable disk to ``/boot`` #. Copy the ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` from the removable disk to ``/boot``
directory. directory.
Update Ubuntu GRUB to boot hypervisor and load kernel image Update Ubuntu GRUB to boot hypervisor and load kernel image
@ -204,7 +205,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
.. note:: .. note::
Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter) Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
(or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
The kernel command-line arguments used to boot the pre-launched VMs are
located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file
and are configured by the ``VMx_CONFIG_OS_BOOTARG_*`` macros (where x is the VM ID number and ``*`` are arguments).
The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag`` The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
@ -231,9 +232,9 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
$ sudo update-grub $ sudo update-grub
#. Reboot the NUC. Select the **ACRN hypervisor Logical Partition #. Reboot the Intel NUC. Select the **ACRN hypervisor Logical Partition
Scenario** entry to boot the logical partition of the ACRN hypervisor on Scenario** entry to boot the logical partition of the ACRN hypervisor on
the NUC's display. The GRUB loader will boot the hypervisor, and the the Intel NUC's display. The GRUB loader will boot the hypervisor, and the
hypervisor will automatically start the two pre-launched VMs. hypervisor will automatically start the two pre-launched VMs.
Logical partition scenario startup checking Logical partition scenario startup checking
@ -248,10 +249,10 @@ Logical partition scenario startup checking
properly: properly:
#. Use the ``vm_console 0`` to switch to VM0's console. #. Use the ``vm_console 0`` to switch to VM0's console.
#. VM0's OS should boot, and you should be able to log in.
#. Use a :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell. #. Use a :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` to switch to VM1's console. #. Use the ``vm_console 1`` to switch to VM1's console.
#. VM1's OS should boot, and you should be able to log in.
Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>` Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>`
for more information about available commands. for more information about available commands.
View File
@ -1,22 +1,22 @@
.. _connect_serial_port: .. _connect_serial_port:
Using the Serial Port on KBL NUC Using the Serial Port on KBL Intel NUC
================================ ======================================
You can enable the serial console on the You can enable the serial console on the
`KBL NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_ `KBL Intel NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
(NUC7i7DNH). The KBL NUC has a serial port header you can (NUC7i7DNH). The KBL Intel NUC has a serial port header you can
expose with a serial DB9 header cable. (The NUC has a punch out hole for expose with a serial DB9 header cable. (The Intel NUC has a punch out hole for
mounting the serial connector.) mounting the serial connector.)
.. figure:: images/NUC-serial-port.jpg .. figure:: images/NUC-serial-port.jpg
KBL NUC with populated serial port punchout KBL Intel NUC with populated serial port punchout
You can `purchase You can `purchase
<https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_ <https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_
such a cable or you can build it yourself; such a cable or you can build it yourself;
refer to the `KBL NUC product specification refer to the `KBL Intel NUC product specification
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_ <https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
as shown below: as shown below:
View File
@ -31,7 +31,7 @@ Steps for Using VxWorks as User VM
* CONSOLE_BAUD_RATE = 115200 * CONSOLE_BAUD_RATE = 115200
* SYS_CLK_RATE_MAX = 1000 * SYS_CLK_RATE_MAX = 1000
#. Build the GRUB2 bootloader image
We use grub-2.02 as the bootloader of VxWorks in this tutorial; other versions may also work. We use grub-2.02 as the bootloader of VxWorks in this tutorial; other versions may also work.
@ -95,7 +95,7 @@ Steps for Using VxWorks as User VM
#. Follow XXX to boot the ACRN Service VM. #. Follow XXX to boot the ACRN Service VM.
.. important:: need instructions from deleted document (using sdc .. important:: need instructions from deleted document (using sdc
mode on the NUC) mode on the Intel NUC)
#. Boot VxWorks as User VM. #. Boot VxWorks as User VM.
@ -107,7 +107,7 @@ Steps for Using VxWorks as User VM
$ cp /usr/share/acrn/samples/nuc/launch_vxworks.sh . $ cp /usr/share/acrn/samples/nuc/launch_vxworks.sh .
You will also need to copy the ``VxWorks.img`` created in the VxWorks build environment into the
``vxworks`` directory (via, e.g., a USB drive or network).
Run the ``launch_vxworks.sh`` script to launch VxWorks as the User VM. Run the ``launch_vxworks.sh`` script to launch VxWorks as the User VM.
@ -134,7 +134,7 @@ Steps for Using VxWorks as User VM
-> ->
Finally, you can type ``help`` to check whether the VxWorks works well. Finally, you can type ``help`` to see available VxWorks commands.
.. code-block:: console .. code-block:: console
View File
@ -46,7 +46,7 @@ Download Win10 ISO and drivers
- Select **ISO-LTSC** and click **Continue**. - Select **ISO-LTSC** and click **Continue**.
- Complete the required info. Click **Continue**. - Complete the required info. Click **Continue**.
- Select the language and **x86 64 bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``. - Select the language and **x86 64-bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``.
#. Download the `Intel DCH Graphics Driver #. Download the `Intel DCH Graphics Driver
<https://downloadmirror.intel.com/29074/a08/igfx_win10_100.7212.zip>`__. <https://downloadmirror.intel.com/29074/a08/igfx_win10_100.7212.zip>`__.
@ -57,8 +57,8 @@ Download Win10 ISO and drivers
- Select **Download Package**. Key in **Oracle Linux 7.6** and click - Select **Download Package**. Key in **Oracle Linux 7.6** and click
**Search**. **Search**.
- Click **DLP: Oracle Linux 7.6** to add to your Cart. - Click **DLP: Oracle Linux 7.6** to add to your Cart.
- Click **Checkout** which is located at the top-right corner. - Click **Checkout**, which is located at the top-right corner.
- Under **Platforms/Language**, select **x86 64 bit**. Click **Continue**. - Under **Platforms/Language**, select **x86 64-bit**. Click **Continue**.
- Check **I accept the terms in the license agreement**. Click **Continue**. - Check **I accept the terms in the license agreement**. Click **Continue**.
- From the list, right-click the item labeled **Oracle VirtIO Drivers
Version for Microsoft Windows 1.x.x, yy MB**, and then **Save link as Version for Microsoft Windows 1.x.x, yy MB**, and then **Save link as
@ -129,8 +129,8 @@ Install Windows 10 by GVT-g
.. figure:: images/windows_install_4.png .. figure:: images/windows_install_4.png
:align: center :align: center
#. Click **Browse** and go to the drive that includes the virtio
drivers. Select **all** under **vio\\w10\\amd64**. Install the
following drivers into the image:
- Virtio-balloon - Virtio-balloon
@ -201,7 +201,7 @@ ACRN Windows verified feature list
"IO Devices", "Virtio block as the boot device", "Working" "IO Devices", "Virtio block as the boot device", "Working"
, "AHCI as the boot device", "Working" , "AHCI as the boot device", "Working"
, "AHCI cdrom", "Working" , "AHCI CD-ROM", "Working"
, "Virtio network", "Working" , "Virtio network", "Working"
, "Virtio input - mouse", "Working" , "Virtio input - mouse", "Working"
, "Virtio input - keyboard", "Working" , "Virtio input - keyboard", "Working"
@ -235,7 +235,7 @@ Explanation for acrn-dm popular command lines
You may need to change 0/2/0 to match the BDF of the VGA controller on your platform.
* **-s 3,ahci,hd:/root/img/win10.img**: * **-s 3,ahci,hd:/root/img/win10.img**:
This is the hard disk where Windows 10 should be installed.
Make sure that the slot ID **3** points to your win10 img path. Make sure that the slot ID **3** points to your win10 img path.
* **-s 4,virtio-net,tap0**: * **-s 4,virtio-net,tap0**:
@ -253,11 +253,11 @@ Explanation for acrn-dm popular command lines
# cat /proc/bus/input/devices | grep mouse # cat /proc/bus/input/devices | grep mouse
* **-s 7,ahci,cd:/root/img/Windows10.iso**: * **-s 7,ahci,cd:/root/img/Windows10.iso**:
This is the ISO image used to install Windows 10. It appears as a CD-ROM
device. Make sure that the slot ID **7** points to your win10 ISO path. device. Make sure that the slot ID **7** points to your win10 ISO path.
* **-s 8,ahci,cd:/root/img/winvirtio.iso**: * **-s 8,ahci,cd:/root/img/winvirtio.iso**:
This is the CD-ROM device used to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.
* **-s 9,passthru,0/14/0**: * **-s 9,passthru,0/14/0**:
This option passes through the USB controller to Windows. (A combined sketch of these options follows.)
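Putting the options above together, a hypothetical (sketch-only) invocation could
look like this; the slot numbers and image paths echo the explanations above but
are placeholders for your own setup, and some options from the full launch script
are omitted:

.. code-block:: none

   acrn-dm -A -m 4096M \
     -s 0:0,hostbridge \
     -s 3,ahci,hd:/root/img/win10.img \
     -s 4,virtio-net,tap0 \
     -s 7,ahci,cd:/root/img/Windows10.iso \
     -s 8,ahci,cd:/root/img/winvirtio.iso \
     -s 9,passthru,0/14/0 \
     -s 1:0,lpc -l com1,stdio \
     win10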
View File
@ -1,11 +1,11 @@
.. _using_xenomai_as_uos: .. _using_xenomai_as_uos:
Run Xenomai as the User VM OS (Real-Time VM) Run Xenomai as the User VM OS (Real-time VM)
############################################ ############################################
`Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments. `Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments.
This tutorial describes how to run Xenomai as the User VM OS (Real-Time VM) on the ACRN hypervisor. This tutorial describes how to run Xenomai as the User VM OS (real-time VM) on the ACRN hypervisor.
.. _Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home .. _Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home
@ -60,15 +60,15 @@ Launch the RTVM
#. Prepare a dedicated disk (NVMe or SATA) for the RTVM; in this example, we use ``/dev/sda``. #. Prepare a dedicated disk (NVMe or SATA) for the RTVM; in this example, we use ``/dev/sda``.
a. Download the Preempt-RT VM image: a. Download the Preempt-RT VM image::
$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz $ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz
#. Decompress the xz image: #. Decompress the xz image::
$ xz -d preempt-rt-32030.img.xz $ xz -d preempt-rt-32030.img.xz
#. Burn the Preempt-RT VM image onto the SATA disk: #. Burn the Preempt-RT VM image onto the SATA disk::
$ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc $ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
@ -95,5 +95,6 @@ Launch the RTVM
Install the Xenomai libraries and tools Install the Xenomai libraries and tools
*************************************** ***************************************
To build and install Xenomai tools or its libraries in the RTVM, refer to the official
`Xenomai documentation <https://gitlab.denx.de/Xenomai/xenomai/-/wikis/Installing_Xenomai_3#library-install>`_.
Note that the current supported version is Xenomai-3.1 with the 4.19.59 kernel. Note that the current supported version is Xenomai-3.1 with the 4.19.59 kernel.

View File
.. _using_yp:
Using Yocto Project with ACRN
#############################
The `Yocto Project <https://yoctoproject.org>`_ (YP) is an open source
collaboration project that helps developers create custom Linux-based
systems. The project provides a flexible set of tools and a space where
embedded developers worldwide can share technologies, software stacks,
configurations, and best practices used to create tailored Linux images
for embedded and IoT devices, or anywhere a customized Linux OS is
needed.
Yocto Project layers support the inclusion of technologies, hardware
components, and software components. Layers are repositories containing
related sets of instructions which tell the Yocto Project build system
what to do.
The meta-acrn layer
*******************
The meta-acrn layer integrates the ACRN hypervisor with OpenEmbedded,
letting you build your Service VM or Guest VM OS with the Yocto Project.
The `OpenEmbedded Layer Index's meta-acrn entry
<http://layers.openembedded.org/layerindex/branch/master/layer/meta-acrn/>`_
tracks work on this meta-acrn layer and lists the available meta-acrn
recipes including Service and User VM OSs for Linux Kernel 4.19 and 5.4
with the ACRN hypervisor enabled.
Read more about the meta-acrn layer and how to use it, directly from the
`meta-acrn GitHub repo documentation
<https://github.com/intel/meta-acrn/tree/master/docs>`_:
* `Getting Started guide
<https://github.com/intel/meta-acrn/blob/master/docs/getting-started.md>`_
* `Booting ACRN with Slim Bootloader
<https://github.com/intel/meta-acrn/blob/master/docs/slimbootloader.md>`_
* `Testing Procedure
<https://github.com/intel/meta-acrn/blob/master/docs/qa.md>`_
* `References
<https://github.com/intel/meta-acrn/blob/master/docs/references.md>`_
View File
@ -4,7 +4,7 @@ Run Zephyr as the User VM
######################### #########################
This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using
a Kaby Lake-based Intel NUC (model NUC7i5DNHE) in this tutorial.
Other :ref:`ACRN supported platforms <hardware>` should work as well. Other :ref:`ACRN supported platforms <hardware>` should work as well.
.. note:: .. note::
@ -40,8 +40,8 @@ Steps for Using Zephyr as User VM
#. Build the GRUB2 bootloader image
We can build grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh`` We can build the grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
which locate in `Zephyr Sourcecode <https://github.com/zephyrproject-rtos/zephyr>`_. found in the `Zephyr source code <https://github.com/zephyrproject-rtos/zephyr>`_.
.. code-block:: none .. code-block:: none
@ -89,13 +89,14 @@ Steps for Using Zephyr as User VM
$ sudo umount /mnt $ sudo umount /mnt
You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g., a USB drive or network).
#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620 #. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
(ACRN tag: acrn-2019w14.3-140000p) (ACRN tag: acrn-2019w14.3-140000p)
.. important:: need to remove reference to Clear Linux and reference .. important:: need to remove reference to Clear Linux and reference
to deleted document (use SDC mode on the NUC) to deleted document (use SDC mode on the Intel NUC)
#. Boot Zephyr as User VM #. Boot Zephyr as User VM
View File
@ -6,10 +6,13 @@ Enable vUART Configurations
Introduction Introduction
============ ============
The virtual universal asynchronous receiver-transmitter (vUART) supports two functions: one is the console, the other is communication. vUART only works on a single function. The virtual universal asynchronous receiver-transmitter (vUART) supports
two functions: one is the console, the other is communication. vUART
only works on a single function.
Currently, only two vUART configurations are added to the Currently, only two vUART configurations are added to the
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can change the value in it. ``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can
change the value in it.
.. code-block:: none .. code-block:: none
@ -22,9 +25,9 @@ Currently, only two vUART configurations are added to the
.addr.port_base = INVALID_COM_BASE, .addr.port_base = INVALID_COM_BASE,
} }
**vuart[0]** is initiated as the **console** port. ``vuart[0]`` is initiated as the **console** port.
**vuart[1]** is initiated as a **communication** port. ``vuart[1]`` is initiated as a **communication** port.
Console enable list Console enable list
=================== ===================
@ -48,12 +51,15 @@ Console enable list
How to configure a console port How to configure a console port
=============================== ===============================
To enable the console port for a VM, change only the ``port_base`` and ``irq``. If the irq number is already in use in your system (``cat /proc/interrupt``), choose another irq number. If you set the ``.irq =0``, the vuart will work in polling mode. To enable the console port for a VM, change only the ``port_base`` and
``irq``. If the IRQ number is already in use in your system (``cat
/proc/interrupts``), choose another IRQ number. If you set ``.irq = 0``,
the vUART will work in polling mode.
- COM1_BASE (0x3F8) + COM1_IRQ(4) - ``COM1_BASE (0x3F8) + COM1_IRQ(4)``
- COM2_BASE (0x2F8) + COM2_IRQ(3) - ``COM2_BASE (0x2F8) + COM2_IRQ(3)``
- COM3_BASE (0x3E8) + COM3_IRQ(6) - ``COM3_BASE (0x3E8) + COM3_IRQ(6)``
- COM4_BASE (0x2E8) + COM4_IRQ(7) - ``COM4_BASE (0x2E8) + COM4_IRQ(7)``
Example: Example:
@ -70,11 +76,12 @@ How to configure a communication port
To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate. To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate.
The ``port_base`` and IRQ should differ from those of ``vuart[0]`` in the same VM.
``t_vuart.vm_id`` is the target VM's vm_id, starting from 0 (0 means VM0).

``t_vuart.vuart_id`` is the target vUART index in the target VM, starting
from 1 (1 means ``vuart[1]``).
Example: Example:
@ -120,16 +127,19 @@ Communication vUART enable list
Launch script Launch script
============= =============
- *-s 1:0,lpc -l com1,stdio* - ``-s 1:0,lpc -l com1,stdio``
This option is only needed for WaaG and VxWorks (and also when using OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide the ACPI table for UART. This option is only needed for WaaG and VxWorks (and also when using
OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide
the ACPI table for UART.
- *-B " ....,console=ttyS0, ..."* - ``-B " ....,console=ttyS0, ..."``
Add this to the kernel-based system. (A combined sketch of both options follows this list.)
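For illustration only (the memory size, image path, kernel boot arguments, and VM
name are placeholders, not values from this document), a launch script that applies
both options might contain:

.. code-block:: none

   acrn-dm -A -m 2048M \
     -s 0:0,hostbridge \
     -s 1:0,lpc -l com1,stdio \
     -s 3,virtio-blk,/home/acrn/uos.img \
     -B "root=/dev/vda1 rw rootwait console=ttyS0" \
     vm1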
Test the communication port Test the communication port
=========================== ===========================
After you have configured the communication port in the hypervisor, you can
access the corresponding port. For example, in Clear Linux:
1. With ``echo`` and ``cat`` 1. With ``echo`` and ``cat``
@ -137,20 +147,26 @@ After you have configured the communication port in hypervisor, you can access t
On VM2: ``# echo "test test" > /dev/ttyS1`` On VM2: ``# echo "test test" > /dev/ttyS1``
You can find the message in VM1's ``/dev/ttyS1``.
If you are not sure which port is the communication port, you can run ``dmesg | grep ttyS`` under the Linux shell to check the base address. If it matches what you have set in the ``vm_configuration.c`` file, it is the correct port. If you are not sure which port is the communication port, you can run
``dmesg | grep ttyS`` under the Linux shell to check the base address.
If it matches what you have set in the ``vm_configuration.c`` file, it
is the correct port.
#. With minicom #. With minicom
Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test`` in VM1's minicom. The message should appear in VM2's minicom. Disable flow control in minicom. Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test``
in VM1's minicom. The message should appear in VM2's minicom. Disable
flow control in minicom.
#. Limitations #. Limitations
- The message cannot be longer than 256 bytes.
- This cannot be used to transfer files because flow control is not supported so data may be lost. - This cannot be used to transfer files because flow control is
not supported so data may be lost.
vUART design vUART design
============ ============
@ -178,19 +194,23 @@ This adds ``com1 (0x3f8)`` and ``com2 (0x2f8)`` modules in the Guest VM, includi
**Data Flows** **Data Flows**
Three different data flows exist based on how the post-launched VM is started, as shown in the diagram below. Three different data flows exist based on how the post-launched VM is
started, as shown in the diagram below:
Figure 1 data flow: The post-launched VM is started with the vUART enabled in the hypervisor configuration file only. * Figure 1 data flow: The post-launched VM is started with the vUART
enabled in the hypervisor configuration file only.
Figure 2 data flow: The post-launched VM is started with the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only. * Figure 2 data flow: The post-launched VM is started with the
``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
Figure 3 data flow: The post-launched VM is started with both vUART enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``. * Figure 3 data flow: The post-launched VM is started with both vUART
enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.
.. figure:: images/vuart-config-post-launch.png .. figure:: images/vuart-config-post-launch.png
:align: center :align: center
:name: Post-Launched VMs :name: Post-Launched VMs
.. note:: .. note::
For operating systems such as VxWorks and Windows that depend on the ACPI table to probe the uart driver, adding the vuart configuration in the hypervisor is not sufficient. Currently, we recommend that you use the configuration in the figure 3 data flow. This may be refined in the future. For operating systems such as VxWorks and Windows that depend on the
ACPI table to probe the UART driver, adding the vUART configuration in
the hypervisor is not sufficient. Currently, we recommend that you use
the configuration in the figure 3 data flow. This may be refined in the
future.
View File
@ -23,7 +23,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
default value. default value.
* - :kbd:`-B, --bootargs <bootargs>` * - :kbd:`-B, --bootargs <bootargs>`
- Set the User VM kernel command line arguments. - Set the User VM kernel command-line arguments.
The maximum length is 1023. The maximum length is 1023.
The bootargs string will be passed to the kernel as its cmdline. The bootargs string will be passed to the kernel as its cmdline.
@ -326,16 +326,16 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
- This option is to create a VM with the local APIC (LAPIC) passed-through. - This option is to create a VM with the local APIC (LAPIC) passed-through.
With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and
``IO_COMPLETION_POLLING`` mode. This option is typically used for hard ``IO_COMPLETION_POLLING`` mode. This option is typically used for hard
realtime scenarios. real-time scenarios.
By default, this option is not enabled. By default, this option is not enabled.
* - :kbd:`--rtvm` * - :kbd:`--rtvm`
- This option is used to create a VM with realtime attributes. - This option is used to create a VM with real-time attributes.
With this option, a VM is created with ``GUEST_FLAG_RT`` and With this option, a VM is created with ``GUEST_FLAG_RT`` and
``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is ``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is
generally used for soft realtime scenarios (without ``--lapic_pt``) or generally used for soft real-time scenarios (without ``--lapic_pt``) or
hard realtime scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``, hard real-time scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
the Service VM cannot interfere with this kind of VM when it is the Service VM cannot interfere with this kind of VM when it is
running. It can only be powered off from inside the VM itself. running. It can only be powered off from inside the VM itself.
View File
@ -13,7 +13,7 @@ The ACRN hypervisor supports the following parameter:
+=================+=============================+========================================================================================+ +=================+=============================+========================================================================================+
| | disabled | This disables the serial port completely. | | | disabled | This disables the serial port completely. |
| +-----------------------------+----------------------------------------------------------------------------------------+ | +-----------------------------+----------------------------------------------------------------------------------------+
| uart= | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. bdf@0:18.1 | | ``uart=`` | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. ``bdf@0:18.1`` |
| +-----------------------------+----------------------------------------------------------------------------------------+ | +-----------------------------+----------------------------------------------------------------------------------------+
| | port@<port address> | This sets the serial port address. | | | port@<port address> | This sets the serial port address. |
+-----------------+-----------------------------+----------------------------------------------------------------------------------------+ +-----------------+-----------------------------+----------------------------------------------------------------------------------------+
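For example (a sketch only; the BDF value shown in the table is reused here and
must match your own platform), the ``uart=`` parameter is appended to the
hypervisor line in the GRUB configuration:

.. code-block:: none

   multiboot2 /boot/acrn.bin uart=bdf@0:18.1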
View File
@ -22,7 +22,7 @@ relevant for configuring or debugging ACRN-based systems.
- Description - Description
- Usage example - Usage example
* - module_blacklist * - ``module_blacklist``
- Service VM - Service VM
- A comma-separated list of modules that should not be loaded. - A comma-separated list of modules that should not be loaded.
Useful to debug or work Useful to debug or work
@ -31,14 +31,14 @@ relevant for configuring or debugging ACRN-based systems.
module_blacklist=dwc3_pci module_blacklist=dwc3_pci
* - no_timer_check * - ``no_timer_check``
- Service VM,User VM - Service VM,User VM
- Disables the code which tests for broken timer IRQ sources. - Disables the code which tests for broken timer IRQ sources.
- :: - ::
no_timer_check no_timer_check
* - console * - ``console``
- Service VM, User VM
- Output console device and options. - Output console device and options.
@ -64,7 +64,7 @@ relevant for configuring or debugging ACRN-based systems.
console=ttyS0 console=ttyS0
console=hvc0 console=hvc0
* - loglevel * - ``loglevel``
- Service VM - Service VM
- All Kernel messages with a loglevel less than the console loglevel will - All Kernel messages with a loglevel less than the console loglevel will
be printed to the console. The loglevel can also be changed with be printed to the console. The loglevel can also be changed with
@ -95,7 +95,7 @@ relevant for configuring or debugging ACRN-based systems.
loglevel=7 loglevel=7
* - ignore_loglevel * - ``ignore_loglevel``
- User VM - User VM
- Ignoring loglevel setting will print **all** - Ignoring loglevel setting will print **all**
kernel messages to the console. Useful for debugging. kernel messages to the console. Useful for debugging.
@ -107,7 +107,7 @@ relevant for configuring or debugging ACRN-based systems.
ignore_loglevel ignore_loglevel
* - log_buf_len * - ``log_buf_len``
- User VM - User VM
- Sets the size of the printk ring buffer, - Sets the size of the printk ring buffer,
in bytes. n must be a power of two and greater in bytes. n must be a power of two and greater
@ -120,7 +120,7 @@ relevant for configuring or debugging ACRN-based systems.
log_buf_len=16M log_buf_len=16M
* - consoleblank * - ``consoleblank``
- Service VM,User VM - Service VM,User VM
- The console blank (screen saver) timeout in - The console blank (screen saver) timeout in
seconds. Defaults to 600 (10 minutes). A value of 0 seconds. Defaults to 600 (10 minutes). A value of 0
@ -129,7 +129,7 @@ relevant for configuring or debugging ACRN-based systems.
consoleblank=0 consoleblank=0
* - rootwait * - ``rootwait``
- Service VM, User VM
- Wait (indefinitely) for root device to show up. - Wait (indefinitely) for root device to show up.
Useful for devices that are detected asynchronously Useful for devices that are detected asynchronously
@ -138,7 +138,7 @@ relevant for configuring or debugging ACRN-based systems.
rootwait rootwait
* - root * - ``root``
- Service VM, User VM
- Define the root filesystem - Define the root filesystem
@ -165,14 +165,14 @@ relevant for configuring or debugging ACRN-based systems.
root=/dev/vda2 root=/dev/vda2
root=PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF root=PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF
* - rw * - ``rw``
- Service VM, User VM
- Mount root device read-write on boot - Mount root device read/write on boot
- :: - ::
rw rw
* - tsc * - ``tsc``
- User VM - User VM
- Disable clocksource stability checks for TSC. - Disable clocksource stability checks for TSC.
@ -180,14 +180,14 @@ relevant for configuring or debugging ACRN-based systems.
``reliable``: ``reliable``:
Mark TSC clocksource as reliable, and disables clocksource Mark TSC clocksource as reliable, and disables clocksource
verification at runtime, and the stability checks done at bootup. verification at runtime, and the stability checks done at boot.
Used to enable high-resolution timer mode on older hardware, and in Used to enable high-resolution timer mode on older hardware, and in
virtualized environments. virtualized environments.
- :: - ::
tsc=reliable tsc=reliable
* - cma * - ``cma``
- Service VM - Service VM
- Sets the size of the kernel global memory area for - Sets the size of the kernel global memory area for
contiguous memory allocations, and optionally the contiguous memory allocations, and optionally the
@ -199,7 +199,7 @@ relevant for configuring or debugging ACRN-based systems.
cma=64M@0 cma=64M@0
* - hvlog * - ``hvlog``
- Service VM - Service VM
- Sets the guest physical address and size of the dedicated hypervisor - Sets the guest physical address and size of the dedicated hypervisor
log ring buffer between the hypervisor and Service VM. log ring buffer between the hypervisor and Service VM.
@ -221,21 +221,21 @@ relevant for configuring or debugging ACRN-based systems.
hvlog=2M@0xe00000 hvlog=2M@0xe00000
* - memmap * - ``memmap``
- Service VM - Service VM
- Mark specific memory as reserved. - Mark specific memory as reserved.
``memmap=nn[KMG]$ss[KMG]`` ``memmap=nn[KMG]$ss[KMG]``
Region of memory to be reserved is from ``ss`` to ``ss+nn``, Region of memory to be reserved is from ``ss`` to ``ss+nn``,
using ``K``, ``M``, and ``G`` representing Kilobytes, Megabytes, and using ``K``, ``M``, and ``G`` representing kilobytes, megabytes, and
Gigabytes, respectively. gigabytes, respectively.
- :: - ::
memmap=0x400000$0xa00000 memmap=0x400000$0xa00000
* - ramoops.mem_address * - ``ramoops.mem_address``
ramoops.mem_size ``ramoops.mem_size``
ramoops.console_size ``ramoops.console_size``
- Service VM - Service VM
- Ramoops is an oops/panic logger that writes its logs to RAM - Ramoops is an oops/panic logger that writes its logs to RAM
before the system crashes. Ramoops uses a predefined memory area before the system crashes. Ramoops uses a predefined memory area
@ -252,21 +252,21 @@ relevant for configuring or debugging ACRN-based systems.
ramoops.console_size=0x200000 ramoops.console_size=0x200000
* - reboot_panic * - ``reboot_panic``
- Service VM - Service VM
- Reboot in case of panic - Reboot in case of panic
The comma-delimited parameters are: The comma-delimited parameters are:
reboot_mode: reboot_mode:
``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (gpio) ``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (GPIO)
reboot_type: reboot_type:
``b`` (bios), ``a`` (acpi), ``k`` (kbd), ``t`` (triple), ``e`` (efi), ``b`` (BIOS), ``a`` (ACPI), ``k`` (kbd), ``t`` (triple), ``e`` (EFI),
or ``p`` (pci) or ``p`` (PCI)
reboot_cpu: reboot_cpu:
``s###`` (smp, and processor number to be used for rebooting) ``s###`` (SMP, and processor number to be used for rebooting)
reboot_force: reboot_force:
``f`` (force), or not specified. ``f`` (force), or not specified.
@ -274,17 +274,17 @@ relevant for configuring or debugging ACRN-based systems.
reboot_panic=p,w reboot_panic=p,w
* - maxcpus * - ``maxcpus``
- User VM - User VM
- Maximum number of processors that an SMP kernel - Maximum number of processors that an SMP kernel
will bring up during bootup. will bring up during boot.
``maxcpus=n`` where n >= 0 limits ``maxcpus=n`` where n >= 0 limits
the kernel to bring up ``n`` processors during system bootup. the kernel to bring up ``n`` processors during system boot.
Giving n=0 is a special case, equivalent to ``nosmp``, which
also disables the I/O APIC.
After bootup, you can bring up additional plugged CPUs by executing After booting, you can bring up additional plugged CPUs by executing
``echo 1 > /sys/devices/system/cpu/cpuX/online`` ``echo 1 > /sys/devices/system/cpu/cpuX/online``
- :: - ::
@ -298,7 +298,7 @@ relevant for configuring or debugging ACRN-based systems.
nohpet nohpet
* - intel_iommu * - ``intel_iommu``
- User VM - User VM
- Intel IOMMU driver (DMAR) option - Intel IOMMU driver (DMAR) option
@ -351,7 +351,7 @@ section below has more details on a few select parameters.
* - i915.enable_initial_modeset * - i915.enable_initial_modeset
- Service VM - Service VM
- On MRB, value must be ``1``. On NUC or UP2 boards, value must be - On MRB, value must be ``1``. On Intel NUC or UP2 boards, value must be
``0``. See :ref:`i915-enable-initial-modeset`. ``0``. See :ref:`i915-enable-initial-modeset`.
- :: - ::

View File

@ -8,7 +8,7 @@ Description
*********** ***********
The ``acrnctl`` tool helps users create, delete, launch, and stop a User The ``acrnctl`` tool helps users create, delete, launch, and stop a User
OS (UOS). The tool runs under the Service OS, and UOSs should be based VM (aka UOS). The tool runs under the Service VM, and User VMs should be based
on ``acrn-dm``. The daemon for acrn-manager is `acrnd`_. on ``acrn-dm``. The daemon for acrn-manager is `acrnd`_.
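For quick orientation, a typical session might look like the following
sketch. The VM name is illustrative, and the available subcommands and
output format can vary between releases, so check the tool's usage output
on your system.

.. code-block:: none

   # acrnctl list
   # acrnctl start vm-ubuntu
   # acrnctl stop vm-ubuntu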
@ -42,7 +42,7 @@ Add a VM
======== ========
The ``add`` command lets you add a VM by specifying a The ``add`` command lets you add a VM by specifying a
script that will launch a UOS, for example ``launch_uos.sh``: script that will launch a User VM, for example ``launch_uos.sh``:
.. code-block:: none .. code-block:: none
@ -58,7 +58,7 @@ container::
<https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_ <https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
that supports the ``-C`` (``run_container`` function) option. that supports the ``-C`` (``run_container`` function) option.
Note that the launch script must only launch one UOS instance. Note that the launch script must only launch one User VM instance.
The VM name is important. ``acrnctl`` searches VMs by their The VM name is important. ``acrnctl`` searches VMs by their
names so duplicate VM names are not allowed. If the names so duplicate VM names are not allowed. If the
launch script changes the VM name at launch time, ``acrnctl`` launch script changes the VM name at launch time, ``acrnctl``
@ -113,7 +113,7 @@ gracefully by itself.
# acrnctl stop -f vm-ubuntu # acrnctl stop -f vm-ubuntu
RESCAN BLOCK DEVICE Rescan Block Device
=================== ===================
Use the ``blkrescan`` command to trigger a rescan of Use the ``blkrescan`` command to trigger a rescan of
@ -139,10 +139,10 @@ update the backend file.
acrnd acrnd
***** *****
The ``acrnd`` daemon process provides a way for launching or resuming a UOS The ``acrnd`` daemon process provides a way for launching or resuming a User VM
should the UOS shut down, either planned or unexpected. A UOS can ask ``acrnd`` should the User VM shut down, either in a planned manner or unexpectedly. A User
to set up a timer to make sure the UOS is running, even if the SOS is VM can ask ``acrnd`` to set up a timer to make sure the User VM is running, even
suspended or stopped. if the Service VM is suspended or stopped.
Usage Usage
===== =====
@ -163,13 +163,13 @@ Normally, ``acrnd`` runs silently (messages are directed to
``/dev/null``). Use the ``-t`` option to direct messages to ``stdout``, ``/dev/null``). Use the ``-t`` option to direct messages to ``stdout``,
useful for debugging. useful for debugging.
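For example, a debugging run in the foreground might look like the sketch
below; it assumes the packaged ``acrnd`` service is not already running and
that you have root privileges.

.. code-block:: none

   sudo acrnd -t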
The ``acrnd`` daemon stores pending UOS work to ``/usr/share/acrn/conf/timer_list`` The ``acrnd`` daemon stores pending User VM work to ``/usr/share/acrn/conf/timer_list``
and sets an RTC timer to wake up the SOS or bring the SOS back up again. and sets an RTC timer to wake up the Service VM or bring the Service VM back up again.
When the ``acrnd`` daemon is restarted, it restores the previously saved timer
list and launches the User VMs at the right time.
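To see what work is currently pending, you can inspect that file directly;
its contents and format are internal to ``acrnd``, so treat this only as an
illustrative check.

.. code-block:: none

   cat /usr/share/acrn/conf/timer_list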
A ``systemd`` service file (``acrnd.service``) is installed by default that will A ``systemd`` service file (``acrnd.service``) is installed by default that will
start the ``acrnd`` daemon when the Service OS comes up. start the ``acrnd`` daemon when the Service VM (Linux-based) comes up.
You can restart or stop the ``acrnd`` service using ``systemctl``.
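For instance, with the default ``acrnd.service`` unit installed, the
standard ``systemctl`` invocations apply:

.. code-block:: none

   sudo systemctl restart acrnd.service
   sudo systemctl stop acrnd.service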
.. note:: .. note::
@ -178,10 +178,10 @@ You can restart/stop acrnd service using ``systemctl``
Build and Install Build and Install
***************** *****************
Source code for both ``acrnctl`` and ``acrnd`` is in the ``tools/acrn-manager`` folder. Source code for both ``acrnctl`` and ``acrnd`` is in the ``misc/acrn-manager`` folder.
Change to that folder and run: Change to that folder and run:
.. code-block:: none .. code-block:: none
# make $ make
# make install $ sudo make install
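As a single copy-and-paste sequence, starting from the top of an
``acrn-hypervisor`` working tree and assuming the ``misc/acrn-manager``
path noted above, the build and install steps are:

.. code-block:: none

   cd misc/acrn-manager
   make
   sudo make install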

View File

@ -14,8 +14,8 @@ acrn-kernel, install them on your target system, and boot running ACRN.
.. rst-class:: numbered-step .. rst-class:: numbered-step
Set up Pre-requisites Set up prerequisites
********************* ********************
Your development system should be running Ubuntu Your development system should be running Ubuntu
18.04 and be connected to the internet. (You'll be installing software 18.04 and be connected to the internet. (You'll be installing software
@ -66,7 +66,7 @@ Here's the default ``release.json`` configuration:
Run the package-building script Run the package-building script
******************************* *******************************
The ``install_uSoS.py`` python script does all the work to install The ``install_uSoS.py`` Python script does all the work to install
needed tools (such as make, gnu-efi, libssl-dev, libpciaccess-dev, needed tools (such as make, gnu-efi, libssl-dev, libpciaccess-dev,
uuid-dev, and more). It also verifies that tool versions (such as the uuid-dev, and more). It also verifies that tool versions (such as the
gcc compiler) are appropriate (as configured in the ``release.json`` gcc compiler) are appropriate (as configured in the ``release.json``
@ -89,7 +89,7 @@ When done, it creates two Debian packages:
* ``acrn_kernel_deb_package.deb`` with the ACRN-patched Linux kernel. * ``acrn_kernel_deb_package.deb`` with the ACRN-patched Linux kernel.
You'll need to copy these two files onto your target system, either via You'll need to copy these two files onto your target system, either via
the network or simply by using a thumbdrive. the network or simply by using a USB drive.
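For example, copying the packages over the network with ``scp`` might look
like this, where ``user@target`` is a placeholder for your target system:

.. code-block:: none

   scp acrn_deb_package.deb acrn_kernel_deb_package.deb user@target:~/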
.. rst-class:: numbered-step .. rst-class:: numbered-step
@ -112,7 +112,7 @@ Install Debian packages on your target system
********************************************* *********************************************
Copy the Debian packages you created on your development system, for Copy the Debian packages you created on your development system, for
example, using a thumbdrive. Then install the ACRN Debian package:: example, using a USB drive. Then install the ACRN Debian package::
sudo dpkg -i acrn_deb_package.deb sudo dpkg -i acrn_deb_package.deb
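The kernel package produced earlier is installed the same way; this sketch
assumes the default package name shown above:

.. code-block:: none

   sudo dpkg -i acrn_kernel_deb_package.deb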
@ -228,4 +228,4 @@ by looking at the dmesg log:
4. python3 compile_iasl.py
==========================
This script helps compile ``iasl`` and copy it to ``/usr/sbin``.
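A quick way to confirm the result is to check that the binary can be found,
assuming ``/usr/sbin`` is on your ``PATH``:

.. code-block:: none

   which iasl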

View File

@ -29,7 +29,7 @@ The ``ACRN-Crashlog`` tool depends on the following libraries
- libblkid - libblkid
- e2fsprogs - e2fsprogs
Refer to the :ref:`getting_started` for instructions on how to set-up your Refer to the :ref:`getting_started` for instructions on how to set up your
build environment, and follow the instructions below to build and configure the build environment, and follow the instructions below to build and configure the
``ACRN-Crashlog`` tool. ``ACRN-Crashlog`` tool.
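On a Debian or Ubuntu development host, the libraries listed here are
typically available through ``apt``; the package names below are the usual
Ubuntu ones and may differ on other distributions:

.. code-block:: none

   sudo apt install libblkid-dev e2fsprogs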
@ -217,7 +217,7 @@ The source code structure:
like ``ipanic``, ``pstore``, and so on. Logs from AaaG are collected by
monitoring changes to related folders on the Service VM image, such as
``/data/logs/``. ``acrnprobe`` also provides a flexible way for users to
configure which crashes or events they want to collect through the XML file.
- ``common``: utility functions for logs, commands, and strings.
- ``data``: configuration file, service files, and a shell script.

View File

@ -42,7 +42,7 @@ Architecture
Terms Terms
===== =====
- channel : channel
A channel represents a way of detecting the system's events. There are three
channels:
@ -50,33 +50,33 @@ Terms
+ polling: run a detection job at a fixed time interval.
+ inotify: monitor changes to a file or directory.
- trigger : trigger
Essentially, a trigger represents one section of content. It could be
a file's content, a directory's content, or a memory region's content, which
can be obtained. By monitoring it, ``acrnprobe`` can detect certain events
that happened in the system.
- crash : crash
A subtype of event. It often corresponds to a crash of a program, the system,
or the hypervisor. ``acrnprobe`` detects it and reports it as ``CRASH``.
- info : info
A subtype of event. ``acrnprobe`` detects it and reports it as ``INFO``. A subtype of event. ``acrnprobe`` detects it and reports it as ``INFO``.
- event queue : event queue
There is a global queue that receives all detected events.
Generally, events are enqueued by a channel and dequeued by an event handler.
- event handler : event handler
An event handler is a thread that handles events detected by a channel.
It's awakened by an enqueued event. It's awakened by an enqueued event.
- sender : sender
The sender corresponds to an exit point for events.
There are two senders: There are two senders:
+ ``crashlog`` is responsible for collecting logs and saving them locally.
+ ``telemd`` is responsible for sending log records to the telemetrics client.
Description Description
=========== ===========
@ -86,30 +86,30 @@ As a log collection mechanism to record critical events on the platform,
1. detect event 1. detect event
From experience, the occurrence of an system event is usually accompanied From experience, the occurrence of a system event is usually accompanied
by some effects. The effects could be a generated file, an error message in by some effects. The effects could be a generated file, an error message in
the kernel's log, or a system reboot. To capture these effects, we can
monitor a directory for some of them; for others, we might need to run a
detection job in a time loop.
*So we implement the channel, which represents a common method of detection.* So we implement the channel, which represents a common method of detection.
2. analyze event and determine the event type 2. analyze event and determine the event type
Generally, a specific effect corresponds to a particular type of event.
Going further and determining the detailed event type from the observed
phenomena is a refinement; crash reclassifying is implemented for this
purpose.
3. collect information for detected events 3. collect information for detected events
This is for debugging purposes. Events without information are meaningless,
and developers need this information to improve their system. Sender
``crashlog`` is implemented for this purpose.
4. archive this information as logs and generate records
There must be a central place to tell the user what happened in the system.
*Sender telemd is implemented for this purpose.* Sender ``telemd`` is implemented for this purpose.
Diagram Diagram
======= =======
@ -172,7 +172,7 @@ Source files
This file provides the function to get the system reboot reason from the kernel
command line.
- android_events.c - android_events.c
Sync events detected by android crashlog. Sync events detected by Android ``crashlog``.
- loop.c - loop.c
This file provides interfaces to read from an image.

View File

@ -81,7 +81,7 @@ Other properties
- ``inherit``: - ``inherit``:
Specify a parent for a certain crash. Specify a parent for a certain crash.
The child crash will inherit all configurations from the specified (by id) The child crash will inherit all configurations from the specified (by ID)
crash. These inherited configurations can be overridden by new ones.
Also, this property helps build the crash tree in ``acrnprobe``. Also, this property helps build the crash tree in ``acrnprobe``.
- ``expression``: - ``expression``:
@ -90,7 +90,7 @@ Other properties
Crash tree in acrnprobe Crash tree in acrnprobe
*********************** ***********************
There could be a parent/child relationship between crashes. Refer to the
diagrams below: crashes B and D are the children of crash A, because crash B and
D inherit from crash A, and crash C is the child of crash B.
@ -260,10 +260,10 @@ Example:
* ``channel``: * ``channel``:
The ``channel`` name to get the virtual machine events. The ``channel`` name to get the virtual machine events.
* ``interval``: * ``interval``:
Time interval, in seconds, for polling the VM's image.
* ``syncevent``: * ``syncevent``:
The event type that ``acrnprobe`` will synchronize from the virtual machine's
``crashlog``. Users can specify different types by ID. The event type can also be
indicated by ``type/subtype``.
Log Log
@ -369,6 +369,6 @@ Example:
The name of the channel that the info event uses.
* ``log``: * ``log``:
The log to be collected. The value is the name configured in the log module.
Users can specify different logs by ID.
.. _`XML standard`: http://www.w3.org/TR/REC-xml .. _`XML standard`: http://www.w3.org/TR/REC-xml