doc: update release_2.2 branch documentation
Update documentation in the release_2.2 branch with changes made after tagging for code freeze. Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This commit is contained in:
parent 3b6b5fb662
commit 7e676dbb1c
@@ -33,6 +33,7 @@ Service VM Tutorials
:maxdepth: 1

tutorials/running_deb_as_serv_vm
tutorials/using_yp

User VM Tutorials
*****************

@@ -72,6 +73,7 @@ Enable ACRN Features
tutorials/acrn_on_qemu
tutorials/using_grub
tutorials/pre-launched-rt
tutorials/enable_ivshmem

Debug
*****

@@ -22,4 +22,4 @@ documented in this section.
Hostbridge emulation <hostbridge-virt-hld>
AT keyboard controller emulation <atkbdc-virt-hld>
Split Device Model <split-dm>
Shared memory based inter-vm communication <ivshmem-hld>
Shared memory based inter-VM communication <ivshmem-hld>

@@ -5,7 +5,7 @@ ACRN high-level design overview

ACRN is an open source reference hypervisor (HV) that runs on top of Intel
platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
I/O mediation solution with a permissive license and provides auto makers and
industry users a reference software stack for corresponding use.
@@ -124,7 +124,7 @@ ACRN 2.0
========

ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
and Real-Time (RT) VM.
and real-time (RT) VM.

:numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
compared to ACRN 1.0 is that:

@@ -1016,7 +1016,7 @@ access is like this:
#. If the verification is successful in eMMC RPMB controller, then the
data will be written into storage device.

This work flow of authenticated data read is very similar to this flow
This workflow of authenticated data read is very similar to this flow
above, but in reverse order.

Note that there are some security considerations in this design:

@@ -358,7 +358,7 @@ general workflow of ioeventfd.
:align: center
:name: ioeventfd-workflow

ioeventfd general work flow
ioeventfd general workflow

The workflow can be summarized as:
@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
peripherals.

.. note::
NUC and UP2 platforms do not support IOC hardware, and as such, IOC
Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
virtualization is not supported on these platforms.

The main purpose of IOC virtualization is to transfer data between

@@ -131,7 +131,7 @@ There are five parts in this high-level design:
* State transfer introduces IOC mediator work states
* CBC protocol illustrates the CBC data packing/unpacking
* Power management involves boot/resume/suspend/shutdown flows
* Emulated CBC commands introduces some commands work flow
* Emulated CBC commands introduces some commands workflow

IOC mediator has three threads to transfer data between User VM and Service VM. The
core thread is responsible for data reception, and Tx and Rx threads are

@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
.. figure:: images/partition-image18.png
:align: center

ACRN set-up for guests
**********************
ACRN setup for guests
*********************

Cores
=====
@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. These configurations can be
done in scenario xml file under ``FEATURES`` section as shown in the below example.
done in scenario XML file under ``FEATURES`` section as shown in the below example.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.
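As background for the ``CLOS_MASK`` settings touched in this hunk, the sketch below shows the kind of MSR writes that ultimately take effect on the hardware, illustrated on bare-metal Linux through the ``msr`` driver rather than through ACRN itself. The MSR indices (``IA32_PQR_ASSOC`` at 0xC8F, ``IA32_L3_MASK_0`` at 0xC90) are architectural; the device path and CLOS values are only an example, not something the ACRN configuration requires you to do by hand.

.. code-block:: c

   /* Illustrative only: program a cache mask for CLOS 1 and bind CPU 0 to it,
    * using the Linux msr driver. ACRN performs the equivalent writes through
    * the VMCS MSR load areas described above. */
   #include <fcntl.h>
   #include <stdint.h>
   #include <stdio.h>
   #include <unistd.h>

   #define IA32_L3_MASK_0  0xC90   /* IA32_L3_MASK_n = 0xC90 + n */
   #define IA32_PQR_ASSOC  0xC8F   /* bits 63:32 select the CLOS ID */

   static int wrmsr_on_cpu0(uint32_t msr, uint64_t val)
   {
       int fd = open("/dev/cpu/0/msr", O_WRONLY);   /* needs root + msr module */
       if (fd < 0)
           return -1;
       ssize_t n = pwrite(fd, &val, sizeof(val), msr);
       close(fd);
       return (n == (ssize_t)sizeof(val)) ? 0 : -1;
   }

   int main(void)
   {
       if (wrmsr_on_cpu0(IA32_L3_MASK_0 + 1, 0xF) ||           /* CLOS 1 -> mask 0xF */
           wrmsr_on_cpu0(IA32_PQR_ASSOC, (uint64_t)1 << 32))   /* CPU 0 -> CLOS 1   */
           perror("wrmsr");
       return 0;
   }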
@@ -52,7 +52,7 @@ to enforce the settings.
<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario xml file under ``VM`` section. If user desires
needs to be set in the scenario XML file under ``VM`` section. If user desires
to use CDP feature, CDP_ENABLED should be set to ``y``.

.. code-block:: none

@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
users can check the MBA capabilities such as mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
These configurations can be done in scenario xml file under ``FEATURES`` section
These configurations can be done in scenario XML file under ``FEATURES`` section
as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
for non-root and root modes to enforce the settings.

@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario xml file under ``VM`` section.
needs to be set in the scenario XML file under ``VM`` section.

.. code-block:: none
:emphasize-lines: 2
@@ -15,17 +15,24 @@ Inter-VM Communication Overview
:align: center
:name: ivshmem-architecture-overview

ACRN shared memory based inter-vm communication architecture
ACRN shared memory based inter-VM communication architecture

The ``ivshmem`` device is emulated in the ACRN device model (dm-land)
and its shared memory region is allocated from the Service VM's memory
space. This solution only supports communication between post-launched
VMs.
There are two ways ACRN can emulate the ``ivshmem`` device:

.. note:: In a future implementation, the ``ivshmem`` device could
instead be emulated in the hypervisor (hypervisor-land) and the shared
memory regions reserved in the hypervisor's memory space. This solution
would work for both pre-launched and post-launched VMs.
``ivshmem`` dm-land
The ``ivshmem`` device is emulated in the ACRN device model,
and the shared memory regions are reserved in the Service VM's
memory space. This solution only supports communication between
post-launched VMs.

``ivshmem`` hv-land
The ``ivshmem`` device is emulated in the hypervisor, and the
shared memory regions are reserved in the hypervisor's
memory space. This solution works for both pre-launched and
post-launched VMs.

While both solutions can be used at the same time, Inter-VM communication
may only be done between VMs using the same solution.

ivshmem hv:
The **ivshmem hv** implements register virtualization
@@ -98,89 +105,7 @@ MMIO Registers Definition
Usage
*****

To support two post-launched VMs communicating via an ``ivshmem`` device,
add this line as an ``acrn-dm`` boot parameter::

-s slot,ivshmem,shm_name,shm_size

where

- ``-s slot`` - Specify the virtual PCI slot number

- ``ivshmem`` - Virtual PCI device name

- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the
same ``shm_name`` share a shared memory region.

- ``shm_size`` - Specify a shared memory size. The two communicating
VMs must define the same size.

.. note:: This device can be used with Real-Time VM (RTVM) as well.

Inter-VM Communication Example
******************************

The following example uses inter-vm communication between two Linux-based
post-launched VMs (VM1 and VM2).

.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_

1. Add a new virtual PCI device for both VMs: the device type is
``ivshmem``, shared memory name is ``test``, and shared memory size is
4096 bytes. Both VMs must have the same shared memory name and size:

- VM1 Launch Script Sample

.. code-block:: none
:emphasize-lines: 7

acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 5,virtio-console,@stdio:stdio_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,/home/acrn/uos1.img \
-s 4,virtio-net,tap0 \
-s 6,ivshmem,test,4096 \
-s 7,virtio-rnd \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name

- VM2 Launch Script Sample

.. code-block:: none
:emphasize-lines: 5

acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 2,pci-gvt -G "$2" \
-s 3,virtio-blk,/home/acrn/uos2.img \
-s 4,virtio-net,tap0 \
-s 5,ivshmem,test,4096 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
$vm_name

2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.

- For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
- For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``

3. Use these commands to probe the device::

$ sudo modprobe uio
$ sudo modprobe uio_pci_generic
$ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id

4. Finally, a user application can get the shared memory base address from
the ``ivshmem`` device BAR resource
(``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
the ``ivshmem`` device config resource
(``/sys/class/uio/uioX/device/config``).

The ``X`` in ``uioX`` above, is a number that can be retrieved using the
``ls`` command:

- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
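To make step 4 concrete, here is a minimal user-space sketch that maps the shared memory BAR exposed through sysfs and exchanges a string with the peer VM. It assumes the device landed at 00:06.0 as in the VM1 example above and that the region is 4096 bytes; adjust the PCI address and size to match your launch script.

.. code-block:: c

   /* Minimal sketch: map the ivshmem BAR2 (shared memory) of the virtual
    * PCI device at 00:06.0 and exchange a string with the peer VM. */
   #include <fcntl.h>
   #include <stdio.h>
   #include <string.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       const char *bar2 = "/sys/bus/pci/devices/0000:00:06.0/resource2";
       int fd = open(bar2, O_RDWR);
       if (fd < 0) {
           perror("open");
           return 1;
       }

       char *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
       if (shm == MAP_FAILED) {
           perror("mmap");
           close(fd);
           return 1;
       }

       strcpy(shm, "hello from VM1");          /* the peer VM sees this string */
       printf("shared memory says: %s\n", shm);

       munmap(shm, 4096);
       close(fd);
       return 0;
   }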
For usage information, see :ref:`enable_ivshmem`

Inter-VM Communication Security hardening (BKMs)
************************************************

@@ -86,7 +86,7 @@ I/O ports definition::
RTC emulation
=============

ACRN supports RTC (Real-Time Clock) that can only be accessed through
ACRN supports RTC (real-time clock) that can only be accessed through
I/O ports (0x70 and 0x71).

0x70 is used to access CMOS address register and 0x71 is used to access
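As context for the RTC hunk above, the access pattern the emulation has to honor is the classic index/data pair: the guest writes a register index to port 0x70 and then reads or writes the value through port 0x71. A minimal bare-metal-style sketch (run as root on an x86 Linux host, using ``ioperm``) looks like this; register index 0x00 (seconds) is just an example.

.. code-block:: c

   /* Illustrative only: read the CMOS "seconds" register the same way a
    * guest would, via the 0x70 (index) / 0x71 (data) port pair that the
    * ACRN vRTC emulates. */
   #include <stdio.h>
   #include <sys/io.h>

   int main(void)
   {
       if (ioperm(0x70, 2, 1)) {        /* grant access to ports 0x70-0x71 */
           perror("ioperm");
           return 1;
       }
       outb(0x00, 0x70);                /* select CMOS register 0x00 (seconds) */
       unsigned char sec = inb(0x71);   /* read its value */
       printf("CMOS seconds register: 0x%02x\n", sec);
       return 0;
   }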
@@ -61,7 +61,7 @@ Add the following parameters into the command line::
controller_name, you can use it as controller_name directly. You can
also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
be used to match /dev/XXX, then use XXX as the controller_name. On MRB
and NUC platforms, the controller_name are gpiochip0, gpiochip1,
and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
gpiochip2.gpiochip3.

- **offset|name**: you can use gpio offset or its name to locate one
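One way to double-check which ``controller_name`` to pass is to query the character device directly with the Linux GPIO uAPI; the sketch below prints the name, label, and line count of ``/dev/gpiochip0`` (pick the chip node that matches your board).

.. code-block:: c

   /* Minimal sketch: print the name/label/line count of a GPIO controller
    * so it can be matched against the controller_name used by acrn-dm. */
   #include <fcntl.h>
   #include <linux/gpio.h>
   #include <stdio.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   int main(void)
   {
       int fd = open("/dev/gpiochip0", O_RDONLY);
       if (fd < 0) {
           perror("open /dev/gpiochip0");
           return 1;
       }

       struct gpiochip_info info;
       if (ioctl(fd, GPIO_GET_CHIPINFO_IOCTL, &info)) {
           perror("GPIO_GET_CHIPINFO_IOCTL");
           close(fd);
           return 1;
       }

       printf("name: %s, label: %s, lines: %u\n",
              info.name, info.label, info.lines);
       close(fd);
       return 0;
   }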
@@ -34,8 +34,8 @@ It receives read/write commands from the watchdog driver, does the
actions, and returns. In ACRN, the commands are from User VM
watchdog driver.

User VM watchdog work flow
**************************
User VM watchdog workflow
*************************

When the User VM does a read or write operation on the watchdog device's
registers or memory space (Port IO or Memory map I/O), it will trap into

@@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
host PFNs, a malicious guest may use those EPT PTEs to construct an attack.

A special aspect of L1TF in the context of virtualization is symmetric
multi threading (SMT), e.g. Intel |reg| Hyper-Threading Technology.
multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
Logical processors on the affected physical cores share the L1 Data Cache
(L1D). This fact could make more variants of L1TF-based attack, e.g.
a malicious guest running on one logical processor can attack the data which

@@ -88,11 +88,11 @@ Guest -> guest Attack
=====================

The possibility of guest -> guest attack varies on specific configuration,
e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.
e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.

If CPU partitioning is enabled (default policy in ACRN), there is
1:1 mapping between vCPUs and pCPUs i.e. no sharing of pCPU. There
may be an attack possibility when Hyper-Threading is on, where
may be an attack possibility when Hyper-threading is on, where
logical processors of same physical core may be allocated to two
different guests. Then one guest may be able to attack the other guest
on sibling thread due to shared L1D.

@@ -221,7 +221,7 @@ This mitigation is always enabled.
Core-based scheduling
=====================

If Hyper-Threading is enabled, it's important to avoid running
If Hyper-threading is enabled, it's important to avoid running
sensitive context (if containing security data which a given VM
has no permission to access) on the same physical core that runs
said VM. It requires scheduler enhancement to enable core-based

@@ -265,9 +265,9 @@ requirements:
- Doing 5) is not feasible, or
- CPU sharing is enabled (in the future)

If Hyper-Threading is enabled, there is no available mitigation
If Hyper-threading is enabled, there is no available mitigation
option before core scheduling is planned. User should understand
the security implication and only turn on Hyper-Threading
the security implication and only turn on Hyper-threading
when the potential risk is acceptable to their usage.

Mitigation Status
@@ -566,7 +566,7 @@ The following table shows some use cases of module level configuration design:
- This module is used to virtualize part of LAPIC functionalities.
It can be done via APICv or software emulation depending on CPU
capabilities.
For example, KBL NUC doesn't support virtual-interrupt delivery, while
For example, KBL Intel NUC doesn't support virtual-interrupt delivery, while
other platforms support it.
- If a function pointer is used, the prerequisite is
"hv_operation_mode == OPERATIONAL".

@@ -31,8 +31,8 @@ details:
* :option:`CONFIG_UOS_RAM_SIZE`
* :option:`CONFIG_HV_RAM_SIZE`

For example, if the NUC's physical memory size is 32G, you may follow these steps
to make the new uefi ACRN hypervisor, and then deploy it onto the NUC board to boot
For example, if the Intel NUC's physical memory size is 32G, you may follow these steps
to make the new UEFI ACRN hypervisor, and then deploy it onto the Intel NUC to boot
the ACRN Service VM with the 32G memory size.

#. Use ``make menuconfig`` to change the ``RAM_SIZE``::

@@ -54,7 +54,7 @@ distribution.

.. note::
ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
for configuring hypervisor options and using python's ``kconfiglib``
for configuring hypervisor options and using Python's ``kconfiglib``
library.

Install the necessary tools for the following systems:

@@ -79,8 +79,17 @@ Install the necessary tools for the following systems:
libblkid-dev \
e2fslibs-dev \
pkg-config \
libnuma-dev
libnuma-dev \
liblz4-tool \
flex \
bison

$ sudo pip3 install kconfiglib
$ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
$ tar zxvf acpica-unix-20191018.tar.gz
$ cd acpica-unix-20191018
$ make clean && make iasl
$ sudo cp ./generate/unix/bin/iasl /usr/sbin/

.. note::
ACRN requires ``gcc`` version 7.3.* (or higher) and ``binutils`` version

@@ -274,7 +283,4 @@ of the acrn-hypervisor directory):
from XML files. If the ``TARGET_DIR`` is not specified, the original
configuration files of acrn-hypervisor would be overridden.

In the 2.1 release, there is a known issue (:acrn-issue:`5157`) that
``TARGET_DIR=xxx`` does not work.

Follow the same instructions to boot and test the images you created from your build.
@@ -7,9 +7,9 @@ Verified version
****************

- Ubuntu version: **18.04**
- GCC version: **9.0**
- ACRN-hypervisor branch: **release_2.0 (acrn-2020w23.6-180000p)**
- ACRN-Kernel (Service VM kernel): **release_2.0 (5.4.43-PKT-200203T060100Z)**
- GCC version: **7.4**
- ACRN-hypervisor branch: **release_2.2 (acrn-2020w40.1-180000p)**
- ACRN-Kernel (Service VM kernel): **release_2.2 (5.4.43-PKT-200203T060100Z)**
- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
- HW: Maxtang Intel WHL-U i7-8665U (`AX8665U-A2 <http://www.maxtangpc.com/fanlessembeddedcomputers/140.html>`_)

@@ -34,7 +34,7 @@ Hardware Connection
Connect the WHL Maxtang with the appropriate external devices.

#. Connect the WHL Maxtang board to a monitor via an HDMI cable.
#. Connect the mouse, keyboard, ethernet cable, and power supply cable to
#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
the WHL Maxtang board.
#. Insert the Ubuntu 18.04 USB boot disk into the USB port.

@@ -55,7 +55,7 @@ Install Ubuntu on the SATA disk
#. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
#. Power on the machine, then press F11 to select the USB disk as the boot
device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
label depends on the brand/make of the USB stick.
label depends on the brand/make of the USB drive.
#. Install the Ubuntu OS.
#. Select **Something else** to create the partition.

@@ -72,7 +72,7 @@ Install Ubuntu on the SATA disk
#. Complete the Ubuntu installation on ``/dev/sda``.

This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_)
to turn it into a Real-Time User VM (RTVM).
to turn it into a real-time User VM (RTVM).

Install the Ubuntu Service VM on the NVMe disk
==============================================

@@ -87,7 +87,7 @@ Install Ubuntu on the NVMe disk
#. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
#. Power on the machine, then press F11 to select the USB disk as the boot
device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
label depends on the brand/make of the USB stick.
label depends on the brand/make of the USB drive.
#. Install the Ubuntu OS.
#. Select **Something else** to create the partition.

@@ -103,7 +103,7 @@ Install Ubuntu on the NVMe disk

#. Complete the Ubuntu installation and reboot the system.

.. note:: Set **acrn** as the username for the Ubuntu Service VM.
.. note:: Set ``acrn`` as the username for the Ubuntu Service VM.

Build and Install ACRN on Ubuntu
@@ -155,6 +155,28 @@ Build the ACRN Hypervisor on Ubuntu

$ sudo pip3 install kconfiglib

#. Starting with the ACRN v2.2 release, we use the ``iasl`` tool to
compile an offline ACPI binary for pre-launched VMs while building ACRN,
so we need to install the ``iasl`` tool in the ACRN build environment.

Follow these steps to install ``iasl`` (and its dependencies) and
then update the ``iasl`` binary with a newer version not available
in Ubuntu 18.04:

.. code-block:: none

$ sudo -E apt-get install iasl
$ cd /home/acrn/work
$ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz
$ tar zxvf acpica-unix-20191018.tar.gz
$ cd acpica-unix-20191018
$ make clean && make iasl
$ sudo cp ./generate/unix/bin/iasl /usr/sbin/

.. note:: While there are newer versions of software available from
the `ACPICA downloads site <https://acpica.org/downloads>`_, this
20191018 version has been verified to work.

#. Get the ACRN source code:

.. code-block:: none

@@ -163,30 +185,20 @@ Build the ACRN Hypervisor on Ubuntu
$ git clone https://github.com/projectacrn/acrn-hypervisor
$ cd acrn-hypervisor

#. Switch to the v2.0 version:
#. Switch to the v2.2 version:

.. code-block:: none

$ git checkout -b v2.0 remotes/origin/release_2.0
$ git checkout -b v2.2 remotes/origin/release_2.2

#. Build ACRN:

.. code-block:: none

$ make all BOARD_FILE=misc/acrn-config/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0
$ make all BOARD_FILE=misc/vm-configs/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/vm-configs/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0
$ sudo make install
$ sudo cp build/hypervisor/acrn.bin /boot/acrn/

Enable network sharing for the User VM
======================================

In the Ubuntu Service VM, enable network sharing for the User VM:

.. code-block:: none

$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd

Build and install the ACRN kernel
=================================
@@ -202,7 +214,7 @@ Build and install the ACRN kernel

.. code-block:: none

$ git checkout -b v2.0 remotes/origin/release_2.0
$ git checkout -b v2.2 remotes/origin/release_2.2
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all

@@ -256,6 +268,7 @@ Update Grub for the Ubuntu Service VM
GRUB_DEFAULT=ubuntu-service-vm
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
GRUB_CMDLINE_LINUX="text"

#. Update Grub on your system:

@@ -263,6 +276,17 @@ Update Grub for the Ubuntu Service VM

$ sudo update-grub

Enable network sharing for the User VM
======================================

In the Ubuntu Service VM, enable network sharing for the User VM:

.. code-block:: none

$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd

Reboot the system
=================
@@ -287,7 +311,7 @@ BIOS settings of GVT-d for WaaG
-------------------------------

.. note::
Skip this step if you are using a Kaby Lake (KBL) NUC.
Skip this step if you are using a Kaby Lake (KBL) Intel NUC.

Go to **Chipset** -> **System Agent (SA) Configuration** -> **Graphics
Configuration** and make the following settings:

@@ -313,9 +337,13 @@ The User VM will be launched by OVMF, so copy it to the specific folder:
Install IASL in Ubuntu for User VM launch
-----------------------------------------

ACRN uses ``iasl`` to parse **User VM ACPI** information. The original ``iasl``
in Ubuntu 18.04 is too old to match with ``acrn-dm``; update it using the
following steps:
Starting with the ACRN v2.2 release, we use the ``iasl`` tool to
compile an offline ACPI binary for pre-launched VMs while building ACRN,
so we need to install the ``iasl`` tool in the ACRN build environment.

Follow these steps to install ``iasl`` (and its dependencies) and
then update the ``iasl`` binary with a newer version not available
in Ubuntu 18.04:

.. code-block:: none

@@ -327,6 +355,11 @@ following steps:
$ make clean && make iasl
$ sudo cp ./generate/unix/bin/iasl /usr/sbin/

.. note:: While there are newer versions of software available from
the `ACPICA downloads site <https://acpica.org/downloads>`_, this
20191018 version has been verified to work.

Build and Install the RT kernel for the Ubuntu User VM
------------------------------------------------------
@@ -441,7 +474,7 @@ Recommended BIOS settings for RTVM
.. csv-table::
:widths: 15, 30, 10

"Hyper-Threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
"Hyper-threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
"Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
"Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
"Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"

@@ -458,7 +491,7 @@ Recommended BIOS settings for RTVM
"Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
"DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
"Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
"USB Periodic Smi", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
"USB Periodic SMI", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
"ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
"Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"

@@ -522,13 +555,13 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
# Move all rcu tasks to core 0.
for i in `pgrep rcu`; do taskset -pc 0 $i; done

# Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
# Change real-time attribute of all rcu tasks to SCHED_OTHER and priority 0
for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done

# Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
# Change real-time attribute of all tasks on core 1 to SCHED_OTHER and priority 0
for i in `pgrep /1`; do chrt -v -o -p 0 $i; done

# Change realtime attribute of all tasks to SCHED_OTHER and priority 0
# Change real-time attribute of all tasks to SCHED_OTHER and priority 0
for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done

echo disabling timer migration

@@ -668,7 +701,8 @@ Passthrough a hard disk to RTVM
--ovmf /usr/share/acrn/bios/OVMF.fd \
hard_rtvm

#. Upon deployment completion, launch the RTVM directly onto your WHL NUC:
#. Upon deployment completion, launch the RTVM directly onto your WHL
Intel NUC:

.. code-block:: none
@@ -6,12 +6,12 @@ What is ACRN
Introduction to Project ACRN
****************************

ACRN |trade| is a, flexible, lightweight reference hypervisor, built with
ACRN |trade| is a flexible, lightweight reference hypervisor, built with
real-time and safety-criticality in mind, and optimized to streamline
embedded development through an open source platform. ACRN defines a
device hypervisor reference stack and an architecture for running
multiple software subsystems, managed securely, on a consolidated system
by means of a virtual machine manager (VMM). It also defines a reference
using a virtual machine manager (VMM). It also defines a reference
framework implementation for virtual device emulation, called the "ACRN
Device Model".

@@ -69,7 +69,7 @@ through the Device Model. Currently, the service VM is based on Linux,
but it can also use other operating systems as long as the ACRN Device
Model is ported into it. A user VM can be Ubuntu*, Android*,
Windows* or VxWorks*. There is one special user VM, called a
post-launched Real-Time VM (RTVM), designed to run a hard real-time OS,
post-launched real-time VM (RTVM), designed to run a hard real-time OS,
such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
can be used for soft programmable logic controller (PLC), inter-process
communication (IPC), or Robotics applications.

@@ -87,7 +87,7 @@ platform to run both safety-critical applications and non-safety
applications, together with security functions that safeguard the
system.

There are a number of pre-defined scenarios included in ACRN's source code. They
There are a number of predefined scenarios included in ACRN's source code. They
all build upon the three fundamental modes of operation that have been explained
above, i.e. the *logical partitioning*, *sharing*, and *hybrid* modes. They
further specify the number of VMs that can be run, their attributes and the

@@ -130,7 +130,7 @@ In total, up to 7 post-launched User VMs can be started:
- 5 regular User VMs,
- One `Kata Containers <https://katacontainers.io>`_ User VM (see
:ref:`run-kata-containers` for more details), and
- One Real-Time VM (RTVM).
- One real-time VM (RTVM).

In this example, one post-launched User VM provides Human Machine Interface
(HMI) capability, another provides Artificial Intelligence (AI) capability, some

@@ -157,15 +157,15 @@ Industrial usage scenario:
with tools such as Kubernetes*.
- The HMI Application OS can be Windows* or Linux*. Windows is dominant
in Industrial HMI environments.
- ACRN can support a soft Real-time OS such as preempt-rt Linux for
soft-PLC control, or a hard Real-time OS that offers less jitter.
- ACRN can support a soft real-time OS such as preempt-rt Linux for
soft-PLC control, or a hard real-time OS that offers less jitter.

Automotive Application Scenarios
================================

As shown in :numref:`V2-SDC-scenario`, the ACRN hypervisor can be used
for building Automotive Software Defined Cockpit (SDC) and In-Vehicle
Experience (IVE) solutions.
for building Automotive Software Defined Cockpit (SDC) and in-vehicle
experience (IVE) solutions.

.. figure:: images/ACRN-V2-SDC-scenario.png
:width: 600px

@@ -177,12 +177,12 @@ Experience (IVE) solutions.
As a reference implementation, ACRN provides the basis for embedded
hypervisor vendors to build solutions with a reference I/O mediation
solution. In this scenario, an automotive SDC system consists of the
Instrument Cluster (IC) system running in the Service VM and the In-Vehicle
Infotainment (IVI) system is running the post-launched User VM. Additionally,
instrument cluster (IC) system running in the Service VM and the in-vehicle
infotainment (IVI) system is running the post-launched User VM. Additionally,
one could modify the SDC scenario to add more post-launched User VMs that can
host Rear Seat Entertainment (RSE) systems (not shown on the picture).
host rear seat entertainment (RSE) systems (not shown on the picture).

An **Instrument Cluster (IC)** system is used to show the driver operational
An **instrument cluster (IC)** system is used to show the driver operational
information about the vehicle, such as:

- the speed, fuel level, trip mileage, and other driving information of
@@ -191,14 +191,14 @@ information about the vehicle, such as:
fuel or tire pressure;
- showing rear-view and surround-view cameras for parking assistance.

An **In-Vehicle Infotainment (IVI)** system's capabilities can include:
An **in-vehicle infotainment (IVI)** system's capabilities can include:

- navigation systems, radios, and other entertainment systems;
- connection to mobile devices for phone calls, music, and applications
via voice recognition;
- control interaction by gesture recognition or touch.

A **Rear Seat Entertainment (RSE)** system could run:
A **rear seat entertainment (RSE)** system could run:

- entertainment system;
- virtual office;

@@ -221,7 +221,7 @@ A block diagram of ACRN's SDC usage scenario is shown in
capabilities.
- Resources are partitioned to ensure safety-critical and
non-safety-critical domains are able to coexist on one platform.
- Rich I/O mediators allows sharing of various I/O devices across VMs,
- Rich I/O mediators allow sharing of various I/O devices across VMs,
delivering a comprehensive user experience.
- Multiple operating systems are supported by one SoC through efficient
virtualization.

@@ -229,15 +229,15 @@ A block diagram of ACRN's SDC usage scenario is shown in
Best Known Configurations
*************************

The ACRN Github codebase defines five best known configurations (BKC)
The ACRN GitHub codebase defines five best known configurations (BKC)
targeting SDC and Industry usage scenarios. Developers can start with
one of these pre-defined configurations and customize it to their own
one of these predefined configurations and customize it to their own
application scenario needs.

.. list-table:: Scenario-based Best Known Configurations
:header-rows: 1

* - Pre-defined BKC
* - Predefined BKC
- Usage Scenario
- VM0
- VM1

@@ -256,7 +256,7 @@ application scenario needs.
- Service VM
- Up to 5 Post-launched VMs
- One Kata Containers VM
- Post-launched RTVM (Soft or Hard realtime)
- Post-launched RTVM (Soft or Hard real-time)

* - Hybrid Usage Config
- Hybrid

@@ -265,9 +265,9 @@ application scenario needs.
- Post-launched VM
-

* - Hybrid Real-Time Usage Config
* - Hybrid real-time Usage Config
- Hybrid RT
- Pre-launched VM (Real-Time VM)
- Pre-launched VM (real-time VM)
- Service VM
- Post-launched VM
-

@@ -284,8 +284,8 @@ Here are block diagrams for each of these four scenarios.
SDC scenario
============

In this SDC scenario, an Instrument Cluster (IC) system runs with the
Service VM and an In-Vehicle Infotainment (IVI) system runs in a user
In this SDC scenario, an instrument cluster (IC) system runs with the
Service VM and an in-vehicle infotainment (IVI) system runs in a user
VM.

.. figure:: images/ACRN-V2-SDC-scenario.png
@@ -300,10 +300,10 @@ Industry scenario

In this Industry scenario, the Service VM provides device sharing capability for
a Windows-based HMI User VM. One post-launched User VM can run a Kata Container
application. Another User VM supports either hard or soft Real-time OS
application. Another User VM supports either hard or soft real-time OS
applications. Up to five additional post-launched User VMs support functions
such as Human Machine Interface (HMI), Artificial Intelligence (AI), Computer
Vision, etc.
such as human/machine interface (HMI), artificial intelligence (AI), computer
vision, etc.

.. figure:: images/ACRN-Industry.png
:width: 600px

@@ -326,10 +326,10 @@ non-real-time tasks.

Hybrid scenario

Hybrid Real-Time (RT) scenario
Hybrid real-time (RT) scenario
==============================

In this Hybrid Real-Time (RT) scenario, a pre-launched RTVM is started by the
In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
non-real-time tasks.

@@ -401,7 +401,7 @@ The ACRN hypervisor can be booted from a third-party bootloader
directly. A popular bootloader is `grub`_ and is
also widely used by Linux distributions.

:ref:`using_grub` has a introduction on how to boot ACRN hypervisor with GRUB.
:ref:`using_grub` has an introduction on how to boot ACRN hypervisor with GRUB.

In :numref:`boot-flow-2`, we show the boot sequence:

@@ -425,8 +425,8 @@ In this boot mode, the boot options of pre-launched VM and service VM are define
in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config``
in the source code ``misc/vm_configs/$(SCENARIO)/vm_configurations.c`` by default.
Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
details. The boot options of post-launched VM is not covered by hypervisor
source code or GRUB menu, it is defined in guest image file or specified by
details. The boot options of a post-launched VM are not covered by hypervisor
source code or a GRUB menu; they are defined in a guest image file or specified by
launch scripts.

.. note::

@@ -458,11 +458,11 @@ all types of Virtual Machines (VMs) represented:
- Pre-launched Service VM
- Post-launched User VM
- Kata Container VM (post-launched)
- Real-Time VM (RTVM)
- real-time VM (RTVM)

The Service VM owns most of the devices including the platform devices, and
provides I/O mediation. The notable exceptions are the devices assigned to the
pre-launched User VM. Some of the PCIe devices may be passed through
pre-launched User VM. Some PCIe devices may be passed through
to the post-launched User OSes via the VM configuration. The Service VM runs
hypervisor-specific applications together, such as the ACRN device model, and
ACRN VM manager.
@@ -500,10 +500,10 @@ usually not used by commercial OSes).
As shown in :numref:`VMX-brief`, VMM mode and guest mode are switched
through VM Exit and VM Entry. When the bootloader hands off control to
the ACRN hypervisor, the processor hasn't enabled VMX operation yet. The
ACRN hypervisor needs to enable VMX operation thru a VMXON instruction
ACRN hypervisor needs to enable VMX operation through a VMXON instruction
first. Initially, the processor stays in VMM mode when the VMX operation
is enabled. It enters guest mode thru a VM resume instruction (or first
time VM launch), and returns back to VMM mode thru a VM exit event. VM
is enabled. It enters guest mode through a VM resume instruction (or
first-time VM launch), and returns to VMM mode through a VM exit event. VM
exit occurs in response to certain instructions and events.

The behavior of processor execution in guest mode is controlled by a

@@ -522,7 +522,7 @@ reason (for example if a guest memory page is not mapped yet) and resume
the guest to re-execute the instruction.

Note that the address space used in VMM mode is different from that in
guest mode. The guest mode and VMM mode use different memory mapping
guest mode. The guest mode and VMM mode use different memory-mapping
tables, and therefore the ACRN hypervisor is protected from guest
access. The ACRN hypervisor uses EPT to map the guest address, using the
guest page table to map from guest linear address to guest physical

@@ -537,7 +537,7 @@ used to give VM applications (and OSes) access to these shared devices.
Traditionally there are three architectural approaches to device
emulation:

* The first architecture is **device emulation within the hypervisor** which
* The first architecture is **device emulation within the hypervisor**, which
is a common method implemented within the VMware\* workstation product
(an operating system-based hypervisor). In this method, the hypervisor
includes emulations of common devices that the various guest operating

@@ -548,7 +548,7 @@ emulation:
name implies, rather than the device emulation being embedded within
the hypervisor, it is instead implemented in a separate user space
application. QEMU, for example, provides this kind of device emulation
also used by a large number of independent hypervisors. This model is
also used by many independent hypervisors. This model is
advantageous, because the device emulation is independent of the
hypervisor and can therefore be shared for other hypervisors. It also
permits arbitrary device emulation without having to burden the

@@ -557,11 +557,11 @@ emulation:

* The third variation on hypervisor-based device emulation is
**paravirtualized (PV) drivers**. In this model introduced by the `XEN
project`_ the hypervisor includes the physical drivers, and each guest
Project`_, the hypervisor includes the physical drivers, and each guest
operating system includes a hypervisor-aware driver that works in
concert with the hypervisor drivers.

.. _XEN project:
.. _XEN Project:
https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum

In the device emulation models discussed above, there's a price to pay
@@ -600,14 +600,14 @@ ACRN Device model incorporates these three aspects:
**VHM**:
The Virtio and Hypervisor Service Module is a kernel module in the
Service VM acting as a middle layer to support the device model. The VHM
and its client handling flow is described below:
client handling flow is described below:

#. ACRN hypervisor IOREQ is forwarded to the VHM by an upcall
notification to the Service VM.
#. VHM will mark the IOREQ as "in process" so that the same IOREQ will
not pick up again. The IOREQ will be sent to the client for handling.
Meanwhile, the VHM is ready for another IOREQ.
#. IOREQ clients are either an Service VM Userland application or a Service VM
#. IOREQ clients are either a Service VM Userland application or a Service VM
Kernel space module. Once the IOREQ is processed and completed, the
Client will issue an IOCTL call to the VHM to notify an IOREQ state
change. The VHM then checks and hypercalls to ACRN hypervisor

@@ -646,7 +646,7 @@ Finally, there may be specialized PCI devices that only one guest domain
uses, so they should be passed through to the guest. Individual USB
ports could be isolated to a given domain too, or a serial port (which
is itself not shareable) could be isolated to a particular guest. In
ACRN hypervisor, we support USB controller passthrough only and we
ACRN hypervisor, we support USB controller passthrough only, and we
don't support passthrough for a legacy serial port, (for example
0x3f8).

@@ -701,8 +701,8 @@ ACRN I/O mediator

Following along with the numbered items in :numref:`io-emulation-path`:

1. When a guest execute an I/O instruction (PIO or MMIO), a VM exit happens.
ACRN hypervisor takes control, and analyzes the the VM
1. When a guest executes an I/O instruction (PIO or MMIO), a VM exit happens.
ACRN hypervisor takes control, and analyzes the VM
exit reason, which is a VMX_EXIT_REASON_IO_INSTRUCTION for PIO access.
2. ACRN hypervisor fetches and analyzes the guest instruction, and
notices it is a PIO instruction (``in AL, 20h`` in this example), and put

@@ -717,7 +717,7 @@ Following along with the numbered items in :numref:`io-emulation-path`:
module is activated to execute its processing APIs. Otherwise, the VHM
module leaves the IO request in the shared page and wakes up the
device model thread to process.
5. The ACRN device model follow the same mechanism as the VHM. The I/O
5. The ACRN device model follows the same mechanism as the VHM. The I/O
processing thread of device model queries the IO request ring to get the
PIO instruction details and checks to see if any (guest) device emulation
module claims ownership of the IO port: if a module claimed it,

@@ -726,14 +726,14 @@ Following along with the numbered items in :numref:`io-emulation-path`:
in this example), (say uDev1 here), uDev1 puts the result into the
shared page (in register AL in this example).
7. ACRN device model then returns control to ACRN hypervisor to indicate the
completion of an IO instruction emulation, typically thru VHM/hypercall.
completion of an IO instruction emulation, typically through VHM/hypercall.
8. The ACRN hypervisor then knows IO emulation is complete, and copies
the result to the guest register context.
9. The ACRN hypervisor finally advances the guest IP to
indicate completion of instruction execution, and resumes the guest.

The MMIO path is very similar, except the VM exit reason is different. MMIO
access usually is trapped thru VMX_EXIT_REASON_EPT_VIOLATION in
access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in
the hypervisor.

Virtio framework architecture
|
||||
@@ -750,7 +750,7 @@ should have a straightforward, efficient, standard and extensible
mechanism for virtual devices, rather than boutique per-environment or
per-OS mechanisms.

Virtio provides a common frontend driver framework which not only
Virtio provides a common frontend driver framework that not only
standardizes device interfaces, but also increases code reuse across
different virtualization platforms.

@@ -786,16 +786,16 @@ here:
and BE drivers to interact with each other. For example, FE driver could
read/write registers of the device, and the virtual device could
interrupt FE driver, on behalf of the BE driver, in case of something is
happening. Currently Virtio supports PCI/PCIe bus and MMIO bus. In
happening. Currently, Virtio supports PCI/PCIe bus and MMIO bus. In
ACRN project, only PCI/PCIe bus is supported, and all the Virtio devices
share the same vendor ID 0x1AF4.

**Efficient**: batching operation is encouraged
Batching operation and deferred notification are important to achieve
high-performance I/O, since notification between FE and BE driver
usually involves an expensive exit of the guest. Therefore batching
usually involves an expensive exit of the guest. Therefore, batching
operating and notification suppression are highly encouraged if
possible. This will give an efficient implementation for the performance
possible. This will give an efficient implementation for performance
critical devices.

**Standard: virtqueue**
@@ -811,9 +811,10 @@ here:

The virtqueues are created in guest physical memory by the FE drivers.
The BE drivers only need to parse the virtqueue structures to obtain
the requests and get the requests done. How virtqueue is organized is
the requests and get the requests done. Virtqueue organization is
specific to the User OS. In the implementation of Virtio in Linux, the
virtqueue is implemented as a ring buffer structure called vring.
virtqueue is implemented as a ring buffer structure called
``vring``.

In ACRN, the virtqueue APIs can be leveraged
directly so users don't need to worry about the details of the
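For readers unfamiliar with the vring mentioned in this hunk, the split-ring structures from the Virtio 0.9/1.0 specification look roughly like this; field names follow the spec, and this is shown only as background, not as ACRN code.

.. code-block:: c

   /* Split virtqueue ("vring") layout per the Virtio spec: a descriptor
    * table, an avail ring written by the frontend, and a used ring
    * written by the backend. For reference only. */
   #include <stdint.h>

   struct vring_desc {
       uint64_t addr;    /* guest-physical address of the buffer */
       uint32_t len;     /* buffer length */
       uint16_t flags;   /* NEXT / WRITE / INDIRECT */
       uint16_t next;    /* index of the next chained descriptor */
   };

   struct vring_avail {
       uint16_t flags;
       uint16_t idx;     /* where the frontend puts the next entry */
       uint16_t ring[];  /* descriptor indexes offered to the backend */
   };

   struct vring_used_elem {
       uint32_t id;      /* head of the completed descriptor chain */
       uint32_t len;     /* total bytes written by the backend */
   };

   struct vring_used {
       uint16_t flags;
       uint16_t idx;
       struct vring_used_elem ring[];
   };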
@@ -823,7 +824,7 @@ here:
**Extensible: feature bits**
A simple extensible feature negotiation mechanism exists for each virtual
device and its driver. Each virtual device could claim its
device specific features while the corresponding driver could respond to
device-specific features while the corresponding driver could respond to
the device with the subset of features the driver understands. The
feature mechanism enables forward and backward compatibility for the
virtual device and driver.

@@ -839,11 +840,11 @@ space as shown in :numref:`virtio-framework-userland`:
Virtio Framework - User Land

In the Virtio user-land framework, the implementation is compatible with
Virtio Spec 0.9/1.0. The VBS-U is statically linked with Device Model,
and communicates with Device Model through the PCIe interface: PIO/MMIO
or MSI/MSIx. VBS-U accesses Virtio APIs through user space vring service
API helpers. User space vring service API helpers access shared ring
through remote memory map (mmap). VHM maps User VM memory with the help of
Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model,
and communicates with the Device Model through the PCIe interface: PIO/MMIO
or MSI/MSIx. VBS-U accesses Virtio APIs through the user space ``vring`` service
API helpers. User space ``vring`` service API helpers access shared ring
through a remote memory map (mmap). VHM maps User VM memory with the help of
ACRN Hypervisor.

.. figure:: images/virtio-framework-kernel.png

@@ -856,10 +857,10 @@ ACRN Hypervisor.
VBS-U offloads data plane processing to VBS-K. VBS-U initializes VBS-K
at the right timings, for example. The FE driver sets
VIRTIO_CONFIG_S_DRIVER_OK to avoid unnecessary device configuration
changes while running. VBS-K can access shared rings through VBS-K
changes while running. VBS-K can access shared rings through the VBS-K
virtqueue APIs. VBS-K virtqueue APIs are similar to VBS-U virtqueue
APIs. VBS-K registers as VHM client(s) to handle a continuous range of
registers
APIs. VBS-K registers as a VHM client to handle a continuous range of
registers.

There may be one or more VHM-clients for each VBS-K, and there can be a
single VHM-client for all VBS-Ks as well. VBS-K notifies FE through VHM
@@ -12,7 +12,7 @@ Minimum System Requirements for Installing ACRN
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| Hardware               | Minimum Requirements              | Recommended                                                                     |
+========================+===================================+=================================================================================+
| Processor              | Compatible x86 64-bit processor   | 2 core with Intel Hyper Threading Technology enabled in the BIOS or more cores  |
| Processor              | Compatible x86 64-bit processor   | 2 core with Intel Hyper-threading Technology enabled in the BIOS or more cores  |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+
| System memory          | 4GB RAM                           | 8GB or more (< 32G)                                                             |
+------------------------+-----------------------------------+---------------------------------------------------------------------------------+

@@ -29,13 +29,25 @@ Platforms with multiple PCI segments

ACRN assumes the following conditions are satisfied from the Platform BIOS

* All the PCI device BARs should be assigned resources, including SR-IOv VF BARs if a device supports.
* All the PCI device BARs should be assigned resources, including SR-IOV VF BARs if a device supports.

* Bridge windows for PCI bridge devices and the resources for root bus, should be programmed with values
that enclose resources used by all the downstream devices.

* There should be no conflict in resources among the PCI devices and also between PCI devices and other platform devices.
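A quick way to sanity-check these BAR-assignment assumptions from a running Linux system is to read a device's ``resource`` file in sysfs, which lists the start, end, and flags of each BAR; a zero range on a BAR the device is supposed to have usually means the BIOS left it unassigned. The device address below is only an example; the sketch is illustrative, not part of the ACRN tooling.

.. code-block:: c

   /* Minimal sketch: print the BAR ranges of one PCI device from sysfs.
    * Each line of the "resource" file is "<start> <end> <flags>". */
   #include <stdio.h>

   int main(void)
   {
       const char *path = "/sys/bus/pci/devices/0000:00:02.0/resource";
       FILE *f = fopen(path, "r");
       if (!f) {
           perror("fopen");
           return 1;
       }

       unsigned long long start, end, flags;
       int bar = 0;
       while (fscanf(f, "%llx %llx %llx", &start, &end, &flags) == 3) {
           if (end)   /* skip unused BAR slots */
               printf("BAR%d: 0x%llx-0x%llx flags 0x%llx\n", bar, start, end, flags);
           bar++;
       }
       fclose(f);
       return 0;
   }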
New Processor Families
|
||||
**********************
|
||||
|
||||
Here are announced Intel processor architectures that are supported by ACRN v2.2, but don't yet have a recommended platform available:
|
||||
|
||||
* `Tiger Lake <https://ark.intel.com/content/www/us/en/ark/products/codename/88759/tiger-lake.html#@Embedded>`_
|
||||
(Q3'2020 Launch Date)
|
||||
* `Elkhart Lake <https://ark.intel.com/content/www/us/en/ark/products/codename/128825/elkhart-lake.html#@Embedded>`_
|
||||
(Q1'2021 Launch Date)
|
||||
|
||||
|
||||
Verified Platforms According to ACRN Usage
|
||||
******************************************
|
||||
|
||||
@ -69,7 +81,7 @@ For general instructions setting up ACRN on supported hardware platforms, visit
|
||||
|
||||
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| Platform (Intel x86) | Product/Kit Name | Usage Scenerio - BKC Examples |
|
||||
| Platform (Intel x86) | Product/Kit Name | Usage Scenario - BKC Examples |
|
||||
| | +-----------+-----------+-------------+------------+
|
||||
| | | SDC with | IU without| IU with | Logical |
|
||||
| | | 2 VMs | Safety VM | Safety VM | Partition |
|
||||
@ -86,16 +98,16 @@ For general instructions setting up ACRN on supported hardware platforms, visit
|
||||
| | | | | | |
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| | **Kaby Lake** | | `NUC7i5BNH`_ | V | | | |
|
||||
| | (Codename: Baby Canyon) | | (Board: NUC7i5BNB) | | | | |
|
||||
| | (Code name: Baby Canyon) | | (Board: NUC7i5BNB) | | | | |
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| | **Kaby Lake** | | `NUC7i7BNH`_ | V | | | |
|
||||
| | (Codename: Baby Canyon) | | (Board: NUC7i7BNB | | | | |
|
||||
| | (Code name: Baby Canyon) | | (Board: NUC7i7BNB | | | | |
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| | **Kaby Lake** | | `NUC7i5DNH`_ | V | | | |
|
||||
| | (Codename: Dawson Canyon) | | (Board: NUC7i5DNB) | | | | |
|
||||
| | (Code name: Dawson Canyon) | | (Board: NUC7i5DNB) | | | | |
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| | **Kaby Lake** | | `NUC7i7DNH`_ | V | V | V | V |
|
||||
| | (Codename: Dawson Canyon) | | (Board: NUC7i7DNB) | | | | |
|
||||
| | (Code name: Dawson Canyon) | | (Board: NUC7i7DNB) | | | | |
|
||||
+--------------------------------+-------------------------+-----------+-----------+-------------+------------+
|
||||
| | **Whiskey Lake** | | `WHL-IPC-I5`_ | V | V | V | V |
|
||||
| | | | (Board: WHL-IPC-I5) | | | | |
|
||||
@ -127,10 +139,10 @@ Verified Hardware Specifications Detail
|
||||
| | **Apollo Lake** | | UP2 - N3350 | Processor | - Intel® Celeron™ N3350 (2C2T, up to 2.4 GHz) |
|
||||
| | | UP2 - N4200 | | - Intel® Pentium™ N4200 (4C4T, up to 2.5 GHz) |
|
||||
| | | UP2 - x5-E3940 | | - Intel® Atom™ x5-E3940 (4C4T) |
|
||||
| | | | (up to 1.8Ghz)/x7-E3950 (4C4T, up to 2.0GHz) |
|
||||
| | | | (up to 1.8GHz)/x7-E3950 (4C4T, up to 2.0GHz) |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Graphics | - 2GB ( single channel) LPDDR4 |
|
||||
| | | | - 4GB/8GB ( dual channel) LPDDR4 |
|
||||
| | | System memory | - 2GB (single channel) LPDDR4 |
|
||||
| | | | - 4GB/8GB (dual channel) LPDDR4 |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Graphics | - Intel® Gen 9 HD, supporting 4K Codec |
|
||||
| | | | Decode and Encode for HEVC4, H.264, VP8 |
|
||||
@ -140,16 +152,16 @@ Verified Hardware Specifications Detail
|
||||
| | | Serial Port | - Yes |
|
||||
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
|
||||
| | **Kaby Lake** | | NUC7i5BNH | Processor | - Intel® Core™ i5-7260U CPU @ 2.20GHz (2C4T) |
|
||||
| | (Codename: Baby Canyon) | | (Board: NUC7i5BNB) | | |
|
||||
| | (Code name: Baby Canyon) | | (Board: NUC7i5BNB) | | |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Graphics | - Intel® Iris™ Plus Graphics 640 |
|
||||
| | | Graphics | - Intel® Iris® Plus Graphics 640 |
|
||||
| | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
|
||||
| | | | - Thunderbolt™ 3 port with support for USB\* 3.1 |
|
||||
| | | | Gen 2, DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Storage capabilities | - Micro SDXC slot with UHS-I support on the side |
|
||||
| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
|
||||
| | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
|
||||
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
|
||||
| | | | (up to 9.5 mm thickness) |
|
||||
@ -157,16 +169,16 @@ Verified Hardware Specifications Detail
|
||||
| | | Serial Port | - Yes |
|
||||
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
|
||||
| | **Kaby Lake** | | NUC7i7BNH | Processor | - Intel® Core™ i7-7567U CPU @ 3.50GHz (2C4T) |
|
||||
| | (Codename: Baby Canyon) | | (Board: NUC7i7BNB) | | |
|
||||
| | (Code name: Baby Canyon) | | (Board: NUC7i7BNB) | | |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Graphics | - Intel® Iris™ Plus Graphics 650 |
|
||||
| | | Graphics | - Intel® Iris® Plus Graphics 650 |
|
||||
| | | | - One HDMI\* 2.0 port with 4K at 60 Hz |
|
||||
| | | | - Thunderbolt™ 3 port with support for USB\* 3.1 Gen 2, |
|
||||
| | | | DisplayPort\* 1.2 and 40 Gb/s Thunderbolt |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2133 MHz), 1.2V |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Storage capabilities | - Micro SDXC slot with UHS-I support on the side |
|
||||
| | | Storage capabilities | - microSDXC slot with UHS-I support on the side |
|
||||
| | | | - One M.2 connector supporting 22x42 or 22x80 M.2 SSD |
|
||||
| | | | - One SATA3 port for connection to 2.5" HDD or SSD |
|
||||
| | | | (up to 9.5 mm thickness) |
|
||||
@ -174,7 +186,7 @@ Verified Hardware Specifications Detail
|
||||
| | | Serial Port | - No |
|
||||
+--------------------------------+------------------------+------------------------+-----------------------------------------------------------+
|
||||
| | **Kaby Lake** | | NUC7i5DNH | Processor | - Intel® Core™ i5-7300U CPU @ 2.64GHz (2C4T) |
|
||||
| | (Codename: Dawson Canyon) | | (Board: NUC7i5DNB) | | |
|
||||
| | (Code name: Dawson Canyon) | | (Board: NUC7i5DNB) | | |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Graphics | - Intel® HD Graphics 620 |
|
||||
| | | | - Two HDMI\* 2.0a ports supporting 4K at 60 Hz |
|
||||
@ -197,7 +209,7 @@ Verified Hardware Specifications Detail
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Storage capabilities | - One M.2 connector for WIFI |
|
||||
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
|
||||
| | | | - One M.2 connector for 3G/4G module, supporting |
|
||||
| | | | LTE Category 6 and above |
|
||||
| | | | - One M.2 connector for 2242 SSD |
|
||||
@ -213,7 +225,7 @@ Verified Hardware Specifications Detail
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | System memory | - Two DDR4 SO-DIMM sockets (up to 32 GB, 2400 MHz), 1.2V |
|
||||
| | +------------------------+-----------------------------------------------------------+
|
||||
| | | Storage capabilities | - One M.2 connector for WIFI |
|
||||
| | | Storage capabilities | - One M.2 connector for Wi-Fi |
|
||||
| | | | - One M.2 connector for 3G/4G module, supporting |
|
||||
| | | | LTE Category 6 and above |
|
||||
| | | | - One M.2 connector for 2242 SSD |
|
||||
|
@ -6,7 +6,7 @@ ACRN v1.0 (May 2019)
|
||||
We are pleased to announce the release of ACRN version 1.0, a key
|
||||
Project ACRN milestone focused on automotive Software-Defined Cockpit
|
||||
(SDC) use cases and introducing additional architecture enhancements for
|
||||
more IOT usages, such as Industrial.
|
||||
more IoT usages, such as Industrial.
|
||||
|
||||
This v1.0 release is a production-ready reference solution for SDC
|
||||
usages that require multiple VMs and rich I/O mediation for device
|
||||
|
@ -44,7 +44,7 @@ We have many new `reference documents available <https://projectacrn.github.io>`
|
||||
|
||||
* Getting Started Guide for Industry scenario
|
||||
* :ref:`ACRN Configuration Tool Manual <acrn_configuration_tool>`
|
||||
* :ref:`Trace and Data Collection for ACRN Real-Time(RT) Performance Tuning <rt_performance_tuning>`
|
||||
* :ref:`Trace and Data Collection for ACRN real-time (RT) Performance Tuning <rt_performance_tuning>`
|
||||
* Building ACRN in Docker
|
||||
* :ref:`Running Ubuntu as the User VM <running_ubun_as_user_vm>`
|
||||
* :ref:`Running Debian as the User VM <running_deb_as_user_vm>`
|
||||
|
@ -39,7 +39,7 @@ What's New in v1.6
|
||||
|
||||
- The ACRN hypervisor allows a SRIOV-capable PCI device's Virtual Functions (VFs) to be allocated to any VM.
|
||||
|
||||
- The ACRN Service VM supports the SRIOV ethernet device (through the PF driver), and ensures that the SRIOV VF device is able to be assigned (passthrough) to a post-launched VM (launched by ACRN-DM).
|
||||
- The ACRN Service VM supports the SRIOV Ethernet device (through the PF driver), and ensures that the SRIOV VF device is able to be assigned (passthrough) to a post-launched VM (launched by ACRN-DM).
|
||||
|
||||
* CPU sharing enhancement - Halt/Pause emulation
|
||||
|
||||
|
@ -118,14 +118,14 @@ ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot
|
||||
loader for the Service VM to launch post-launched VMs such as Windows,
|
||||
Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported.
|
||||
|
||||
Post-launched Real-Time VM Support
|
||||
Post-launched Real-time VM Support
|
||||
==================================
|
||||
|
||||
ACRN supports a post-launched RTVM, which also uses partitioned hardware
|
||||
resources to ensure adequate real-time performance, as required for
|
||||
industrial use cases.
|
||||
|
||||
Real-Time VM Performance Optimizations
|
||||
Real-time VM Performance Optimizations
|
||||
======================================
|
||||
|
||||
ACRN 2.0 improves RTVM performance with these optimizations:
|
||||
@ -165,7 +165,7 @@ Large selection of OSs for User VMs
|
||||
===================================
|
||||
|
||||
ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*,
|
||||
Real-Time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
|
||||
real-time Linux*, and Zephyr* RTOS. ACRN's Windows support now conforms
|
||||
to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS).
|
||||
ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and
|
||||
performance.
|
||||
|
145
doc/release_notes/release_notes_2.2.rst
Normal file
@ -0,0 +1,145 @@
|
||||
.. _release_notes_2.2:
|
||||
|
||||
ACRN v2.2 (Sep 2020)
|
||||
####################
|
||||
|
||||
We are pleased to announce the release of the Project ACRN
|
||||
hypervisor version 2.2.
|
||||
|
||||
ACRN is a flexible, lightweight reference hypervisor that is built with
|
||||
real-time and safety-criticality in mind. It is optimized to streamline
|
||||
embedded development through an open source platform. Check out the
|
||||
:ref:`introduction` for more information. All project ACRN
|
||||
source code is maintained in the
|
||||
https://github.com/projectacrn/acrn-hypervisor repository and includes
|
||||
folders for the ACRN hypervisor, the ACRN device model, tools, and
|
||||
documentation. You can either download this source code as a zip or
|
||||
tar.gz file (see the `ACRN v2.2 GitHub release page
|
||||
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v2.2>`_) or
|
||||
use Git clone and checkout commands::
|
||||
|
||||
git clone https://github.com/projectacrn/acrn-hypervisor
|
||||
cd acrn-hypervisor
|
||||
git checkout v2.2
|
||||
|
||||
The project's online technical documentation is also tagged to
|
||||
correspond with a specific release: generated v2.2 documents can be
|
||||
found at https://projectacrn.github.io/2.2/. Documentation for the
|
||||
latest under-development branch is found at
|
||||
https://projectacrn.github.io/latest/.
|
||||
|
||||
ACRN v2.2 requires Ubuntu 18.04. Follow the instructions in the
|
||||
:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
|
||||
|
||||
|
||||
What’s New in v2.2
|
||||
******************
|
||||
|
||||
Elkhart Lake and Tiger Lake processor support.
|
||||
At `Intel Industrial iSummit 2020
|
||||
<https://newsroom.intel.com/press-kits/intel-industrial-summit-2020>`_,
|
||||
Intel announced the latest additions to their
|
||||
enhanced-for-IoT Edge portfolio: the Intel® Atom® x6000E Series, Intel®
|
||||
Pentium® and Intel® Celeron® N and J Series (all code named Elkhart Lake),
|
||||
and 11th Gen Intel® Core™ processors (code named Tiger Lake-UP3). The ACRN
|
||||
team is pleased to announce that this ACRN v2.2 release already supports
|
||||
these processors.
|
||||
|
||||
* Support for time-deterministic applications with new features, e.g.,
|
||||
Time Coordinated Computing and Time Sensitive Networking
|
||||
* Support for functional safety with new features, e.g., Intel Safety Island
|
||||
|
||||
On Elkhart Lake, ACRN can boot using `Slim Bootloader <https://slimbootloader.github.io/>`_ (an alternative bootloader to UEFI BIOS).
|
||||
|
||||
Shared memory based inter-VM communication (ivshmem) is extended
|
||||
ivshmem now supports all kinds of VMs, including pre-launched VMs, the Service VM, and
|
||||
other User VMs. (See :ref:`ivshmem-hld`)
|
||||
|
||||
**CPU sharing supports pre-launched VM.**
|
||||
|
||||
**RTLinux with the preempt-RT Linux kernel 5.4 is validated both as a pre-launched and a post-launched VM.**
|
||||
|
||||
**ACRN hypervisor can emulate MSI-X based on physical MSI with multiple vectors.**
|
||||
|
||||
Staged removal of deprivileged boot mode support.
|
||||
ACRN has supported deprivileged boot mode to ease the integration of
|
||||
Linux distributions such as Clear Linux. Unfortunately, deprivileged boot
|
||||
mode limits ACRN's scalability and is unsuitable for ACRN's hybrid
|
||||
hypervisor mode. In ACRN v2.2, deprivileged boot mode is no longer the default
|
||||
and will be completely removed in ACRN v2.3. We're focusing instead
|
||||
on using multiboot2 boot (via GRUB). Multiboot2 is not supported in
|
||||
Clear Linux though, so we have chosen Ubuntu (and Yocto Project) as the
|
||||
preferred Service VM OSs moving forward.
|
||||
|
||||
Document updates
|
||||
****************
|
||||
|
||||
New and updated reference documents are available, including:
|
||||
|
||||
.. rst-class:: rst-columns
|
||||
|
||||
* :ref:`develop_acrn`
|
||||
* :ref:`asm_coding_guidelines`
|
||||
* :ref:`c_coding_guidelines`
|
||||
* :ref:`contribute_guidelines`
|
||||
* :ref:`hv-cpu-virt`
|
||||
* :ref:`IOC_virtualization_hld`
|
||||
* :ref:`hv-startup`
|
||||
* :ref:`hv-vm-management`
|
||||
* :ref:`ivshmem-hld`
|
||||
* :ref:`virtio-i2c`
|
||||
* :ref:`sw_design_guidelines`
|
||||
* :ref:`faq`
|
||||
* :ref:`getting-started-building`
|
||||
* :ref:`introduction`
|
||||
* :ref:`acrn_configuration_tool`
|
||||
* :ref:`enable_ivshmem`
|
||||
* :ref:`setup_openstack_libvirt`
|
||||
* :ref:`using_grub`
|
||||
* :ref:`using_partition_mode_on_nuc`
|
||||
* :ref:`connect_serial_port`
|
||||
* :ref:`using_yp`
|
||||
* :ref:`acrn-dm_parameters`
|
||||
* :ref:`hv-parameters`
|
||||
* :ref:`acrnctl`
|
||||
|
||||
Because we're dropping deprivileged boot mode support in the next v2.3
|
||||
release, we're also switching our Service VM of choice away from Clear
|
||||
Linux. We've begun this transition in the v2.2 documentation and removed
|
||||
some Clear Linux-specific tutorials. Deleted documents are still
|
||||
available in the `version-specific v2.1 documentation
|
||||
<https://projectacrn.github.io/v2.1/>`_.
|
||||
|
||||
|
||||
Fixed Issues Details
|
||||
********************
|
||||
- :acrn-issue:`5008` - Slowdown in UOS (Zephyr)
|
||||
- :acrn-issue:`5033` - SOS decode instruction failed in hybrid mode
|
||||
- :acrn-issue:`5038` - [WHL][Yocto] SOS occasionally hangs/crashes with a kernel panic
|
||||
- :acrn-issue:`5048` - iTCO_wdt issue: can't request region for resource
|
||||
- :acrn-issue:`5102` - Can't access shared memory base address in ivshmem
|
||||
- :acrn-issue:`5118` - GPT ERROR when write preempt img to SATA on NUC7i5BNB
|
||||
- :acrn-issue:`5148` - dm: support to provide ACPI SSDT for UOS
|
||||
- :acrn-issue:`5157` - [build from source] during build HV with XML, "TARGET_DIR=xxx" does not work
|
||||
- :acrn-issue:`5165` - [WHL][Yocto][YaaG] No UI display when launch Yaag gvt-g with acrn kernel
|
||||
- :acrn-issue:`5215` - [UPsquared N3350 board] Solution to Bootloader issue
|
||||
- :acrn-issue:`5233` - Boot Acrn failed on Dell-OptiPlex 5040 with Intel i5-6500T
|
||||
- :acrn-issue:`5238` - acrn-config: add hybrid_rt scenario xml config for ehl-crb-b
|
||||
- :acrn-issue:`5240` - passthrough DHRD-ignored device
|
||||
- :acrn-issue:`5242` - acrn-config: add pse-gpio to vmsix_on_msi devices list
|
||||
- :acrn-issue:`4691` - hv: add vgpio device model support
|
||||
- :acrn-issue:`5245` - hv: add INTx mapping for pre-launched VMs
|
||||
- :acrn-issue:`5426` - hv: add vgpio device model support
|
||||
- :acrn-issue:`5257` - hv: support PIO access to platform hidden devices
|
||||
- :acrn-issue:`5278` - [EHL][acrn-configuration-tool]: create a new hybrid_rt based scenario for P2SB MMIO pass-thru use case
|
||||
- :acrn-issue:`5304` - Cannot cross-compile - Build process assumes build system always hosts the ACRN hypervisor
|
||||
|
||||
Known Issues
|
||||
************
|
||||
- :acrn-issue:`5150` - [REG][WHL][[Yocto][Passthru] Launch RTVM fails with usb passthru
|
||||
- :acrn-issue:`5151` - [WHL][VxWorks] Launch VxWorks fails due to no suitable video mode found
|
||||
- :acrn-issue:`5154` - [TGL][Yocto][PM] 148213_PM_SystemS5 with life_mngr fail
|
||||
- :acrn-issue:`5368` - [TGL][Yocto][Passthru] Audio does not work on TGL
|
||||
- :acrn-issue:`5369` - [TGL][qemu] Cannot launch qemu on TGL
|
||||
- :acrn-issue:`5370` - [TGL][RTVM][PTCM] Launch RTVM failed with mem size smaller than 2G and PTCM enabled
|
||||
- :acrn-issue:`5371` - [TGL][Industry][Xenomai]Xenomai post launch fail
|
@ -13,7 +13,7 @@ Introduction
|
||||
************
|
||||
|
||||
ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
|
||||
are discussed in the following sections.
|
||||
is discussed in the following sections.
|
||||
|
||||
Hypervisor configuration
|
||||
========================
|
||||
@ -52,7 +52,7 @@ to launch post-launched User VMs.
|
||||
Scenario based VM configurations are organized as ``*.c/*.h`` files. The
|
||||
reference scenarios are located in the
|
||||
``misc/vm_configs/scenarios/$(SCENARIO)/`` folder.
|
||||
The board specific configurations on this scenario is stored in the
|
||||
The board-specific configurations on this scenario are stored in the
|
||||
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder.
|
||||
|
||||
User VM launch script samples are located in the
|
||||
@ -168,6 +168,24 @@ Additional scenario XML elements:
|
||||
Specify whether the software workaround for Machine Check
|
||||
Error on Page Size Change is forcibly disabled.
|
||||
|
||||
``IVSHMEM`` (a child node of ``FEATURE``):
|
||||
Specify the inter-VM shared memory configuration
|
||||
|
||||
``IVSHMEM_ENABLED`` (a child node of ``FEATURE/IVSHMEM``):
|
||||
Specify if the inter-VM shared memory feature is enabled.
|
||||
|
||||
``IVSHMEM_REGION`` (a child node of ``FEATURE/IVSHMEM``):
|
||||
Specify a comma-separated list of the inter-VM shared memory region name,
|
||||
size, and VM IDs that may communicate using this shared region.
|
||||
|
||||
* Prefix the region ``name`` with ``hv:/`` (for an hv-land solution).
|
||||
(See :ref:`ivshmem-hld` for details.)
|
||||
* Specify the region ``size`` in MB, and a power of 2 (e.g., 2, 4, 8, 16)
|
||||
up to 512.
|
||||
* Specify all VM IDs that may use this shared memory area,
|
||||
separated by a ``:``, for example, ``0:2`` (to share this area between
|
||||
VMs 0 and 2), or ``0:1:2`` (to let VMs 0, 1, and 2 share this area).
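For example, a scenario XML fragment that enables a single 2 MB hv-land region shared between VM0 and VM2 would look like the following (this mirrors the hv-land example in :ref:`enable_ivshmem`; adapt the region name, size, and VM IDs to your own scenario):

.. code-block:: none

   <IVSHMEM desc="IVSHMEM configuration">
       <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
       <IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
   </IVSHMEM>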
|
||||
|
||||
``STACK_SIZE`` (a child node of ``MEMORY``):
|
||||
Specify the size of stacks used by physical cores. Each core uses one stack
|
||||
for normal operations and another three for specific exceptions.
|
||||
@ -224,7 +242,7 @@ Additional scenario XML elements:
|
||||
- ``PRE_STD_VM`` pre-launched Standard VM
|
||||
- ``SOS_VM`` pre-launched Service VM
|
||||
- ``POST_STD_VM`` post-launched Standard VM
|
||||
- ``POST_RT_VM`` post-launched realtime capable VM
|
||||
- ``POST_RT_VM`` post-launched real-time capable VM
|
||||
- ``KATA_VM`` post-launched Kata Container VM
|
||||
|
||||
``name`` (a child node of ``vm``):
|
||||
@ -239,7 +257,7 @@ Additional scenario XML elements:
|
||||
- ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs
|
||||
IO polling to completion
|
||||
- ``GUEST_FLAG_HIDE_MTRR`` specify whether to hide MTRR from the VM
|
||||
- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (realtime)
|
||||
- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (real-time)
|
||||
|
||||
``cpu_affinity``:
|
||||
List of pCPU: the guest VM is allowed to create vCPU from all or a subset of this list.
|
||||
@ -271,7 +289,7 @@ Additional scenario XML elements:
|
||||
exactly match the module tag in the GRUB multiboot cmdline.
|
||||
|
||||
``ramdisk_mod`` (a child node of ``os_config``):
|
||||
The tag for the ramdisk image which acts as a multiboot module; it
|
||||
The tag for the ramdisk image, which acts as a multiboot module; it
|
||||
must exactly match the module tag in the GRUB multiboot cmdline.
|
||||
|
||||
``bootargs`` (a child node of ``os_config``):
|
||||
@ -313,6 +331,18 @@ Additional scenario XML elements:
|
||||
PCI devices list of the VM; it is hard-coded for each scenario so it
|
||||
is not configurable for now.
|
||||
|
||||
``mmio_resources``:
|
||||
MMIO resources to passthrough.
|
||||
|
||||
``TPM2`` (a child node of ``mmio_resources``):
|
||||
TPM2 device to passthrough.
|
||||
|
||||
``p2sb`` (a child node of ``mmio_resources``):
|
||||
Expose the P2SB (Primary-to-Sideband) bridge to the pre-launched VM.
|
||||
|
||||
``pt_intx``:
|
||||
Forward specific IOAPIC interrupts (with interrupt line remapping) to the pre-launched VM.
|
||||
|
||||
``board_private``:
|
||||
Stores scenario-relevant board configuration.
|
||||
|
||||
@ -345,7 +375,7 @@ current scenario has:
|
||||
``ZEPHYR`` or ``VXWORKS``.
|
||||
|
||||
``rtos_type``:
|
||||
Specify the User VM Realtime capability: Soft RT, Hard RT, or none of them.
|
||||
Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them.
|
||||
|
||||
``mem_size``:
|
||||
Specify the User VM memory size in megabytes.
|
||||
@ -373,10 +403,17 @@ current scenario has:
|
||||
``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
|
||||
Refer to :ref:`usb_virtualization` for details.
|
||||
|
||||
``shm_regions``:
|
||||
List of shared memory regions for inter-VM communication.
|
||||
|
||||
``shm_region`` (a child node of ``shm_regions``):
|
||||
Configure the shared memory regions for the current VM. Input format: hv:/<shm name>,
|
||||
<shm size in MB>. Refer to :ref:`ivshmem-hld` for details.
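For example, to attach the 2 MB hv-land region used in the :ref:`enable_ivshmem` example to the current VM, the launch setting value would be::

   hv:/shm_region_0, 2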
|
||||
|
||||
``passthrough_devices``:
|
||||
Select the passthrough device from the lspci list. Currently we support:
|
||||
usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card,
|
||||
ethernet, wifi, sata, and nvme.
|
||||
Ethernet, sata, and nvme.
|
||||
|
||||
``network`` (a child node of ``virtio_devices``):
|
||||
The virtio network device setting.
|
||||
@ -394,7 +431,7 @@ current scenario has:
|
||||
.. note::
|
||||
|
||||
The ``configurable`` and ``readonly`` attributes are used to mark
|
||||
whether the items is configurable for users. When ``configurable="0"``
|
||||
whether the item is configurable for users. When ``configurable="0"``
|
||||
and ``readonly="true"``, the item is not configurable from the web
|
||||
interface. When ``configurable="0"``, the item does not appear on the
|
||||
interface.
|
||||
@ -562,7 +599,7 @@ Instructions
|
||||
because the app needs to download some JavaScript files.
|
||||
|
||||
.. note:: The ACRN configuration app is supported on Chrome, Firefox,
|
||||
and MS Edge. Do not use IE.
|
||||
and Microsoft Edge. Do not use Internet Explorer.
|
||||
|
||||
The website is shown below:
|
||||
|
||||
@ -587,7 +624,7 @@ Instructions
|
||||
|
||||
#. Load or create the scenario setting by selecting among the following:
|
||||
|
||||
- Choose a scenario from the **Scenario Setting** menu which lists all
|
||||
- Choose a scenario from the **Scenario Setting** menu that lists all
|
||||
user-defined scenarios for the board you selected in the previous step.
|
||||
|
||||
- Click the **Create a new scenario** from the **Scenario Setting**
|
||||
@ -607,9 +644,9 @@ Instructions
|
||||
.. figure:: images/choose_scenario.png
|
||||
:align: center
|
||||
|
||||
Note that you can also use a customized scenario xml by clicking **Import
|
||||
Note that you can also use a customized scenario XML by clicking **Import
|
||||
XML**. The configuration app automatically directs to the new scenario
|
||||
xml once the import is complete.
|
||||
XML once the import is complete.
|
||||
|
||||
#. The configurable items display after one scenario is created/loaded/
|
||||
selected. Following is an industry scenario:
|
||||
@ -618,9 +655,9 @@ Instructions
|
||||
:align: center
|
||||
|
||||
- You can edit these items directly in the text boxes, or you can choose
|
||||
single or even multiple items from the drop down list.
|
||||
single or even multiple items from the drop-down list.
|
||||
|
||||
- Read-only items are marked as grey.
|
||||
- Read-only items are marked as gray.
|
||||
|
||||
- Hover the mouse pointer over the item to display the description.
|
||||
|
||||
@ -642,7 +679,7 @@ Instructions
|
||||
pop-up model.
|
||||
|
||||
.. note::
|
||||
All customized scenario xmls will be in user-defined groups which are
|
||||
All customized scenario XML files will be in user-defined groups, which are
|
||||
located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.
|
||||
|
||||
Before saving the scenario XML, the configuration app validates the
|
||||
@ -661,8 +698,8 @@ Instructions
|
||||
|
||||
If **Source Path** in the pop-up model is edited, the source code is
|
||||
generated into the edited Source Path relative to ``acrn-hypervisor``;
|
||||
otherwise, the source code is generated into default folders and
|
||||
overwrite the old ones. The board-related configuration source
|
||||
otherwise, source code is generated into default folders and
|
||||
overwrites the old ones. The board-related configuration source
|
||||
code is located at
|
||||
``misc/vm_configs/boards/[board]/`` and the
|
||||
scenario-based VM configuration source code is located at
|
||||
@ -678,11 +715,11 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:
|
||||
|
||||
- Click **Load a default launch script** from the **Launch Setting** menu.
|
||||
|
||||
- Select one launch setting xml from the menu.
|
||||
- Select one launch setting XML file from the menu.
|
||||
|
||||
- Import the local launch setting xml by clicking **Import XML**.
|
||||
- Import the local launch setting XML file by clicking **Import XML**.
|
||||
|
||||
#. Select one scenario for the current launch setting from the **Select Scenario** drop down box.
|
||||
#. Select one scenario for the current launch setting from the **Select Scenario** drop-down box.
|
||||
|
||||
#. Configure the items for the current launch setting.
|
||||
|
||||
@ -694,7 +731,7 @@ The **Launch Setting** is quite similar to the **Scenario Setting**:
|
||||
- Remove a UOS launch script by clicking **Remove this VM** for the
|
||||
current launch setting.
|
||||
|
||||
#. Save the current launch setting to the user-defined xml files by
|
||||
#. Save the current launch setting to the user-defined XML files by
|
||||
clicking **Export XML**. The configuration app validates the current
|
||||
configuration and lists all incorrect configurable items and shows errors.
|
||||
|
||||
|
@ -18,8 +18,8 @@ This setup was tested with the following configuration,
|
||||
- Platforms Tested: ApolloLake, KabyLake, CoffeeLake
|
||||
|
||||
|
||||
Pre-Requisites
|
||||
**************
|
||||
Prerequisites
|
||||
*************
|
||||
1. Make sure the platform supports Intel VMX as well as VT-d technologies. On Ubuntu 18.04, this
|
||||
can be checked by installing the ``cpu-checker`` tool. If the output displays **KVM acceleration can be used**,
|
||||
the platform supports it.
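For example, the check can be done as follows (assuming the ``kvm-ok`` helper installed by the ``cpu-checker`` package)::

   sudo apt install cpu-checker
   kvm-ok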
|
||||
|
@ -186,7 +186,7 @@ shown in the following example:
|
||||
Formats:
|
||||
0x00000005: event id for trace test
|
||||
|
||||
%(cpu)d: corresponding cpu index with 'decimal' format
|
||||
%(cpu)d: corresponding CPU index with 'decimal' format
|
||||
|
||||
%(event)016x: corresponding event id with 'hex' format
|
||||
|
||||
|
@ -151,7 +151,7 @@ Depending on your Linux version, install the needed tools:
|
||||
|
||||
sudo dnf install doxygen python3-pip python3-wheel make graphviz
|
||||
|
||||
And for any of these Linux environments, install the remaining python-based
|
||||
And for any of these Linux environments, install the remaining Python-based
|
||||
tools:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -160,7 +160,7 @@ tools:
|
||||
pip3 install --user -r scripts/requirements.txt
|
||||
|
||||
Add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
|
||||
find expected versions of python utilities such as ``sphinx-build`` and
|
||||
find expected versions of Python utilities such as ``sphinx-build`` and
|
||||
``breathe``:
|
||||
|
||||
.. code-block:: bash
|
||||
@ -304,7 +304,7 @@ Sphinx/Breathe, we've added a post-processing filter on the output of
|
||||
the documentation build process to check for "expected" messages from the
|
||||
generation process output.
|
||||
|
||||
The output from the Sphinx build is processed by the python script
|
||||
The output from the Sphinx build is processed by the Python script
|
||||
``scripts/filter-known-issues.py`` together with a set of filter
|
||||
configuration files in the ``.known-issues/doc`` folder. (This
|
||||
filtering is done as part of the ``Makefile``.)
|
||||
|
172
doc/tutorials/enable_ivshmem.rst
Normal file
@ -0,0 +1,172 @@
|
||||
.. _enable_ivshmem:
|
||||
|
||||
Enable Inter-VM Communication Based on ``ivshmem``
|
||||
##################################################
|
||||
|
||||
You can use inter-VM communication based on the ``ivshmem`` dm-land
|
||||
solution or hv-land solution, according to the usage scenario needs.
|
||||
(See :ref:`ivshmem-hld` for a high-level description of these solutions.)
|
||||
While both solutions can be used at the same time, VMs using different
|
||||
solutions cannot communicate with each other.
|
||||
|
||||
ivshmem dm-land usage
|
||||
*********************
|
||||
|
||||
Add this line as an ``acrn-dm`` boot parameter::
|
||||
|
||||
-s slot,ivshmem,shm_name,shm_size
|
||||
|
||||
where
|
||||
|
||||
- ``-s slot`` - Specify the virtual PCI slot number
|
||||
|
||||
- ``ivshmem`` - Virtual PCI device name
|
||||
|
||||
- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the same
|
||||
``shm_name`` share a shared memory region. The ``shm_name`` needs to start
|
||||
with ``dm:/`` prefix. For example, ``dm:/test``
|
||||
|
||||
- ``shm_size`` - Specify the shared memory size in megabytes. The size
|
||||
ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
|
||||
For example, to set up a shared memory of 2 megabytes, use ``2``
|
||||
instead of ``shm_size``. The two communicating VMs must define the same size.
|
||||
|
||||
.. note:: This device can be used with real-time VM (RTVM) as well.
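Putting these parameters together, the dm-land example later in this document adds the device at virtual slot 6 with shared memory name ``dm:/test`` and a 2 MB region::

   -s 6,ivshmem,dm:/test,2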
|
||||
|
||||
ivshmem hv-land usage
|
||||
*********************
|
||||
|
||||
The ``ivshmem`` hv-land solution is disabled by default in ACRN. You
|
||||
enable it using the :ref:`acrn_configuration_tool` with these steps:
|
||||
|
||||
- Enable ``ivshmem`` hv-land in ACRN XML configuration file. For example, the
|
||||
XML configuration file for the hybrid_rt scenario on a whl-ipc-i5 board is found in
|
||||
``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml``
|
||||
|
||||
- Edit ``IVSHMEM_ENABLED`` to ``y`` in ACRN scenario XML configuration
|
||||
to enable ``ivshmem`` hv-land
|
||||
|
||||
- Edit ``IVSHMEM_REGION`` to specify the shared memory name, size and
|
||||
communication VMs in ACRN scenario XML configuration. The ``IVSHMEM_REGION``
|
||||
format is ``shm_name,shm_size,VM IDs``:
|
||||
|
||||
- ``shm_name`` - Specify a shared memory name. The name needs to start
|
||||
with the ``hv:/`` prefix. For example, ``hv:/shm_region_0``
|
||||
|
||||
- ``shm_size`` - Specify the shared memory size in megabytes. The
|
||||
size ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
|
||||
For example, to set up a shared memory of 2 megabytes, use ``2``
|
||||
instead of ``shm_size``.
|
||||
|
||||
- ``VM IDs`` - Specify the VM IDs to use the same shared memory
|
||||
communication, separated with ``:``. For example, for
|
||||
communication between VM0 and VM2, write ``0:2``
|
||||
|
||||
.. note:: You can define up to eight ``ivshmem`` hv-land shared regions.
|
||||
|
||||
- Build the XML configuration, refer to :ref:`getting-started-building`
|
||||
|
||||
Inter-VM Communication Examples
|
||||
*******************************
|
||||
|
||||
dm-land example
|
||||
===============
|
||||
|
||||
This example uses dm-land inter-VM communication between two
|
||||
Linux-based post-launched VMs (VM1 and VM2).
|
||||
|
||||
.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_
|
||||
|
||||
1. Add a new virtual PCI device for both VMs: the device type is
|
||||
``ivshmem``, shared memory name is ``dm:/test``, and shared memory
|
||||
size is 2MB. Both VMs must have the same shared memory name and size:
|
||||
|
||||
- VM1 Launch Script Sample
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 7
|
||||
|
||||
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
|
||||
-s 2,pci-gvt -G "$2" \
|
||||
-s 5,virtio-console,@stdio:stdio_port \
|
||||
-s 8,virtio-hyper_dmabuf \
|
||||
-s 3,virtio-blk,/home/acrn/uos1.img \
|
||||
-s 4,virtio-net,tap0 \
|
||||
-s 6,ivshmem,dm:/test,2 \
|
||||
-s 7,virtio-rnd \
|
||||
--ovmf /usr/share/acrn/bios/OVMF.fd \
|
||||
$vm_name
|
||||
|
||||
|
||||
- VM2 Launch Script Sample
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
|
||||
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
|
||||
-s 2,pci-gvt -G "$2" \
|
||||
-s 3,virtio-blk,/home/acrn/uos2.img \
|
||||
-s 4,virtio-net,tap0 \
|
||||
-s 5,ivshmem,dm:/test,2 \
|
||||
--ovmf /usr/share/acrn/bios/OVMF.fd \
|
||||
$vm_name
|
||||
|
||||
2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.
|
||||
|
||||
- For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
|
||||
- For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
|
||||
|
||||
3. As recorded in the `PCI ID Repository <https://pci-ids.ucw.cz/read/PC/1af4>`_,
|
||||
the ``ivshmem`` device vendor ID is ``1af4`` (Red Hat) and device ID is ``1110``
|
||||
(Inter-VM shared memory). Use these commands to probe the device::
|
||||
|
||||
$ sudo modprobe uio
|
||||
$ sudo modprobe uio_pci_generic
|
||||
$ sudo echo "1af4 1110" > /sys/bus/pci/drivers/uio_pci_generic/new_id
|
||||
|
||||
.. note:: These commands are applicable to Linux-based guests with ``CONFIG_UIO`` and ``CONFIG_UIO_PCI_GENERIC`` enabled.
|
||||
|
||||
4. Finally, a user application can get the shared memory base address from
|
||||
the ``ivshmem`` device BAR resource
|
||||
(``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
|
||||
the ``ivshmem`` device config resource
|
||||
(``/sys/class/uio/uioX/device/config``).
|
||||
|
||||
The ``X`` in ``uioX`` above, is a number that can be retrieved using the
|
||||
``ls`` command:
|
||||
|
||||
- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
|
||||
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
|
||||
|
||||
hv-land example
|
||||
===============
|
||||
|
||||
This example uses hv-land inter-VM communication between two
|
||||
Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
|
||||
|
||||
1. Configure shared memory for communication between VM0 and VM2 for the hybrid_rt
|
||||
scenario on the whl-ipc-i5 board. The shared memory name is ``hv:/shm_region_0``
|
||||
and the shared memory size is 2 MB:
|
||||
|
||||
- Edit XML configuration file for hybrid_rt scenario on whl-ipc-i5 board
|
||||
``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml``
|
||||
to enable ``ivshmem`` and configure the shared memory region using the format
|
||||
``shm_name, shm_size, VM IDs`` (as described above in the ACRN dm boot parameters).
|
||||
The region name must start with ``hv:/`` for an hv-land shared region, and we'll allocate 2MB
|
||||
shared between VMs 0 and 2:
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 2,3
|
||||
|
||||
<IVSHMEM desc="IVSHMEM configuration">
|
||||
<IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
|
||||
<IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
|
||||
</IVSHMEM>
|
||||
|
||||
2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board::
|
||||
|
||||
make BOARD_FILE=acrn-hypervisor/misc/vm_configs/xmls/board-xmls/whl-ipc-i5.xml \
|
||||
SCENARIO_FILE=acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml TARGET_DIR=xxx
|
||||
|
||||
3. Continue following dm-land steps 2-4; note that the ``ivshmem`` device BDF may be different
|
||||
depending on the configuration.
|
@ -23,8 +23,8 @@ Verified version
|
||||
*****************
|
||||
|
||||
- ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
|
||||
- ACRN-Kernel (Service VM kernel): **master** branch, commit id **095509221660daf82584ebdd8c50ea0078da3c2d**
|
||||
- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit id **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**
|
||||
- ACRN-Kernel (Service VM kernel): **master** branch, commit ID **095509221660daf82584ebdd8c50ea0078da3c2d**
|
||||
- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit ID **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**
|
||||
|
||||
Prerequisites
|
||||
*************
|
||||
@ -94,9 +94,6 @@ Passthrough the GPU to Guest
|
||||
|
||||
4. Run ``launch_win.sh``.
|
||||
|
||||
.. note:: If you want to passthrough the GPU to a Clear Linux User VM, the
|
||||
steps are the same as above except your script.
|
||||
|
||||
Enable the GVT-d GOP driver
|
||||
***************************
|
||||
|
||||
@ -120,12 +117,12 @@ Steps
|
||||
|
||||
git clone https://github.com/projectacrn/acrn-edk2.git
|
||||
|
||||
#. Fetch the vbt and gop drivers.
|
||||
#. Fetch the VBT and GOP drivers.
|
||||
|
||||
Fetch the **vbt** and **gop** drivers from the board manufacturer
|
||||
Fetch the **VBT** and **GOP** drivers from the board manufacturer
|
||||
according to your CPU model name.
|
||||
|
||||
#. Add the **vbt** and **gop** drivers to the OVMF:
|
||||
#. Add the **VBT** and **GOP** drivers to the OVMF:
|
||||
|
||||
::
|
||||
|
||||
|
@ -70,7 +70,7 @@ Build ACRN with Pre-Launched RT Mode
|
||||
|
||||
The ACRN VM configuration framework can easily configure resources for
|
||||
Pre-Launched VMs. On Whiskey Lake WHL-IPC-I5, to passthrough SATA and
|
||||
ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
|
||||
Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -144,9 +144,9 @@ Configure RDT for VM using VM Configuration
|
||||
|
||||
#. RDT hardware feature is enabled by default on supported platforms. This
|
||||
information can be found using an offline tool that generates a
|
||||
platform-specific xml file that helps ACRN identify RDT-supported
|
||||
platform-specific XML file that helps ACRN identify RDT-supported
|
||||
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
|
||||
sub-section of the scenario xml file as in the below example. For
|
||||
sub-section of the scenario XML file as in the below example. For
|
||||
details on building ACRN with scenario refer to :ref:`build-with-acrn-scenario`.
|
||||
|
||||
.. code-block:: none
|
||||
@ -163,7 +163,7 @@ Configure RDT for VM using VM Configuration
|
||||
<MBA_DELAY desc="Memory Bandwidth Allocation delay value"></MBA_DELAY>
|
||||
</RDT>
|
||||
|
||||
#. Once RDT is enabled in the scenario xml file, the next step is to program
|
||||
#. Once RDT is enabled in the scenario XML file, the next step is to program
|
||||
the desired cache mask or/and the MBA delay value as needed in the
|
||||
scenario file. Each cache mask or MBA delay configuration corresponds
|
||||
to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
|
||||
|
@ -1,9 +1,9 @@
|
||||
.. _rt_performance_tuning:
|
||||
|
||||
ACRN Real-Time (RT) Performance Analysis
|
||||
ACRN Real-time (RT) Performance Analysis
|
||||
########################################
|
||||
|
||||
The document describes the methods to collect trace/data for ACRN Real-Time VM (RTVM)
|
||||
The document describes the methods to collect trace/data for ACRN real-time VM (RTVM)
|
||||
real-time performance analysis. Two parts are included:
|
||||
|
||||
- Method to trace ``vmexit`` occurrences for analysis.
|
||||
|
@ -1,6 +1,6 @@
|
||||
.. _rt_perf_tips_rtvm:
|
||||
|
||||
ACRN Real-Time VM Performance Tips
|
||||
ACRN Real-time VM Performance Tips
|
||||
##################################
|
||||
|
||||
Background
|
||||
@ -50,7 +50,7 @@ Tip: Apply the acrn-dm option ``--lapic_pt``
|
||||
Tip: Use virtio polling mode
|
||||
Polling mode prevents the frontend of the VM-exit from sending a
|
||||
notification to the backend. We recommend that you passthrough a
|
||||
physical peripheral device (such as block or an ethernet device), to an
|
||||
physical peripheral device (such as block or an Ethernet device), to an
|
||||
RTVM. If no physical device is available, ACRN supports virtio devices
|
||||
and enables polling mode to avoid a VM-exit at the frontend. Enable
|
||||
virtio polling mode via the option ``--virtio_poll [polling interval]``.
|
||||
|
@ -1,6 +1,6 @@
|
||||
.. _rtvm_workload_guideline:
|
||||
|
||||
Real-Time VM Application Design Guidelines
|
||||
Real-time VM Application Design Guidelines
|
||||
##########################################
|
||||
|
||||
An RTOS developer must be aware of the differences between running applications on a native
|
||||
|
@ -18,7 +18,7 @@ Prerequisites
|
||||
|
||||
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
|
||||
#. For a default prebuilt ACRN binary in the E2E package, you must have 4
|
||||
CPU cores or enable "CPU Hyper-Threading" in order to have 4 CPU threads for 2 CPU cores.
|
||||
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
|
||||
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
|
||||
based on Ubuntu.
|
||||
#. This tutorial is validated on the following configurations:
|
||||
|
@ -24,7 +24,7 @@ Use the following instructions to install Debian.
|
||||
the bottom of the page).
|
||||
- Follow the `Debian installation guide
|
||||
<https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
|
||||
install it on your NUC; we are using an Intel Kaby Lake NUC (NUC7i7DNHE)
|
||||
install it on your Intel NUC; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
|
||||
in this tutorial.
|
||||
- :ref:`install-build-tools-dependencies` for ACRN.
|
||||
- Update to the latest iASL (required by the ACRN Device Model):
|
||||
|
@ -11,18 +11,18 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
|
||||
- Install a `Clear Linux OS
|
||||
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
|
||||
on your NUC kit.
|
||||
on your Intel NUC kit.
|
||||
- Follow the instructions at XXX to set up the
|
||||
Service VM automatically on your NUC kit. Follow steps 1 - 4.
|
||||
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
|
||||
|
||||
.. important:: need updated instructions that aren't Clear Linux dependent
|
||||
|
||||
We are using Intel Kaby Lake NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
|
||||
We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.
|
||||
|
||||
Before you start this tutorial, make sure the KVM tools are installed on the
|
||||
development machine and set **IGD Aperture Size to 512** in the BIOS
|
||||
settings (refer to :numref:`intel-bios-deb`). Connect two monitors to your
|
||||
NUC:
|
||||
Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -47,7 +47,7 @@ Hardware Configurations
|
||||
| | | Graphics | - UHD Graphics 620 |
|
||||
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz |
|
||||
| | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
|
||||
+--------------------------+----------------------+----------------------+----------------------------------------------+
|
||||
@ -97,7 +97,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
|
||||
#. Right-click **QEMU/KVM** and select **New**.
|
||||
|
||||
a. Choose **Local install media (ISO image or CDROM)** and then click
|
||||
a. Choose **Local install media (ISO image or CD-ROM)** and then click
|
||||
**Forward**. A **Create a new virtual machine** box displays, as shown
|
||||
in :numref:`newVM-debian` below.
|
||||
|
||||
@ -119,7 +119,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
#. Rename the image if you desire. You must check the **customize
|
||||
configuration before install** option before you finish all stages.
|
||||
|
||||
#. Verify that you can see the Overview screen as set up, as shown in :numref:`debian10-setup` below:
|
||||
#. Verify that you can see the Overview screen as set up, shown in :numref:`debian10-setup` below:
|
||||
|
||||
.. figure:: images/debian-uservm-3.png
|
||||
:align: center
|
||||
@ -127,14 +127,14 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian
|
||||
|
||||
Debian Setup Overview
|
||||
|
||||
#. Complete the Debian installation. Verify that you have set up a vda
|
||||
#. Complete the Debian installation. Verify that you have set up a VDA
|
||||
disk partition, as shown in :numref:`partition-vda` below:
|
||||
|
||||
.. figure:: images/debian-uservm-4.png
|
||||
:align: center
|
||||
:name: partition-vda
|
||||
|
||||
Virtual Disk (vda) partition
|
||||
Virtual Disk (VDA) partition
|
||||
|
||||
#. Upon installation completion, the KVM image is created in the
|
||||
``/var/lib/libvirt/images`` folder. Convert the ``qcow2`` format to ``img``
|
||||
@ -154,7 +154,7 @@ Re-use and modify the `launch_win.sh` script in order to launch the new Debian 1
|
||||
"/dev/sda1" mentioned below with "/dev/nvme0n1p1" if you are using an
|
||||
NVMe drive.
|
||||
|
||||
1. Copy the debian.img to your NUC:
|
||||
1. Copy the debian.img to your Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -11,9 +11,9 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
|
||||
- Install a `Clear Linux OS
|
||||
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
|
||||
on your NUC kit.
|
||||
on your Intel NUC kit.
|
||||
- Follow the instructions at XXX to set up the
|
||||
Service VM automatically on your NUC kit. Follow steps 1 - 4.
|
||||
Service VM automatically on your Intel NUC kit. Follow steps 1 - 4.
|
||||
|
||||
.. important:: need updated instructions that aren't Clear Linux
|
||||
dependent
|
||||
@ -21,7 +21,7 @@ Intel NUC Kit. If you have not, refer to the following instructions:
|
||||
Before you start this tutorial, make sure the KVM tools are installed on the
|
||||
development machine and set **IGD Aperture Size to 512** in the BIOS
|
||||
settings (refer to :numref:`intel-bios-ubun`). Connect two monitors to your
|
||||
NUC:
|
||||
Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -46,7 +46,7 @@ Hardware Configurations
|
||||
| | | Graphics | - UHD Graphics 620 |
|
||||
| | | | - Two HDMI 2.0a ports supporting 4K at 60 Hz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | System memory | - 8GiB SODIMM DDR4 2400 MHz |
|
||||
| | | System memory | - 8GiB SO-DIMM DDR4 2400 MHz |
|
||||
| | +----------------------+----------------------------------------------+
|
||||
| | | Storage capabilities | - 1TB WDC WD10SPZX-22Z |
|
||||
+--------------------------+----------------------+----------------------+----------------------------------------------+
|
||||
@ -147,7 +147,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.
|
||||
``/dev/sda1`` mentioned below with ``/dev/nvme0n1p1`` if you are
|
||||
using an SSD.
|
||||
|
||||
1. Copy the ``uos.img`` to your NUC:
|
||||
1. Copy the ``uos.img`` to your Intel NUC:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -135,7 +135,7 @@ CPUID Leaf 12H
|
||||
**Intel SGX Capability Enumeration**
|
||||
|
||||
* CPUID_12H.0.EAX[0] SGX1: If 1, indicates that Intel SGX supports the
|
||||
collection of SGX1 leaf functions.If is_sgx_supported and the section count
|
||||
collection of SGX1 leaf functions. If is_sgx_supported and the section count
|
||||
is initialized for the VM, this bit will be set.
|
||||
* CPUID_12H.0.EAX[1] SGX2: If 1, indicates that Intel SGX supports the
|
||||
collection of SGX2 leaf functions. If hardware supports it and SGX enabled
|
||||
@ -149,7 +149,7 @@ CPUID Leaf 12H
|
||||
Extended feature (same structure as XCR0).
|
||||
|
||||
The hypervisor may change the allow-1 setting of XFRM in ATTRIBUTES for VM.
|
||||
If some feature is disabled for the VM, the bit is also cleared, eg. MPX.
|
||||
If some feature is disabled for the VM, the bit is also cleared, e.g. MPX.
|
||||
|
||||
**Intel SGX EPC Enumeration**
|
||||
|
||||
|
@ -61,7 +61,7 @@ Space Layout Randomization), and stack overflow protector.
|
||||
|
||||
There are a couple of built-in Trusted Apps running in user mode of
|
||||
Trusty OS. However, an OEM can add more Trusted Apps in Trusty OS to
|
||||
serve any other customized security services.For security reasons and
|
||||
serve any other customized security services. For security reasons and
|
||||
for serving early-boot time security requests (e.g. disk decryption),
|
||||
Trusty OS and Apps are typically started before Normal world OS.
|
||||
|
||||
@ -102,7 +102,7 @@ malware detection.
|
||||
|
||||
In embedded products such as an automotive IVI system, the most important
|
||||
security services requested by customers are keystore and secure
|
||||
storage. In this article we will focus on these two services.
|
||||
storage. In this article, we will focus on these two services.
|
||||
|
||||
Keystore
|
||||
========
|
||||
@ -126,14 +126,14 @@ and are permanently bound to the key, ensuring the key cannot be used in
|
||||
any other way.
|
||||
|
||||
In addition to the list above, there is one more service that Keymaster
|
||||
implementations provide, but which is not exposed as an API: Random
|
||||
implementations provide, but is not exposed as an API: Random
|
||||
number generation. This is used internally for generation of keys,
|
||||
Initialization Vectors (IVs), random padding, and other elements of
|
||||
secure protocols that require randomness.
|
||||
|
||||
Using Android as an example, Keystore functions are explained in greater
|
||||
details in this `Android keymaster functions document
|
||||
<https://source.android.com/security/keystore/implementer-ref>`_
|
||||
<https://source.android.com/security/keystore/implementer-ref>`_.
|
||||
|
||||
.. figure:: images/trustyacrn-image3.png
|
||||
:align: center
|
||||
@ -161,7 +161,7 @@ You can read the `eMMC/UFS JEDEC specification
|
||||
to understand that.
|
||||
|
||||
This secure storage can provide data confidentiality, integrity, and
|
||||
anti-replay protection.Confidentiality is guaranteed by data encryption
|
||||
anti-replay protection. Confidentiality is guaranteed by data encryption
|
||||
with a root key derived from the platform chipset's unique key/secret.
|
||||
|
||||
RPMB partition is a fixed size partition (128KB ~ 16MB) in eMMC (or UFS)
|
||||
@ -178,11 +178,11 @@ key). See `Android Key and ID Attestation
|
||||
for details.
|
||||
|
||||
In Trusty, the secure storage architecture is shown in the figure below.
|
||||
In the secure world, there is a SS (Secure Storage) TA, which has an
|
||||
In the secure world, there is an SS (Secure Storage) TA, which has an
|
||||
RPMB authentication key (AuthKey, an HMAC key) and uses this Authkey to
|
||||
talk with the RPMB controller in the eMMC device. Since the eMMC device
|
||||
is controlled by normal world driver, Trusty needs to send an RPMB data
|
||||
frame ( encrypted by hardware-backed unique encryption key and signed by
|
||||
frame (encrypted by hardware-backed unique encryption key and signed by
|
||||
AuthKey) over Trusty IPC channel to Trusty SS proxy daemon, which then
|
||||
forwards RPMB data frame to physical RPMB partition in eMMC.
|
||||
|
||||
@ -260,7 +260,7 @@ One-VM, Two-Worlds
|
||||
==================
|
||||
|
||||
As previously mentioned, Trusty Secure Monitor could be any
|
||||
hypervisor. In the ACRN project the ACRN hypervisor will behave as the
|
||||
hypervisor. In the ACRN project, the ACRN hypervisor will behave as the
|
||||
secure monitor to schedule in/out Trusty secure world.
|
||||
|
||||
.. figure:: images/trustyacrn-image4.png
|
||||
@ -364,7 +364,7 @@ access is like this:
|
||||
#. If the verification is successful in the eMMC RPMB controller, the
|
||||
data will be written into the storage device.
|
||||
|
||||
The work flow of authenticated data read is very similar to this flow
|
||||
The workflow of authenticated data read is very similar to this flow
|
||||
above in reverse order.
|
||||
|
||||
Note that there are some security considerations in this architecture:
|
||||
@ -383,7 +383,7 @@ system security design. In practice, the Service VM designer and implementer
|
||||
should obey these following rules (and more):
|
||||
|
||||
- Make sure the Service VM is a closed system and doesn't allow users to
|
||||
install any unauthorized 3rd party software or components.
|
||||
install any unauthorized third-party software or components.
|
||||
- External peripherals are constrained.
|
||||
- Enable kernel-based hardening techniques, e.g., dm-verity (to make
|
||||
sure integrity of DM and vBIOS/vOSloaders), kernel module signing,
|
||||
|
@ -104,9 +104,9 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
|
||||
``kernel_mod_tag`` of VM1 in the
|
||||
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
|
||||
|
||||
The guest kernel command line arguments is configured in the
|
||||
The guest kernel command-line arguments are configured in the
|
||||
hypervisor source code by default if no ``$(VMx bootargs)`` is present.
|
||||
If ``$(VMx bootargs)`` is present, the default command line arguments
|
||||
If ``$(VMx bootargs)`` is present, the default command-line arguments
|
||||
are overridden by the ``$(VMx bootargs)`` parameters.
|
||||
|
||||
The ``$(Service VM bootargs)`` parameter in the multiboot command
|
||||
|
@ -19,7 +19,8 @@ Prerequisites
|
||||
*************
|
||||
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
|
||||
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
|
||||
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your NUC.
|
||||
- Install Ubuntu 18.04 on your SATA device or on the NVMe disk of your
|
||||
Intel NUC.
|
||||
|
||||
Update Ubuntu GRUB
|
||||
******************
|
||||
@ -49,11 +50,11 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
|
||||
|
||||
.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
|
||||
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
|
||||
``kernel_mod_tag`` of VM0 which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
``kernel_mod_tag`` of VM0, which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
|
||||
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
|
||||
``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
|
||||
file. The kernel command line arguments used to boot the Service VM are
|
||||
file. The kernel command-line arguments used to boot the Service VM are
|
||||
located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h``
|
||||
and are configured by the ``SOS_VM_BOOTARGS`` macro.
|
||||
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr).
|
||||
@ -73,8 +74,8 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
|
||||
|
||||
$ sudo update-grub
|
||||
|
||||
#. Reboot the NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
|
||||
the ACRN hypervisor on the NUC's display. The GRUB loader will boot the
|
||||
#. Reboot the Intel NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
|
||||
the ACRN hypervisor on the Intel NUC's display. The GRUB loader will boot the
|
||||
hypervisor, and the hypervisor will start the VMs automatically.
|
||||
|
||||
Hybrid Scenario Startup Checking
|
||||
@ -86,7 +87,7 @@ Hybrid Scenario Startup Checking
|
||||
|
||||
#. Use these steps to verify all VMs are running properly:
|
||||
|
||||
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
|
||||
a. Use the ``vm_console 0`` command to switch to the VM0 (Zephyr) console. It will display ``Hello world! acrn``.
|
||||
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
|
||||
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
|
||||
#. Verify that VM1 (the Service VM) can boot and that you can log in.
|
||||
|
@ -24,7 +24,7 @@ Prerequisites
|
||||
* NVMe disk
|
||||
* SATA disk
|
||||
* Storage device with USB interface (such as a USB flash drive
|
||||
or SATA disk connected with a USB3.0 SATA converter).
|
||||
or SATA disk connected with a USB 3.0 SATA converter).
|
||||
* Disable **Intel Hyper Threading Technology** in the BIOS to avoid
|
||||
interference from logical cores for the logical partition scenario.
|
||||
* In the logical partition scenario, two VMs (running Ubuntu OS)
|
||||
@ -57,7 +57,8 @@ Update kernel image and modules of pre-launched VM
|
||||
|
||||
The last two commands build the bootable kernel image as
|
||||
``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/``
|
||||
folder. Copy these files to a removable disk for installing on the NUC later.
|
||||
folder. Copy these files to a removable disk for installing on the
|
||||
Intel NUC later.
|
||||
|
||||
#. The current ACRN logical partition scenario implementation requires a
|
||||
multi-boot capable bootloader to boot both the ACRN hypervisor and the
|
||||
@ -68,10 +69,10 @@ Update kernel image and modules of pre-launched VM
|
||||
default, the GRUB bootloader is installed on the EFI System Partition
|
||||
(ESP) that's used to bootstrap the ACRN hypervisor.
|
||||
|
||||
#. After installing the Ubuntu OS, power off the NUC. Attach the
|
||||
SATA disk and storage device with the USB interface to the NUC. Power on
|
||||
the NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
|
||||
the removable disk with the kernel image into the NUC and then copy the
|
||||
#. After installing the Ubuntu OS, power off the Intel NUC. Attach the
|
||||
SATA disk and storage device with the USB interface to the Intel NUC. Power on
|
||||
the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in
|
||||
the removable disk with the kernel image into the Intel NUC and then copy the
|
||||
loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder
|
||||
on both the mounted SATA disk and storage device with USB interface. For
|
||||
example, assuming the SATA disk and storage device with USB interface are
|
||||
@ -101,8 +102,8 @@ Update ACRN hypervisor image
|
||||
|
||||
#. Before building the ACRN hypervisor, find the I/O address of the serial
|
||||
port and the PCI BDF addresses of the SATA controller and the USB
|
||||
controllers on the NUC. Enter the following command to get the
|
||||
I/O addresses of the serial port. The NUC supports one serial port, **ttyS0**.
|
||||
controllers on the Intel NUC. Enter the following command to get the
|
||||
I/O addresses of the serial port. The Intel NUC supports one serial port, **ttyS0**.
|
||||
Connect the serial port to the development workstation in order to access
|
||||
the ACRN serial console to switch between pre-launched VMs:
|
||||
|
||||
@ -169,13 +170,13 @@ Update ACRN hypervisor image
|
||||
#. Check or update the BDF information of the PCI devices for each
|
||||
pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``.
|
||||
|
||||
#. Copy the artifact ``acrn.bin``, ``ACPI_VM0.bin`` and ``ACPI_VM1.bin`` to the ``/boot`` directory:
|
||||
#. Copy the artifacts ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` to the ``/boot`` directory:
|
||||
|
||||
#. Copy ``acrn.bin``, ``ACPI_VM1.bin``, and ``ACPI_VM0.bin`` to a removable disk.
|
||||
|
||||
#. Plug the removable disk into the NUC's USB port.
|
||||
#. Plug the removable disk into the Intel NUC's USB port.
|
||||
|
||||
#. Copy the ``acrn.bin``, ``ACPI_VM0.bin`` and ``ACPI_VM1.bin`` from the removable disk to ``/boot``
|
||||
#. Copy the ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` from the removable disk to the ``/boot``
|
||||
directory.
|
||||
|
||||
Update Ubuntu GRUB to boot hypervisor and load kernel image
|
||||
@ -204,7 +205,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
|
||||
.. note::
|
||||
Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
|
||||
(or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
|
||||
The kernel command line arguments used to boot the pre-launched VMs is
|
||||
The kernel command-line arguments used to boot the pre-launched VMs are
|
||||
located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file
|
||||
and are configured by ``VMx_CONFIG_OS_BOOTARG_*`` macros (where x is the VM ID number and ``*`` are arguments).
|
||||
The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
|
||||
@ -231,9 +232,9 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
|
||||
|
||||
$ sudo update-grub
|
||||
|
||||
#. Reboot the NUC. Select the **ACRN hypervisor Logical Partition
|
||||
#. Reboot the Intel NUC. Select the **ACRN hypervisor Logical Partition
|
||||
Scenario** entry to boot the logical partition of the ACRN hypervisor on
|
||||
the NUC's display. The GRUB loader will boot the hypervisor, and the
|
||||
the Intel NUC's display. The GRUB loader will boot the hypervisor, and the
|
||||
hypervisor will automatically start the two pre-launched VMs.
|
||||
|
||||
Logical partition scenario startup checking
|
||||
@ -248,10 +249,10 @@ Logical partition scenario startup checking
|
||||
properly:
|
||||
|
||||
#. Use the ``vm_console 0`` to switch to VM0's console.
|
||||
#. The VM0's Clear Linux OS should boot and log in.
|
||||
#. The VM0's OS should boot and log in.
|
||||
#. Use a :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
|
||||
#. Use the ``vm_console 1`` to switch to VM1's console.
|
||||
#. The VM1's Clear Linux OS should boot and log in.
|
||||
#. The VM1's OS should boot and log in.
|
||||
|
||||
Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>`
|
||||
for more information about available commands.
|
||||
|
@ -1,22 +1,22 @@
|
||||
.. _connect_serial_port:
|
||||
|
||||
Using the Serial Port on KBL NUC
|
||||
================================
|
||||
Using the Serial Port on KBL Intel NUC
|
||||
======================================
|
||||
|
||||
You can enable the serial console on the
|
||||
`KBL NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
|
||||
(NUC7i7DNH). The KBL NUC has a serial port header you can
|
||||
expose with a serial DB9 header cable. (The NUC has a punch out hole for
|
||||
`KBL Intel NUC <https://www.amazon.com/Intel-Business-Mini-Technology-BLKNUC7i7DNH1E/dp/B07CCQ8V4R>`_
|
||||
(NUC7i7DNH). The KBL Intel NUC has a serial port header you can
|
||||
expose with a serial DB9 header cable. (The Intel NUC has a punch out hole for
|
||||
mounting the serial connector.)
|
||||
|
||||
.. figure:: images/NUC-serial-port.jpg
|
||||
|
||||
KBL NUC with populated serial port punchout
|
||||
KBL Intel NUC with populated serial port punchout
|
||||
|
||||
You can `purchase
|
||||
<https://www.amazon.com/dp/B07BV1W6N8/ref=cm_sw_r_cp_ep_dp_wYm0BbABD5AK6>`_
|
||||
such a cable or you can build it yourself;
|
||||
refer to the `KBL NUC product specification
|
||||
refer to the `KBL Intel NUC product specification
|
||||
<https://www.intel.com/content/dam/support/us/en/documents/mini-pcs/nuc-kits/NUC7i7DN_TechProdSpec.pdf>`_
|
||||
as shown below:
|
||||
|
||||
|
@ -7,7 +7,7 @@ Run VxWorks as the User VM
|
||||
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
|
||||
based on Clear Linux 29970 (ACRN tag v1.1).
|
||||
|
||||
.. note:: You'll need to be a WindRiver* customer and have purchased VxWorks to follow this tutorial.
|
||||
.. note:: You'll need to be a Wind River* customer and have purchased VxWorks to follow this tutorial.
|
||||
|
||||
Steps for Using VxWorks as User VM
|
||||
**********************************
|
||||
@ -15,7 +15,7 @@ Steps for Using VxWorks as User VM
|
||||
#. Build VxWorks
|
||||
|
||||
Follow the `VxWorks Getting Started Guide <https://docs.windriver.com/bundle/vxworks_7_tutorial_kernel_application_workbench_sr0610/page/rbu1422461642318.html>`_
|
||||
to setup the VxWorks development environment and build the VxWorks Image.
|
||||
to set up the VxWorks development environment and build the VxWorks Image.
|
||||
|
||||
.. note::
|
||||
The following kernel configuration should be **excluded**:
|
||||
@ -31,7 +31,7 @@ Steps for Using VxWorks as User VM
|
||||
* CONSOLE_BAUD_RATE = 115200
|
||||
* SYS_CLK_RATE_MAX = 1000
|
||||
|
||||
#. Build GRUB2 BootLoader Image
|
||||
#. Build GRUB2 bootloader Image
|
||||
|
||||
We use grub-2.02 as the bootloader of VxWorks in this tutorial; other versions may also work.
|
||||
|
||||
@ -95,7 +95,7 @@ Steps for Using VxWorks as User VM
|
||||
#. Follow XXX to boot the ACRN Service VM.
|
||||
|
||||
.. important:: need instructions from deleted document (using sdc
|
||||
mode on the NUC)
|
||||
mode on the Intel NUC)
|
||||
|
||||
#. Boot VxWorks as User VM.
|
||||
|
||||
@ -107,7 +107,7 @@ Steps for Using VxWorks as User VM
|
||||
$ cp /usr/share/acrn/samples/nuc/launch_vxworks.sh .
|
||||
|
||||
You will also need to copy the ``VxWorks.img`` created in the VxWorks build environment into the directory
|
||||
``vxworks`` (via, e.g. a USB stick or network).
|
||||
``vxworks`` (via, e.g., a USB drive or network).
|
||||
|
||||
Run the ``launch_vxworks.sh`` script to launch VxWorks as the User VM.
|
||||
|
||||
@ -134,7 +134,7 @@ Steps for Using VxWorks as User VM
|
||||
|
||||
->
|
||||
|
||||
Finally, you can type ``help`` to check whether the VxWorks works well.
|
||||
Finally, you can type ``help`` to see available VxWorks commands.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
|
@ -46,7 +46,7 @@ Download Win10 ISO and drivers
|
||||
|
||||
- Select **ISO-LTSC** and click **Continue**.
|
||||
- Complete the required info. Click **Continue**.
|
||||
- Select the language and **x86 64 bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``.
|
||||
- Select the language and **x86 64-bit**. Click **Download ISO** and save as ``windows10-LTSC-17763.iso``.
|
||||
|
||||
#. Download the `Intel DCH Graphics Driver
|
||||
<https://downloadmirror.intel.com/29074/a08/igfx_win10_100.7212.zip>`__.
|
||||
@ -57,8 +57,8 @@ Download Win10 ISO and drivers
|
||||
- Select **Download Package**. Key in **Oracle Linux 7.6** and click
|
||||
**Search**.
|
||||
- Click **DLP: Oracle Linux 7.6** to add to your Cart.
|
||||
- Click **Checkout** which is located at the top-right corner.
|
||||
- Under **Platforms/Language**, select **x86 64 bit**. Click **Continue**.
|
||||
- Click **Checkout**, which is located at the top-right corner.
|
||||
- Under **Platforms/Language**, select **x86 64-bit**. Click **Continue**.
|
||||
- Check **I accept the terms in the license agreement**. Click **Continue**.
|
||||
- From the list, right-click the item labeled **Oracle VirtIO Drivers
|
||||
Version for Microsoft Windows 1.x.x, yy MB**, and then **Save link as
|
||||
@ -129,8 +129,8 @@ Install Windows 10 by GVT-g
|
||||
.. figure:: images/windows_install_4.png
|
||||
:align: center
|
||||
|
||||
#. Click **Browser** and go to the drive that includes the virtio win
|
||||
drivers. Select **all** under **vio\\w10\\amd64**. Install the
|
||||
#. Click **Browse** and go to the drive that includes the virtio
|
||||
Windows drivers. Select **all** under **vio\\w10\\amd64**. Install the
|
||||
following drivers into the image:
|
||||
|
||||
- Virtio-balloon
|
||||
@ -201,7 +201,7 @@ ACRN Windows verified feature list
|
||||
|
||||
"IO Devices", "Virtio block as the boot device", "Working"
|
||||
, "AHCI as the boot device", "Working"
|
||||
, "AHCI cdrom", "Working"
|
||||
, "AHCI CD-ROM", "Working"
|
||||
, "Virtio network", "Working"
|
||||
, "Virtio input - mouse", "Working"
|
||||
, "Virtio input - keyboard", "Working"
|
||||
@ -235,7 +235,7 @@ Explanation for acrn-dm popular command lines
|
||||
You may need to change 0/2/0 to match the BDF of the VGA controller on your platform.
|
||||
|
||||
* **-s 3,ahci,hd:/root/img/win10.img**:
|
||||
This is the hard disk onto which to install Windows 10.
|
||||
This is the hard disk where Windows 10 should be installed.
|
||||
Make sure that the slot ID **3** points to your win10 img path.
|
||||
|
||||
* **-s 4,virtio-net,tap0**:
|
||||
@ -253,11 +253,11 @@ Explanation for acrn-dm popular command lines
|
||||
# cat /proc/bus/input/devices | grep mouse
|
||||
|
||||
* **-s 7,ahci,cd:/root/img/Windows10.iso**:
|
||||
This is the IOS image used to install Windows 10. It appears as a cdrom
|
||||
This is the ISO image used to install Windows 10. It appears as a CD-ROM
|
||||
device. Make sure that the slot ID **7** points to your win10 ISO path.
|
||||
|
||||
* **-s 8,ahci,cd:/root/img/winvirtio.iso**:
|
||||
This is cdrom device to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.
|
||||
This is the CD-ROM device used to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.
|
||||
|
||||
* **-s 9,passthru,0/14/0**:
|
||||
This passes through the USB controller to Windows.
|
||||
|
@ -1,11 +1,11 @@
|
||||
.. _using_xenomai_as_uos:
|
||||
|
||||
Run Xenomai as the User VM OS (Real-Time VM)
|
||||
Run Xenomai as the User VM OS (Real-time VM)
|
||||
############################################
|
||||
|
||||
`Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments.
|
||||
|
||||
This tutorial describes how to run Xenomai as the User VM OS (Real-Time VM) on the ACRN hypervisor.
|
||||
This tutorial describes how to run Xenomai as the User VM OS (real-time VM) on the ACRN hypervisor.
|
||||
|
||||
.. _Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home
|
||||
|
||||
@ -60,21 +60,21 @@ Launch the RTVM
|
||||
|
||||
#. Prepare a dedicated disk (NVMe or SATA) for the RTVM; in this example, we use ``/dev/sda``.
|
||||
|
||||
a. Download the Preempt-RT VM image:
|
||||
a. Download the Preempt-RT VM image::
|
||||
|
||||
$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz
|
||||
$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz
|
||||
|
||||
#. Decompress the xz image:
|
||||
#. Decompress the xz image::
|
||||
|
||||
$ xz -d preempt-rt-32030.img.xz
|
||||
$ xz -d preempt-rt-32030.img.xz
|
||||
|
||||
#. Burn the Preempt-RT VM image onto the SATA disk:
|
||||
#. Burn the Preempt-RT VM image onto the SATA disk::
|
||||
|
||||
$ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
|
||||
$ sudo dd if=preempt-rt-32030.img of=/dev/sda bs=4M oflag=sync status=progress iflag=fullblock seek=0 conv=notrunc
|
||||
|
||||
#. Launch the RTVM via our script. Indicate the location of the root partition (sda3 in our example) and the kernel tarball::
|
||||
|
||||
$ sudo /usr/share/acrn/samples/nuc/launch_xenomai.sh -b /dev/sda3 -k /path/to/linux-4.19.59-xenomai-3.1-acrn+-x86.tar.gz
|
||||
$ sudo /usr/share/acrn/samples/nuc/launch_xenomai.sh -b /dev/sda3 -k /path/to/linux-4.19.59-xenomai-3.1-acrn+-x86.tar.gz
|
||||
|
||||
#. Verify that a login prompt displays::
|
||||
|
||||
@ -95,5 +95,6 @@ Launch the RTVM
|
||||
Install the Xenomai libraries and tools
|
||||
***************************************
|
||||
|
||||
To build and install Xenomai tools or its libraries in the RVTM, refer to the official `Xenomai documentation <https://gitlab.denx.de/Xenomai/xenomai/-/wikis/Installing_Xenomai_3#library-install>`_.
|
||||
To build and install the Xenomai tools or libraries in the RTVM, refer to the official
|
||||
`Xenomai documentation <https://gitlab.denx.de/Xenomai/xenomai/-/wikis/Installing_Xenomai_3#library-install>`_.
|
||||
Note that the current supported version is Xenomai-3.1 with the 4.19.59 kernel.
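As a rough starting point, the commands below fetch the Xenomai sources at that version. The clone URL is derived from the wiki link above and the ``v3.1`` tag name is an assumption based on the upstream tagging scheme; take the actual configure and build flags from the Xenomai documentation linked above.

.. code-block:: none

   # Fetch the Xenomai sources and check out the 3.1 release (tag name assumed)
   $ git clone https://gitlab.denx.de/Xenomai/xenomai.git
   $ cd xenomai
   $ git checkout v3.1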
|
||||
|
41
doc/tutorials/using_yp.rst
Normal file
@ -0,0 +1,41 @@
|
||||
.. _using_yp:
|
||||
|
||||
Using Yocto Project with ACRN
|
||||
#############################
|
||||
|
||||
The `Yocto Project <https://yoctoproject.org>`_ (YP) is an open source
|
||||
collaboration project that helps developers create custom Linux-based
|
||||
systems. The project provides a flexible set of tools and a space where
|
||||
embedded developers worldwide can share technologies, software stacks,
|
||||
configurations, and best practices used to create tailored Linux images
|
||||
for embedded and IoT devices, or anywhere a customized Linux OS is
|
||||
needed.
|
||||
|
||||
Yocto Project layers support the inclusion of technologies, hardware
|
||||
components, and software components. Layers are repositories containing
|
||||
related sets of instructions which tell the Yocto Project build system
|
||||
what to do.
|
||||
|
||||
The meta-acrn layer
|
||||
*******************
|
||||
|
||||
The meta-acrn layer integrates the ACRN hypervisor with OpenEmbedded,
|
||||
letting you build your Service VM or Guest VM OS with the Yocto Project.
|
||||
The `OpenEmbedded Layer Index's meta-acrn entry
|
||||
<http://layers.openembedded.org/layerindex/branch/master/layer/meta-acrn/>`_
|
||||
tracks work on this meta-acrn layer and lists the available meta-acrn
|
||||
recipes including Service and User VM OSs for Linux Kernel 4.19 and 5.4
|
||||
with the ACRN hypervisor enabled.
|
||||
|
||||
Read more about the meta-acrn layer and how to use it, directly from the
|
||||
`meta-acrn GitHub repo documentation
|
||||
<https://github.com/intel/meta-acrn/tree/master/docs>`_:
|
||||
|
||||
* `Getting Started guide
|
||||
<https://github.com/intel/meta-acrn/blob/master/docs/getting-started.md>`_
|
||||
* `Booting ACRN with Slim Bootloader
|
||||
<https://github.com/intel/meta-acrn/blob/master/docs/slimbootloader.md>`_
|
||||
* `Testing Procedure
|
||||
<https://github.com/intel/meta-acrn/blob/master/docs/qa.md>`_
|
||||
* `References
|
||||
<https://github.com/intel/meta-acrn/blob/master/docs/references.md>`_
|
@ -4,7 +4,7 @@ Run Zephyr as the User VM
|
||||
#########################
|
||||
|
||||
This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using
|
||||
Kaby Lake-based NUC (model NUC7i5DNHE) in this tutorial.
|
||||
Kaby Lake-based Intel NUC (model NUC7i5DNHE) in this tutorial.
|
||||
Other :ref:`ACRN supported platforms <hardware>` should work as well.
|
||||
|
||||
.. note::
|
||||
@ -24,7 +24,7 @@ Steps for Using Zephyr as User VM
|
||||
#. Build Zephyr
|
||||
|
||||
Follow the `Zephyr Getting Started Guide <https://docs.zephyrproject.org/latest/getting_started/>`_ to
|
||||
setup the Zephyr development environment.
|
||||
set up the Zephyr development environment.
|
||||
|
||||
The build process for the ACRN User VM target is similar to that for other boards. We will build the `Hello World
|
||||
<https://docs.zephyrproject.org/latest/samples/hello_world/README.html>`_ sample for ACRN:
|
||||
@ -40,8 +40,8 @@ Steps for Using Zephyr as User VM
|
||||
|
||||
#. Build grub2 boot loader image
|
||||
|
||||
We can build grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
|
||||
which locate in `Zephyr Sourcecode <https://github.com/zephyrproject-rtos/zephyr>`_.
|
||||
We can build the grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
|
||||
found in the `Zephyr source code <https://github.com/zephyrproject-rtos/zephyr>`_.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -89,13 +89,14 @@ Steps for Using Zephyr as User VM
|
||||
$ sudo umount /mnt
|
||||
|
||||
You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not
|
||||
the ACRN Service VM, then you will need to transfer this image to the ACRN Service VM (via, e.g, a USB stick or network )
|
||||
the ACRN Service VM, then you will need to transfer this image to the
|
||||
ACRN Service VM (via, e.g., a USB drive or network).
|
||||
|
||||
#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
|
||||
(ACRN tag: acrn-2019w14.3-140000p)
|
||||
|
||||
.. important:: need to remove reference to Clear Linux and reference
|
||||
to deleted document (use SDC mode on the NUC)
|
||||
to deleted document (use SDC mode on the Intel NUC)
|
||||
|
||||
#. Boot Zephyr as User VM
|
||||
|
||||
|
@ -6,10 +6,13 @@ Enable vUART Configurations
|
||||
Introduction
|
||||
============
|
||||
|
||||
The virtual universal asynchronous receiver-transmitter (vUART) supports two functions: one is the console, the other is communication. vUART only works on a single function.
|
||||
The virtual universal asynchronous receiver-transmitter (vUART) supports
|
||||
two functions: one is the console, the other is communication. vUART
|
||||
only works on a single function.
|
||||
|
||||
Currently, only two vUART configurations are added to the
|
||||
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can change the value in it.
|
||||
``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can
|
||||
change the value in it.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -22,9 +25,9 @@ Currently, only two vUART configurations are added to the
|
||||
.addr.port_base = INVALID_COM_BASE,
|
||||
}
|
||||
|
||||
**vuart[0]** is initiated as the **console** port.
|
||||
``vuart[0]`` is initiated as the **console** port.
|
||||
|
||||
**vuart[1]** is initiated as a **communication** port.
|
||||
``vuart[1]`` is initiated as a **communication** port.
|
||||
|
||||
Console enable list
|
||||
===================
|
||||
@ -48,12 +51,15 @@ Console enable list
|
||||
How to configure a console port
|
||||
===============================
|
||||
|
||||
To enable the console port for a VM, change only the ``port_base`` and ``irq``. If the irq number is already in use in your system (``cat /proc/interrupt``), choose another irq number. If you set the ``.irq =0``, the vuart will work in polling mode.
|
||||
To enable the console port for a VM, change only the ``port_base`` and
|
||||
``irq``. If the IRQ number is already in use in your system (``cat
|
||||
/proc/interrupts``), choose another IRQ number. If you set ``.irq = 0``,
|
||||
the vUART will work in polling mode.
|
||||
|
||||
- COM1_BASE (0x3F8) + COM1_IRQ(4)
|
||||
- COM2_BASE (0x2F8) + COM2_IRQ(3)
|
||||
- COM3_BASE (0x3E8) + COM3_IRQ(6)
|
||||
- COM4_BASE (0x2E8) + COM4_IRQ(7)
|
||||
- ``COM1_BASE (0x3F8) + COM1_IRQ(4)``
|
||||
- ``COM2_BASE (0x2F8) + COM2_IRQ(3)``
|
||||
- ``COM3_BASE (0x3E8) + COM3_IRQ(6)``
|
||||
- ``COM4_BASE (0x2E8) + COM4_IRQ(7)``
|
||||
|
||||
Example:
|
||||
|
||||
@ -70,11 +76,12 @@ How to configure a communication port
|
||||
|
||||
To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate.
|
||||
|
||||
The port_base and irq should differ from the ``vuart[0]`` in the same VM.
|
||||
The ``port_base`` and IRQ should differ from those of ``vuart[0]`` in the same VM.
|
||||
|
||||
**t_vuart.vm_id** is the target VM's vm_id, starting from 0 (0 means VM0).
|
||||
|
||||
**t_vuart.vuart_id** is the target vuart index in the target VM. start from 1. (1 means vuart[1])
|
||||
**t_vuart.vuart_id** is the target vuart index in the target VM, starting
|
||||
from 1 (1 means ``vuart[1]``).
|
||||
|
||||
Example:
|
||||
|
||||
@ -120,16 +127,19 @@ Communication vUART enable list
|
||||
Launch script
|
||||
=============
|
||||
|
||||
- *-s 1:0,lpc -l com1,stdio*
|
||||
This option is only needed for WaaG and VxWorks (and also when using OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide the ACPI table for UART.
|
||||
- ``-s 1:0,lpc -l com1,stdio``
|
||||
This option is only needed for WaaG and VxWorks (and also when using
|
||||
OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide
|
||||
the ACPI table for UART.
|
||||
|
||||
- *-B " ....,console=ttyS0, ..."*
|
||||
- ``-B " ....,console=ttyS0, ..."``
|
||||
Add this option to the kernel command line of a Linux-based User VM (see the sketch below).
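The following is a minimal, illustrative ``acrn-dm`` invocation that combines both options. The VM name, memory size, and disk image path are placeholder assumptions, and a real launch script will carry additional device options.

.. code-block:: none

   $ sudo acrn-dm -A -m 2048M \
        -s 0:0,hostbridge \
        -s 1:0,lpc -l com1,stdio \
        -s 3,virtio-blk,/home/acrn/uos.img \
        -B "root=/dev/vda1 rw rootwait console=ttyS0" \
        vm-example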
|
||||
|
||||
Test the communication port
|
||||
===========================
|
||||
|
||||
After you have configured the communication port in hypervisor, you can access the corresponding port. For example, in Clear Linux:
|
||||
After you have configured the communication port in the hypervisor, you can
|
||||
access the corresponding port. For example, in Clear Linux:
|
||||
|
||||
1. With ``echo`` and ``cat``
|
||||
|
||||
@ -137,20 +147,26 @@ After you have configured the communication port in hypervisor, you can access t
|
||||
|
||||
On VM2: ``# echo "test test" > /dev/ttyS1``
|
||||
|
||||
you can find the message from VM1 ``/dev/ttyS1``.
|
||||
You can see the message on VM1's ``/dev/ttyS1``.
|
||||
|
||||
If you are not sure which port is the communication port, you can run ``dmesg | grep ttyS`` under the Linux shell to check the base address. If it matches what you have set in the ``vm_configuration.c`` file, it is the correct port.
|
||||
If you are not sure which port is the communication port, you can run
|
||||
``dmesg | grep ttyS`` under the Linux shell to check the base address.
|
||||
If it matches what you have set in the ``vm_configuration.c`` file, it
|
||||
is the correct port.
|
||||
|
||||
|
||||
#. With minicom
|
||||
|
||||
Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test`` in VM1's minicom. The message should appear in VM2's minicom. Disable flow control in minicom.
|
||||
Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test``
|
||||
in VM1's minicom. The message should appear in VM2's minicom. Disable
|
||||
flow control in minicom.
|
||||
|
||||
|
||||
#. Limitations
|
||||
|
||||
- The message cannot be longer than 256 bytes.
|
||||
- This cannot be used to transfer files because flow control is not supported so data may be lost.
|
||||
- This cannot be used to transfer files because flow control is
|
||||
not supported so data may be lost.
|
||||
|
||||
vUART design
|
||||
============
|
||||
@ -178,19 +194,23 @@ This adds ``com1 (0x3f8)`` and ``com2 (0x2f8)`` modules in the Guest VM, includi
|
||||
|
||||
**Data Flows**
|
||||
|
||||
Three different data flows exist based on how the post-launched VM is started, as shown in the diagram below.
|
||||
Three different data flows exist based on how the post-launched VM is
|
||||
started, as shown in the diagram below:
|
||||
|
||||
Figure 1 data flow: The post-launched VM is started with the vUART enabled in the hypervisor configuration file only.
|
||||
|
||||
Figure 2 data flow: The post-launched VM is started with the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
|
||||
|
||||
Figure 3 data flow: The post-launched VM is started with both vUART enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.
|
||||
* Figure 1 data flow: The post-launched VM is started with the vUART
|
||||
enabled in the hypervisor configuration file only.
|
||||
* Figure 2 data flow: The post-launched VM is started with the
|
||||
``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
|
||||
* Figure 3 data flow: The post-launched VM is started with both vUART
|
||||
enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.
|
||||
|
||||
.. figure:: images/vuart-config-post-launch.png
|
||||
:align: center
|
||||
:name: Post-Launched VMs
|
||||
|
||||
.. note::
|
||||
For operating systems such as VxWorks and Windows that depend on the ACPI table to probe the uart driver, adding the vuart configuration in the hypervisor is not sufficient. Currently, we recommend that you use the configuration in the figure 3 data flow. This may be refined in the future.
|
||||
|
||||
|
||||
For operating systems such as VxWorks and Windows that depend on the
|
||||
ACPI table to probe the UART driver, adding the vUART configuration in
|
||||
the hypervisor is not sufficient. Currently, we recommend that you use
|
||||
the configuration in the Figure 3 data flow. This may be refined in the
|
||||
future.
|
||||
|
@ -23,7 +23,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
|
||||
default value.
|
||||
|
||||
* - :kbd:`-B, --bootargs <bootargs>`
|
||||
- Set the User VM kernel command line arguments.
|
||||
- Set the User VM kernel command-line arguments.
|
||||
The maximum length is 1023.
|
||||
The bootargs string will be passed to the kernel as its cmdline.
|
||||
|
||||
@ -326,16 +326,16 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
|
||||
- This option is to create a VM with the local APIC (LAPIC) passed-through.
|
||||
With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and
|
||||
``IO_COMPLETION_POLLING`` mode. This option is typically used for hard
|
||||
realtime scenarios.
|
||||
real-time scenarios.
|
||||
|
||||
By default, this option is not enabled.
|
||||
|
||||
* - :kbd:`--rtvm`
|
||||
- This option is used to create a VM with realtime attributes.
|
||||
- This option is used to create a VM with real-time attributes.
|
||||
With this option, a VM is created with ``GUEST_FLAG_RT`` and
|
||||
``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is
|
||||
generally used for soft realtime scenarios (without ``--lapic_pt``) or
|
||||
hard realtime scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
|
||||
generally used for soft real-time scenarios (without ``--lapic_pt``) or
|
||||
hard real-time scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``,
|
||||
the Service VM cannot interfere with this kind of VM when it is
|
||||
running. It can only be powered off from inside the VM itself.
|
||||
|
||||
|
@ -13,7 +13,7 @@ The ACRN hypervisor supports the following parameter:
|
||||
+=================+=============================+========================================================================================+
|
||||
| | disabled | This disables the serial port completely. |
|
||||
| +-----------------------------+----------------------------------------------------------------------------------------+
|
||||
| uart= | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. bdf@0:18.1 |
|
||||
| ``uart=`` | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. ``bdf@0:18.1`` |
|
||||
| +-----------------------------+----------------------------------------------------------------------------------------+
|
||||
| | port@<port address> | This sets the serial port address. |
|
||||
+-----------------+-----------------------------+----------------------------------------------------------------------------------------+
|
||||
@ -21,14 +21,14 @@ The ACRN hypervisor supports the following parameter:
|
||||
The Generic hypervisor parameters are specified in the GRUB multiboot/multiboot2 command.
|
||||
For example:
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
|
||||
menuentry 'Boot ACRN hypervisor from multiboot' {
|
||||
insmod part_gpt
|
||||
insmod ext2
|
||||
echo 'Loading ACRN hypervisor ...'
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1
|
||||
module /boot/bzImage Linux_bzImage
|
||||
module /boot/bzImage2 Linux_bzImage2
|
||||
}
|
||||
menuentry 'Boot ACRN hypervisor from multiboot' {
|
||||
insmod part_gpt
|
||||
insmod ext2
|
||||
echo 'Loading ACRN hypervisor ...'
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1
|
||||
module /boot/bzImage Linux_bzImage
|
||||
module /boot/bzImage2 Linux_bzImage2
|
||||
}
|
||||
|
@ -22,7 +22,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
- Description
|
||||
- Usage example
|
||||
|
||||
* - module_blacklist
|
||||
* - ``module_blacklist``
|
||||
- Service VM
|
||||
- A comma-separated list of modules that should not be loaded.
|
||||
Useful to debug or work
|
||||
@ -31,14 +31,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
module_blacklist=dwc3_pci
|
||||
|
||||
* - no_timer_check
|
||||
* - ``no_timer_check``
|
||||
- Service VM,User VM
|
||||
- Disables the code which tests for broken timer IRQ sources.
|
||||
- ::
|
||||
|
||||
no_timer_check
|
||||
|
||||
* - console
|
||||
* - ``console``
|
||||
- Service VM,User VM
|
||||
- Output console device and options.
|
||||
|
||||
@ -64,7 +64,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
console=ttyS0
|
||||
console=hvc0
|
||||
|
||||
* - loglevel
|
||||
* - ``loglevel``
|
||||
- Service VM
|
||||
- All Kernel messages with a loglevel less than the console loglevel will
|
||||
be printed to the console. The loglevel can also be changed with
|
||||
@ -95,7 +95,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
loglevel=7
|
||||
|
||||
* - ignore_loglevel
|
||||
* - ``ignore_loglevel``
|
||||
- User VM
|
||||
- Ignoring loglevel setting will print **all**
|
||||
kernel messages to the console. Useful for debugging.
|
||||
@ -107,7 +107,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
ignore_loglevel
|
||||
|
||||
|
||||
* - log_buf_len
|
||||
* - ``log_buf_len``
|
||||
- User VM
|
||||
- Sets the size of the printk ring buffer,
|
||||
in bytes. n must be a power of two and greater
|
||||
@ -120,7 +120,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
log_buf_len=16M
|
||||
|
||||
* - consoleblank
|
||||
* - ``consoleblank``
|
||||
- Service VM,User VM
|
||||
- The console blank (screen saver) timeout in
|
||||
seconds. Defaults to 600 (10 minutes). A value of 0
|
||||
@ -129,7 +129,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
consoleblank=0
|
||||
|
||||
* - rootwait
|
||||
* - ``rootwait``
|
||||
- Service VM,User VM
|
||||
- Wait (indefinitely) for root device to show up.
|
||||
Useful for devices that are detected asynchronously
|
||||
@ -138,7 +138,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
rootwait
|
||||
|
||||
* - root
|
||||
* - ``root``
|
||||
- Service VM,User VM
|
||||
- Define the root filesystem
|
||||
|
||||
@ -165,14 +165,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
root=/dev/vda2
|
||||
root=PARTUUID=00112233-4455-6677-8899-AABBCCDDEEFF
|
||||
|
||||
* - rw
|
||||
* - ``rw``
|
||||
- Service VM,User VM
|
||||
- Mount root device read-write on boot
|
||||
- Mount root device read/write on boot
|
||||
- ::
|
||||
|
||||
rw
|
||||
|
||||
* - tsc
|
||||
* - ``tsc``
|
||||
- User VM
|
||||
- Disable clocksource stability checks for TSC.
|
||||
|
||||
@ -180,14 +180,14 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
``reliable``:
|
||||
Mark TSC clocksource as reliable, and disables clocksource
|
||||
verification at runtime, and the stability checks done at bootup.
|
||||
verification at runtime, and the stability checks done at boot.
|
||||
Used to enable high-resolution timer mode on older hardware, and in
|
||||
virtualized environments.
|
||||
- ::
|
||||
|
||||
tsc=reliable
|
||||
|
||||
* - cma
|
||||
* - ``cma``
|
||||
- Service VM
|
||||
- Sets the size of the kernel global memory area for
|
||||
contiguous memory allocations, and optionally the
|
||||
@ -199,7 +199,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
cma=64M@0
|
||||
|
||||
* - hvlog
|
||||
* - ``hvlog``
|
||||
- Service VM
|
||||
- Sets the guest physical address and size of the dedicated hypervisor
|
||||
log ring buffer between the hypervisor and Service VM.
|
||||
@ -216,26 +216,26 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
You should enable ASLR in the Service VM. This ensures that when the guest Linux is
|
||||
relocating the kernel image, it will avoid this buffer address.
|
||||
|
||||
|
||||
- ::
|
||||
|
||||
hvlog=2M@0xe00000
|
||||
|
||||
* - memmap
|
||||
* - ``memmap``
|
||||
- Service VM
|
||||
- Mark specific memory as reserved.
|
||||
|
||||
``memmap=nn[KMG]$ss[KMG]``
|
||||
Region of memory to be reserved is from ``ss`` to ``ss+nn``,
|
||||
using ``K``, ``M``, and ``G`` representing Kilobytes, Megabytes, and
|
||||
Gigabytes, respectively.
|
||||
using ``K``, ``M``, and ``G`` representing kilobytes, megabytes, and
|
||||
gigabytes, respectively.
|
||||
- ::
|
||||
|
||||
memmap=0x400000$0xa00000
|
||||
|
||||
* - ramoops.mem_address
|
||||
ramoops.mem_size
|
||||
ramoops.console_size
|
||||
* - ``ramoops.mem_address``
|
||||
``ramoops.mem_size``
|
||||
``ramoops.console_size``
|
||||
- Service VM
|
||||
- Ramoops is an oops/panic logger that writes its logs to RAM
|
||||
before the system crashes. Ramoops uses a predefined memory area
|
||||
@ -252,21 +252,21 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
ramoops.console_size=0x200000
|
||||
|
||||
|
||||
* - reboot_panic
|
||||
* - ``reboot_panic``
|
||||
- Service VM
|
||||
- Reboot in case of panic
|
||||
|
||||
The comma-delimited parameters are:
|
||||
|
||||
reboot_mode:
|
||||
``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (gpio)
|
||||
``w`` (warm), ``s`` (soft), ``c`` (cold), or ``g`` (GPIO)
|
||||
|
||||
reboot_type:
|
||||
``b`` (bios), ``a`` (acpi), ``k`` (kbd), ``t`` (triple), ``e`` (efi),
|
||||
or ``p`` (pci)
|
||||
``b`` (BIOS), ``a`` (ACPI), ``k`` (kbd), ``t`` (triple), ``e`` (EFI),
|
||||
or ``p`` (PCI)
|
||||
|
||||
reboot_cpu:
|
||||
``s###`` (smp, and processor number to be used for rebooting)
|
||||
``s###`` (SMP, and processor number to be used for rebooting)
|
||||
|
||||
reboot_force:
|
||||
``f`` (force), or not specified.
|
||||
@ -274,17 +274,17 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
reboot_panic=p,w
|
||||
|
||||
* - maxcpus
|
||||
* - ``maxcpus``
|
||||
- User VM
|
||||
- Maximum number of processors that an SMP kernel
|
||||
will bring up during bootup.
|
||||
will bring up during boot.
|
||||
|
||||
``maxcpus=n`` where n >= 0 limits
|
||||
the kernel to bring up ``n`` processors during system bootup.
|
||||
the kernel to bring up ``n`` processors during system boot.
|
||||
Giving n=0 is a special case, equivalent to ``nosmp``, which
|
||||
also disables the I/O APIC.
|
||||
|
||||
After bootup, you can bring up additional plugged CPUs by executing
|
||||
After booting, you can bring up additional plugged CPUs by executing
|
||||
|
||||
``echo 1 > /sys/devices/system/cpu/cpuX/online``
|
||||
- ::
|
||||
@ -298,7 +298,7 @@ relevant for configuring or debugging ACRN-based systems.
|
||||
|
||||
nohpet
|
||||
|
||||
* - intel_iommu
|
||||
* - ``intel_iommu``
|
||||
- User VM
|
||||
- Intel IOMMU driver (DMAR) option
|
||||
|
||||
@ -351,7 +351,7 @@ section below has more details on a few select parameters.
|
||||
|
||||
* - i915.enable_initial_modeset
|
||||
- Service VM
|
||||
- On MRB, value must be ``1``. On NUC or UP2 boards, value must be
|
||||
- On MRB, value must be ``1``. On Intel NUC or UP2 boards, value must be
|
||||
``0``. See :ref:`i915-enable-initial-modeset`.
|
||||
- ::
|
||||
|
||||
|
@ -8,7 +8,7 @@ Description
|
||||
***********
|
||||
|
||||
The ``acrnctl`` tool helps users create, delete, launch, and stop a User
|
||||
OS (UOS). The tool runs under the Service OS, and UOSs should be based
|
||||
VM (aka UOS). The tool runs under the Service VM, and User VMs should be based
|
||||
on ``acrn-dm``. The daemon for acrn-manager is `acrnd`_.
|
||||
|
||||
|
||||
@ -42,7 +42,7 @@ Add a VM
|
||||
========
|
||||
|
||||
The ``add`` command lets you add a VM by specifying a
|
||||
script that will launch a UOS, for example ``launch_uos.sh``:
|
||||
script that will launch a User VM, for example ``launch_uos.sh``:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
@ -58,7 +58,7 @@ container::
|
||||
<https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/devicemodel/samples/nuc/launch_uos.sh>`_
|
||||
that supports the ``-C`` (``run_container`` function) option.
|
||||
|
||||
Note that the launch script must only launch one UOS instance.
|
||||
Note that the launch script must only launch one User VM instance.
|
||||
The VM name is important. ``acrnctl`` searches VMs by their
|
||||
names so duplicate VM names are not allowed. If the
|
||||
launch script changes the VM name at launch time, ``acrnctl``
|
||||
@ -113,7 +113,7 @@ gracefully by itself.
|
||||
|
||||
# acrnctl stop -f vm-ubuntu
|
||||
|
||||
RESCAN BLOCK DEVICE
|
||||
Rescan Block Device
|
||||
===================
|
||||
|
||||
Use the ``blkrescan`` command to trigger a rescan of
|
||||
@ -139,10 +139,10 @@ update the backend file.
|
||||
acrnd
|
||||
*****
|
||||
|
||||
The ``acrnd`` daemon process provides a way for launching or resuming a UOS
|
||||
should the UOS shut down, either planned or unexpected. A UOS can ask ``acrnd``
|
||||
to set up a timer to make sure the UOS is running, even if the SOS is
|
||||
suspended or stopped.
|
||||
The ``acrnd`` daemon process provides a way for launching or resuming a User VM
|
||||
should the User VM shut down, either in a planned manner or unexpectedly. A User
|
||||
VM can ask ``acrnd`` to set up a timer to make sure the User VM is running, even
|
||||
if the Service VM is suspended or stopped.
|
||||
|
||||
Usage
|
||||
=====
|
||||
@ -163,13 +163,13 @@ Normally, ``acrnd`` runs silently (messages are directed to
|
||||
``/dev/null``). Use the ``-t`` option to direct messages to ``stdout``,
|
||||
useful for debugging.
|
||||
|
||||
The ``acrnd`` daemon stores pending UOS work to ``/usr/share/acrn/conf/timer_list``
|
||||
and sets an RTC timer to wake up the SOS or bring the SOS back up again.
|
||||
The ``acrnd`` daemon stores pending User VM work to ``/usr/share/acrn/conf/timer_list``
|
||||
and sets an RTC timer to wake up the Service VM or bring the Service VM back up again.
|
||||
When ``acrnd`` daemon is restarted, it restores the previously saved timer
|
||||
list and launches the UOSs at the right time.
|
||||
list and launches the User VMs at the right time.
|
||||
|
||||
A ``systemd`` service file (``acrnd.service``) is installed by default that will
|
||||
start the ``acrnd`` daemon when the Service OS comes up.
|
||||
start the ``acrnd`` daemon when the Service VM (Linux-based) comes up.
|
||||
You can restart or stop the ``acrnd`` service using ``systemctl``.
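For example, assuming the default ``acrnd.service`` unit mentioned above is installed, a typical sequence might look like this:

.. code-block:: none

   # Check the daemon status, restart it, or stop it
   $ sudo systemctl status acrnd.service
   $ sudo systemctl restart acrnd.service
   $ sudo systemctl stop acrnd.service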
|
||||
|
||||
.. note::
|
||||
@ -178,10 +178,10 @@ You can restart/stop acrnd service using ``systemctl``
|
||||
Build and Install
|
||||
*****************
|
||||
|
||||
Source code for both ``acrnctl`` and ``acrnd`` is in the ``tools/acrn-manager`` folder.
|
||||
Source code for both ``acrnctl`` and ``acrnd`` is in the ``misc/acrn-manager`` folder.
|
||||
Change to that folder and run:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
# make
|
||||
# make install
|
||||
$ make
|
||||
$ sudo make install
|
||||
|
@ -14,8 +14,8 @@ acrn-kernel, install them on your target system, and boot running ACRN.
|
||||
|
||||
.. rst-class:: numbered-step
|
||||
|
||||
Set up Pre-requisites
|
||||
*********************
|
||||
Set up prerequisites
|
||||
********************
|
||||
|
||||
Your development system should be running Ubuntu
|
||||
18.04 and be connected to the internet. (You'll be installing software
|
||||
@ -66,7 +66,7 @@ Here's the default ``release.json`` configuration:
|
||||
Run the package-building script
|
||||
*******************************
|
||||
|
||||
The ``install_uSoS.py`` python script does all the work to install
|
||||
The ``install_uSoS.py`` Python script does all the work to install
|
||||
needed tools (such as make, gnu-efi, libssl-dev, libpciaccess-dev,
|
||||
uuid-dev, and more). It also verifies that tool versions (such as the
|
||||
gcc compiler) are appropriate (as configured in the ``release.json``
|
||||
@ -89,7 +89,7 @@ When done, it creates two Debian packages:
|
||||
* ``acrn_kernel_deb_package.deb`` with the ACRN-patched Linux kernel.
|
||||
|
||||
You'll need to copy these two files onto your target system, either via
|
||||
the network or simply by using a thumbdrive.
|
||||
the network or simply by using a USB drive.
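For instance, if you copy them over the network, a sketch with ``scp`` (the target user and hostname are placeholders) would be:

.. code-block:: none

   # Copy both Debian packages to the target system's home directory
   $ scp acrn_deb_package.deb acrn_kernel_deb_package.deb user@target-system:~/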
|
||||
|
||||
|
||||
.. rst-class:: numbered-step
|
||||
@ -112,7 +112,7 @@ Install Debian packages on your target system
|
||||
*********************************************
|
||||
|
||||
Copy the Debian packages you created on your development system, for
|
||||
example, using a thumbdrive. Then install the ACRN Debian package::
|
||||
example, using a USB drive. Then install the ACRN Debian package::
|
||||
|
||||
sudo dpkg -i acrn_deb_package.deb
|
||||
|
||||
@ -228,4 +228,4 @@ by looking at the dmesg log:
|
||||
|
||||
4.python3 compile_iasl.py
|
||||
=========================
|
||||
this scriptrs is help compile iasl and cp to /usr/sbin
|
||||
This script helps compile ``iasl`` and copy it to ``/usr/sbin``.
|
||||
|
@ -29,7 +29,7 @@ The ``ACRN-Crashlog`` tool depends on the following libraries
|
||||
- libblkid
|
||||
- e2fsprogs
|
||||
|
||||
Refer to the :ref:`getting_started` for instructions on how to set-up your
|
||||
Refer to the :ref:`getting_started` for instructions on how to set up your
|
||||
build environment, and follow the instructions below to build and configure the
|
||||
``ACRN-Crashlog`` tool.
|
||||
|
||||
@ -163,7 +163,7 @@ telemetrics-client on the system:
|
||||
of the telemetrics-client: it runs as a daemon autostarted when the system
|
||||
boots, and sends the crashlog path to the telemetrics-client that records
|
||||
events of interest and reports them to the backend using ``telemd`` the
|
||||
telemetrics daemon. The work flow of ``acrnprobe`` and
|
||||
telemetrics daemon. The workflow of ``acrnprobe`` and
|
||||
telemetrics-client is shown in :numref:`crashlog-workflow`:
|
||||
|
||||
.. graphviz:: images/crashlog-workflow.dot
|
||||
@ -217,7 +217,7 @@ The source code structure:
|
||||
like ``ipanic``, ``pstore``, etc. For the log on AaaG, it's collected by
|
||||
monitoring the change of related folders on the sos image, like
|
||||
``/data/logs/``. ``acrnprobe`` also provides a flexible way to allow users to
|
||||
configure which crash or event they want to collect through the xml file
|
||||
configure which crash or event they want to collect through the XML file
|
||||
easily.
|
||||
- ``common``: some utils for logs, command and string.
|
||||
- ``data``: configuration file, service files and shell script.
|
||||
|
@ -42,7 +42,7 @@ Architecture
|
||||
Terms
|
||||
=====
|
||||
|
||||
- channel :
|
||||
channel
|
||||
Channel represents a way of detecting the system's events. There are 3
|
||||
channels:
|
||||
|
||||
@ -50,33 +50,33 @@ Terms
|
||||
+ polling: run a detecting job with fixed time interval.
|
||||
+ inotify: monitor the change of file or dir.
|
||||
|
||||
- trigger :
|
||||
trigger
|
||||
Essentially, trigger represents one section of content. It could be
|
||||
a file's content, a directory's content, or a memory's content which can be
|
||||
obtained. By monitoring it ``acrnprobe`` could detect certain events which
|
||||
happened in the system.
|
||||
a file's content, a directory's content, or a memory's content, which can be
|
||||
obtained. By monitoring it, ``acrnprobe`` could detect certain events
|
||||
that happened in the system.
|
||||
|
||||
- crash :
|
||||
crash
|
||||
A subtype of event. It often corresponds to a crash of programs, system, or
|
||||
hypervisor. ``acrnprobe`` detects it and reports it as ``CRASH``.
|
||||
|
||||
- info :
|
||||
info
|
||||
A subtype of event. ``acrnprobe`` detects it and reports it as ``INFO``.
|
||||
|
||||
- event queue :
|
||||
event queue
|
||||
There is a global queue to receive all events detected.
|
||||
Generally, events are enqueued in channel, and dequeued in event handler.
|
||||
|
||||
- event handler :
|
||||
event handler
|
||||
Event handler is a thread to handle events detected by channel.
|
||||
It's awakened by an enqueued event.
|
||||
|
||||
- sender :
|
||||
sender
|
||||
The sender corresponds to an exit point for events.
|
||||
There are two senders:
|
||||
|
||||
+ Crashlog is responsible for collecting logs and saving it locally.
|
||||
+ Telemd is responsible for sending log records to telemetrics client.
|
||||
+ ``crashlog`` is responsible for collecting logs and saving them locally.
|
||||
+ ``telemd`` is responsible for sending log records to telemetrics client.
|
||||
|
||||
Description
|
||||
===========
|
||||
@ -86,30 +86,30 @@ As a log collection mechanism to record critical events on the platform,
|
||||
|
||||
1. detect event
|
||||
|
||||
From experience, the occurrence of an system event is usually accompanied
|
||||
From experience, the occurrence of a system event is usually accompanied
|
||||
by some effects. The effects could be a generated file, an error message in
|
||||
kernel's log, or a system reboot. To get these effects, for some of them we
|
||||
can monitor a directory, for other of them we might need to do a detection
|
||||
can monitor a directory, for others, we might need to do detection
|
||||
in a time loop.
|
||||
*So we implement the channel, which represents a common method of detection.*
|
||||
So we implement the channel, which represents a common method of detection.
|
||||
|
||||
2. analyze event and determine the event type
|
||||
|
||||
Generally, a specific effect correspond to a particular type of events.
|
||||
Generally, a specific effect corresponds to a particular type of events.
|
||||
However, it is an added refinement to analyze the detailed event types
|
||||
according to some phenomena. *Crash reclassify is implemented for this
|
||||
purpose.*
|
||||
according to some phenomena. Crash reclassifying is implemented for this
|
||||
purpose.
|
||||
|
||||
3. collect information for detected events
|
||||
|
||||
This is for debug purpose. Events without information are meaningless,
|
||||
and developers need to use this information to improve their system. *Sender
|
||||
crashlog is implemented for this purpose.*
|
||||
and developers need to use this information to improve their system. Sender
|
||||
``crashlog`` is implemented for this purpose.
|
||||
|
||||
4. archive these information as logs, and generate records
|
||||
|
||||
There must be a central place to tell the user what happened in the system.
|
||||
*Sender telemd is implemented for this purpose.*
|
||||
Sender ``telemd`` is implemented for this purpose.
|
||||
|
||||
Diagram
|
||||
=======
|
||||
@ -172,7 +172,7 @@ Source files
|
||||
This file provides the function to get system reboot reason from kernel
|
||||
command line.
|
||||
- android_events.c
|
||||
Sync events detected by android crashlog.
|
||||
Sync events detected by Android ``crashlog``.
|
||||
- loop.c
|
||||
This file provides interfaces to read from image.
|
||||
|
||||
|
@ -81,7 +81,7 @@ Other properties
|
||||
|
||||
- ``inherit``:
|
||||
Specify a parent for a certain crash.
|
||||
The child crash will inherit all configurations from the specified (by id)
|
||||
The child crash will inherit all configurations from the specified (by ID)
|
||||
crash. These inherited configurations could be overwritten by new ones.
|
||||
Also, this property helps build the crash tree in ``acrnprobe``.
|
||||
- ``expression``:
|
||||
@ -90,7 +90,7 @@ Other properties
|
||||
Crash tree in acrnprobe
|
||||
***********************
|
||||
|
||||
There could be a parent-child relationship between crashes. Refer to the
|
||||
There could be a parent/child relationship between crashes. Refer to the
|
||||
diagrams below: crash B and D are the children of crash A, because crash B and
|
||||
D inherit from crash A, and crash C is the child of crash B.
|
||||
|
||||
@ -260,10 +260,10 @@ Example:
|
||||
* ``channel``:
|
||||
The ``channel`` name to get the virtual machine events.
|
||||
* ``interval``:
|
||||
Time interval in seconds of polling vm's image.
|
||||
Time interval in seconds for polling the VM's image.
|
||||
* ``syncevent``:
|
||||
Event type ``acrnprobe`` will synchronize from virtual machine's ``crashlog``.
|
||||
User could specify different types by id. The event type can also be
|
||||
User could specify different types by ID. The event type can also be
|
||||
indicated by ``type/subtype``.
|
||||
|
||||
Log
|
||||
@ -369,6 +369,6 @@ Example:
|
||||
The name of channel info use.
|
||||
* ``log``:
|
||||
The log to be collected. The value is the configured name in log module. User
|
||||
could specify different logs by id.
|
||||
could specify different logs by ID.
|
||||
|
||||
.. _`XML standard`: http://www.w3.org/TR/REC-xml
|
||||
|
@ -23,7 +23,7 @@ the client. The client is responsible for collecting crash information
|
||||
and saving it in the crashlog file. After the saving work is done, the
|
||||
client notifies server and the server will clean up.
|
||||
|
||||
The work flow diagram:
|
||||
The workflow diagram:
|
||||
|
||||
::
|
||||
|
||||
|
@ -78,7 +78,7 @@ Options:
|
||||
- input file name
|
||||
|
||||
* - :kbd:`-o, --ofile=string`
|
||||
- output filename
|
||||
- output file name
|
||||
|
||||
* - :kbd:`-f, --frequency=unsigned_int`
|
||||
- TSC frequency in MHz
|
||||
@ -155,7 +155,7 @@ data to your Linux system, and run the analysis tool.
|
||||
-o /home/xxxx/trace_data/20171115-101605/cpu0 --vm_exit --irq
|
||||
|
||||
- The analysis report is written to stdout, or to a CSV file if
|
||||
a filename is specified using ``-o filename``.
|
||||
a file name is specified using ``-o filename``.
|
||||
- The scripts require Python 3.
|
||||
|
||||
Build and Install
|
||||
|