doc: push doc updates for v2.5 release

Cumulative changes to docs since the release_2.5 branch was made

Tracked-On: #5692

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder
2021-06-24 20:58:54 -07:00
committed by David Kinder
parent 7e9d625425
commit cd4dc73ca5
47 changed files with 1697 additions and 734 deletions

View File

@@ -0,0 +1,146 @@
.. _how-to-enable-acrn-secure-boot-with-efi-stub:
Enable ACRN Secure Boot With EFI-Stub
#####################################
Introduction
************
``ACRN EFI-Stub`` is an EFI application that supports booting the ACRN Hypervisor on
UEFI systems with Secure Boot. ACRN already supports
:ref:`how-to-enable-acrn-secure-boot-with-grub`,
which relies on the GRUB multiboot2 module by default. However, on certain platforms
GRUB multiboot2 is intentionally disabled when Secure Boot is enabled because of
`CVE-2015-5281 <https://www.cvedetails.com/cve/CVE-2015-5281/>`_.
As an alternative boot method, ``ACRN EFI-Stub`` supports booting the ACRN hypervisor on
UEFI systems without using GRUB. Although it is based on the legacy EFI-Stub
that was deprecated in ACRN v2.3, the new EFI-Stub boots the ACRN hypervisor in direct
mode rather than the former deprivileged mode.
To boot the ACRN hypervisor with the new EFI-Stub, you need to create a container blob
that contains the hypervisor image and the Service VM kernel image (and optionally a
pre-launched VM kernel image and ACPI table). That blob file is stitched to the
EFI-Stub to form a single EFI application (``acrn.efi``). The overall boot flow is shown below.
.. graphviz::
digraph G {
rankdir=LR;
bgcolor="transparent";
UEFI -> "acrn.efi" ->
"ACRN\nHypervisor" -> "pre-launched RTVM\nKernel";
"ACRN\nHypervisor" -> "Service VM\nKernel";
}
- UEFI firmware verifies ``acrn.efi``
- ``acrn.efi`` unpacks ACRN Hypervisor image and VM Kernels from a stitched container blob
- ``acrn.efi`` loads ACRN Hypervisor to memory
- ``acrn.efi`` prepares the multiboot information (MBI) structure to store the Service VM and pre-launched RTVM kernel info
- ``acrn.efi`` hands over control to ACRN Hypervisor with MBI
- ACRN Hypervisor boots Service VM and pre-launched RTVM in parallel
For the container blob, ``ACRN EFI-Stub`` uses the `Slim Bootloader Container
Boot Image <https://slimbootloader.github.io/how-tos/create-container-boot-image.html>`_ format.
Verified Configurations
***********************
- ACRN Hypervisor Release Version 2.5
- hybrid_rt scenario
- TGL platform
- CONFIG_MULTIBOOT2=y (as default)
- CONFIG_RELOC=y (as default)
Building
********
Build Dependencies
==================
- Build Tools and Dependencies described in the :ref:`getting-started-building` guide
- ``gnu-efi`` package
- Service VM Kernel ``bzImage``
- pre-launched RTVM Kernel ``bzImage``
- `Slim Bootloader Container Tool <https://slimbootloader.github.io/how-tos/create-container-boot-image.html>`_
The Slim Bootloader Tools can be downloaded from its `GitHub project <https://github.com/slimbootloader/slimbootloader>`_.
The verified version is the commit `9f146af <https://github.com/slimbootloader/slimbootloader/tree/9f146af>`_.
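As a convenience, here is a sketch of how you might fetch the tool at that verified
commit; the ``GenContainer.py`` location shown is our assumption of where the script
typically lives in the Slim Bootloader tree, so adjust the path (or copy the script
into your working directory) if your checkout differs:
.. code-block:: none
$ git clone https://github.com/slimbootloader/slimbootloader.git
$ cd slimbootloader
$ git checkout 9f146af
$ ls BootloaderCorePkg/Tools/GenContainer.py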
You may use the `meta-acrn Yocto Project integration layer
<https://github.com/intel/meta-acrn>`_ to build Service VM Kernel and
pre-launched VM.
Build EFI-Stub for TGL hybrid_rt
======================================
.. code-block:: none
$ TOPDIR=`pwd`
$ cd acrn-hypervisor
$ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor
$ make BOARD=tgl-rvp SCENARIO=hybrid_rt -C misc/efi-stub/ \
HV_OBJDIR=`pwd`/build/hypervisor/ \
EFI_OBJDIR=`pwd`/build/hypervisor/misc/efi-stub `pwd`/build/hypervisor/misc/efi-stub/boot.efi
Create Container
================
.. code-block:: none
$ mkdir -p $TOPDIR/acrn-efi; cd $TOPDIR/acrn-efi
$ echo > hv_cmdline.txt
$ echo RT_bzImage > vm0_tag.txt
$ echo Linux_bzImage > vm1_tag.txt
$ echo ACPI_VM0 > acpi_vm0.txt
$ python3 GenContainer.py create -cl \
CMDL:./hv_cmdline.txt \
ACRN:$TOPDIR/acrn-hypervisor/build/hypervisor/acrn.32.out \
MOD0:./vm0_tag.txt \
MOD1:./vm0_kernel \
MOD2:./vm1_tag.txt \
MOD3:./vm1_kernel \
MOD4:./acpi_vm0.txt \
MOD5:$TOPDIR/acrn-hypervisor/build/hypervisor/acpi/ACPI_VM0.bin \
-o sbl_os \
-t MULTIBOOT \
-a NONE
You may optionally put HV boot options in the ``hv_cmdline.txt`` file. This file
must contain at least one character even if you don't need additional boot options.
.. code-block:: none
# Acceptable Examples
$ echo > hv_cmdline.txt # end-of-line
$ echo " " > hv_cmdline.txt # space + end-of-line
# Not Acceptable Example
$ touch hv_cmdline.txt # empty file
In the above example, ``vm0_kernel`` is the kernel ``bzImage`` of the pre-launched RTVM, and
``vm1_kernel`` is the kernel ``bzImage`` of the Service VM.
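For example, you might copy and rename the kernel images into the working directory
like this (the source paths are placeholders; use wherever your pre-launched RTVM and
Service VM kernels were built):
.. code-block:: none
$ cp <path-to-RTVM-kernel-build>/arch/x86/boot/bzImage ./vm0_kernel
$ cp <path-to-Service-VM-kernel-build>/arch/x86/boot/bzImage ./vm1_kernel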
Stitch Container to EFI-Stub
============================
.. code-block:: none
$ objcopy --add-section .hv=sbl_os --change-section-vma .hv=0x6e000 \
--set-section-flags .hv=alloc,data,contents,load \
--section-alignment 0x1000 $TOPDIR/acrn-hypervisor/build/hypervisor/misc/efi-stub/boot.efi acrn.efi
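As an optional sanity check (our suggestion, not part of the official flow), you can
confirm that the container blob was embedded as the ``.hv`` section of ``acrn.efi``:
.. code-block:: none
$ objdump -h acrn.efi | grep "\.hv"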
Installing (without Secure Boot, for testing)
*********************************************
For example:
.. code-block:: none
$ sudo mkdir -p /boot/EFI/BOOT/
$ sudo cp acrn.efi /boot/EFI/BOOT/
$ sudo efibootmgr -c -l "\EFI\BOOT\acrn.efi" -d /dev/nvme0n1 -p 1 -L "ACRN Hypervisor"
$ sudo reboot
Signing
*******
See :ref:`how-to-enable-acrn-secure-boot-with-grub` for how to sign your ``acrn.efi`` file.
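As a rough sketch, signing with ``sbsign`` might look like the following, where
``MOK.priv`` and ``MOK.pem`` are placeholder names for the private key and PEM
certificate you created while following that guide:
.. code-block:: none
$ sbsign --key MOK.priv --cert MOK.pem --output acrn.efi.signed acrn.efi
$ sudo cp acrn.efi.signed /boot/EFI/BOOT/acrn.efi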

View File

@@ -115,13 +115,13 @@ toolset.
| **Native Linux requirement:**
| **Release:** Ubuntu 18.04+
| **Tools:** cpuid, rdmsr, lspci, dmidecode (optional)
| **Tools:** cpuid, rdmsr, lspci, lxml, dmidecode (optional)
| **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable"
#. Copy the ``target`` directory into the target file system and then run the
``sudo python3 board_parser.py $(BOARD)`` command.
#. Copy the ``board_inspector`` directory into the target file system and then run the
``sudo python3 cli.py $(BOARD)`` command.
#. A ``$(BOARD).xml`` that includes all needed hardware-specific information
is generated in the ``./out/`` directory. Here, ``$(BOARD)`` is the
is generated under the current working directory. Here, ``$(BOARD)`` is the
specified board name.
#. Customize the configuration to your needs.
@@ -322,6 +322,13 @@ current scenario has:
Specify whether the User VM power off channel is through the IOC,
power button, or vUART.
``allow_trigger_s5``:
Allow the VM to trigger the S5 shutdown flow. This flag works only with
``poweroff_channel`` set to ``vuart1(pty)`` or ``vuart1(tty)``.
``enable_ptm``:
Enable the Precision Timing Measurement (PTM) feature.
``usb_xhci``:
USB xHCI mediator configuration. Input format:
``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
@@ -332,7 +339,16 @@ current scenario has:
``shm_region`` (a child node of ``shm_regions``):
Configure the shared memory regions for the current VM. Input format:
``hv:/<shm name>, <shm size in MB>``. Refer to :ref:`ivshmem-hld` for details.
``hv:/<shm name> (or dm:/<shm name>), <shm size in MB>``. Refer to :ref:`ivshmem-hld` for details.
``console_vuart``:
Enable a PCI-based console vUART. Refer to :ref:`vuart_config` for details.
``communication_vuarts``:
List of PCI-based communication vUARTs. Refer to :ref:`vuart_config` for details.
``communication_vuart`` (a child node of ``communication_vuarts``):
Enable a PCI-based communication vUART with its ID. Refer to :ref:`vuart_config` for details.
``passthrough_devices``:
Select the passthrough device from the lspci list. Currently we support:
@@ -353,12 +369,15 @@ current scenario has:
Input format:
``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``.
``cpu_affinity``:
List of pCPUs that this VM's vCPUs are pinned to.
.. note::
The ``configurable`` and ``readonly`` attributes are used to mark
whether the item is configurable for users. When ``configurable="0"``
and ``readonly="true"``, the item is not configurable from the web
interface. When ``configurable="0"``, the item does not appear on the
whether the item is configurable for users. When ``configurable="n"``
and ``readonly="y"``, the item is not configurable from the web
interface. When ``configurable="n"``, the item does not appear on the
interface.
.. _acrn_config_tool_ui:

View File

@@ -11,38 +11,36 @@ with basic functionality such as running Service VM (SOS) and User VM (UOS) for
This setup was tested with the following configuration,
- ACRN Hypervisor: tag ``v2.0``
- ACRN Kernel: release_2.0 (5.4.43-PKT-200203T060100Z)
- QEMU emulator version 4.2.0
- Service VM/User VM is ubuntu 18.04
- Platforms Tested: Apollo Lake, Kaby Lake, Coffee Lake
.. note::
ACRN versions newer than v2.0 do not work on QEMU.
- ACRN Hypervisor: ``v2.5`` tag
- ACRN Kernel: ``v2.5`` tag
- QEMU emulator version 4.2.1
- Service VM/User VM is Ubuntu 20.04
- Platforms Tested: Kaby Lake, Skylake
Prerequisites
*************
1. Make sure the platform supports Intel VMX as well as VT-d
technologies. On Ubuntu 18.04, this
can be checked by installing ``cpu-checker`` tool. If the output displays **KVM acceleration can be used**
technologies. On Ubuntu 20.04, this
can be checked by installing ``cpu-checker`` tool. If the
output displays **KVM acceleration can be used**
the platform supports it.
.. code-block:: none
$ kvm-ok
kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used
2. Ensure the Ubuntu18.04 Host kernel version is **at least 5.3.0** and above.
2. The host kernel version must be **5.3.0 or later**.
Ubuntu 20.04 uses a 5.8.0 kernel (or later),
so no changes are needed if you are using it.
3. Make sure KVM and the following utilities are installed.
.. code-block:: none
$ sudo apt update && sudo apt upgrade -y
$ sudo apt install qemu-kvm libvirt-bin virtinst -y
sudo apt update && sudo apt upgrade -y
sudo apt install qemu-kvm virtinst libvirt-daemon-system -y
Prepare Service VM (L1 Guest)
@@ -51,7 +49,7 @@ Prepare Service VM (L1 Guest)
.. code-block:: none
$ virt-install \
virt-install \
--connect qemu:///system \
--name ACRNSOS \
--machine q35 \
@@ -68,35 +66,40 @@ Prepare Service VM (L1 Guest)
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args "console=tty0 console=ttyS0,115200n8"
2. Walk through the installation steps as prompted. Here are a few things to note:
#. Walk through the installation steps as prompted. Here are a few things to note:
a. Make sure to install an OpenSSH server so that once the installation is complete, we can SSH into the system.
.. figure:: images/acrn_qemu_1.png
:align: center
b. We use GRUB to boot ACRN, so make sure you install it when prompted.
b. We use Grub to boot ACRN, so make sure you install it when prompted.
.. figure:: images/acrn_qemu_2.png
:align: center
3. To login to the Service VM guest, find the IP address of the guest to SSH. This can be done via the
virsh command as shown below,
c. The Service VM (guest) will be restarted once the installation is complete.
.. figure:: images/acrn_qemu_3.png
:align: center
#. Login to the Service VM guest. Find the IP address of the guest and use it to connect
via SSH. The IP address can be retrieved using the ``virsh`` command as shown below.
4. Once ACRN hypervisor is enabled, the above virsh command might not display the IP. So enable Serial console by,
.. code-block:: console
virsh domifaddr ACRNSOS
Name MAC address Protocol Address
-------------------------------------------------------------------------------
vnet0 52:54:00:72:4e:71 ipv4 192.168.122.31/24
#. Once logged into the Service VM, enable the serial console. Once ACRN is enabled,
the ``virsh`` command will no longer show the IP.
.. code-block:: none
$ sudo systemctl enable serial-getty@ttyS0.service
$ sudo systemctl start serial-getty@ttyS0.service
sudo systemctl enable serial-getty@ttyS0.service
sudo systemctl start serial-getty@ttyS0.service
.. note::
You might want to write down the Service VM IP address in case you want to SSH to it.
5. Enable GRUB menu to choose between Ubuntu vs ACRN hypervisor. Modify :file:`/etc/default/grub` and edit below entries,
#. Enable the Grub menu to choose between Ubuntu and the ACRN hypervisor.
Modify :file:`/etc/default/grub` and edit the entries below:
.. code-block:: none
@@ -105,62 +108,60 @@ Prepare Service VM (L1 Guest)
GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_GFXMODE=text
6. Update GRUB changes by ``sudo update-grub``
#. The Service VM guest can also be launched again later using ``virsh start ACRNSOS --console``.
Make sure to use the domain name you used when creating the VM if it is different from ``ACRNSOS``.
7. Once the above steps are done, Service VM guest can also be launched using, ``virsh start ACRNSOS --console``. Make sure to use the domain name
you used while creating the VM instead of ``ACRNSOS``.
This concludes setting up of Service VM and preparing it to boot ACRN hypervisor.
This concludes the initial configuration of the Service VM; the next steps will install ACRN in it.
.. _install_acrn_hypervisor:
Install ACRN Hypervisor
***********************
1. Clone ACRN repo with ``tag: acrn-2020w19.5-140000p`` or the latest
(main) branch. Below steps show our tested version,
1. Launch the ``ACRNSOS`` Service VM guest and log onto it (SSH is recommended but the console is
available too).
.. important:: All the steps below are performed **inside** the Service VM guest that we built in the
previous section.
#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies` guide.
#. Clone ACRN repo and check out the ``v2.5`` tag.
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-hypervisor.git
$ cd acrn-hypervisor/
$ git fetch --all --tags --prune
$ git checkout tags/acrn-2020w19.5-140000p -b acrn_on_qemu
cd ~
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout v2.5
2. Use the following command to build ACRN for QEMU,
#. Build ACRN for QEMU,
.. code-block:: none
$ make all BOARD_FILE=./misc/acrn-config/xmls/board-xmls/qemu.xml SCENARIO_FILE=./misc/acrn-config/xmls/config-xmls/qemu/sdc.xml
make BOARD=qemu SCENARIO=sdc
For more details, refer to :ref:`getting-started-building`.
For more details, refer to :ref:`getting-started-building`.
3. Copy ``acrn.32.out`` from ``build/hypervisor`` to Service VM guest ``/boot/`` directory.
#. Install the ACRN Device Model and tools
4. Clone and build the Service VM kernel that includes the virtio-blk driver. User VM (L2 guest) uses virtio-blk
driver to mount rootfs.
.. code-block::
sudo make install
#. Copy ``acrn.32.out`` to the Service VM guest ``/boot`` directory.
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make menuconfig
$ make
sudo cp build/hypervisor/acrn.32.out /boot
The below figure shows the drivers to be enabled using ``make menuconfig`` command.
#. Clone and configure the Service VM kernel repository following the instructions at
:ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
kernel configuration as of the ``v2.5`` tag.
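To double-check that the driver is enabled in your generated ``.config`` before
building, you can run this optional check (our suggestion, not part of the
referenced instructions):
.. code-block:: none
grep CONFIG_VIRTIO_BLK .config
The expected output is ``CONFIG_VIRTIO_BLK=y``.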
.. figure:: images/acrn_qemu_4.png
:align: center
Once the Service VM kernel is built successfully, copy ``arch/x86/boot/bzImage`` to the Service VM /boot/ directory and rename it to ``bzImage_sos``.
.. note::
The Service VM kernel contains all needed drivers so you won't need to install extra kernel modules.
5. Update Ubuntu GRUB to boot ACRN hypervisor and load ACRN Kernel Image. Append the following
configuration to the :file:`/etc/grub.d/40_custom`,
#. Update Grub to boot the ACRN hypervisor and load the Service VM kernel. Append the following
configuration to the :file:`/etc/grub.d/40_custom`.
.. code-block:: none
@@ -174,107 +175,73 @@ Install ACRN Hypervisor
echo 'Loading ACRN hypervisor with SDC scenario ...'
multiboot --quirk-modules-after-kernel /boot/acrn.32.out
module /boot/bzImage_sos Linux_bzImage
module /boot/bzImage Linux_bzImage
}
6. Update GRUB ``sudo update-grub``.
#. Update Grub: ``sudo update-grub``.
7. Shut down the guest and relaunch using, ``virsh start ACRNSOS --console``
and select ACRN hypervisor from GRUB menu to launch Service
VM running on top of ACRN.
This can be verified using ``dmesg``, as shown below,
#. Enable networking for the User VMs
.. code-block:: none
sudo systemctl enable systemd-networkd
sudo systemctl start systemd-networkd
#. Shut down the guest and relaunch it using ``virsh start ACRNSOS --console``.
Select the ``ACRN hypervisor`` entry from the Grub menu.
.. note::
You may occasionally run into the following error: ``Assertion failed in file
arch/x86/vtd.c,line 256 : fatal error``. This is a transient issue;
try restarting the VM when that happens. If you need a more stable setup, you
can work around the problem by switching your native host to a non-graphical
environment (``sudo systemctl set-default multi-user.target``).
#. Verify that you are now running ACRN using ``dmesg``.
.. code-block:: console
guestl1@ACRNSOS:~$ dmesg | grep ACRN
dmesg | grep ACRN
[ 0.000000] Hypervisor detected: ACRN
[ 2.337176] ACRNTrace: Initialized acrn trace module with 4 cpu
[ 2.368358] ACRN HVLog: Initialized hvlog module with 4 cpu
[ 2.727905] systemd[1]: Set hostname to <ACRNSOS>.
8. When shutting down, make sure to cleanly destroy the Service VM to prevent crashes in subsequent boots. This can be done using,
.. note::
When shutting down the Service VM, make sure to cleanly destroy it with these commands,
to prevent crashes in subsequent boots.
.. code-block:: none
$ virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name.
Service VM Networking Updates for User VM
*****************************************
Follow these steps to enable networking for the User VM (L2 guest):
1. Edit your :file:`/etc/netplan/01-netcfg.yaml` file to add acrn-br0 as below,
.. code-block:: none
network:
version: 2
renderer: networkd
ethernets:
enp1s0:
dhcp4: no
bridges:
acrn-br0:
interfaces: [enp1s0]
dhcp4: true
dhcp6: no
2. Apply the new network configuration by,
.. code-block:: none
$ cd /etc/netplan
$ sudo netplan generate
$ sudo netplan apply
3. Create a tap interface (tap0) and add the tap interface as part of the acrn-br0 using the below steps,
a. Copy files ``misc/acrnbridge/acrn.network`` and ``misc/acrnbridge/tap0.netdev`` from the cloned ACRN repo to :file:`/usr/lib/system/network`.
b. Rename ``acrn.network`` to ``50-acrn.network``.
c. Rename ``tap0.netdev`` to ``50-tap0.netdev``.
4. Restart ACRNSOS guest (L1 guest) to complete the setup and start with bring-up of User VM
.. code-block:: none
virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name.
Bring-Up User VM (L2 Guest)
***************************
1. Build the device-model, using ``make devicemodel`` and copy acrn-dm to ACRNSOS guest (L1 guest) directory ``/usr/bin/acrn-dm``
.. note::
It should be already built as part of :ref:`install_acrn_hypervisor`.
2. On the ACRNSOS guest, install shared libraries for acrn-dm (if not already installed).
1. Build the ACRN User VM kernel.
.. code-block:: none
$ sudo apt-get install libpciaccess-dev
cd ~/acrn-kernel
cp kernel_config_uos .config
make olddefconfig
make
3. Install latest `IASL tool <https://acpica.org/downloads>`_ and copy the binary to ``/usr/sbin/iasl``.
For this setup, used IASL 20200326 version but anything after 20190215 should be good.
4. Clone latest stable version or main branch and build ACRN User VM Kernel.
#. Copy the User VM kernel to your home folder; we will use it to launch the User VM (L2 guest).
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
$ cp kernel_config_uos .config
$ make
cp arch/x86/boot/bzImage ~/bzImage_uos
Once the User VM kernel is built successfully, copy ``arch/x86/boot/bzImage`` to ACRNSOS (L1 guest) and rename this to ``bzImage_uos``. Need this to launch the User VM (L2 guest)
.. note::
The User VM kernel contains all needed drivers so you won't need to install extra kernel modules.
5. Build ubuntu.img using :ref:`build-the-ubuntu-kvm-image` and copy it to the ACRNSOS (L1 Guest).
Alternatively you can also use virt-install to create a User VM image similar to ACRNSOS as shown below,
#. Build the User VM disk image (``UOS.img``) following :ref:`build-the-ubuntu-kvm-image` and copy it to the ACRNSOS (L1 Guest).
Alternatively, you can use ``virt-install`` **in the host environment** to create a User VM image, similar to how we built ACRNSOS previously.
.. code-block:: none
$ virt-install \
virt-install \
--name UOS \
--ram 2048 \
--disk path=/var/lib/libvirt/images/UOSUbuntu.img,size=8 \
--ram 1024 \
--disk path=/var/lib/libvirt/images/UOS.img,size=8,format=raw \
--vcpus 2 \
--virt-type kvm \
--os-type linux \
@@ -283,18 +250,29 @@ Bring-Up User VM (L2 Guest)
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
--extra-args "console=tty0 console=ttyS0,115200n8"
.. note::
Image at ``/var/lib/libvirt/images/UOSUbuntu.img`` is a qcow2 image. Convert it to raw image using, ``qemu-img convert -f qcow2 UOSUbuntu.img -O raw UOS.img``
#. Transfer the ``UOS.img`` User VM disk image to the Service VM (L1 guest).
6. Launch User VM using launch script from the cloned repo path ``devicemodel/samples/launch_ubuntu.sh``. Make sure to update with your ubuntu image and rootfs
.. code-block::
sudo scp /var/lib/libvirt/images/UOS.img <username>@<IP address>
Where ``<username>`` is your username in the Service VM and ``<IP address>`` is its IP address.
#. Launch User VM using the ``launch_ubuntu.sh`` script.
.. code-block:: none
cp ~/acrn-hypervisor/misc/config_tools/data/samples_launch_scripts/launch_ubuntu.sh ~/
#. Update the script to use your disk image and kernel
.. code-block:: none
acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-s 3,virtio-blk,/home/guestl1/acrn-dm-bins/UOS.img \
-s 3,virtio-blk,~/UOS.img \
-s 4,virtio-net,tap0 \
-s 5,virtio-console,@stdio:stdio_port \
-k /home/guestl1/acrn-dm-bins/bzImage_uos \
-k ~/bzImage_uos \
-B "earlyprintk=serial,ttyS0,115200n8 consoleblank=0 root=/dev/vda1 rw rootwait maxcpus=1 nohpet console=tty0 console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M tsc=reliable" \
$logger_setting \
$vm_name
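With the image and kernel paths updated, you can then start the User VM from the
Service VM by running the script (assuming it was copied to your home directory as
shown above):
.. code-block:: none
chmod +x ~/launch_ubuntu.sh
sudo ~/launch_ubuntu.sh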

View File

@@ -8,6 +8,10 @@ documentation and publishing it to https://projectacrn.github.io.
You can also use these instructions to generate the ACRN documentation
on your local system.
.. contents::
:local:
:depth: 1
Documentation Overview
**********************
@@ -67,14 +71,15 @@ recommended folder setup for documentation contributions and generation:
misc/
acrn-kernel/
The parent ``projectacrn folder`` is there because we'll also be creating a
publishing area later in these steps. For API documentation generation, we'll also
need the ``acrn-kernel`` repo contents in a sibling folder to the
acrn-hypervisor repo contents.
The parent ``projectacrn`` folder is there because, if you have repo publishing
rights, we'll also be creating a publishing area later in these steps. For API
documentation generation, we'll also need the ``acrn-kernel`` repo contents in a
sibling folder to the ``acrn-hypervisor`` repo contents.
It's best if the ``acrn-hypervisor``
folder is an ssh clone of your personal fork of the upstream project
repos (though ``https`` clones work too):
It's best if the ``acrn-hypervisor`` folder is an ssh clone of your personal
fork of the upstream project repos (though ``https`` clones work too and won't
require you to
`register your public SSH key with GitHub <https://github.com/settings/keys>`_):
#. Use your browser to visit https://github.com/projectacrn and do a
fork of the **acrn-hypervisor** repo to your personal GitHub account.
@@ -100,8 +105,11 @@ repos (though ``https`` clones work too):
cd acrn-hypervisor
git remote add upstream git@github.com:projectacrn/acrn-hypervisor.git
After that, you'll have ``origin`` pointing to your cloned personal repo and
``upstream`` pointing to the project repo.
#. For API documentation generation we'll also need the ``acrn-kernel`` repo available
locally:
locally into the ``acrn-hypervisor`` folder:
.. code-block:: bash
@@ -151,7 +159,7 @@ Then use ``pip3`` to install the remaining Python-based tools:
cd ~/projectacrn/acrn-hypervisor/doc
pip3 install --user -r scripts/requirements.txt
Add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
Use this command to add ``$HOME/.local/bin`` to the front of your ``PATH`` so the system will
find expected versions of these Python utilities such as ``sphinx-build`` and
``breathe``:
@@ -159,7 +167,7 @@ find expected versions of these Python utilities such as ``sphinx-build`` and
printf "\nexport PATH=\$HOME/.local/bin:\$PATH" >> ~/.bashrc
.. note::
.. important::
You will need to open a new terminal for this change to take effect.
Adding this to your ``~/.bashrc`` file ensures it is set by default.
@@ -197,7 +205,7 @@ another ``make html`` and the output layout and style is changed. The
sphinx build system creates document cache information that attempts to
expedite documentation rebuilds, but occasionally can cause an unexpected error or
warning to be generated. Doing a ``make clean`` to create a clean
generation environment and a ``make html`` again generally cleans this up.
generation environment and a ``make html`` again generally fixes these issues.
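For example, a full clean rebuild from the ``doc`` folder looks like this:
.. code-block:: bash
cd ~/projectacrn/acrn-hypervisor/doc
make clean
make html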
The ``read-the-docs`` theme is installed as part of the
``requirements.txt`` list above. Tweaks to the standard

View File

@@ -9,50 +9,24 @@ solution or hv-land solution, according to the usage scenario needs.
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
Ivshmem DM-Land Usage
*********************
Enable Ivshmem Support
**********************
Add this line as an ``acrn-dm`` boot parameter::
-s slot,ivshmem,shm_name,shm_size
where
- ``-s slot`` - Specify the virtual PCI slot number
- ``ivshmem`` - Virtual PCI device name
- ``shm_name`` - Specify a shared memory name. Post-launched VMs with the same
``shm_name`` share a shared memory region. The ``shm_name`` needs to start
with ``dm:/`` prefix. For example, ``dm:/test``
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The size
ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
For example, to set up a shared memory of 2 megabytes, use ``2``
instead of ``shm_size``. The two communicating VMs must define the same size.
.. note:: This device can be used with real-time VM (RTVM) as well.
.. _ivshmem-hv:
Ivshmem HV-Land Usage
*********************
The ``ivshmem`` hv-land solution is disabled by default in ACRN. You can enable
The ``ivshmem`` solution is disabled by default in ACRN. You can enable
it using the :ref:`ACRN configuration toolset <acrn_config_workflow>` with these
steps:
- Enable ``ivshmem`` hv-land in ACRN XML configuration file.
- Enable ``ivshmem`` via ACRN configuration tool GUI.
- Edit ``IVSHMEM_ENABLED`` to ``y`` in ACRN scenario XML configuration
to enable ``ivshmem`` hv-land
- Set :option:`hv.FEATURES.IVSHMEM.IVSHMEM_ENABLED` to ``y``
- Edit ``IVSHMEM_REGION`` to specify the shared memory name, size and
communication VMs in ACRN scenario XML configuration. The ``IVSHMEM_REGION``
format is ``shm_name,shm_size,VM IDs``:
- Edit :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` to specify the shared memory name, size and
communication VMs. The ``IVSHMEM_REGION`` format is ``shm_name,shm_size,VM IDs``:
- ``shm_name`` - Specify a shared memory name. The name needs to start
with the ``hv:/`` prefix. For example, ``hv:/shm_region_0``
with the ``hv:/`` prefix for hv-land, or ``dm:/`` for dm-land.
For example, ``hv:/shm_region_0`` for hv-land and ``dm:/shm_region_0``
for dm-land.
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The
size ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
@@ -63,10 +37,54 @@ steps:
communication and separate it with ``:``. For example, communication
between VM0 and VM2 can be written as ``0:2`` (see the example XML snippet after this list).
.. note:: You can define up to eight ``ivshmem`` hv-land shared regions.
- Build with the XML configuration, refer to :ref:`getting-started-building`.
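As an illustration of the settings above (name ``hv:/shm_region_0``, 2 MB, shared
between VM0 and VM2), the resulting scenario XML entry might look like this sketch;
the exact element nesting can vary between scenario files:
.. code-block:: xml
<IVSHMEM>
<IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
<IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
</IVSHMEM>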
Ivshmem DM-Land Usage
*********************
Follow `Enable Ivshmem Support`_ and
add the following line as an ``acrn-dm`` boot parameter::
-s slot,ivshmem,shm_name,shm_size
where
- ``-s slot`` - Specify the virtual PCI slot number
- ``ivshmem`` - Virtual PCI device emulating the Shared Memory
- ``shm_name`` - Specify a shared memory name. This ``shm_name`` must be listed
in :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` in the `Enable Ivshmem Support`_ section and must start
with the ``dm:/`` prefix.
- ``shm_size`` - Shared memory size of the selected ``shm_name``.
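For example, for a region named ``dm:/shm_region_0`` with a size of 2 MB, the added
parameter might look like the following (the slot number ``9`` is an arbitrary free
virtual PCI slot chosen for illustration)::
-s 9,ivshmem,dm:/shm_region_0,2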
There are two ways to insert the above boot parameter for ``acrn-dm``:
- Manually edit the launch script file. In this case, make sure that both
``shm_name`` and ``shm_size`` match the values defined via the configuration tool GUI.
- When IVSHMEM is enabled and :option:`hv.FEATURES.IVSHMEM.IVSHMEM_REGION` is properly
configured via the configuration tool GUI, use a command of the following format to
generate a launch script:
.. code-block:: none
:emphasize-lines: 5
python3 misc/config_tools/launch_config/launch_cfg_gen.py \
--board <path_to_your_boardxml> \
--scenario <path_to_your_scenarioxml> \
--launch <path_to_your_launched_script_xml> \
--uosid <desired_single_vmid_or_0_for_all_vmids>
.. note:: This device can be used with real-time VM (RTVM) as well.
.. _ivshmem-hv:
Ivshmem HV-Land Usage
*********************
Follow `Enable Ivshmem Support`_ to set up hv-land Ivshmem support.
Ivshmem Notification Mechanism
******************************
@@ -188,7 +206,7 @@ Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board::
make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml> TARGET_DIR=xxx
make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml> TARGET_DIR=xxx
3. Add a new virtual PCI device for VM2 (post-launched VM): the device type is
``ivshmem``, shared memory name is ``hv:/shm_region_0``, and shared memory

View File

@@ -58,9 +58,10 @@ the request via vUART to the lifecycle manager in the Service VM which in turn a
the request and trigger the following flow.
.. note:: The User VM needs to be authorized to request a shutdown; this is achieved by adding
"``--pm_notify_channel uart``" in the launch script of that VM.
"``--pm_notify_channel uart,allow_trigger_s5``" in the launch script of that VM.
Also, only one VM in the system can be configured to request a shutdown. If there is a second User
VM launched with "``--pm_notify_channel uart``", ACRN will stop launching it and throw out below error message:
VM launched with "``--pm_notify_channel uart,allow_trigger_s5``", ACRN will stop launching it and display
the following error messages:
``initiate a connection on a socket error``
``create socket to connect life-cycle manager failed``

View File

@@ -28,7 +28,7 @@ Verified Version
Prerequisites
*************
Follow :ref:`these instructions <rt_industry_ubuntu_setup>` to set up
Follow :ref:`these instructions <gsg>` to set up
Ubuntu as the ACRN Service VM.
Supported Hardware Platform


View File

@@ -0,0 +1,352 @@
.. _nested_virt:
Enable Nested Virtualization
############################
With nested virtualization enabled in ACRN, you can run virtual machine
instances inside a guest VM (also called a User VM) running on the ACRN hypervisor.
Although both "level 1" guest VMs and nested guest VMs can be launched
from the Service VM, the following distinction is worth noting:
* The VMX feature (``CPUID.01H:ECX[5]``) does not need to be visible to the Service VM
in order to launch guest VMs. A guest VM not running on top of the
Service VM is considered a level 1 (L1) guest.
* The VMX feature must be visible to an L1 guest to launch a nested VM. An instance
of a guest hypervisor (KVM) runs on the L1 guest and works with the
L0 ACRN hypervisor to run the nested VM.
The conventional single-level virtualization has two levels - the L0 host
(ACRN hypervisor) and the L1 guest VMs. With nested virtualization enabled,
ACRN can run guest VMs with their associated virtual machines that define a
third level:
* The host (ACRN hypervisor), which we call the L0 hypervisor
* The guest hypervisor (KVM), which we call the L1 hypervisor
* The nested guest VMs, which we call the L2 guest VMs
.. figure:: images/nvmx_1.png
:width: 700px
:align: center
Generic Nested Virtualization
High Level ACRN Nested Virtualization Design
********************************************
The high-level design of nested virtualization in ACRN is shown in :numref:`nested_virt_hld`.
Nested VMX is enabled by allowing a guest VM to use VMX instructions,
and emulating them using the single level of VMX available in the hardware.
In x86, a logical processor uses VMCSs to manage VM entries and VM exits as
well as processor behavior in VMX non-root operation. The key to nVMX
emulation is that ACRN builds a VMCS02 out of the VMCS01 (the VMCS that
ACRN uses to run the L1 VM) and the VMCS12 (built by the L1 hypervisor to
actually run the L2 guest).
.. figure:: images/nvmx_arch_1.png
:width: 400px
:align: center
:name: nested_virt_hld
Nested Virtualization in ACRN
#. L0 hypervisor (ACRN) runs the L1 guest with VMCS01
#. L1 hypervisor (KVM) creates VMCS12 to run an L2 guest
#. VMX instructions from the L1 hypervisor trigger VMExits to the L0 hypervisor:
a. L0 caches VMCS12 in host memory
#. L0 merges VMCS01 and VMCS12 to create VMCS02
#. L0 hypervisor runs the L2 guest with VMCS02
#. L2 guest runs until triggering VMExits to L0
a. L0 reflects most VMExits to the L1 hypervisor
#. L0 runs the L1 guest with VMCS01 and VMCS02 as the shadow VMCS
Restrictions and Constraints
****************************
Nested virtualization is considered an experimental feature and has only been tested
on Tiger Lake and Kaby Lake platforms (see :ref:`hardware`).
L1 VMs have the following restrictions:
* KVM is the only L1 hypervisor supported by ACRN
* KVM runs in 64-bit mode
* KVM enables EPT for L2 guests
* QEMU is used to launch L2 guests
Constraints on L1 guest configuration:
* Local APIC passthrough must be enabled
* Only the ``SCHED_NOOP`` scheduler is supported. ACRN can't receive timer interrupts
on LAPIC passthrough pCPUs.
Service OS VM configuration
***************************
ACRN only supports enabling the nested virtualization feature on the Service VM, not on pre-launched
VMs.
The nested virtualization feature is disabled by default in ACRN. You can
enable it using the :ref:`ACRN Configuration Editor <acrn_config_tool_ui>`
with these settings:
.. note:: Normally you'd use the configuration tool GUI to edit the scenario XML file.
The tool wasn't updated in time for the v2.5 release, so you'll need to manually edit
the ACRN scenario XML configuration file to edit the ``SCHEDULER``, ``NVMX_ENABLED``,
``pcpu_id``, ``guest_flags``, ``legacy_vuart``, and ``console_vuart`` settings for
the Service VM (SOS), as shown below:
#. Configure system level features:
- Edit :option:`hv.FEATURES.NVMX_ENABLED` to ``y`` to enable nested virtualization
- Edit :option:`hv.FEATURES.SCHEDULER` to ``SCHED_NOOP`` to disable CPU sharing
.. code-block:: xml
:emphasize-lines: 3,18
<FEATURES>
<RELOC>y</RELOC>
<SCHEDULER>SCHED_NOOP</SCHEDULER>
<MULTIBOOT2>y</MULTIBOOT2>
<ENFORCE_TURNOFF_AC>y</ENFORCE_TURNOFF_AC>
<RDT>
<RDT_ENABLED>n</RDT_ENABLED>
<CDP_ENABLED>y</CDP_ENABLED>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
<CLOS_MASK>0xfff</CLOS_MASK>
</RDT>
<NVMX_ENABLED>y</NVMX_ENABLED>
<HYPERV_ENABLED>y</HYPERV_ENABLED>
#. In each guest VM configuration:
- Edit :option:`vm.guest_flags.guest_flag` on the SOS VM section and add ``GUEST_FLAG_NVMX_ENABLED``
to enable the nested virtualization feature on the Service VM.
- Edit :option:`vm.guest_flags.guest_flag` and add ``GUEST_FLAG_LAPIC_PASSTHROUGH`` to enable local
APIC passthrough on the Service VM.
- Edit :option:`vm.cpu_affinity.pcpu_id` to assign ``pCPU`` IDs to run the Service VM. If you are
using a debug build and need the hypervisor console, don't assign
``pCPU0`` to the Service VM.
.. code-block:: xml
:emphasize-lines: 5,6,7,10,11
<vm id="1">
<vm_type>SOS_VM</vm_type>
<name>ACRN SOS VM</name>
<cpu_affinity>
<pcpu_id>1</pcpu_id>
<pcpu_id>2</pcpu_id>
<pcpu_id>3</pcpu_id>
</cpu_affinity>
<guest_flags>
<guest_flag>GUEST_FLAG_NVMX_ENABLED</guest_flag>
<guest_flag>GUEST_FLAG_LAPIC_PASSTHROUGH</guest_flag>
</guest_flags>
The Service VM's virtual legacy UART interrupt doesn't work with LAPIC
passthrough, which may prevent the Service VM from booting. Instead, we need to use
the PCI-vUART for the Service VM. Refer to :ref:`Enable vUART Configurations <vuart_config>`
for more details about vUART configuration.
- Edit :option:`vm.legacy_vuart.base` in ``legacy_vuart 0`` and set it to ``INVALID_LEGACY_PIO``
- Edit :option:`vm.console_vuart.base` in ``console_vuart 0`` and set it to ``PCI_VUART``
.. code-block:: xml
:emphasize-lines: 3, 14
<legacy_vuart id="0">
<type>VUART_LEGACY_PIO</type>
<base>INVALID_COM_BASE</base>
<irq>COM1_IRQ</irq>
</legacy_vuart>
<legacy_vuart id="1">
<type>VUART_LEGACY_PIO</type>
<base>INVALID_COM_BASE</base>
<irq>COM2_IRQ</irq>
<target_vm_id>1</target_vm_id>
<target_uart_id>1</target_uart_id>
</legacy_vuart>
<console_vuart id="0">
<base>PCI_VUART</base>
</console_vuart>
#. Remove CPU sharing VMs
Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs
from the scenario configuration file, as they may share pCPUs with the Service VM.
#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration.
Prepare for Service VM Kernel and rootfs
****************************************
The Service VM can run Ubuntu or other Linux distributions.
Instructions on how to boot Ubuntu as the Service VM can be found in
:ref:`gsg`.
The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes
to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
in :ref:`Build and Install the ACRN Kernel <build-and-install-ACRN-kernel>`.
Here is a summary of how to modify and build the kernel:
.. code-block:: none
git clone https://github.com/projectacrn/acrn-kernel
cd acrn-kernel
cp kernel_config_uefi_sos .config
make olddefconfig
The following configuration entries are needed to launch nested
guests on the Service VM:
.. code-block:: none
CONFIG_KVM=y
CONFIG_KVM_INTEL=y
CONFIG_ACRN_GUEST=y
After you make these configuration modifications, build and install the kernel
as described in :ref:`gsg`.
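After rebooting into this kernel, you can optionally confirm that VMX is exposed to
the Service VM and that KVM is usable (a quick sanity check we suggest before moving on):
.. code-block:: none
grep -cw vmx /proc/cpuinfo
kvm-ok
The first command prints a non-zero count when the VMX flag is visible; ``kvm-ok``
(from the ``cpu-checker`` package) should report that KVM acceleration can be used.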
Launch a Nested Guest VM
************************
Create an Ubuntu KVM Image
==========================
Refer to :ref:`Build the Ubuntu KVM Image <build-the-ubuntu-kvm-image>`
on how to create an Ubuntu KVM image as the nested guest VM's root filesystem.
There is no particular requirement for this image, e.g., it could be of either
qcow2 or raw format.
Prepare for Launch Scripts
==========================
Install QEMU on the Service VM that will launch the nested guest VM:
.. code-block:: none
sudo apt-get install qemu-kvm qemu virt-manager virt-viewer libvirt-bin
.. important:: The QEMU ``-cpu host`` option is needed to launch a nested guest VM, and ``-nographic``
is required to run nested guest VMs reliably.
You can prepare the script just like the one you use to launch a VM
on native Linux. For example, instead of ``-hda``, you can use the following option to launch
a virtio-block-based raw image::
-drive format=raw,file=/root/ubuntu-20.04.img,if=virtio
Use the following option to enable Ethernet on the guest VM::
-netdev tap,id=net0 -device virtio-net-pci,netdev=net0,mac=a6:cd:47:5f:20:dc
The following is a simple example for the script to launch a nested guest VM.
.. code-block:: bash
:emphasize-lines: 2-4
sudo qemu-system-x86_64 \
-enable-kvm \
-cpu host \
-nographic \
-m 2G -smp 2 -hda /root/ubuntu-20.04.qcow2 \
-net nic,macaddr=00:16:3d:60:0a:80 -net tap,script=/etc/qemu-ifup
Launch the Guest VM
===================
You can launch the nested guest VM from the Service VM's virtual serial console
or from an SSH remote login.
If the nested VM is launched successfully, you should see the nested
VM's login prompt:
.. code-block:: console
[ OK ] Started Terminate Plymouth Boot Screen.
[ OK ] Started Hold until boot process finishes up.
[ OK ] Starting Set console scheme...
[ OK ] Started Serial Getty on ttyS0.
[ OK ] Started LXD - container startup/shutdown.
[ OK ] Started Set console scheme.
[ OK ] Started Getty on tty1.
[ OK ] Reached target Login Prompts.
[ OK ] Reached target Multi-User System.
[ OK ] Started Update UTMP about System Runlevel Changes.
Ubuntu 20.04 LTS ubuntu_vm ttyS0
ubuntu_vm login:
You won't see the nested guest from a ``vcpu_list`` or ``vm_list`` command
on the ACRN hypervisor console because these commands only show level 1 VMs.
.. code-block:: console
ACRN:\>vm_list
VM_UUID VM_ID VM_NAME VM_STATE
================================ ===== ==========================
dbbbd4347a574216a12c2201f1ab0240 0 ACRN SOS VM Running
ACRN:\>vcpu_list
VM ID PCPU ID VCPU ID VCPU ROLE VCPU STATE THREAD STATE
===== ======= ======= ========= ========== ============
0 1 0 PRIMARY Running RUNNING
0 2 1 SECONDARY Running RUNNING
0 3 2 SECONDARY Running RUNNING
On the nested guest VM console, run an ``lshw`` or ``dmidecode`` command
and you'll see that this is a QEMU-managed virtual machine:
.. code-block:: console
:emphasize-lines: 4,5
$ sudo lshw -c system
ubuntu_vm
description: Computer
product: Standard PC (i440FX + PIIX, 1996)
vendor: QEMU
version: pc-i440fx-5.2
width: 64 bits
capabilities: smbios-2.8 dmi-2.8 smp vsyscall32
configuration: boot=normal
For example, compare this to the same command run on the L1 guest (Service VM):
.. code-block:: console
:emphasize-lines: 4,5
$ sudo lshw -c system
localhost.localdomain
description: Computer
product: NUC7i5DNHE
vendor: Intel Corporation
version: J57828-507
serial: DW1710099900081
width: 64 bits
capabilities: smbios-3.1 dmi-3.1 smp vsyscall32
configuration: boot=normal family=Intel NUC uuid=36711CA2-A784-AD49-B0DC-54B2030B16AB

View File

@@ -44,7 +44,7 @@ kernels are loaded as multiboot modules. The ACRN hypervisor, Service
VM, and Pre-Launched RT kernel images are all located on the NVMe drive.
We recommend installing Ubuntu on the NVMe drive as the Service VM OS,
which also has the required GRUB image to launch Pre-Launched RT mode.
Refer to :ref:`rt_industry_ubuntu_setup`, to
Refer to :ref:`gsg` to
install Ubuntu on the NVMe drive, and use grub to launch the Service VM.
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
@@ -83,7 +83,7 @@ Add Pre-Launched RT Kernel Image to GRUB Config
The last step is to modify the GRUB configuration file to load the Pre-Launched
kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM
<rt_industry_ubuntu_setup>`.) The grub config file will look something
<gsg_update_grub>` section in the :ref:`gsg`.) The grub config file will look something
like this:
.. code-block:: none

View File

@@ -249,19 +249,7 @@ Configure RDT for VM Using VM Configuration
per-LP CLOS is applied to the core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
#. Based on our scenario, build the ACRN hypervisor and copy the
artifact ``acrn.efi`` to the
``/boot/EFI/acrn`` directory. If needed, update the device model
``acrn-dm`` as well in ``/usr/bin`` directory. see
:ref:`getting-started-building` for building instructions.
.. code-block:: none
$ make hypervisor BOARD=apl-up2 FIRMWARE=uefi
...
# these operations are done on UP2 board
$ mount /dev/mmcblk0p0 /boot
$ scp <acrn.efi-at-your-compile-PC> /boot/EFI/acrn
#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario`
for building and installing instructions.
#. Restart the platform.

View File

@@ -18,7 +18,7 @@ Prerequisites
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
#. For a default prebuilt ACRN binary in the end-to-end (E2E) package, you must have 4
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
#. Follow the :ref:`gsg` to set up the ACRN Service VM
based on Ubuntu.
#. This tutorial is validated on the following configurations:
@@ -75,7 +75,7 @@ to automate the Kata Containers installation procedure.
$ sudo cp build/misc/tools/acrnctl /usr/bin/
.. note:: This assumes you have built ACRN on this machine following the
instructions in the :ref:`rt_industry_ubuntu_setup`.
instructions in the :ref:`gsg`.
#. Modify the :ref:`daemon.json` file in order to:

View File

@@ -74,21 +74,21 @@ Install ACRN on the Debian VM
#. Build and Install the Service VM kernel:
.. code-block:: bash
.. code-block:: bash
$ mkdir ~/sos-kernel && cd ~/sos-kernel
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
$ git checkout release_2.2
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all
$ sudo make modules_install
$ sudo cp arch/x86/boot/bzImage /boot/bzImage
$ mkdir ~/sos-kernel && cd ~/sos-kernel
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
$ git checkout release_2.2
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all
$ sudo make modules_install
$ sudo cp arch/x86/boot/bzImage /boot/bzImage
#. Update Grub for the Debian Service VM
#. Update Grub for the Debian Service VM:
Update the ``/etc/grub.d/40_custom`` file as shown below.
Update the ``/etc/grub.d/40_custom`` file as shown below.
.. note::
Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as
@@ -146,10 +146,11 @@ Install ACRN on the Debian VM
[ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
[ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp
Enable the Network Sharing to Give Network Access to User VM
Enable Network Sharing to Give Network Access to the User VM
************************************************************
.. code-block:: bash
$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd
.. code-block:: bash
$ sudo systemctl enable systemd-networkd
$ sudo systemctl start systemd-networkd

View File

@@ -7,7 +7,7 @@ ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM.
.. figure:: images/hybrid_scenario_on_nuc.png
.. figure:: images/ACRN-Hybrid.png
:align: center
:width: 600px
:name: hybrid_scenario_on_nuc
@@ -18,12 +18,20 @@ The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
as shown in :numref:`hybrid_scenario_on_nuc`.
.. note::
All build operations are done directly on the target. Building the artifacts (ACRN hypervisor, kernel, tools and Zephyr)
on a separate development machine can be done but is not described in this document.
.. contents::
:local:
:depth: 1
Prerequisites
*************
.. rst-class:: numbered-step
Set-up base installation
************************
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your
@@ -31,6 +39,51 @@ Prerequisites
.. rst-class:: numbered-step
Prepare the Zephyr image
************************
Prepare the Zephyr kernel that you will run in VM0 later.
- Follow step 1 from the :ref:`using_zephyr_as_uos` instructions
.. note:: We only need the binary Zephyr kernel, not the entire ``zephyr.img``
- Copy the :file:`zephyr/zephyr.bin` to the ``/boot`` folder::
sudo cp zephyr/zephyr.bin /boot
.. rst-class:: numbered-step
Set-up ACRN on your device
**************************
- Follow the instructions in :ref:`getting-started-building` to build ACRN using the
``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_::
make BOARD=nuc7i7dnb SCENARIO=hybrid
- Install the ACRN hypervisor and tools
.. code-block:: none
cd ~/acrn-hypervisor # Or wherever your sources are
sudo make install
sudo cp build/hypervisor/acrn.bin /boot
sudo cp build/hypervisor/acpi/ACPI_VM0.bin /boot
- Build and install the ACRN kernel
.. code-block:: none
cd ~/acrn-kernel # Or where your ACRN kernel sources are
cp kernel_config_uefi_sos .config
make olddefconfig
make
sudo make modules_install
sudo cp arch/x86/boot/bzImage /boot/bzImage
.. rst-class:: numbered-step
Update Ubuntu GRUB
******************

View File

@@ -11,7 +11,7 @@ ACRN hypervisor.
ACRN Service VM Setup
*********************
Follow the steps in this :ref:`rt_industry_ubuntu_setup` to set up ACRN
Follow the steps in the :ref:`gsg` to set up ACRN
based on Ubuntu and launch the Service VM.
Setup for Using Windows as the Guest VM