doc: push doc updates for v2.5 release

Cumulative changes to docs since the release_2.5 branch was made

Tracked-On: #5692

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Author: David B. Kinder
Date: 2021-06-24 20:58:54 -07:00
Committed-by: David Kinder
Parent: 7e9d625425
Commit: cd4dc73ca5
47 changed files with 1697 additions and 734 deletions


@@ -179,12 +179,12 @@ the following to build the hypervisor, device model, and tools:
You can also build ACRN with your customized scenario:
-* Build with your own scenario configuration on the ``nuc6cayh``, assuming the
+* Build with your own scenario configuration on the ``nuc11tnbi5``, assuming the
scenario is defined in ``/path/to/scenario.xml``:
.. code-block:: none
-make BOARD=nuc6cayh SCENARIO=/path/to/scenario.xml
+make BOARD=nuc11tnbi5 SCENARIO=/path/to/scenario.xml
* Build with your own board and scenario configuration, assuming the board and
scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``:
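The code block for this bullet is elided from the diff context; based on the pattern of the preceding examples, the command presumably has this shape (paths are the placeholders named in the sentence above, not real files):

```shell
# Sketch: point BOARD at a board XML file instead of a board name to
# build with a custom board definition alongside a custom scenario.
make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml
```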


@@ -1,21 +1,38 @@
.. _gsg:
.. _rt_industry_ubuntu_setup:
-Getting Started Guide for ACRN Industry Scenario With Ubuntu Service VM
-#######################################################################
+Getting Started Guide
+#####################
.. contents::
:local:
:depth: 1
Introduction
************
This document describes the various steps to set up a system based on the following components:
- ACRN: Industry scenario
- Service VM OS: Ubuntu (running off the NVMe storage device)
- Real-Time VM (RTVM) OS: Ubuntu modified to use a PREEMPT-RT kernel (running off the
SATA storage device)
- Post-launched User VM OS: Windows
Verified Version
****************
- Ubuntu version: **18.04**
- GCC version: **7.5**
-- ACRN-hypervisor branch: **release_2.4 (v2.4)**
-- ACRN-Kernel (Service VM kernel): **release_2.4 (v2.4)**
+- ACRN-hypervisor branch: **release_2.5 (v2.5)**
+- ACRN-Kernel (Service VM kernel): **release_2.5 (v2.5)**
- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)**
-- HW: Maxtang Intel WHL-U i7-8665U (`AX8665U-A2 <http://www.maxtangpc.com/fanlessembeddedcomputers/140.html>`_)
+- HW: Intel NUC 11 Pro Kit NUC11TNHi5 (`NUC11TNHi5
+  <https://ark.intel.com/content/www/us/en/ark/products/205594/intel-nuc-11-pro-kit-nuc11tnhi5.html>`_)
+.. note:: This NUC is based on the
+   `NUC11TNBi5 board <https://ark.intel.com/content/www/us/en/ark/products/205596/intel-nuc-11-pro-board-nuc11tnbi5.html>`_.
+   The ``BOARD`` parameter that is used to build ACRN for this NUC is therefore ``nuc11tnbi5``.
Prerequisites
*************
@@ -25,26 +42,24 @@ Prerequisites
- Monitors with HDMI interface (DP interface is optional)
- USB keyboard and mouse
- Ethernet cables
- A grub-2.04-7 bootloader with the following patch:
http://git.savannah.gnu.org/cgit/grub.git/commit/?id=0f3f5b7c13fa9b677a64cf11f20eca0f850a2b20:
multiboot2: Set min address for mbi allocation to 0x1000
.. rst-class:: numbered-step
Hardware Connection
*******************
-Connect the WHL Maxtang with the appropriate external devices.
+Connect the NUC11TNHi5 with the appropriate external devices.
-#. Connect the WHL Maxtang board to a monitor via an HDMI cable.
+#. Connect the NUC11TNHi5 NUC to a monitor via an HDMI cable.
#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to
-the WHL Maxtang board.
+the NUC11TNHi5 board.
#. Insert the Ubuntu 18.04 USB boot disk into the USB port.
.. figure:: images/rt-ind-ubun-hw-1.png
:scale: 15
.. figure:: images/rt-ind-ubun-hw-2.png
:scale: 15
.. rst-class:: numbered-step
@@ -54,12 +69,12 @@ Connect the WHL Maxtang with the appropriate external devices.
Install the Ubuntu User VM (RTVM) on the SATA Disk
**************************************************
-.. note:: The WHL Maxtang machine contains both an NVMe and SATA disk.
+.. note:: The NUC11TNHi5 NUC contains both an NVMe and SATA disk.
Before you install the Ubuntu User VM on the SATA disk, either
remove the NVMe disk or delete its blocks.
-#. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
-#. Power on the machine, then press F11 to select the USB disk as the boot
+#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.
+#. Power on the machine, then press F10 to select the USB disk as the boot
device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
label depends on the brand/make of the USB drive.
#. Install the Ubuntu OS.
@@ -69,10 +84,10 @@ Install the Ubuntu User VM (RTVM) on the SATA Disk
#. Configure the ``/dev/sda`` partition. Refer to the diagram below:
-.. figure:: images/native-ubuntu-on-SATA-2.png
+.. figure:: images/native-ubuntu-on-SATA-3.png
a. Select the ``/dev/sda`` partition, not ``/dev/nvme0p1``.
-b. Select ``/dev/sda`` **ATA KINGSTON RBUSNS4** as the device for the
+b. Select ``/dev/sda`` **ATA KINGSTON SA400S3** as the device for the
bootloader installation. Note that the label depends on the SATA disk used.
#. Complete the Ubuntu installation on ``/dev/sda``.
@@ -87,12 +102,11 @@ to turn it into a real-time User VM (RTVM).
Install the Ubuntu Service VM on the NVMe Disk
**********************************************
-.. note:: Before you install the Ubuntu Service VM on the NVMe disk, either
-remove the SATA disk or disable it in the BIOS. Disable it by going to:
-**Chipset** -> **PCH-IO Configuration** -> **SATA and RST Configuration** -> **SATA Controller [Disabled]**
+.. note:: Before you install the Ubuntu Service VM on the NVMe disk, please
+remove the SATA disk.
-#. Insert the Ubuntu USB boot disk into the WHL Maxtang machine.
-#. Power on the machine, then press F11 to select the USB disk as the boot
+#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine.
+#. Power on the machine, then press F10 to select the USB disk as the boot
device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the
label depends on the brand/make of the USB drive.
#. Install the Ubuntu OS.
@@ -102,10 +116,10 @@ Install the Ubuntu Service VM on the NVMe Disk
#. Configure the ``/dev/nvme0n1`` partition. Refer to the diagram below:
-.. figure:: images/native-ubuntu-on-NVME-2.png
+.. figure:: images/native-ubuntu-on-NVME-3.png
a. Select the ``/dev/nvme0n1`` partition, not ``/dev/sda``.
-b. Select ``/dev/nvme0n1`` **FORESEE 256GB SSD** as the device for the
+b. Select ``/dev/nvme0n1`` **Lenovo SL700 PCI-E M.2 256G** as the device for the
bootloader installation. Note that the label depends on the NVMe disk used.
#. Complete the Ubuntu installation and reboot the system.
@@ -143,7 +157,7 @@ Build the ACRN Hypervisor on Ubuntu
.. code-block:: none
-$ sudo -E apt install gcc \
+$ sudo apt install gcc \
git \
make \
libssl-dev \
@@ -152,6 +166,7 @@ Build the ACRN Hypervisor on Ubuntu
libsystemd-dev \
libevent-dev \
libxml2-dev \
+libxml2-utils \
libusb-1.0-0-dev \
python3 \
python3-pip \
@@ -162,7 +177,8 @@ Build the ACRN Hypervisor on Ubuntu
liblz4-tool \
flex \
bison \
-xsltproc
+xsltproc \
+clang-format
$ sudo pip3 install lxml xmlschema
@@ -191,21 +207,23 @@ Build the ACRN Hypervisor on Ubuntu
$ git clone https://github.com/projectacrn/acrn-hypervisor
$ cd acrn-hypervisor
-#. Switch to the v2.4 version:
+#. Switch to the v2.5 version:
.. code-block:: none
-$ git checkout v2.4
+$ git checkout v2.5
#. Build ACRN:
.. code-block:: none
-$ make BOARD=whl-ipc-i7 SCENARIO=industry
+$ make BOARD=nuc11tnbi5 SCENARIO=industry
$ sudo make install
$ sudo mkdir -p /boot/acrn
$ sudo cp build/hypervisor/acrn.bin /boot/acrn/
+.. _build-and-install-ACRN-kernel:
Build and Install the ACRN Kernel
=================================
@@ -221,7 +239,7 @@ Build and Install the ACRN Kernel
.. code-block:: none
-$ git checkout v2.4
+$ git checkout v2.5
$ cp kernel_config_uefi_sos .config
$ make olddefconfig
$ make all
@@ -234,6 +252,8 @@ Install the Service VM Kernel and Modules
$ sudo make modules_install
$ sudo cp arch/x86/boot/bzImage /boot/bzImage
+.. _gsg_update_grub:
Update Grub for the Ubuntu Service VM
=====================================
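The body of this section sits outside the diff context. For reference, an ACRN Service VM grub entry typically has the following shape; the UUID and PARTUUID placeholders below are assumptions, not values from this commit:

```none
# Hypothetical /etc/grub.d/40_custom entry: boot ACRN via multiboot2 and
# load the Service VM kernel as a module. Replace the placeholders with
# the values reported by `blkid` on your system.
menuentry "ACRN multiboot2" {
    load_video
    insmod gzio
    insmod part_gpt
    insmod ext2
    search --no-floppy --fs-uuid --set <UUID-of-boot-partition>
    echo 'Loading ACRN hypervisor ...'
    multiboot2 /boot/acrn/acrn.bin root=PARTUUID="<PARTUUID-of-rootfs>"
    module2 /boot/bzImage Linux_bzImage
}
```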
@@ -317,33 +337,6 @@ typical output of a successful installation resembles the following:
Additional Settings in the Service VM
=====================================
-BIOS Settings of GVT-d for WaaG
--------------------------------
-.. note::
-Skip this step if you are using a Kaby Lake (KBL) Intel NUC.
-Go to **Chipset** -> **System Agent (SA) Configuration** -> **Graphics
-Configuration** and make the following settings:
-Set **DVMT Pre-Allocated** to **64MB**:
-.. figure:: images/DVMT-reallocated-64mb.png
-Set **PM Support** to **Enabled**:
-.. figure:: images/PM-support-enabled.png
-Use OVMF to Launch the User VM
-------------------------------
-The User VM will be launched by OVMF, so copy it to the specific folder:
-.. code-block:: none
-$ sudo mkdir -p /usr/share/acrn/bios
-$ sudo cp /home/acrn/work/acrn-hypervisor/devicemodel/bios/OVMF.fd /usr/share/acrn/bios
Build and Install the RT Kernel for the Ubuntu User VM
------------------------------------------------------
@@ -361,7 +354,7 @@ Follow these instructions to build the RT kernel.
$ git clone https://github.com/projectacrn/acrn-kernel
$ cd acrn-kernel
-$ git checkout 4.19/preempt-rt
+$ git checkout origin/4.19/preempt-rt
$ make mrproper
.. note::
@@ -382,8 +375,7 @@ Follow these instructions to build the RT kernel.
$ sudo mount /dev/sda2 /mnt
$ sudo cp arch/x86/boot/bzImage /mnt/boot/
-$ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/lib/modules/
-$ sudo cp -r /mnt/lib/modules/lib/modules/4.19.72-rt25 /mnt/lib/modules/
+$ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/
$ sudo cd ~ && sudo umount /mnt && sync
.. rst-class:: numbered-step
@@ -455,42 +447,11 @@ Launch the RTVM
.. code-block:: none
$ sudo cp /home/acrn/work/acrn-hypervisor/misc/config_tools/data/sample_launch_scripts/nuc/launch_hard_rt_vm.sh /usr/share/acrn/
-$ sudo /usr/share/acrn/launch_hard_rt_vm.sh
+$ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
.. note::
If using a KBL NUC, the script must be adapted to match the BDF on the actual HW platform
-Recommended BIOS Settings for RTVM
-----------------------------------
-.. csv-table::
-:widths: 15, 30, 10
-"Hyper-threading", "Intel Advanced Menu -> CPU Configuration", "Disabled"
-"Intel VMX", "Intel Advanced Menu -> CPU Configuration", "Enable"
-"Speed Step", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
-"Speed Shift", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
-"C States", "Intel Advanced Menu -> Power & Performance -> CPU - Power Management Control", "Disabled"
-"RC6", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Disabled"
-"GT freq", "Intel Advanced Menu -> Power & Performance -> GT - Power Management", "Lowest"
-"SA GV", "Intel Advanced Menu -> Memory Configuration", "Fixed High"
-"VT-d", "Intel Advanced Menu -> System Agent Configuration", "Enable"
-"Gfx Low Power Mode", "Intel Advanced Menu -> System Agent Configuration -> Graphics Configuration", "Disabled"
-"DMI spine clock gating", "Intel Advanced Menu -> System Agent Configuration -> DMI/OPI Configuration", "Disabled"
-"PCH Cross Throttling", "Intel Advanced Menu -> PCH-IO Configuration", "Disabled"
-"Legacy IO Low Latency", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Enabled"
-"PCI Express Clock Gating", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
-"Delay Enable DMI ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
-"DMI Link ASPM", "Intel Advanced Menu -> PCH-IO Configuration -> PCI Express Configuration", "Disabled"
-"Aggressive LPM Support", "Intel Advanced Menu -> PCH-IO Configuration -> SATA And RST Configuration", "Disabled"
-"USB Periodic SMI", "Intel Advanced Menu -> LEGACY USB Configuration", "Disabled"
-"ACPI S3 Support", "Intel Advanced Menu -> ACPI Settings", "Disabled"
-"Native ASPM", "Intel Advanced Menu -> ACPI Settings", "Disabled"
-.. note:: BIOS settings depend on the platform and BIOS version; some may
-not be applicable.
Recommended Kernel Cmdline for RTVM
-----------------------------------
@@ -523,13 +484,13 @@ this, follow the below steps to allocate all housekeeping tasks to core 0:
#. Prepare the RTVM launch script
Follow the `Passthrough a hard disk to RTVM`_ section to make adjustments to
-the ``/usr/share/acrn/launch_hard_rt_vm.sh`` launch script.
+the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` launch script.
#. Launch the RTVM:
.. code-block:: none
-$ sudo /usr/share/acrn/launch_hard_rt_vm.sh
+$ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
#. Log in to the RTVM as root and run the script as below:
@@ -575,13 +536,13 @@ Run Cyclictest
.. code-block:: none
-# apt install rt-tests
+sudo apt install rt-tests
#. Use the following command to start cyclictest:
.. code-block:: none
-# cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
+sudo cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
Parameter descriptions:
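The parameter descriptions themselves are elided from this diff. Once a run finishes, the histogram log can be summarized; a small sketch (the ``# Max Latencies:`` footer format is assumed from rt-tests output, and the sample line below is synthetic, not a real measurement):

```shell
# Sketch: extract the worst-case latency from a cyclictest --histfile log.
# The printf line fakes cyclictest's "# Max Latencies:" footer (one value
# per measurement thread, in microseconds) so the awk step has input.
printf '# Max Latencies: 00009 00012\n' > test.log
max=$(awk '/^# Max Latencies:/ {m=0; for (i=4; i<=NF; i++) if ($i+0 > m) m=$i+0; print m}' test.log)
echo "worst-case latency: ${max} us"
```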
@@ -599,22 +560,8 @@ Run Cyclictest
Launch the Windows VM
*********************
-#. Follow this :ref:`guide <using_windows_as_uos>` to prepare the Windows
-image file and then reboot with a new ``acrngt.conf``.
-#. Modify the ``launch_uos_id1.sh`` script as follows and then launch
-the Windows VM as one of the post-launched standard VMs:
-.. code-block:: none
-:emphasize-lines: 2
-acrn-dm -A -m $mem_size -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
--s 2,passthru,0/2/0,gpu \
--s 3,virtio-blk,./win10-ltsc.img \
--s 4,virtio-net,tap0 \
---ovmf /usr/share/acrn/bios/OVMF.fd \
---windows \
-$vm_name
+Follow this :ref:`guide <using_windows_as_uos>` to prepare the Windows
+image file and then reboot.
Troubleshooting
***************
@@ -635,7 +582,7 @@ to the ``launch_hard_rt_vm.sh`` script before launching it:
--rtvm \
--virtio_poll 1000000 \
-U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
--s 2,passthru,02/0/0 \
+-s 2,passthru,00/17/0 \
-s 3,virtio-console,@stdio:stdio_port \
-s 8,virtio-net,tap0 \
--ovmf /usr/share/acrn/bios/OVMF.fd \
@@ -652,7 +599,7 @@ Passthrough a Hard Disk to RTVM
.. code-block:: none
# lspci -nn | grep -i sata
-00:17.0 SATA controller [0106]: Intel Corporation Cannon Point-LP SATA Controller [AHCI Mode] [8086:9dd3] (rev 30)
+00:17.0 SATA controller [0106]: Intel Corporation Device [8086:a0d3] (rev 20)
#. Modify the script to use the correct SATA device IDs and bus number:
@@ -661,14 +608,14 @@ Passthrough a Hard Disk to RTVM
# vim /usr/share/acrn/launch_hard_rt_vm.sh
passthru_vpid=(
-["eth"]="8086 156f"
-["sata"]="8086 9dd3"
-["nvme"]="8086 f1a6"
+["eth"]="8086 15f2"
+["sata"]="8086 a0d3"
+["nvme"]="126f 2263"
)
passthru_bdf=(
-["eth"]="0000:00:1f.6"
+["eth"]="0000:58:00.0"
["sata"]="0000:00:17.0"
-["nvme"]="0000:02:00.0"
+["nvme"]="0000:01:00.0"
)
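The vendor/device pairs in ``passthru_vpid`` come straight from the bracketed IDs in the ``lspci -nn`` output shown earlier. A small sketch pulling both values out of such a line (the sample line is hard-coded from the NUC11TNHi5 SATA controller output quoted above):

```shell
# Derive passthru_bdf / passthru_vpid entries from an `lspci -nn` line.
line='00:17.0 SATA controller [0106]: Intel Corporation Device [8086:a0d3] (rev 20)'
bdf=${line%% *}    # first field is the PCI bus:dev.func, e.g. 00:17.0
# The vendor:device pair is the only [hhhh:hhhh] token on the line.
ids=$(printf '%s\n' "$line" | grep -o '\[[0-9a-f]\{4\}:[0-9a-f]\{4\}\]' | tr -d '[]')
echo "passthru_bdf:  0000:${bdf}"
echo "passthru_vpid: ${ids%:*} ${ids#*:}"
```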
# SATA pass-through
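The lines under this comment are elided from the diff. In the sample launch scripts, this step is typically a sysfs unbind/bind sequence that hands the controller to ``pci-stub`` before ``acrn-dm`` starts; a sketch using the values configured above (not the verbatim script contents):

```shell
# Sketch: detach the SATA controller from its host driver and bind it to
# pci-stub so acrn-dm can pass it through to the RTVM. Requires root and
# must match the passthru_vpid / passthru_bdf values set earlier.
echo "8086 a0d3" > /sys/bus/pci/drivers/pci-stub/new_id
echo "0000:00:17.0" > /sys/bus/pci/devices/0000:00:17.0/driver/unbind
echo "0000:00:17.0" > /sys/bus/pci/drivers/pci-stub/bind
```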
@@ -694,9 +641,8 @@ Passthrough a Hard Disk to RTVM
--ovmf /usr/share/acrn/bios/OVMF.fd \
hard_rtvm
-#. Upon deployment completion, launch the RTVM directly onto your WHL
-Intel NUC:
+#. Upon deployment completion, launch the RTVM directly onto your NUC11TNHi5:
.. code-block:: none
-$ sudo /usr/share/acrn/launch_hard_rt_vm.sh
+$ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
