Doc: Minor grammatical edits to GSG for ACRN logical part mode

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Author: Deb Taylor
Date: 2020-04-17 11:02:50 -04:00
Committed by: deb-intel
Parent: 3b21146dfb
Commit: 1815ed4e14

@@ -1,14 +1,14 @@
.. _using_partition_mode_on_nuc:

Getting Started Guide for ACRN logical partition mode
#####################################################

The ACRN hypervisor supports a logical partition scenario in which the User
OS (such as Clear Linux) running in a pre-launched VM can bypass the ACRN
hypervisor and directly access isolated PCI devices. The following
guidelines provide step-by-step instructions on how to set up the ACRN
hypervisor logical partition scenario on an Intel NUC running two
pre-launched VMs. The logical partition scenario on the Intel NUC is shown in
:numref:`logical_partition_scenario_on_nuc`.

.. figure:: images/logical_partition_scenario_on_nuc.png
@@ -33,23 +33,23 @@ Prerequisites
* NVMe disk
* SATA disk
* Storage device with USB interface (such as a USB flash drive
  or a SATA disk connected with a USB 3.0 SATA converter).
* Disable **Intel Hyper Threading Technology** in the BIOS to avoid
  interference from logical cores for the logical partition scenario.
* In the logical partition scenario, two VMs (running Clear Linux)
  are started by the ACRN hypervisor. Each VM has its own root
  filesystem. Set up each VM by following the `Install Clear Linux
  OS on bare metal with live server
  <https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
  instructions and install Clear Linux OS (version: 32680) first on a SATA
  disk and then again on a storage device with a USB interface. The two
  pre-launched VMs will mount the root file systems via the SATA controller
  and the USB controller, respectively.

Update kernel image and modules of pre-launched VM
**************************************************

#. On your development workstation, clone the ACRN kernel source tree and
   build the Linux kernel image that will be used to boot the pre-launched VMs:

   .. code-block:: none
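
      # A sketch of the typical flow; the defconfig name below is an
      # assumption (placeholder), not taken from this guide.
      $ git clone https://github.com/projectacrn/acrn-kernel.git
      $ cd acrn-kernel
      $ cp <partition-mode-kernel-config> .config
      $ make olddefconfig
      $ make
      $ make modules_install INSTALL_MOD_PATH=out/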
@@ -70,24 +70,25 @@ Update kernel image and modules of Pre-launched VM
   ``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/``
   folder. Copy these files to a removable disk for installing on the NUC later.

#. The current ACRN logical partition scenario implementation requires a
   multi-boot capable bootloader to boot both the ACRN hypervisor and the
   bootable kernel image built from the previous step. Install the Ubuntu OS
   on the on-board NVMe SSD by following the `Ubuntu desktop installation
   instructions <https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_.
   The Ubuntu installer creates three disk partitions on the on-board NVMe
   SSD. By default, the GRUB bootloader is installed on the EFI System
   Partition (ESP) that's used to bootstrap the ACRN hypervisor.

#. After installing the Ubuntu OS, power off the NUC. Attach the
   SATA disk and storage device with the USB interface to the NUC. Power on
   the NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug
   the removable disk with the kernel image into the NUC and then copy the
   loadable kernel modules built in Step 1 to the ``/lib/modules/`` folder
   on both the mounted SATA disk and storage device with USB interface. For
   example, assuming the SATA disk and storage device with USB interface are
   assigned to ``/dev/sda`` and ``/dev/sdb``, respectively, the following
   commands set up the partition mode loadable kernel modules onto the root
   file systems to be loaded by the pre-launched VMs:

   .. code-block:: none
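
      # A sketch, assuming the root partitions of the SATA disk and the USB
      # storage device are /dev/sda3 and /dev/sdb3; adjust to your layout.
      $ sudo mount /dev/sda3 /mnt
      $ sudo cp -r <path-to-kernel-modules-built-in-step1>/lib/modules/* /mnt/lib/modules/
      $ sudo umount /mnt
      $ sudo mount /dev/sdb3 /mnt
      $ sudo cp -r <path-to-kernel-modules-built-in-step1>/lib/modules/* /mnt/lib/modules/
      $ sudo umount /mnt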
@@ -106,15 +107,15 @@ Update kernel image and modules of Pre-launched VM
      $ sudo cp <path-to-kernel-image-built-in-step1>/bzImage /boot/

Update ACRN hypervisor image
****************************

#. Before building the ACRN hypervisor, find the I/O address of the serial
   port and the PCI BDF addresses of the SATA controller and the USB
   controllers on the NUC. Enter the following command to get the I/O
   address of the serial port. The NUC supports one serial port, **ttyS0**.
   Connect the serial port to the development workstation in order to access
   the ACRN serial console to switch between pre-launched VMs:

   .. code-block:: none
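
      # A minimal way to locate the serial port (a sketch; the exact command
      # in the original guide may differ):
      $ dmesg | grep ttyS0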
@@ -137,17 +138,16 @@ Update ACRN hypervisor Image
      Subsystem: Intel Corporation Ethernet Connection I219-LM

   .. note::
      Verify the PCI device BDFs defined in the
      ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h`` file
      against the information reported by the ``lspci -vv`` command.

#. Clone the ACRN source code and configure the build options.

   Refer to :ref:`getting-started-building` to set up the ACRN build
   environment on your development workstation.

   Clone the ACRN source code and check out the v1.6 tag:

   .. code-block:: none
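
      # The upstream ACRN hypervisor repository; the clone step is shown here
      # as a sketch for completeness.
      $ git clone https://github.com/projectacrn/acrn-hypervisor.git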
@@ -155,21 +155,20 @@ Update ACRN hypervisor Image
      $ cd acrn-hypervisor
      $ git checkout v1.6

   Build the ACRN hypervisor with the default XMLs:

   .. code-block:: none

      $ make hypervisor BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/logical_partition.xml RELEASE=0

   .. note::
      The ``acrn.32.out`` file will be generated at ``./build/hypervisor/acrn.32.out``.

#. Check the Ubuntu boot loader name.

   In the current design, the logical partition depends on the GRUB boot
   loader; otherwise, the hypervisor will fail to boot. Verify that the
   default boot loader is GRUB:

   .. code-block:: none
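
      # One way to verify the installed boot loader (a sketch; not necessarily
      # the exact command from the original guide):
      $ grub-install --version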
@@ -177,18 +176,19 @@ Update ACRN hypervisor Image
   The above command output should contain the ``GRUB`` keyword.

#. Check or update the BDF information of the PCI devices for each
   pre-launched VM; check it in the
   ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h`` file.

#. Copy the artifact ``acrn.32.out`` to the ``/boot`` directory:

   #. Copy ``acrn.32.out`` to a removable disk.
   #. Plug the removable disk into the NUC's USB port.
   #. Copy ``acrn.32.out`` from the removable disk to the ``/boot``
      directory (see the sketch below).
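
   For example, assuming the removable disk shows up as ``/dev/sdc1`` on the
   NUC and is mounted at ``/media/usb`` (both the device name and the mount
   point are assumptions), the copy might look like:

   .. code-block:: none

      $ sudo mkdir -p /media/usb
      $ sudo mount /dev/sdc1 /media/usb
      $ sudo cp /media/usb/acrn.32.out /boot/
      $ sudo umount /media/usb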

Update Ubuntu GRUB to boot hypervisor and load kernel image
***********************************************************

#. Append the following configuration to the ``/etc/grub.d/40_custom`` file:
@@ -209,17 +209,14 @@ Update Ubuntu GRUB to Boot hypervisor and Load Kernel Image
      }

   .. note::
      The kernel command-line arguments used to boot the pre-launched VMs are
      located in the ``hypervisor/scenarios/logical_partition/vm_configurations.h``
      header file and are configured by the ``VMx_CONFIG_OS_BOOTARG_*`` macros
      (where x is the VM ID number and ``*`` are arguments). The multiboot
      module param ``XXXXXX`` is the bzImage tag and must exactly match the
      ``kernel_mod_tag`` configured in the
      ``hypervisor/scenarios/logical_partition/vm_configurations.c`` file.

#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
   visible when booting:

   .. code-block:: none
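
      # A typical sketch (the exact values are assumptions; keep your existing
      # entries and adjust only what is needed to show the menu):
      GRUB_DEFAULT=0
      #GRUB_TIMEOUT_STYLE=hidden
      GRUB_TIMEOUT=5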
@@ -230,31 +227,33 @@ Update Ubuntu GRUB to Boot hypervisor and Load Kernel Image
      GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
      GRUB_CMDLINE_LINUX=""

#. Update GRUB:

   .. code-block:: none

      $ sudo update-grub

#. Reboot the NUC. Select the **ACRN hypervisor Logical Partition
   Scenario** entry to boot the logical partition of the ACRN hypervisor on
   the NUC's display. The GRUB loader will boot the hypervisor, and the
   hypervisor will automatically start the two pre-launched VMs.

Logical partition scenario startup checking
*******************************************

#. Use these steps to verify that the hypervisor is properly running:

   #. Log in to the ACRN hypervisor shell from the serial console.
   #. Use the ``vm_list`` command to check the pre-launched VMs.

#. Use these steps to verify that the two pre-launched VMs are running
   properly:

   #. Use the ``vm_console 0`` command to switch to VM0's console.
   #. VM0's Clear Linux OS should boot up and allow you to log in.
   #. Press ``Ctrl-Spacebar`` to return to the ACRN hypervisor shell.
   #. Use the ``vm_console 1`` command to switch to VM1's console.
   #. VM1's Clear Linux OS should boot up and allow you to log in.

Refer to the :ref:`ACRN hypervisor shell user guide <acrnshell>`
for more information about available commands.