Doc: Modify Getting Started and Advanced Guides sections (navigation)

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
This commit is contained in:
Deb Taylor 2020-04-24 08:40:35 -04:00 committed by deb-intel
parent 3078595da5
commit b514eba9d6
7 changed files with 12 additions and 728 deletions

View File

@ -4,8 +4,8 @@ Advanced Guides
###############
Tools
*****
Configuration and Tools
***********************
.. toctree::
:glob:
@ -13,6 +13,7 @@ Tools
tutorials/acrn_configuration_tool
reference/kconfig/index
user-guides/kernel-parameters
user-guides/acrn-shell
user-guides/acrn-dm-parameters
misc/tools/acrn-crashlog/README
@ -34,7 +35,6 @@ User VM Tutorials
.. toctree::
:maxdepth: 1
tutorials/using_agl_as_uos
tutorials/agl-vms
tutorials/using_celadon_as_uos
tutorials/building_uos_from_clearlinux
@ -60,6 +60,7 @@ Enable ACRN Features
tutorials/rdt_configuration
tutorials/using_sbl_on_up2
tutorials/trustyACRN
tutorials/run_kata_containers
tutorials/waag-secure-boot
tutorials/enable_s5
tutorials/cpu_sharing
@ -85,13 +86,9 @@ Additional Tutorials
tutorials/increase-uos-disk-size
tutorials/sign_clear_linux_image
tutorials/static-ip
tutorials/using_partition_mode_on_nuc
tutorials/using_partition_mode_on_up2
tutorials/using_sdc2_mode_on_nuc
tutorials/using_hybrid_mode_on_nuc
tutorials/kbl-nuc-sdc
tutorials/enable_laag_secure_boot
tutorials/building_acrn_in_docker
tutorials/acrn_ootb
tutorials/run_kata_containers
user-guides/kernel-parameters

View File

@ -52,8 +52,8 @@ defined **Usage Scenarios** in this release, including:
* :ref:`Introduction to Project ACRN <introduction>`
* :ref:`Build ACRN from Source <getting-started-building>`
* :ref:`Supported Hardware <hardware>`
* :ref:`Using Hybrid mode on NUC <using_hybrid_mode_on_nuc>`
* :ref:`Launch Two User VMs on NUC using SDC2 Scenario <using_sdc2_mode_on_nuc>`
* Using Hybrid mode on NUC (removed in v1.7)
* Launch Two User VMs on NUC using SDC2 Scenario (removed in v1.7)
New Features Details
********************

View File

@ -22,3 +22,5 @@ Follow these getting started guides to give ACRN a try:
reference/hardware
getting-started/building-from-source
getting-started/rt_industry
tutorials/using_hybrid_mode_on_nuc
tutorials/using_partition_mode_on_nuc

View File

@ -1,127 +0,0 @@
.. _using_agl_as_uos:
Using AGL as the User VM
########################
This tutorial describes the steps to run Automotive Grade Linux (AGL)
as the User VM on the ACRN hypervisor, and documents the issues we still have.
We hope the steps documented in this article will help others reproduce the
issues we're seeing and provide information for further debugging.
We're using an Apollo Lake-based NUC model `NUC6CAYH
<https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc6cayh.html>`_,
though other platforms may be used as well.
.. image:: images/The-overview-of-AGL-as-UOS.png
:align: center
Introduction to AGL
*******************
Automotive Grade Linux is a collaborative open source project that is
bringing together automakers, suppliers, and technology companies to
accelerate the development and adoption of a fully open software stack
for the connected car. With Linux at its core, AGL is developing an open
platform from the ground up that can serve as the de facto industry
standard to enable rapid development of new features and technologies.
For more information about AGL, please visit `AGL's official website
<https://www.automotivelinux.org/>`_.
Steps for using AGL as the User VM
**********************************
#. Follow the instructions in :ref:`kbl-nuc-sdc` to
boot the ACRN Service OS.
#. In the Service VM, download an AGL release from https://download.automotivelinux.org/AGL/release/eel/.
We're using release ``eel_5.1.0`` for our example:
.. code-block:: none
$ cd ~
$ wget https://download.automotivelinux.org/AGL/release/eel/5.1.0/intel-corei7-64/deploy/images/intel-corei7-64/agl-demo-platform-crosssdk-intel-corei7-64.wic.xz
$ unxz agl-demo-platform-crosssdk-intel-corei7-64.wic.xz
#. Deploy the User VM kernel modules to the User VM virtual disk image
.. code-block:: none
$ sudo losetup -f -P --show ~/agl-demo-platform-crosssdk-intel-corei7-64.wic
$ sudo mount /dev/loop0p2 /mnt
$ sudo cp -r /usr/lib/modules/4.19.0-27.iot-lts2018 /mnt/lib/modules/
$ sudo umount /mnt
$ sync
.. note::
If you follow the instructions in :ref:`kbl-nuc-sdc`,
the ``linux-iot-lts2018`` kernel and modules are installed
by default after adding the ``kernel-iot-lts2018`` bundle.
The module version used here is ``4.19.0-27.iot-lts2018``.
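To confirm the module version installed on your Service VM before copying, you can list the modules directory (using the path from the note above):
.. code-block:: none
$ ls /usr/lib/modules/
4.19.0-27.iot-lts2018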
#. Adjust the ``/usr/share/acrn/samples/nuc/launch_uos.sh`` script to match your installation.
These are the lines you need to modify:
.. code-block:: none
-s 3,virtio-blk,/root/agl-demo-platform-crosssdk-intel-corei7-64.wic \
-k /usr/lib/kernel/default-iot-lts2018 \
-B "root=/dev/vda2 ...
.. note::
If you downloaded a different AGL image or stored the image in another directory,
modify the AGL file name or directory (the ``-s 3,virtio-blk`` argument)
to match what you downloaded above.
Likewise, you may need to adjust the kernel file name to ``default-iot-lts2018``.
#. Start the User VM
.. code-block:: none
$ sudo /usr/share/acrn/samples/nuc/launch_uos.sh
**Congratulations**, you are now watching the User VM boot up,
and you should see the AGL console:
.. image:: images/The-console-of-AGL.png
:align: center
When you see this output on the console, AGL has loaded successfully,
and you can now operate in the console.
Enable the AGL display
*************************
After following these setup steps, you will get a black screen in AGL.
We provide a workaround for this black screen in the steps below.
Through debugging, we identified the problem as an issue with the (not well supported) ``ivi-shell.so`` library.
As a workaround, we can bring up the screen with the weston GUI, as shown below.
.. image:: images/The-GUI-of-weston.png
:align: center
To enable weston in AGL, we need to modify weston's ``weston.ini`` configuration file.
.. code-block:: none
$ vim /etc/xdg/weston/weston.ini
Make these changes to ``weston.ini`` (an example follows the list):
#. Comment out ``ivi-shell.so``
#. Verify that the output name is ``HDMI-A-2``
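For reference, here is a minimal sketch of the relevant ``weston.ini`` sections after these edits. The exact section contents vary by AGL release, so treat this as illustrative:
.. code-block:: none
[core]
# shell=ivi-shell.so
[output]
name=HDMI-A-2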
After that, set up an environment variable and restart the weston service:
.. code-block:: none
$ export XDG_RUNTIME_DIR=/run/platform/display
$ systemctl restart weston
You should now see the weston GUI in AGL.
Follow up
*********
The ACRN hypervisor is expanding support for more operating systems,
and AGL is an example of this effort. We continue to debug the ``ivi-shell.so`` issue
and to investigate why the AGL GUI is not launching as expected.

View File

@ -1,7 +1,7 @@
.. _using_hybrid_mode_on_nuc:
Using Hybrid Mode on the NUC
############################
Getting Started Guide for ACRN hybrid mode
##########################################
The ACRN hypervisor supports a hybrid scenario in which a User VM (such as Zephyr
or Clear Linux) runs in a pre-launched VM or in a post-launched VM that is
launched by the Device Model in the Service VM. The following guidelines

View File

@ -1,422 +0,0 @@
.. _partition_mode:
Using partition mode on UP2
###########################
ACRN hypervisor supports partition mode, in which the User OS running in a
privileged VM can bypass the ACRN hypervisor and directly access isolated
PCI devices. This tutorial provides step-by-step instructions on how to set up
the ACRN hypervisor in partition mode on
`UP2 <https://up-board.org/upsquared/specifications/>`_ boards, running two
privileged VMs as shown in :numref:`two-priv-vms`:
.. figure:: images/partition_mode_up2.png
:align: center
:name: two-priv-vms
Two privileged VMs running in partition mode
Prerequisites
*************
In this tutorial, two Linux privileged VMs are started by the ACRN hypervisor.
To set up the Linux root filesystems for each VM, follow the Clear Linux OS
`bare metal installation guide
<https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html>`_
to install Clear Linux OS on a **SATA disk** and a **USB flash disk** prior to setup,
as the two privileged VMs will mount their root filesystems via the SATA controller
and the USB controller, respectively.
This tutorial was verified with the tagged ACRN v0.6 release.
Build kernel and modules for partition mode User VM
***************************************************
#. On your development workstation, clone the ACRN kernel source tree, and
build the Linux kernel image that will be used to boot the privileged VMs:
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-kernel.git
Cloning into 'acrn-kernel'...
...
$ cd acrn-kernel
$ cp kernel_config_uos .config
$ make olddefconfig
scripts/kconfig/conf --olddefconfig Kconfig
#
# configuration written to .config
#
$ make
$ make modules_install INSTALL_MOD_PATH=out/
The last two commands build the bootable kernel image ``arch/x86/boot/bzImage``
and the loadable kernel modules under the ``./out/`` folder. Copy these files
to a removable disk for installing on the UP2 board later.
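For example, a minimal sketch of that copy, assuming the removable disk is mounted at ``/media/usb`` (a hypothetical mount point; adjust for your system):
.. code-block:: none
$ cp arch/x86/boot/bzImage /media/usb/
$ cp -r out/lib/modules /media/usb/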
#. The current ACRN partition mode implementation requires a multiboot-capable
bootloader to boot both the ACRN hypervisor and the bootable kernel image
built in the previous step. You can install Ubuntu on the UP2 board
by following `this Ubuntu tutorial
<https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_.
The Ubuntu installer creates 3 disk partitions on the on-board eMMC memory.
By default, the GRUB bootloader is installed on the ESP (EFI System Partition),
which will be used to bootstrap the partition mode ACRN hypervisor.
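One way to double-check the resulting partition layout (assuming the on-board eMMC enumerates as ``/dev/mmcblk0``, which may differ on your system) is:
.. code-block:: none
$ lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT /dev/mmcblk0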
#. After installing the Ubuntu OS, power off the UP2 board, then attach the SATA disk
and the USB flash disk to the board. Power on the board and make sure
it boots Ubuntu from the eMMC, then copy the loadable kernel modules
built in Step 1 to the ``/lib/modules/`` folder on both the mounted SATA
disk and the USB disk. For example, assuming the SATA disk and USB flash disk
are assigned to ``/dev/sda`` and ``/dev/sdb`` respectively, the following
commands set up the partition mode loadable kernel modules on the root
filesystems to be loaded by the privileged VMs:
.. code-block:: none
# Mount the Clear Linux OS root filesystem on the SATA disk
$ sudo mount /dev/sda3 /mnt
$ sudo cp -r <kernel-modules-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
# Mount the Clear Linux OS root filesystem on the USB flash disk
$ sudo mount /dev/sdb3 /mnt
$ sudo cp -r <path-to-kernel-module-folder-built-in-step1>/lib/modules/* /mnt/lib/modules
$ sudo umount /mnt
#. Copy the bootable kernel image to the ``/boot`` directory:
.. code-block:: none
$ sudo cp <path-to-kernel-image-built-in-step1>/bzImage /boot/
Enable partition mode in ACRN hypervisor
****************************************
#. Before building the ACRN hypervisor, you need to figure out the BDF
address of the serial port, and the PCI BDF addresses of the SATA controller and
the USB controller on your UP2 board.
Enter the following command to list the serial ports.
UP2 boards support two serial ports. The two devices in the command
output correspond to the serial port on the 10-pin side
connector and the one on the 40-pin expansion header, respectively. You will need to
connect the serial port to the development host in order to access
the ACRN serial console and switch between privileged VMs.
.. code-block:: none
:emphasize-lines: 1
$ sudo lspci | grep UART
00:18.0 . Series HSUART Controller #1 (rev 0b)
00:18.1 . Series HSUART Controller #2 (rev 0b)
The second device, with BDF ``00:18.1``, is the one on the 40-pin expansion header.
The following command prints detailed information about all PCI buses
and devices in the system.
.. code-block:: none
:emphasize-lines: 1,3,16
$ sudo lspci -vv
...
00:12.0 SATA controller: Intel Corporation Device 5ae3 (rev 0b) (prog-if 01 [AHCI 1.0])
Subsystem: Intel Corporation Device 7270
Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz+ UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 123
Region 0: Memory at 91514000 (32-bit, non-prefetchable) [size=8K]
Region 1: Memory at 91537000 (32-bit, non-prefetchable) [size=256]
Region 2: I/O ports at f090 [size=8]
Region 3: I/O ports at f080 [size=4]
Region 4: I/O ports at f060 [size=32]
Region 5: Memory at 91536000 (32-bit, non-prefetchable) [size=2K]
...
00:15.0 USB controller: Intel Corporation Device 5aa8 (rev 0b) (prog-if 30 [XHCI])
Subsystem: Intel Corporation Device 7270
Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=medium >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
Latency: 0
Interrupt: pin A routed to IRQ 122
Region 0: Memory at 91500000 (64-bit, non-prefetchable) [size=64K]
#. Clone the ACRN source code and configure the build options with the
``make menuconfig`` command:
.. code-block:: none
$ git clone https://github.com/projectacrn/acrn-hypervisor.git
$ cd acrn-hypervisor
$ git checkout v0.6
$ cd hypervisor
$ make menuconfig
Set the ``Hypervisor mode`` option to ``Partition mode`` and, depending
on the serial port you are using, enter its BDF in the configuration
menu as shown in this screenshot. Finally, save the configuration.
.. figure:: images/menuconfig-partition-mode.png
:align: center
.. note::
Refer to :ref:`getting-started-building` for more information on how
to install all the ACRN build dependencies.
#. Prepare VM configurations for UP2 partition mode
The board-specific VM configurations live under the folder
``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/``.
For the UP2 board, we can simply copy the apl-mrb configurations to the up2 folder:
.. code-block:: none
$ cp hypervisor/arch/x86/configs/apl-mrb/* hypervisor/arch/x86/configs/up2/
#. Configure the partition mode configuration arguments
The partition mode configuration information is located in the header file
``hypervisor/arch/x86/configs/up2/partition_config.h`` and is set through
``VMx_CONFIG_XXXX`` macros (where ``x`` is the VM ID number and ``XXXX`` is the argument name).
The items end users configure most often are:
* ``VMx_CONFIG_NAME``: the VMx name string; must be less than 32 bytes.
* ``VMx_CONFIG_PCPU_BITMAP``: assigns physical CPUs to VMx via the ``PLUG_CPU(cpu_id)`` macro.
Below is an example of the partition mode configuration for UP2:
.. code-block:: none
:caption: hypervisor/arch/x86/configs/up2/partition_config.h
...
#define VM0_CONFIGURED
#define VM0_CONFIG_NAME "PRE-LAUNCHED VM1 for UP2"
#define VM0_CONFIG_TYPE PRE_LAUNCHED_VM
#define VM0_CONFIG_PCPU_BITMAP (PLUG_CPU(0) | PLUG_CPU(2))
#define VM0_CONFIG_FLAGS IO_COMPLETION_POLLING
#define VM0_CONFIG_MEM_START_HPA 0x100000000UL
#define VM0_CONFIG_MEM_SIZE 0x20000000UL
#define VM0_CONFIG_OS_NAME "ClearLinux 26600"
#define VM0_CONFIG_OS_BOOTARGS "root=/dev/sda3 rw rootwait noxsave maxcpus=2 nohpet \
console=ttyS2 no_timer_check ignore_loglevel log_buf_len=16M \
consoleblank=0 tsc=reliable"
#define VM1_CONFIGURED
#define VM1_CONFIG_NAME "PRE-LAUNCHED VM2 for UP2"
#define VM1_CONFIG_TYPE PRE_LAUNCHED_VM
#define VM1_CONFIG_PCPU_BITMAP (PLUG_CPU(1) | PLUG_CPU(3))
#define VM1_CONFIG_FLAGS IO_COMPLETION_POLLING
#define VM1_CONFIG_MEM_START_HPA 0x120000000UL
#define VM1_CONFIG_MEM_SIZE 0x20000000UL
#define VM1_CONFIG_OS_NAME "ClearLinux 26600"
#define VM1_CONFIG_OS_BOOTARGS "root=/dev/sda3 rw rootwait noxsave maxcpus=2 nohpet \
console=ttyS2 no_timer_check ignore_loglevel log_buf_len=16M \
consoleblank=0 tsc=reliable"
#define VM0_CONFIG_PCI_PTDEV_NUM 2U
#define VM1_CONFIG_PCI_PTDEV_NUM 3U
#. Configure the PCI device info for each VM
The PCI devices that are available to the privileged VMs
are hardcoded in the source file ``hypervisor/arch/x86/configs/up2/pt_dev.c``.
Review and modify the ``vm0_pci_devs`` and ``vm1_pci_devs``
structures in the source code to match the PCI BDF addresses of the SATA
controller and the USB controller noted in Step 1:
.. code-block:: none
:emphasize-lines: 5,9,17,21,25
:caption: hypervisor/arch/x86/configs/up2/pt_dev.c
...
struct acrn_vm_pci_dev_config vm0_pci_devs[2] = {
{
.vbdf.bits = {.b = 0x00U, .d = 0x00U, .f = 0x00U},
.pbdf.bits = {.b = 0x00U, .d = 0x00U, .f = 0x00U},
},
{
.vbdf.bits = {.b = 0x00U, .d = 0x01U, .f = 0x00U},
.pbdf.bits = {.b = 0x00U, .d = 0x12U, .f = 0x00U},
},
};
...
struct acrn_vm_pci_dev_config vm1_pci_devs[3] = {
{
.vbdf.bits = {.b = 0x00U, .d = 0x00U, .f = 0x00U},
.pbdf.bits = {.b = 0x00U, .d = 0x00U, .f = 0x00U},
},
{
.vbdf.bits = {.b = 0x00U, .d = 0x01U, .f = 0x00U},
.pbdf.bits = {.b = 0x00U, .d = 0x15U, .f = 0x00U},
},
{
.vbdf.bits = {.b = 0x00U, .d = 0x02U, .f = 0x00U},
.pbdf.bits = {.b = 0x02U, .d = 0x00U, .f = 0x00U},
},
};
...
.. note::
The first BDF (0:0.0) is for the host bridge;
``vbdf.bits`` in each VM can be any valid BDF as long as there is no conflict.
#. Optionally, configure the ``VMx_CONFIG_OS_BOOTARGS`` kernel command-line arguments
The kernel command-line arguments used to boot the privileged VMs are
hardcoded as ``root=/dev/sda3`` to match the Clear Linux OS automatic installation.
If you plan to use a customized root
filesystem, edit the ``root=`` parameter specified
in the ``VMx_CONFIG_OS_BOOTARGS`` macro to instruct the Linux kernel to
mount the correct disk partition:
.. code-block:: none
:emphasize-lines: 12-14
:caption: hypervisor/arch/x86/configs/up2/partition_config.h
...
#define VM0_CONFIGURED
#define VM0_CONFIG_NAME "PRE-LAUNCHED VM1 for UP2"
#define VM0_CONFIG_TYPE PRE_LAUNCHED_VM
#define VM0_CONFIG_PCPU_BITMAP (PLUG_CPU(0) | PLUG_CPU(2))
#define VM0_CONFIG_FLAGS IO_COMPLETION_POLLING
#define VM0_CONFIG_MEM_START_HPA 0x100000000UL
#define VM0_CONFIG_MEM_SIZE 0x20000000UL
#define VM0_CONFIG_OS_NAME "ClearLinux 26600"
#define VM0_CONFIG_OS_BOOTARGS "root=/dev/sda3 rw rootwait noxsave maxcpus=2 nohpet \
console=ttyS2 no_timer_check ignore_loglevel log_buf_len=16M \
consoleblank=0 tsc=reliable"
.. note::
The root device for VM1 is also ``/dev/sda3`` since the USB
controller is the only storage controller visible in that VM, so its
disk enumerates as ``/dev/sda``.
#. Build the ACRN hypervisor and copy the artifact ``acrn.32.out`` to the
``/boot`` directory:
.. code-block:: none
$ make BOARD=apl-up2
...
$ sudo cp build/acrn.32.out /boot
#. Modify the ``/etc/grub.d/40_custom`` file to create a new GRUB entry
that will multiboot the ACRN hypervisor and the User VM kernel image.
Append the following configuration to the ``/etc/grub.d/40_custom`` file:
.. code-block:: none
menuentry 'ACRN Partition Mode' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
recordfail
load_video
gfxmode $linux_gfx_mode
insmod gzio
insmod part_gpt
insmod ext2
echo 'Loading partition mode hypervisor ...'
multiboot /boot/acrn.32.out
module /boot/bzImage XXXXXX
}
.. note::
The multiboot module parameter ``XXXXXX`` is the bzImage tag and must
exactly match the ``kernel_mod_tag`` configured in the file
``hypervisor/scenarios/logical_partition/vm_configurations.c``.
Modify the ``/etc/default/grub`` file as follows to make the GRUB menu visible
when booting:
.. code-block:: none
#GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=false
Regenerate the GRUB configuration file and reboot the UP2 board. Select
the ``ACRN Partition Mode`` entry to boot the partition mode ACRN
hypervisor; the hypervisor will then start the privileged VMs automatically.
.. code-block:: none
$ sudo update-grub
.. code-block:: console
:emphasize-lines: 4
Ubuntu
Advanced options for Ubuntu
System setup
*ACRN Partition Mode
Switch between privileged VMs
*****************************
Connect the serial port on the UP2 board to the development workstation.
If you set the serial port BDF correctly while building the ACRN hypervisor,
you should see output from the ACRN serial console as shown below.
You can then log in to the privileged VMs with the ``vm_console`` command,
and press :kbd:`CTRL+Space` to return to the ACRN serial console.
.. code-block:: console
:emphasize-lines: 14,31
ACRN Hypervisor
calibrate_tsc, tsc_khz=1094400
[21017289us][cpu=0][sev=2][seq=1]:HV version 0.6-unstable-2019-02-02 22:30:31-d0c2a88-dirty DBG (daily tag:acrn-2019w05.4-140000p) build by clear, start time 20997424us
[21034127us][cpu=0][sev=2][seq=2]:API version 1.0
[21039218us][cpu=0][sev=2][seq=3]:Detect processor: Intel(R) Pentium(R) CPU N4200 @ 1.10GHz
[21048422us][cpu=0][sev=2][seq=4]:hardware support HV
[21053897us][cpu=0][sev=1][seq=5]:SECURITY WARNING!!!!!!
[21059672us][cpu=0][sev=1][seq=6]:Please apply the latest CPU uCode patch!
[21074487us][cpu=0][sev=2][seq=28]:Start VM id: 1 name: PRE-LAUNCHED VM2 for UP2
[21074488us][cpu=3][sev=2][seq=29]:Start VM id: 0 name: PRE-LAUNCHED VM1 for UP2
[21885195us][cpu=0][sev=3][seq=34]:vlapic: Start Secondary VCPU1 for VM[1]...
[21889889us][cpu=3][sev=3][seq=35]:vlapic: Start Secondary VCPU1 for VM[2]...
ACRN:\>
ACRN:\>vm_console 0
----- Entering Guest 1 Shell -----
[ 1.997439] systemd[1]: Listening on Network Service Netlink Socket.
[ OK ] Listening on Network Service Netlink Socket.
[ 1.999347] systemd[1]: Created slice system-serial\x2dgetty.slice.
[ OK ] Created slice system-serial\x2dgetty.slice.
[ OK ] Listening on Journal Socket (/dev/log).
...
clr-932c8a3012ec4dc6af53790b7afbf6ba login: root
Password:
root@clr-932c8a3012ec4dc6af53790b7afbf6ba ~ # lspci
00:00.0 Host bridge: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series Host Bridge (rev 0b)
00:01.0 SATA controller: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series SATA AHCI Controller (rev 0b)
root@clr-932c8a3012ec4dc6af53790b7afbf6ba ~ #
---Entering ACRN SHELL---
ACRN:\>vm_console 1
----- Entering Guest 2 Shell -----
[ 1.490122] usb 1-4: new full-speed USB device number 2 using xhci_hcd
[ 1.621311] usb 1-4: not running at top speed; connect to a high speed hub
[ 1.627824] usb 1-4: New USB device found, idVendor=058f, idProduct=6387, bcdDevice= 1.01
[ 1.628438] usb 1-4: New USB device strings: Mfr=1, Product=2, SerialNumber=3
...
clr-2e8082cd4fc24d57a3c2d3db43368d36 login: root
Password:
root@clr-2e8082cd4fc24d57a3c2d3db43368d36 ~ # lspci
00:00.0 Host bridge: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series Host Bridge (rev 0b)
00:01.0 USB controller: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series USB xHCI (rev 0b)
00:02.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
root@clr-2e8082cd4fc24d57a3c2d3db43368d36 ~ #

View File

@ -1,166 +0,0 @@
.. _using_sdc2_mode_on_nuc:
Launch Two User VMs on NUC using SDC2 Scenario
##############################################
Starting with the ACRN v1.2 release, the ACRN hypervisor supports a new
Software Defined Cockpit scenario, SDC2, in which up to three User VMs,
running potentially different OSes, can be launched from the Service VM.
This tutorial provides step-by-step instructions for enabling the SDC2
scenario on an Intel NUC and activating two post-launched User VMs: one
running Clear Linux, the other Ubuntu. The same
process can be applied to launch a third Linux VM as well.
ACRN Service VM Setup
*********************
Follow the steps in :ref:`kbl-nuc-sdc` to set up ACRN on an
Intel NUC. The target device must be capable of launching a Clear Linux
User VM as a starting point.
Re-build ACRN UEFI Executable
*****************************
The ACRN prebuilt UEFI executable ``acrn.efi`` is compiled for the
default ``SDC scenario``, which supports a single post-launched VM. To activate additional
post-launched VMs, you need to enable the ``SDC2 scenario`` and rebuild
the UEFI executable using the following steps:
#. Refer to :ref:`getting-started-building` to set up the development environment
for re-compiling the UEFI executable from the ACRN source tree.
#. Enter the ``hypervisor`` directory under the ACRN source tree and use
menuconfig to reconfigure the ACRN hypervisor for the SDC2 scenario. The
following example starts with the configuration for the
``kbl-nuc-i7`` board as a template. You can specify another board type
that is closer to your target system.
.. code-block:: bash
$ cd hypervisor/
$ make defconfig BOARD=kbl-nuc-i7
$ make menuconfig
.. figure:: images/sdc2-defconfig.png
:align: center
:width: 600px
:name: Reconfigure the ACRN hypervisor
#. Select ``Software Defined Cockpit 2`` option for the **ACRN Scenario** configuration:
.. figure:: images/sdc2-selected.png
:align: center
:width: 600px
:name: Select the SDC2 scenario option
#. Press :kbd:`D` to save the minimum configurations to a default file ``defconfig``,
then press :kbd:`Q` to quit the menuconfig script.
.. figure:: images/sdc2-save-mini-config.png
:align: center
:width: 600px
:name: Save the customized configurations
#. Create a new BOARD configuration (say ``mydevice``) with the SDC2
scenario you just enabled. Replace the ``kbl-nuc-i7`` symlink
target below with the board type you specified in the previous step (if
different):
.. code-block:: bash
$ cp defconfig arch/x86/configs/mydevice.config
$ ln -s kbl-nuc-i7 arch/x86/configs/mydevice
#. Go to the root of ACRN source tree to build the ACRN UEFI executable
with the customized configurations:
.. code-block:: bash
$ cd ..
$ make FIRMWARE=uefi BOARD=mydevice
#. Copy the generated ``acrn.efi`` executable to the ESP partition.
(You may need to mount the ESP partition if it's not mounted.)
.. code-block:: bash
$ sudo mount /dev/sda1 /boot
$ sudo cp build/hypervisor/acrn.efi /boot/EFI/acrn/acrn.efi
#. Reboot the ACRN hypervisor and the Service VM.
Launch User VMs with predefined UUIDs
*************************************
In the SDC2 scenario, each User VM launched by the ACRN Device Model ``acrn-dm``
must use one of the following UUIDs:
* ``d2795438-25d6-11e8-864e-cb7a18b34643``
* ``495ae2e5-2603-4d64-af76-d4bc5a8ec0e5``
* ``38158821-5208-4005-b72a-8a609e4190d0``
As shown below, add the ``-U`` parameter to the ``acrn-dm`` command in the
``launch_uos.sh`` script to assign one of these UUIDs to the VM. For example, the
following code snippet is used to launch VM1:
.. code-block:: none
:emphasize-lines: 9
acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
-s 2,pci-gvt -G "$3" \
-s 5,virtio-console,@pty:pty_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,clear-27550-kvm.img \
-s 4,virtio-net,tap0 \
$logger_setting \
--mac_seed $mac_seed \
-U d2795438-25d6-11e8-864e-cb7a18b34643 \
-k /usr/lib/kernel/default-iot-lts2018 \
-B "root=/dev/vda3 rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
Likewise, the following code snippet specifies a different UUID and a
different network tap device, ``tap1``, to launch VM2 and connect it to
the network:
.. code-block:: none
:emphasize-lines: 2,6,10
acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
-s 2,pci-gvt -G "$3" \
-s 5,virtio-console,@pty:pty_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,ubuntu-16.04.img \
-s 4,virtio-net,tap1 \
-s 7,virtio-rnd \
$logger_setting \
--mac_seed $mac_seed \
-U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
-k /usr/lib/kernel/default-iot-lts2018 \
-B "root=/dev/vda rw rootwait maxcpus=$2 nohpet console=tty0 console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=$4 \
i915.enable_hangcheck=0 i915.nuclear_pageflip=1 i915.enable_guc_loading=0 \
i915.enable_guc_submission=0 i915.enable_guc=0" $vm_name
.. note::
The i915 GPU supports three hardware pipes to drive displays;
however, only certain products are designed with the circuitry needed to
connect three external displays. On a system supporting two external
displays, because the primary display is assigned to the Service VM at
boot time, you may remove the ``-s 2,pci-gvt -G "$3"`` options from one of
the previous VM-launch example scripts to completely disable the
GVT-g feature for that VM (see the sketch after this note). Refer to
:ref:`APL_GVT-g-hld` for detailed information.
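For illustration, here is a sketch of the start of the same VM2 invocation with GVT-g disabled; it simply drops the ``-s 2,pci-gvt -G "$3"`` options from the snippet above (remaining lines elided, unchanged):
.. code-block:: none
acrn-dm -A -m $mem_size -c $2 -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \
-s 5,virtio-console,@pty:pty_port \
-s 6,virtio-hyper_dmabuf \
-s 3,virtio-blk,ubuntu-16.04.img \
-s 4,virtio-net,tap1 \
...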
Here's a screenshot of the resulting launch of the Clear Linux and Ubuntu
User VMs, with a Clear Linux Service VM:
.. figure:: images/sdc2-launch-2-laag.png
:align: center
:name: Launching two User VMs, running Clear Linux and Ubuntu