mirror of https://github.com/projectacrn/acrn-hypervisor.git

doc: update release branch with new docs

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Committed by David Kinder
parent d57a213242, commit 0859836756

doc/tutorials/acrn-secure-boot-with-grub.rst (new file, 259 lines)
@@ -0,0 +1,259 @@
.. _how-to-enable-acrn-secure-boot-with-grub:

Enable ACRN Secure Boot with GRUB
#################################

This document shows how to enable ACRN secure boot with GRUB, including:

- ACRN Secure Boot Sequence
- Generate GPG Key
- Setup Standalone GRUB EFI Binary
- Enable UEFI Secure Boot

**Validation Environment:**

- Hardware Platform: TGL-I7, supported hardware described in
  :ref:`hardware`.
- ACRN Scenario: Industry
- Service VM: Yocto & Ubuntu
- GRUB: 2.04

.. note::
   GRUB may stop booting if it runs into problems; make sure you
   know how to recover a bootloader on your platform.

ACRN Secure Boot Sequence
*************************

ACRN can be booted by a Multiboot-compatible bootloader. The following
diagram illustrates the boot sequence of ACRN with GRUB:

.. image:: images/acrn_secureboot_flow.png
   :align: center
   :width: 800px

For details on enabling GRUB on ACRN, see :ref:`using_grub`.

From a secure boot point of view:

- UEFI firmware verifies shim/GRUB
- GRUB verifies ACRN, the Service VM kernel, and any pre-launched User VM kernel
- The Service VM OS kernel verifies the Device Model (``acrn-dm``) and the User
  VM OVMF bootloader (with the help of ``acrn-dm``)
- The User VM virtual bootloader (e.g. OVMF) starts the guest-side verified boot process

This document shows you how to enable GRUB to
verify ACRN binaries such as ``acrn.bin``, the Service VM kernel (``bzImage``), and,
if present, a pre-launched User VM kernel image.

.. rst-class:: numbered-step

Generate GPG Key
****************

GRUB supports loading GPG-signed files only if digital signatures are
enabled. Here's an example of generating a GPG signing key::

  mkdir --mode 0700 keys
  gpg --homedir keys --gen-key
  gpg --homedir keys --export > boot.key

The :command:`gpg --gen-key` command generates a public and private key pair.
The private key is used to sign GRUB configuration files and ACRN
binaries. The public key will be embedded in GRUB and is used to verify
GRUB configuration files or binaries GRUB tries to load.
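
As a quick sanity check (not part of the original steps; the scratch file
name is chosen here only for illustration), you can sign and verify a test
file against this keyring before touching any boot files::

  echo test > /tmp/sign-test
  gpg --homedir keys --detach-sign /tmp/sign-test
  gpg --homedir keys --verify /tmp/sign-test.sig /tmp/sign-test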

.. rst-class:: numbered-step

Setup Standalone GRUB EFI Binary
********************************

Prepare Initial GRUB Configuration grub.init.cfg
================================================

Create the file ``grub.init.cfg`` to store the following minimal GRUB
configuration. The environment variable ``check_signatures=enforce``
tells GRUB to enable digital signatures::

  set check_signatures=enforce
  export check_signatures

  search --no-floppy --fs-uuid --set=root ESP_UUID
  configfile /grub.cfg
  echo /grub.cfg did not boot the system, rebooting in 10 seconds.
  sleep 10
  reboot

Replace ``ESP_UUID`` with the UUID of your EFI system partition (found
by running :command:`lsblk -f`). In the example output below,
the UUID is ``24FC-BE7A``:

.. code-block:: console
   :emphasize-lines: 2

   sda
   ├─sda1 vfat   ESP    24FC-BE7A                            /boot/efi
   ├─sda2 vfat   OS     7015-557F
   ├─sda3 ext4   UBUNTU e8640994-b2a3-45ad-9b72-e68960fb22f0 /
   └─sda4 swap          262d1113-64be-4910-a700-670b9d2277cc [SWAP]

Enable Authentication in GRUB
=============================

With authentication enabled, a user/password is required to restrict
access to the GRUB shell, where arbitrary commands could be run.
A typical GRUB configuration fragment (added to ``grub.init.cfg``) might
look like this::

  set superusers="root"
  export superusers
  password_pbkdf2 root GRUB_PASSWORD_HASH

Replace ``GRUB_PASSWORD_HASH`` with the hash printed by the
:command:`grub-mkpasswd-pbkdf2` command when run with your custom
passphrase; an example interaction is shown below.
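
For reference, the interaction looks like this (illustrative output; your
hash will differ)::

  $ grub-mkpasswd-pbkdf2
  Enter password:
  Reenter password:
  PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.<long hash>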

Use this command to sign the :file:`grub.init.cfg` file with your private
GPG key and create :file:`grub.init.cfg.sig`::

  gpg --homedir keys --detach-sign grub.init.cfg

Create Standalone GRUB EFI Binary
=================================

Use the ``grub-mkstandalone`` tool to create a standalone GRUB EFI binary
file with the built-in modules and the signed ``grub.init.cfg`` file.
The ``--pubkey`` option adds a GPG public key that will be used for
verification. Once embedded, the ``boot.key`` file itself is no longer required.

.. note::
   You should make a backup copy of your current GRUB image
   (:file:`grubx64.efi`) before replacing it with the new signed GRUB image.
   This allows you to restore GRUB in case of errors updating it.

Here's an example sequence to do this build::

  #!/bin/bash

  TARGET_EFI='path/to/grubx64.efi'

  # GRUB doesn't allow loading new modules from disk when secure boot is in
  # effect, therefore pre-load the required modules.
  MODULES="all_video archelp boot bufio configfile crypto echo efi_gop efi_uga ext2 extcmd \
           fat font fshelp gcry_dsa gcry_rsa gcry_sha1 gcry_sha512 gettext gfxterm linux linuxefi ls \
           memdisk minicmd mmap mpi normal part_gpt part_msdos password_pbkdf2 pbkdf2 reboot relocator \
           search search_fs_file search_fs_uuid search_label sleep tar terminal verifiers video_fb"

  grub-mkstandalone \
    --directory /usr/lib/grub/x86_64-efi \
    --format x86_64-efi \
    --modules "$MODULES" \
    --pubkey ./boot.key \
    --output ./grubx64.efi \
    "boot/grub/grub.cfg=./grub.init.cfg" \
    "boot/grub/grub.cfg.sig=./grub.init.cfg.sig"

  echo "writing signed grub.efi to '$TARGET_EFI'"
  sudo cp ./grubx64.efi "$TARGET_EFI"

.. rst-class:: numbered-step

Prepare grub.cfg
****************

Define the menu entry for your system in a new GRUB configuration :file:`grub.cfg`.
For example::

  # @/boot/efi/grub.cfg for grub secure boot
  set timeout_style=menu
  set timeout=5
  set gfxmode=auto
  set gfxpayload=keep
  terminal_output gfxterm

  menuentry "ACRN Multiboot Ubuntu Service VM" --users "" --id ubuntu-service-vm {

    search --no-floppy --fs-uuid --set 3df12ea1-ef12-426b-be98-774665c7483a

    echo 'loading ACRN...'
    multiboot2 /boot/acrn/acrn.bin root=PARTUUID="c8ee7d92-8935-4e86-9e12-05dbeb412ad6"
    module2 /boot/bzImage Linux_bzImage
  }

Use the output of the :command:`blkid` command to find the right values for the
UUID (``--set``) and PARTUUID (``root=PARTUUID=`` parameter) of the root
partition (e.g. ``/dev/nvme0n1p2``) according to your hardware.
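
For instance (illustrative output, reusing the UUIDs from the example menu
entry above; your device names and values will differ)::

  $ sudo blkid /dev/nvme0n1p2
  /dev/nvme0n1p2: UUID="3df12ea1-ef12-426b-be98-774665c7483a" TYPE="ext4" PARTUUID="c8ee7d92-8935-4e86-9e12-05dbeb412ad6"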

Copy this new :file:`grub.cfg` to your ESP (e.g. ``/boot/efi/EFI/``).

.. rst-class:: numbered-step

Sign grub.cfg and ACRN Binaries
*******************************

The :file:`grub.cfg` and all ACRN binaries that will be loaded by GRUB
**must** be signed with the same GPG key.

Here's an example sequence for signing the individual binaries::

  gpg --homedir keys --detach-sign path/to/grub.cfg
  gpg --homedir keys --detach-sign path/to/acrn.bin
  gpg --homedir keys --detach-sign path/to/sos_kernel/bzImage
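
If you rebuild and re-sign these files often, an equivalent shell loop (a
sketch; adjust the paths to your layout) keeps the signatures in sync::

  for f in path/to/grub.cfg path/to/acrn.bin path/to/sos_kernel/bzImage; do
      # --yes overwrites any stale .sig file from a previous run
      gpg --homedir keys --yes --detach-sign "$f"
  done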

Now, you can reboot and the system will boot with the signed GRUB EFI binary.
GRUB will refuse to boot if any files it attempts to load have been tampered
with.

.. rst-class:: numbered-step

Enable UEFI Secure Boot
***********************

Creating UEFI Secure Boot Key
=============================

- Generate your own keys for Secure Boot::

    openssl req -new -x509 -newkey rsa:2048 -subj "/CN=PK/" -keyout PK.key -out PK.crt -days 7300 -nodes -sha256
    openssl req -new -x509 -newkey rsa:2048 -subj "/CN=KEK/" -keyout KEK.key -out KEK.crt -days 7300 -nodes -sha256
    openssl req -new -x509 -newkey rsa:2048 -subj "/CN=db/" -keyout db.key -out db.crt -days 7300 -nodes -sha256

- Convert the ``*.crt`` certificates to the ESL format understood by UEFI::

    cert-to-efi-sig-list PK.crt PK.esl
    cert-to-efi-sig-list KEK.crt KEK.esl
    cert-to-efi-sig-list db.crt db.esl

- Sign the ESL files::

    sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth
    sign-efi-sig-list -k PK.key -c PK.crt KEK KEK.esl KEK.auth
    sign-efi-sig-list -k KEK.key -c KEK.crt db db.esl db.auth

The keys to be enrolled in UEFI firmware: :file:`PK.der`, :file:`KEK.der`, :file:`db.der`.
The keys to sign the bootloader image (:file:`grubx64.efi`): :file:`db.key`, :file:`db.crt`.
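
The steps above produce ``*.crt`` and ``*.esl`` files; if your firmware
expects the DER-encoded certificates named above, one way to derive them
from the ``*.crt`` files is with standard OpenSSL options (not part of the
original steps)::

  openssl x509 -outform DER -in PK.crt -out PK.der
  openssl x509 -outform DER -in KEK.crt -out KEK.der
  openssl x509 -outform DER -in db.crt -out db.der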

Sign GRUB Image With ``db`` Key
================================

Sign the GRUB image with the ``db`` key::

  sbsign --key db.key --cert db.crt path/to/grubx64.efi

:file:`grubx64.efi.signed` will be created; it will be your bootloader.
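
Optionally, verify the new signature before deploying it (this assumes the
``sbverify`` tool from the same sbsigntools package is available)::

  sbverify --cert db.crt path/to/grubx64.efi.signed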

Enroll UEFI Keys To UEFI Firmware
=================================

Enroll ``PK`` (:file:`PK.der`), ``KEK`` (:file:`KEK.der`), and ``db``
(:file:`db.der`) in the Secure Boot Configuration UI, which depends on your
platform's UEFI firmware. In the UEFI configuration menu UI, follow the steps
in :ref:`this section <qemu_inject_boot_keys>`, which shows how to enroll UEFI
keys, using your own key files. From now on, only EFI binaries
signed with a ``db`` key (:file:`grubx64.efi.signed` in this case) can
be loaded by UEFI firmware.

@@ -110,6 +110,9 @@ Additional scenario XML elements:

``SERIAL_CONSOLE`` (a child node of ``DEBUG_OPTIONS``):
   Specify the host serial device used for hypervisor debugging.
   This configuration is valid only if the Service VM ``legacy_vuart0``
   is enabled. Leave this field empty if the Service VM ``console_vuart``
   is enabled; use ``bootargs`` for the ``console_vuart`` configuration.

``MEM_LOGLEVEL`` (a child node of ``DEBUG_OPTIONS``):
   Specify the default log level in memory.
@@ -294,7 +297,7 @@ Additional scenario XML elements:

``bootargs`` (a child node of ``os_config``):
   For internal use only and is not configurable. Specify the kernel boot arguments
   in ``bootargs`` under the parent of ``board_private``.

``kern_load_addr`` (a child node of ``os_config``):
   The loading address in host memory for the VM kernel.
@@ -302,27 +305,45 @@ Additional scenario XML elements:

``kern_entry_addr`` (a child node of ``os_config``):
   The entry address in host memory for the VM kernel.

``legacy_vuart``:
   Specify the legacy vUART (aka COM) with the vUART ID by its ``id`` attribute.
   Refer to :ref:`vuart_config` for detailed vUART settings.

``console_vuart``:
   Specify the console vUART (aka PCI-based vUART) with the vUART ID by
   its ``id`` attribute.
   Refer to :ref:`vuart_config` for detailed vUART settings.

``communication_vuart``:
   Specify the communication vUART (aka PCI-based vUART) with the vUART ID by
   its ``id`` attribute.
   Refer to :ref:`vuart_config` for detailed vUART settings.

``type`` (a child node of ``legacy_vuart``):
   vUART (aka COM) type; currently only the legacy PIO mode is supported.

``base`` (a child node of ``legacy_vuart``, ``console_vuart``, and ``communication_vuart``):
   Legacy vUART (aka COM) enabling switch. Enable by exposing its COM_BASE
   (SOS_COM_BASE for the Service VM); disable by returning INVALID_COM_BASE.

   Console and communication vUART (aka PCI-based vUART) enabling switch.
   Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.

``irq`` (a child node of ``legacy_vuart``):
   vCOM IRQ.

``target_vm_id`` (a child node of ``legacy_vuart1`` and ``communication_vuart``):
   COM2 is used for VM communications. When it is enabled, specify which
   target VM the current VM connects to.

   ``communication_vuart`` is used for VM communications. When it is enabled,
   specify which target VM the current VM connects to.

``target_uart_id`` (a child node of ``legacy_vuart1`` and ``communication_vuart``):
   Target vUART ID to which the vCOM2 connects.

   Target vUART ID to which the ``communication_vuart`` connects.

``pci_dev_num``:
   Number of PCI devices in the VM; it is hard-coded for each scenario, so it
   is not configurable for now.
@@ -486,7 +507,7 @@ Here is the offline configuration tool workflow:

   specified board name.

   | **Native Linux requirement:**
   | **Release:** Ubuntu 18.04+
   | **Tools:** cpuid, rdmsr, lspci, dmidecode (optional)
   | **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable"

@@ -50,22 +50,48 @@ enable it using the :ref:`acrn_configuration_tool` with these steps:

   communication VMs in the ACRN scenario XML configuration. The ``IVSHMEM_REGION``
   format is ``shm_name,shm_size,VM IDs``:

   - ``shm_name`` - Specify a shared memory name. The name needs to start
     with the ``hv:/`` prefix. For example, ``hv:/shm_region_0``.

   - ``shm_size`` - Specify a shared memory size. The unit is megabytes. The
     size ranges from 2 megabytes to 512 megabytes and must be a power of 2.
     For example, to set up a shared memory region of 2 megabytes, use ``2``
     instead of ``shm_size``.

   - ``VM IDs`` - Specify the IDs of the VMs that use the same shared memory
     for communication, separated by ``:``. For example, for communication
     between VM0 and VM2, write ``0:2``. A combined example is shown below.
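
   Putting these fields together, a 2-megabyte region named
   ``hv:/shm_region_0`` shared between VM0 and VM2 would be written as::

      hv:/shm_region_0, 2, 0:2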

   .. note:: You can define up to eight ``ivshmem`` hv-land shared regions.

- Build the XML configuration; refer to :ref:`getting-started-building`.

ivshmem notification mechanism
******************************

The notification (doorbell) mechanism of the ivshmem device allows VMs with
ivshmem devices enabled to notify (interrupt) each other following this flow:

Notification Sender (VM):
   A VM triggers a notification to the target VM by writing the target Peer ID
   (equal to the VM ID of the target VM) and a vector index to the doorbell
   register of its ivshmem device. The layout of the doorbell register is
   described in :ref:`ivshmem-hld`.

Hypervisor:
   When the doorbell register is programmed, the hypervisor looks up the
   target VM by the target Peer ID and injects an MSI interrupt into the
   target VM.

Notification Receiver (VM):
   The VM receives the MSI interrupt and forwards it to the related application.

ACRN supports up to 8 (MSI-X) interrupt vectors per ivshmem device.
Guest VMs shall implement their own mechanism to forward MSI interrupts
to applications.

.. note:: Notification is supported only for HV-land ivshmem devices. (Future
   support may include notification for DM-land ivshmem devices.)

Inter-VM Communication Examples
*******************************

@@ -27,6 +27,7 @@ The diagram below shows the overall architecture:

.. figure:: images/s5_overall_architecture.png
   :align: center
   :name: s5-architecture

   S5 overall architecture

@@ -160,22 +161,20 @@ The procedure for enabling S5 is specific to the particular OS:

How to test
***********
As described in :ref:`vuart_config`, two vUARTs are defined in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and
vUART1/ttyS1 for S5-related communication (as shown in :ref:`s5-architecture`).

.. note:: For a Yocto Project (Poky) or Ubuntu rootfs, the ``serial-getty``
   service for ``ttyS1`` conflicts with the S5-related communication
   use of ``vUART1``. We can eliminate the conflict by preventing
   that service from being started, either automatically or manually,
   by masking it with this command

   ::

      systemctl mask serial-getty@ttyS1.service
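
   A quick way to confirm the unit is masked (the expected output is shown
   after the command)::

      $ systemctl is-enabled serial-getty@ttyS1.service
      masked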

#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the User VMs.

@@ -194,7 +193,7 @@ How to test

.. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
   if you executed the ``bcdedit /set debug on`` command when you set up WaaG, because it occupies ``COM2``.

#. Use the ``acrnctl stop`` command on the Service VM to trigger S5 to the User VMs:

   .. code-block:: console

@@ -206,3 +205,45 @@ How to test

      # acrnctl list
      vm1 stopped

System Shutdown
***************

Using a coordinating script, ``misc/life_mngr/s5_trigger.sh``, in conjunction with
the lifecycle manager in each VM, a graceful system shutdown can be performed.

.. note:: Install ``s5_trigger.sh`` manually in root's home directory.

   .. code-block:: none

      $ sudo install -p -m 0755 -t ~root misc/life_mngr/s5_trigger.sh

In the ``hybrid_rt`` scenario, the script can send a shutdown command via ``ttyS1``
in the Service VM, which is connected to ``ttyS1`` in the pre-launched VM. The
lifecycle manager in the pre-launched VM receives the shutdown command, sends an
ack message, and proceeds to shut itself down accordingly.

.. figure:: images/system_shutdown.png
   :align: center

   Graceful system shutdown flow

#. The HMI Windows Guest uses the lifecycle manager to send a shutdown request to
   the Service VM
#. The lifecycle manager in the Service VM responds with an ack message and
   executes ``s5_trigger.sh``
#. After receiving the ack message, the lifecycle manager in the HMI Windows Guest
   shuts down the guest
#. The ``s5_trigger.sh`` script in the Service VM shuts down the Linux Guest by
   using ``acrnctl`` to send a shutdown request
#. After receiving the shutdown request, the lifecycle manager in the Linux Guest
   responds with an ack message and shuts down the guest
#. The ``s5_trigger.sh`` script in the Service VM shuts down the Pre-launched RTVM
   by sending a shutdown request to its ``ttyS1``
#. After receiving the shutdown request, the lifecycle manager in the Pre-launched
   RTVM responds with an ack message
#. The lifecycle manager in the Pre-launched RTVM shuts down the guest using
   standard PM registers
#. After receiving the ack message, the ``s5_trigger.sh`` script in the Service VM
   shuts down the Service VM
#. The hypervisor shuts down the system after all of its guests have shut down

BIN doc/tutorials/images/acrn_secureboot_flow.png (new file, 10 KiB)
BIN doc/tutorials/images/system_shutdown.png (new file, 15 KiB)
@@ -50,8 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM.

Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe
===================================================================

Follow the :ref:`install-ubuntu-rtvm-sata` guide to install the RT rootfs on the SATA drive.

The kernel should
be on the NVMe drive along with GRUB. You'll need to copy the RT kernel
@@ -94,6 +93,7 @@ like this:

      multiboot2 /EFI/BOOT/acrn.bin
      module2 /EFI/BOOT/bzImage_RT RT_bzImage
      module2 /EFI/BOOT/bzImage Linux_bzImage
      module2 /boot/ACPI_VM0.bin ACPI_VM0
   }

Reboot the system, and it will boot into Pre-Launched RT Mode.

@@ -1,4 +1,4 @@

.. _running_deb_as_serv_vm:

Run Debian as the Service VM
############################

@@ -9,9 +9,8 @@ individuals who have made common cause to create a `free

stable Debian release <https://www.debian.org/releases/stable/>`_ is
10.0.

This tutorial describes how to use Debian 10.0 as the Service VM OS with
the ACRN hypervisor.

Prerequisites
*************

@@ -24,10 +23,10 @@ Use the following instructions to install Debian.

  the bottom of the page).
- Follow the `Debian installation guide
  <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to
  install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE)
  in this tutorial.
- :ref:`install-build-tools-dependencies` for ACRN.
- Update to a newer iASL:

  .. code-block:: bash

@@ -43,89 +42,94 @@ Use the following instructions to install Debian.

Validated Versions
******************

- **Debian version:** 10.1 (buster)
- **ACRN hypervisor tag:** acrn-2020w40.1-180000p
- **Debian Service VM Kernel version:** release_2.2

Install ACRN on the Debian VM
*****************************

#. Clone the `Project ACRN <https://github.com/projectacrn/acrn-hypervisor>`_ code repository:

   .. code-block:: bash

      $ cd ~
      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor
      $ git checkout acrn-2020w40.1-180000p

#. Build and install ACRN:

   .. code-block:: bash

      $ make all BOARD_FILE=misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml SCENARIO_FILE=misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml RELEASE=0
      $ sudo make install
      $ sudo mkdir /boot/acrn/
      $ sudo cp ~/acrn-hypervisor/build/hypervisor/acrn.bin /boot/acrn/

#. Build and install the Service VM kernel:

   .. code-block:: bash

      $ mkdir ~/sos-kernel && cd ~/sos-kernel
      $ git clone https://github.com/projectacrn/acrn-kernel
      $ cd acrn-kernel
      $ git checkout release_2.2
      $ cp kernel_config_uefi_sos .config
      $ make olddefconfig
      $ make all
      $ sudo make modules_install
      $ sudo cp arch/x86/boot/bzImage /boot/bzImage

#. Update Grub for the Debian Service VM.

   Update the ``/etc/grub.d/40_custom`` file as shown below.

   .. note::
      Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as
      a single line and not as multiple lines. Otherwise, the kernel will
      fail to boot.

   .. code-block:: none

      menuentry "ACRN Multiboot Debian Service VM" --id debian-service-vm {
        recordfail
        load_video
        insmod gzio
        insmod part_gpt
        insmod ext2

        search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
        echo 'loading ACRN...'
        multiboot2 /boot/acrn/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
        module2 /boot/bzImage Linux_bzImage
      }

   .. note::
      Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
      (or use the device node directly) of the root partition (e.g.
      ``/dev/nvme0n1p2``). Hint: use ``sudo blkid <device node>``.

      Update the kernel name if you used a different name as the source
      for your Service VM kernel.

#. Modify the ``/etc/default/grub`` file to make the Grub menu visible when
   booting and make it load the Service VM kernel by default. Modify the
   lines shown below:

   .. code-block:: none

      GRUB_DEFAULT=debian-service-vm
      #GRUB_TIMEOUT_STYLE=hidden
      GRUB_TIMEOUT=5
      GRUB_CMDLINE_LINUX="text"

#. Update Grub on your system:

   .. code-block:: none

      $ sudo update-grub
      $ sudo reboot

#. Log in to the Debian Service VM and check the ACRN status:
@@ -137,16 +141,10 @@ Install ACRN on the Debian VM

      [ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
      [ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp

#. Enable network sharing to give network access to the User VM:

   .. code-block:: bash

      $ sudo systemctl enable systemd-networkd
      $ sudo systemctl start systemd-networkd

#. Prepare and start a User VM.

   .. important:: Need instructions for this.

@@ -9,13 +9,10 @@ Prerequisites

This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:

- Install a `Ubuntu 18.04 desktop ISO
  <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
  on your board.
- Follow the :ref:`install-ubuntu-Service VM-NVMe` guide to set up the Service VM.

We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial.

@@ -63,9 +60,9 @@ Hardware Configurations

Validated Versions
==================

- **Ubuntu version:** 18.04
- **ACRN hypervisor tag:** v2.2
- **Service VM Kernel version:** v2.2

Build the Debian KVM Image
**************************

@@ -9,14 +9,11 @@ Prerequisites

This tutorial assumes you have already set up the ACRN Service VM on an
Intel NUC Kit. If you have not, refer to the following instructions:

- Install a `Ubuntu 18.04 desktop ISO
  <http://releases.ubuntu.com/18.04.3/ubuntu-18.04.3-desktop-amd64.iso?_ga=2.160010942.221344839.1566963570-491064742.1554370503>`_
  on your board.
- Follow the :ref:`install-ubuntu-Service VM-NVMe` guide to set up the Service VM.

Before you start this tutorial, make sure the KVM tools are installed on the
development machine and set **IGD Aperture Size to 512** in the BIOS
@@ -62,9 +59,9 @@ Hardware Configurations

Validated Versions
==================

- **Ubuntu version:** 18.04
- **ACRN hypervisor tag:** v2.2
- **Service VM Kernel version:** v2.2

.. _build-the-ubuntu-kvm-image:

@@ -12,25 +12,23 @@ to avoid crashing your system and to take advantage of easy

snapshots/restores so that you can quickly roll back your system in the
event of setup failure. (You should only install OpenStack directly on Ubuntu if
you have a dedicated testing machine.) This setup utilizes LXC/LXD on
Ubuntu 18.04.

Install ACRN
************

#. Install ACRN using Ubuntu 18.04 as its Service VM. Refer to
   :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.

#. Make the acrn-kernel using the `kernel_config_uefi_sos
   <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
   configuration file (from the ``acrn-kernel`` repo).

#. Add the following kernel boot args to give the Service VM more memory
   and more loop devices; see the sketch after this item. Refer to the `Kernel Boot Parameters
   <https://wiki.ubuntu.com/Kernel/KernelBootParameters>`_ documentation::

      hugepagesz=1G hugepages=10 max_loop=16
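
   One common way to make these parameters persistent, assuming the Service VM
   boots via GRUB, is to append them to the kernel command line in
   ``/etc/default/grub`` and regenerate the GRUB configuration (merge them
   with any parameters already present)::

      GRUB_CMDLINE_LINUX_DEFAULT="hugepagesz=1G hugepages=10 max_loop=16"

   then run ``sudo update-grub`` and reboot.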

#. Boot the Service VM with this new ``acrn-kernel`` using the ACRN
   hypervisor.
@@ -40,17 +38,15 @@ Install ACRN

   <https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
   snap revisions if you're running out of loop devices.
#. Make sure the networking bridge ``acrn-br0`` is created. If not,
   create it using the instructions in
   :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.

Set up and launch LXC/LXD
*************************

1. Set up the LXC/LXD Linux container engine using these `instructions
   <https://ubuntu.com/tutorials/tutorial-setting-up-lxd-1604>`_ provided
   by Ubuntu.

   Refer to the following additional information for the setup
   procedure:
@@ -59,8 +55,10 @@ Set up and launch LXC/LXD

     backend).
   - Answer ``dir`` (and not ``zfs``) when prompted for the name of the storage backend to use.
   - Set up ``lxdbr0`` as instructed.
   - Before launching a container, install lxc-utils with ``apt-get install lxc-utils``,
     and make sure ``lxc-checkconfig | grep missing`` does not show any missing kernel features
     except ``CONFIG_NF_NAT_IPV4`` and ``CONFIG_NF_NAT_IPV6``, which
     were renamed in recent kernels.

2. Create an Ubuntu 18.04 container named ``openstack``::

@@ -128,7 +126,7 @@ Set up and launch LXC/LXD

8. Log in to the ``openstack`` container again::

      $ lxc exec openstack -- su -l

9. If needed, set up the proxy inside the ``openstack`` container via
   ``/etc/environment`` and make sure ``no_proxy`` is properly set up.

@@ -139,7 +137,7 @@ Set up and launch LXC/LXD

10. Add a new user named **stack** and set permissions::

      $ useradd -s /bin/bash -d /opt/stack -m stack
      $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

11. Log off and restart the ``openstack`` container::

@@ -166,17 +164,15 @@ Set up ACRN prerequisites inside the container

      $ git clone https://github.com/projectacrn/acrn-hypervisor
      $ cd acrn-hypervisor
      $ git checkout v2.3
      $ make
      $ cd misc/acrn-manager/; make

   Install only the user-space components: ``acrn-dm``, ``acrnctl``, and
   ``acrnd``.

3. Download, compile, and install ``iasl``. Refer to
   :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.

Set up libvirt
**************

@@ -185,7 +181,7 @@ Set up libvirt

      $ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \
        automake autoconf autopoint libtool xsltproc libxml2-utils gettext \
        libxml2-dev libpciaccess-dev gnutls-dev python3-docutils

2. Download libvirt/ACRN::

@@ -195,7 +191,9 @@ Set up libvirt

3. Build and install libvirt::

      $ cd acrn-libvirt
      $ mkdir build
      $ cd build
      $ ../autogen.sh --prefix=/usr --disable-werror --with-test-suite=no \
        --with-qemu=no --with-openvz=no --with-vmware=no --with-phyp=no \
        --with-vbox=no --with-lxc=no --with-uml=no --with-esx=no

@@ -223,11 +221,12 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://

      $ git clone https://opendev.org/openstack/devstack.git -b stable/train

2. Go into the ``devstack`` directory, download an ACRN patch from
   :acrn_raw:`doc/tutorials/0001-devstack-installation-for-acrn.patch`,
   and apply it::

      $ cd devstack
      $ git apply 0001-devstack-installation-for-acrn.patch

3. Edit ``lib/nova_plugins/hypervisor-libvirt``:

@@ -2,11 +2,10 @@

Getting Started Guide for ACRN hybrid mode
##########################################

The ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
launched by a Device model in the Service VM.

.. figure:: images/hybrid_scenario_on_nuc.png
   :align: center
@@ -15,6 +14,14 @@ as shown in :numref:`hybrid_scenario_on_nuc`.

   The Hybrid scenario on the Intel NUC

The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
as shown in :numref:`hybrid_scenario_on_nuc`.

.. contents::
   :local:
   :depth: 1

Prerequisites
*************
- Use the `Intel NUC Kit NUC7i7DNHE <https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc7i7dnhe.html>`_.
@@ -22,6 +29,8 @@ Prerequisites

- Install Ubuntu 18.04 on your SATA device or on the NVMe disk of your
  Intel NUC.

.. rst-class:: numbered-step

Update Ubuntu GRUB
******************

@@ -78,8 +87,10 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo

the ACRN hypervisor on the Intel NUC's display. The GRUB loader will boot the
hypervisor, and the hypervisor will start the VMs automatically.

.. rst-class:: numbered-step

Hybrid Scenario Startup Check
*****************************
#. Use these steps to verify that the hypervisor is properly running:

   a. Log in to the ACRN hypervisor shell from the serial console.

@@ -10,12 +10,16 @@ guidelines provide step-by-step instructions on how to set up the ACRN

hypervisor logical partition scenario on Intel NUC while running two
pre-launched VMs.

.. contents::
   :local:
   :depth: 1

Validated Versions
******************

- Ubuntu version: **18.04**
- ACRN hypervisor tag: **v2.3**
- ACRN kernel tag: **v2.3**

Prerequisites
*************
@@ -35,6 +39,8 @@ Prerequisites

The two pre-launched VMs will mount the root file systems via the SATA controller and
the USB controller respectively.

.. rst-class:: numbered-step

Update kernel image and modules of pre-launched VM
**************************************************
#. On your development workstation, clone the ACRN kernel source tree, and

@@ -97,6 +103,8 @@ Update kernel image and modules of pre-launched VM

      $ sudo cp <path-to-kernel-image-built-in-step1>/bzImage /boot/

.. rst-class:: numbered-step

Update ACRN hypervisor image
****************************

@@ -137,13 +145,13 @@ Update ACRN hypervisor image

   Refer to :ref:`getting-started-building` to set up the ACRN build
   environment on your development workstation.

   Clone the ACRN source code and check out the tag v2.3:

   .. code-block:: none

      $ git clone https://github.com/projectacrn/acrn-hypervisor.git
      $ cd acrn-hypervisor
      $ git checkout v2.3

   Build the ACRN hypervisor and ACPI binaries for pre-launched VMs with the default XMLs:

@@ -179,6 +187,8 @@ Update ACRN hypervisor image

#. Copy the ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` from the removable disk to the ``/boot``
   directory.

.. rst-class:: numbered-step

Update Ubuntu GRUB to boot hypervisor and load kernel image
***********************************************************

@@ -237,8 +247,10 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image

   the Intel NUC's display. The GRUB loader will boot the hypervisor, and the
   hypervisor will automatically start the two pre-launched VMs.

.. rst-class:: numbered-step

Logical partition scenario startup check
****************************************

#. Use these steps to verify that the hypervisor is properly running:

@@ -5,7 +5,7 @@ Run VxWorks as the User VM

`VxWorks`_\* is a real-time proprietary OS designed for use in embedded systems requiring real-time, deterministic
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
based on an Ubuntu Service VM (ACRN tag v2.0).

.. note:: You'll need to be a Wind River* customer and have purchased VxWorks to follow this tutorial.

@@ -92,10 +92,8 @@ Steps for Using VxWorks as User VM

You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``.

#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM.

#. Boot VxWorks as the User VM.

@@ -92,11 +92,9 @@ Steps for Using Zephyr as User VM

   the ACRN Service VM, then you will need to transfer this image to the
   ACRN Service VM (via, e.g., a USB drive or the network).

#. Follow :ref:`install-ubuntu-Service VM-NVMe`
   to boot "The ACRN Service OS" based on Ubuntu OS (ACRN tag: v2.2).

#. Boot Zephyr as the User VM.

@@ -48,6 +48,8 @@ Console enable list

   |                 | (vUART enable)        | (vUART enable)     | RTVM           |                |
   +-----------------+-----------------------+--------------------+----------------+----------------+

.. _how-to-configure-a-console-port:

How to configure a console port
===============================

@@ -71,6 +73,8 @@ Example:

      .irq = COM1_IRQ,
   },

.. _how-to-configure-a-communication-port:

How to configure a communication port
=====================================

@@ -139,7 +143,7 @@ Test the communication port
===========================

After you have configured the communication port in the hypervisor, you can
access the corresponding port. For example, in a Linux OS:

1. With ``echo`` and ``cat``

@@ -214,3 +218,191 @@ started, as shown in the diagram below:

   the hypervisor is not sufficient. Currently, we recommend that you use
   the configuration in the figure 3 data flow. This may be refined in the
   future.

Use PCI-vUART
#############

PCI Interface of ACRN vUART
===========================

When you set :ref:`vuart[0] and vuart[1] <vuart_config>`, the ACRN
hypervisor emulates virtual legacy serial devices (I/O port and IRQ) for
VMs, so ``vuart[0]`` and ``vuart[1]`` are legacy vUARTs. The ACRN
hypervisor can also emulate virtual PCI serial devices (BDF, MMIO
registers, and MSI-X capability). These virtual PCI serial devices are
called PCI-vUARTs and have an advantage in device enumeration for the
guest OS: it is easy to add new PCI-vUART ports to a VM.

.. _index-of-vuart:

Index of vUART
==============

The ACRN hypervisor supports both PCI-vUARTs and legacy vUARTs as ACRN vUARTs.
Each vUART port has its own ``vuart_idx``. The ACRN hypervisor supports up
to 8 vUARTs for each VM, from ``vuart_idx=0`` to ``vuart_idx=7``.
Suppose we use vUART0 for a port with ``vuart_idx=0``, vUART1 for
``vuart_idx=1``, and so on.

Please pay attention to these points:

* vUART0 is the console port; vUART1-vUART7 are inter-VM communication ports.
* Each communication port must set its connection to a communication vUART port of another VM.
* When legacy ``vuart[0]`` is available, it is vUART0. A PCI-vUART can't
  be vUART0 unless ``vuart[0]`` is not set.
* When legacy ``vuart[1]`` is available, it is vUART1. A PCI-vUART can't
  be vUART1 unless ``vuart[1]`` is not set.

Setup ACRN vUART Using Configuration Tools
==========================================

When you set up ACRN VM configurations with PCI-vUART, it is better to
use the ACRN configuration tools because of all the PCI resources required: the
BDF number, the address and size of the MMIO registers, and the address and
size of the MSI-X entry tables. These settings can't conflict with those of
another PCI device. Furthermore, whether a PCI-vUART can use ``vuart_idx=0``
and ``vuart_idx=1`` depends on the legacy vUART settings. The configuration
tools will override your settings in
:ref:`How to Configure a Console Port <how-to-configure-a-console-port>`
and :ref:`How to Configure a Communication Port
<how-to-configure-a-communication-port>`.

You can configure both legacy vUART and PCI-vUART in
``./misc/vm_configs/xmls/config-xmls/<board>/<scenario>.xml``. For
example, if VM0 has a legacy vUART0 and a PCI-vUART1, and VM1 has no legacy
vUART but has a PCI-vUART0 and a PCI-vUART1, with VM0's PCI-vUART1 and VM1's
PCI-vUART1 connected to each other, you should configure them like this:

.. code-block:: none

   <vm id="0">
     <legacy_vuart id="0">
       <type>VUART_LEGACY_PIO</type>   /* vuart[0] is console port */
       <base>COM1_BASE</base>          /* vuart[0] is used */
       <irq>COM1_IRQ</irq>
     </legacy_vuart>
     <legacy_vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[1] is not used */
     </legacy_vuart>
     <console_vuart id="0">
       <base>INVALID_PCI_BASE</base>   /* PCI-vUART0 can't be used, because vuart[0] is */
     </console_vuart>
     <communication_vuart id="1">
       <base>PCI_VUART</base>          /* PCI-vUART1 is a communication port, connected to vUART1 of VM1 */
       <target_vm_id>1</target_vm_id>
       <target_uart_id>1</target_uart_id>
     </communication_vuart>
   </vm>

   <vm id="1">
     <legacy_vuart id="0">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[0] is not used */
     </legacy_vuart>
     <legacy_vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[1] is not used */
     </legacy_vuart>
     <console_vuart id="0">
       <base>PCI_VUART</base>          /* PCI-vUART0 is console port */
     </console_vuart>
     <communication_vuart id="1">
       <base>PCI_VUART</base>          /* PCI-vUART1 is a communication port, connected to vUART1 of VM0 */
       <target_vm_id>0</target_vm_id>
       <target_uart_id>1</target_uart_id>
     </communication_vuart>
   </vm>

The ACRN vUART related XML fields:

- ``id`` in ``<legacy_vuart>``: the value of ``vuart_idx``; ``id=0`` is for the
  legacy ``vuart[0]`` configuration, ``id=1`` is for ``vuart[1]``.
- ``type`` in ``<legacy_vuart>``: the type is always ``VUART_LEGACY_PIO``
  for legacy vUARTs.
- ``base`` in ``<legacy_vuart>``: if you use the legacy vUART port, set
  ``COM1_BASE`` for ``vuart[0]`` and ``COM2_BASE`` for ``vuart[1]``.
  ``INVALID_COM_BASE`` means do not use the legacy vUART port.
- ``irq`` in ``<legacy_vuart>``: if you use the legacy vUART port, set
  ``COM1_IRQ`` for ``vuart[0]`` and ``COM2_IRQ`` for ``vuart[1]``.
- ``id`` in ``<console_vuart>`` and ``<communication_vuart>``:
  the ``vuart_idx`` for the PCI-vUART.
- ``base`` in ``<console_vuart>`` and ``<communication_vuart>``:
  ``PCI_VUART`` means use this PCI-vUART; ``INVALID_PCI_BASE`` means do
  not use this PCI-vUART.
- ``target_vm_id`` and ``target_uart_id``: connection settings for this
  vUART port.

Run this command to build ACRN with this XML configuration file::

   make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/<board>.xml \
        SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/<board>/<scenario>.xml

The configuration tools will test your settings and check the :ref:`vUART
rules <index-of-vuart>` for compilation issues. After compiling, you can find that
``./misc/vm_configs/scenarios/<scenario>/<board>/pci_dev.c`` has been
changed by the configuration tools based on the XML settings, something like:

.. code-block:: none

   struct acrn_vm_pci_dev_config vm0_pci_devs[] = {
      {
         .emu_type = PCI_DEV_TYPE_HVEMUL,
         .vbdf.bits = {.b = 0x00U, .d = 0x05U, .f = 0x00U},
         .vdev_ops = &vmcs9900_ops,
         .vbar_base[0] = 0x80003000,
         .vbar_base[1] = 0x80004000,
         .vuart_idx = 1,          /* PCI-vUART1 of VM0 */
         .t_vuart.vm_id = 1U,     /* connected to VM1's vUART1 */
         .t_vuart.vuart_id = 1U,
      },
   }

This struct shows a PCI-vUART with ``vuart_idx=1`` and BDF ``00:05.0``; it is
PCI-vUART1 of VM0, and it is connected to VM1's vUART1 port. When VM0 wants to
communicate with VM1, it can use ``/dev/ttyS*``, the character device file of
VM0's PCI-vUART1. Usually, legacy ``vuart[0]`` is ``ttyS0`` in a VM and
``vuart[1]`` is ``ttyS1``, so you might expect PCI-vUART0 to be ``ttyS0``,
PCI-vUART1 to be ``ttyS1``, and so on through PCI-vUART7 being ``ttyS7``, but
that is not guaranteed. We can use the BDF to identify a PCI-vUART in the VM.

If you run ``dmesg | grep tty`` at a VM shell, you may see:

.. code-block:: none

   [    1.276891] 0000:00:05.0: ttyS4 at MMIO 0xa1414000 (irq = 124, base_baud = 115200) is a 16550A

We know that for the VM0 guest OS, ``ttyS4`` has BDF 00:05.0 and is PCI-vUART1.
VM0 can communicate with VM1 by reading from or writing to ``/dev/ttyS4``.
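
As a quick test of this channel, you can reuse the ``echo`` and ``cat``
approach shown earlier. The device names here are illustrative; check
``dmesg | grep tty`` in each guest for the actual names::

   # In VM1, listen on its communication vUART:
   cat /dev/ttyS4

   # In VM0, send a message:
   echo "hello from VM0" > /dev/ttyS4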

If VM0 and VM1 are pre-launched VMs, or the Service VM, the ACRN hypervisor
will create the PCI-vUART virtual devices automatically. For post-launched
VMs, created by ``acrn-dm``, an additional ``acrn-dm`` option is needed
to create a PCI-vUART virtual device:

.. code-block:: none

   -s <slot>,uart,vuart_idx:<val>
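
For example, a fragment of an ``acrn-dm`` command line that adds a PCI-vUART
with ``vuart_idx=1`` might look like this (slot 9 is an arbitrary free slot
chosen for illustration)::

   -s 9,uart,vuart_idx:1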

Kernel Config for Legacy vUART
==============================

When the ACRN hypervisor passes through a local APIC to a VM, there is an IRQ
injection issue for the legacy vUART. The kernel driver must work in
polling mode to avoid the problem. The VM kernel should have these config
symbols set:

.. code-block:: none

   CONFIG_SERIAL_8250_EXTENDED=y
   CONFIG_SERIAL_8250_DETECT_IRQ=y

Kernel Cmdline for PCI-vUART console
====================================

When an ACRN VM does not have a legacy ``vuart[0]`` but has a
PCI-vUART0, you can use the PCI-vUART0 for VM serial input/output. Check
which tty has the BDF of PCI-vUART0; usually it is not ``/dev/ttyS0``.
For example, if ``/dev/ttyS4`` is PCI-vUART0, you must set
``console=ttyS4`` in the kernel cmdline.

@@ -473,6 +473,8 @@ Notes:

- Make sure your GCC is 5.X. GCC 6 and above is NOT supported.

.. _qemu_inject_boot_keys:

Use QEMU to inject secure boot keys into OVMF
*********************************************