doc: remove UEFI/de-privilege boot mode from docs
Also clarify that Clear Linux is no longer supported, either as the Service
VM (SOS) or as a post-launched VM kernel.

- Where Clear Linux is mentioned, replace it with Ubuntu in most places.
- Remove all content related to "UEFI boot".
- Remove the term "de-privilege mode", and "direct mode" as well.

Tracked-On: #5197
Signed-off-by: Zide Chen <zide.chen@intel.com>
commit f945fe27ab
parent 75ecc0e3b1
@@ -115,7 +115,7 @@ Here's an example showing how to run a VM with:
    -s 0:0,hostbridge \
    -s 1:0,lpc -l com1,stdio \
    -s 5,virtio-console,@pty:pty_port \
-   -s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
+   -s 3,virtio-blk,b,/home/acrn/uos.img \
    -s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
    --acpidev_pt MSFT0101 \
    --intr_monitor 10000,10,1,100 \
@@ -769,7 +769,7 @@ example:
    -s 0:0,hostbridge \
    -s 1:0,lpc -l com1,stdio \
    -s 5,virtio-console,@pty:pty_port \
-   -s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
+   -s 3,virtio-blk,b,/home/acrn/uos.img \
    -s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
    -B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
    console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
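Note: the two hunks above only swap the guest image path inside a larger ``acrn-dm`` example. As a rough sketch of how those slots combine into a complete invocation (the memory size and VM name below are illustrative assumptions, not part of the patch):

    # Hypothetical launch command assembled from the documented slots;
    # memory size and VM name are placeholders.
    acrn-dm -A -m 2048M \
       -s 0:0,hostbridge \
       -s 1:0,lpc -l com1,stdio \
       -s 5,virtio-console,@pty:pty_port \
       -s 3,virtio-blk,b,/home/acrn/uos.img \
       -s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
       --acpidev_pt MSFT0101 \
       --intr_monitor 10000,10,1,100 \
       vm1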
@@ -789,9 +789,6 @@ the bus hierarchy would be:
    00:04.0 Ethernet controller: Red Hat, Inc. Virtio network device
    00:05.0 Serial controller: Red Hat, Inc. Virtio console
 
-.. note:: For Clear Linux OS, the ``lspci`` command can be installed
-   from the "sysadmin-basic" bundle.
-
 ACPI Virtualization
 *******************
 
@@ -42,7 +42,7 @@ A typical In-Vehicle Infotainment (IVI) system supports:
 - connection to IVI front system and mobile devices (cloud
   connectivity)
 
-ACRN supports guest OSes of Clear Linux OS and Android. OEMs can use the ACRN
+ACRN supports guest OSes of Linux and Android. OEMs can use the ACRN
 hypervisor and the Linux or Android guest OS reference code to implement their own
 VMs for a customized IC/IVI/RSE.
 
@@ -88,8 +88,8 @@ ACRN is a type-I hypervisor that runs on top of bare metal. It supports
 Intel APL & KBL platforms and can be easily extended to support future
 platforms. ACRN implements a hybrid VMM architecture, using a privileged
 service VM to manage I/O devices and
-provide I/O mediation. Multiple user VMs can be supported, running Clear
-Linux OS or Android OS as the User VM.
+provide I/O mediation. Multiple user VMs can be supported, running Ubuntu
+or Android OS as the User VM.
 
 ACRN 1.0
 ========
@@ -255,7 +255,7 @@ In ACRN, User VM Secure Boot can be enabled by below steps.
 Service VM Hardening
 --------------------
 
-In the ACRN project, the reference Service VM is based on the Clear Linux OS.
+In the ACRN project, the reference Service VM is based on Ubuntu.
 Customers may choose to use different open source OSes or their own
 proprietary OS systems. To minimize the attack surfaces and achieve the
 goal of "defense in depth", there are many common guidelines to ensure the
@@ -17,7 +17,7 @@ There is PCI host bridge emulation in DM. The bus hierarchy is determined by ``a
    -s 2,pci-gvt -G "$2" \
    -s 5,virtio-console,@stdio:stdio_port \
    -s 6,virtio-hyper_dmabuf \
-   -s 3,virtio-blk,/home/clear/uos/uos.img \
+   -s 3,virtio-blk,/home/acrn/uos.img \
    -s 4,virtio-net,tap0 \
    -s 7,virtio-rnd \
    --ovmf /usr/share/acrn/bios/OVMF.fd \
@@ -38,5 +38,3 @@ the bus hierarchy would be:
    00:06.0 RAM memory: Intel Corporation Device 8606
    00:08.0 Network and computing encryption device: Red Hat, Inc. Virtio RNG
    00:09.0 Ethernet controller: Red Hat, Inc. Virtio network device
-
-.. note:: For Clear Linux OS, the ``lspci`` command can be installed from the ``sysadmin-basic`` bundle.
@@ -6,7 +6,7 @@ Hypervisor Startup
 This section is an overview of the ACRN hypervisor startup.
 The ACRN hypervisor
 compiles to a 32-bit multiboot-compliant ELF file.
-The bootloader (ABL/SBL or UEFI) loads the hypervisor according to the
+The bootloader (ABL/SBL or GRUB) loads the hypervisor according to the
 addresses specified in the ELF header. The BSP starts the hypervisor
 with an initial state compliant to multiboot 1 specification, after the
 bootloader prepares full configurations including ACPI, E820, etc.
@@ -158,14 +158,10 @@ The main steps include:
 - **SW Load:** Prepares for each VM's SW configuration according to guest OS
   requirement, which may include kernel entry address, ramdisk address,
   bootargs, or zero page for launching bzImage etc.
-  This is done by the hypervisor for pre-launched or Service VM, while by DM
-  for post-launched User VMs.
-  Meanwhile, there are two kinds of boot modes - de-privilege and direct boot
-  mode. The de-privilege boot mode is combined with ACRN UEFI-stub, and only
-  applies to the Service VM, which ensures that the native UEFI environment could be restored
-  and keep running in the Service VM. The direct boot mode is applied to both the
-  pre-launched and Service VM. In this mode, the VM will start from the standard
-  real or protected mode which is not related to the native environment.
+  This is done by the hypervisor for pre-launched or Service VM, and the VM will
+  start from the standard real or protected mode which is not related to the
+  native environment. For post-launched VMs, the VM's SW configuration is done
+  by DM.
 
 - **Start VM:** The vBSP of vCPUs in this VM is kick to do schedule.
 
@@ -138,7 +138,7 @@ post-launched VMs (VM1 and VM2).
    -s 2,pci-gvt -G "$2" \
    -s 5,virtio-console,@stdio:stdio_port \
    -s 6,virtio-hyper_dmabuf \
-   -s 3,virtio-blk,/home/clear/uos/uos1.img \
+   -s 3,virtio-blk,/home/acrn/uos1.img \
    -s 4,virtio-net,tap0 \
    -s 6,ivshmem,test,4096 \
    -s 7,virtio-rnd \
@@ -153,7 +153,7 @@ post-launched VMs (VM1 and VM2).
 
    acrn-dm -A -m $mem_size -s 0:0,hostbridge \
    -s 2,pci-gvt -G "$2" \
-   -s 3,virtio-blk,/home/clear/uos/uos2.img \
+   -s 3,virtio-blk,/home/acrn/uos2.img \
    -s 4,virtio-net,tap0 \
    -s 5,ivshmem,test,4096 \
    --ovmf /usr/share/acrn/bios/OVMF.fd \
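Note: in the two launch scripts above, VM1 and VM2 each get an ``ivshmem`` device that names the same shared-memory region (``test``), which is what lets the post-launched VMs exchange data. A hedged way to confirm the device is visible from inside a guest (the grep pattern is an assumption about how the device is described; exact strings vary by ACRN release):

    # Inside a User VM: the ivshmem region appears as a PCI device.
    lspci | grep -i "shared memory"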
@@ -247,7 +247,7 @@ architecture and threat model for your application.
 
 - The previously highlighted technologies rely on the kernel, as a secure component, to enforce such policies. Because of this, we strongly recommend enabling secure boot for the Service VM, and extend the secureboot chain to any post-launched VM kernels.
 - To ensure no malicious software is introduced or persists, utilize the filesystem (FS) verification methods on every boot to extend the secure boot chain for post-launch VMs (kernel/FS).
-- Reference: ACRN secure boot extension guide (`ClearLinux <https://projectacrn.github.io/latest/tutorials/enable_laag_secure_boot.html?highlight=secure%20boot>`_, `Windows <https://projectacrn.github.io/latest/tutorials/waag-secure-boot.html>`_)
+- Reference: :ref:`how-to-enable-secure-boot-for-windows`
 - Reference Stack: `dm-verity <https://www.kernel.org/doc/html/latest/admin-guide/device-mapper/verity.html>`_
 
 .. note:: All the mentioned hardening techniques might require minor extra development efforts.
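Note: the checklist above keeps pointing at ``dm-verity`` for verifying post-launched VM filesystems on every boot. A minimal sketch of that idea with the standard ``veritysetup`` tool, assuming a read-only rootfs partition and a separate hash partition (the device paths and root-hash handling are illustrative, not taken from the ACRN guides):

    # One-time: build the hash tree; this prints the root hash to record securely.
    veritysetup format /dev/vda2 /dev/vda3

    # Every boot: map the verified device using the recorded root hash, then mount read-only.
    veritysetup open /dev/vda2 vroot /dev/vda3 <root-hash>
    mount -o ro /dev/mapper/vroot /mnt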
@@ -177,7 +177,7 @@ shown in :numref:`rules_arch_level` below.
 | External resource | Invalid E820 table or | Yes | The hypervisor shall | Invalid E820 table or |
 | provided by | invalid boot information| | panic during platform | invalid boot information|
 | bootloader | | | initialization | |
-| (UEFI or SBL) | | | | |
+| (GRUB or SBL) | | | | |
 +--------------------+-------------------------+--------------+---------------------------+-------------------------+
 | Physical resource | 1GB page is not | Yes | The hypervisor shall | 1GB page is not |
 | used by the | available on the | | panic during platform | available on the |
@@ -574,7 +574,6 @@ The following table shows some use cases of module level configuration design:
 * - Configuration data provided by firmware
   - This module is used to interact with firmware (UEFI or SBL), and the
     configuration data is provided by firmware.
-    For example, UP2 uses SBL and KBL NUC uses UEFI.
   - If a function pointer is used, the prerequisite is
     "hv_operation_mode != DETECT".
 
@@ -60,13 +60,6 @@ instructions on how to build ACRN using a container.
 
 Install the necessary tools for the following systems:
 
-* Clear Linux OS development system:
-
-  .. code-block:: none
-
-     $ sudo swupd bundle-add os-clr-on-clr os-core-dev python3-basic
-     $ pip3 install --user kconfiglib
-
 * Ubuntu development system:
 
   .. code-block:: none
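Note: the hunk above removes the Clear Linux tool-installation snippet and keeps only the Ubuntu one, whose body lies outside the hunk. For orientation, a hedged sketch of a typical Ubuntu equivalent (the package list is an assumption based on ACRN getting-started material of this era and varies by release):

    # Illustrative Ubuntu build prerequisites; not copied from the patch.
    sudo apt install gcc git make libssl-dev libpciaccess-dev uuid-dev \
         libsystemd-dev libevent-dev libxml2-dev python3 python3-pip
    pip3 install --user kconfiglib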
@@ -218,7 +211,7 @@ To modify the hypervisor configurations, you can either edit ``.config``
 manually, or you can invoke a TUI-based menuconfig (powered by kconfiglib) by
 executing ``make menuconfig``. As an example, the following commands
 (assuming that you are at the top level of the acrn-hypervisor directory)
-generate a default configuration file for UEFI, allowing you to modify some
+generate a default configuration file, allowing you to modify some
 configurations and build the hypervisor using the updated ``.config``:
 
 .. code-block:: none
@@ -246,7 +239,7 @@ Now you can build all these components at once as follows:
 
 .. code-block:: none
 
-   $ make FIRMWARE=uefi # Build the UEFI hypervisor with the new .config
+   $ make # Build hypervisor with the new .config
 
 The build results are found in the ``build`` directory. You can specify
 a different Output folder by setting the ``O`` ``make`` parameter,
@@ -271,7 +264,7 @@ of the acrn-hypervisor directory):
 .. code-block:: none
 
    $ make BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml \
-     SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx
+     SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml TARGET_DIR=xxx
 
 
 .. note::
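Note: the three build hunks above all drop ``FIRMWARE=uefi`` from the documented commands, so a single ``make`` invocation now covers the build. A hedged sketch of the resulting flow (``nuc7i7dnb``/``industry`` mirror the example in the hunk; ``TARGET_DIR=xxx`` is the literal placeholder used there, and the ``defconfig`` step is an assumption about how the default ``.config`` is produced):

    # Generate a default .config (or tune it via the TUI), then build everything.
    make defconfig          # or: make menuconfig
    make

    # Alternatively, build against an explicit board/scenario XML pair.
    make BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml \
         SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml \
         TARGET_DIR=xxx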
@@ -67,7 +67,7 @@ VM. The service VM can access hardware resources directly by running
 native drivers and it provides device sharing services to the user VMs
 through the Device Model. Currently, the service VM is based on Linux,
 but it can also use other operating systems as long as the ACRN Device
-Model is ported into it. A user VM can be Clear Linux*, Ubuntu*, Android*,
+Model is ported into it. A user VM can be Ubuntu*, Android*,
 Windows* or VxWorks*. There is one special user VM, called a
 post-launched Real-Time VM (RTVM), designed to run a hard real-time OS,
 such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM
@@ -394,71 +394,21 @@ and many other features.
 Boot Sequence
 *************
 
-.. _systemd-boot: https://www.freedesktop.org/software/systemd/man/systemd-boot.html
 .. _grub: https://www.gnu.org/software/grub/manual/grub/
 .. _Slim Bootloader: https://www.intel.com/content/www/us/en/design/products-and-solutions/technologies/slim-bootloader/overview.html
 
-ACRN supports two kinds of boots: **De-privilege boot mode** and **Direct
-boot mode**.
-
-De-privilege boot mode
-======================
-
-**De-privilege boot mode** is loaded by ``acrn.efi`` under a UEFI
-environment. The Service VM must be the first launched VM, (i.e. VM0).
-
-In :numref:`boot-flow`, we show a verified Boot Sequence with UEFI
-on an Intel Architecture platform NUC (see :ref:`hardware`).
-
-.. graphviz:: images/boot-flow.dot
-   :name: boot-flow
-   :align: center
-   :caption: ACRN Hypervisor De-privilege boot mode Flow
-
-The Boot process proceeds as follows:
-
-#. UEFI verifies and boots the ACRN hypervisor and Service VM Bootloader.
-#. UEFI (or Service VM Bootloader) verifies and boots the Service VM kernel.
-#. The Service VM kernel verifies and loads the ACRN Device Model and the Virtual
-   bootloader through ``dm-verity``.
-#. The virtual bootloader starts the User-side verified boot process.
-
-.. note::
-   To avoid a hardware resources conflict with the ACRN hypervisor, UEFI
-   services shall not use IOMMU. In addition, we only currently support the
-   UEFI timer with the HPET MSI.
-
-In this boot mode, both the Service and User VM boot options (e.g. Linux
-command-line parameters) are configured following the instructions for the EFI
-bootloader used by the Operating System (OS).
-
-* In the case of Clear Linux, the EFI bootloader is `systemd-boot`_ and the Linux
-  kernel command-line parameters are defined in the ``.conf`` files.
-
-.. note::
-
-   A virtual `Slim Bootloader`_ called ``vSBL``, can also be used to start User VMs. The
-   :ref:`acrn-dm_parameters` provides more information on how to boot a
-   User VM using ``vSBL``. Note that in this case, the kernel command-line
-   parameters are defined by the combination of the ``cmdline.txt`` passed
-   on to the ``iasimage`` script and in the launch script, via the ``-B``
-   option.
-
-Direct boot mode
-================
-
 The ACRN hypervisor can be booted from a third-party bootloader
-directly, called **Direct boot mode**. A popular bootloader is `grub`_ and is
+directly. A popular bootloader is `grub`_ and is
 also widely used by Linux distributions.
 
 :ref:`using_grub` has a introduction on how to boot ACRN hypervisor with GRUB.
 
-In :numref:`boot-flow-2`, we show the **Direct boot mode** sequence:
+In :numref:`boot-flow-2`, we show the boot sequence:
 
 .. graphviz:: images/boot-flow-2.dot
    :name: boot-flow-2
   :align: center
-   :caption: ACRN Hypervisor Direct boot mode Boot Flow
+   :caption: ACRN Hypervisor Boot Flow
 
 The Boot process proceeds as follows:
 
@@ -482,7 +432,7 @@ launch scripts.
 .. note::
 
    `Slim Bootloader`_ is an alternative boot firmware that can be used to
-   boot ACRN in **Direct boot mode**. The `Boot ACRN Hypervisor
+   boot ACRN. The `Boot ACRN Hypervisor
    <https://slimbootloader.github.io/how-tos/boot-acrn.html>`_ tutorial
    provides more information on how to use SBL with ACRN.
 
@@ -214,9 +214,6 @@ Additional scenario XML elements:
 ``GPU_SBDF`` (a child node of ``MISC_CFG``):
    Specify the Segment, Bus, Device, and function of the GPU.
 
-``UEFI_OS_LOADER_NAME`` (a child node of ``MISC_CFG``):
-   Specify the UEFI OS loader name.
-
 ``vm``:
    Specify the VM with VMID by its "id" attribute.
 
@@ -32,34 +32,3 @@ For example:
    module /boot/bzImage Linux_bzImage
    module /boot/bzImage2 Linux_bzImage2
 }
-
-For de-privilege mode, the parameters are specified in the ``efibootmgr -u`` command:
-
-.. code-block:: none
-   :emphasize-lines: 2
-
-   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
-     -u "uart=disabled"
-
-
-De-privilege mode hypervisor parameters
-***************************************
-
-The de-privilege mode hypervisor parameters can only be specified in the efibootmgr command.
-Currently we support the ``bootloader=`` parameter:
-
-+-----------------+-------------------------------------------------+-------------------------------------------------------------------------+
-| Parameter | Value | Description |
-+=================+=================================================+=========================================================================+
-| bootloader= | ``\EFI\org.clearlinux\bootloaderx64.efi`` | This sets the EFI executable to be loaded once the hypervisor is up |
-| | | and running. This is typically the bootloader of the Service OS. |
-| | | i.e. : ``\EFI\org.clearlinux\bootloaderx64.efi`` |
-+-----------------+-------------------------------------------------+-------------------------------------------------------------------------+
-
-For example:
-
-.. code-block:: none
-   :emphasize-lines: 2
-
-   $ sudo efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN NUC Hypervisor" \
-     -u "bootloader=\EFI\boot\bootloaderx64.efi"
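Note: after this hunk, only the GRUB way of passing hypervisor parameters remains, and the surviving example shows just the closing ``module`` lines of a ``grub.cfg`` entry. A hedged sketch of what a complete entry of that shape might look like (the menu title and the ``acrn.32.out`` path are illustrative assumptions consistent with the kept ``module`` lines):

    menuentry "ACRN hypervisor" {
       # Load the hypervisor ELF via multiboot, then the guest kernels as modules;
       # the module tags must match what the ACRN configuration expects.
       multiboot /boot/acrn.32.out
       module /boot/bzImage Linux_bzImage
       module /boot/bzImage2 Linux_bzImage2
    }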
@@ -459,8 +459,6 @@ In the current configuration, we will set
 ``i915.enable_initial_modeset=1`` in Service VM and
 ``i915.enable_initial_modeset=0`` in User VM.
 
-This parameter is not used on UEFI platforms.
-
 i915.domain_scaler_owner
 ========================
 
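Note: the last hunk keeps the ``i915.enable_initial_modeset`` guidance while dropping the UEFI-specific caveat. A hedged sketch of applying the Service VM side of that setting through the kernel command line on an Ubuntu-based Service VM (the GRUB variable and ``update-grub`` step are assumptions about the host setup, not part of the patch; the User VM side is normally carried by the launch script's ``-B`` boot arguments):

    # /etc/default/grub on the Service VM (illustrative):
    GRUB_CMDLINE_LINUX="... i915.enable_initial_modeset=1"
    sudo update-grub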