doc: update release_3.0 docs for draft publishing
To test the daily configurator's references to published documentation, we need a
draft of the 3.0 docs published. This applies all the available updates from the
master branch to the release_3.0 branch. We'll update these again for the final
3.0 doc release.

Tracked-On: #5692
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
@@ -48,9 +48,9 @@ the TSC and its membership, are described in the project's `technical-charter`_.

 These are the current TSC voting members and chair person:

-- Anthony Xu (chair): anthony.xu@intel.com
+- Junjie Mao (chair): junjie.mao@intel.com
 - Helmut Buchsbaum: helmut.buchsbaum@tttech-industrial.com
-- Thomas Gleixner: tglx@linutronix.de
+- Thomas Gleixner: thomas.gleixner@intel.com

 .. _ACRN user mailing list: https://lists.projectacrn.org/g/acrn-user
 .. _BSD 3-Clause license: https://github.com/projectacrn/acrn-hypervisor/blob/master/LICENSE
@@ -3,7 +3,7 @@
 #
 ^WARNING: Not copying tabs assets! Not compatible with latex builder
 #
-^Latexmk: Summary of warnings:
+^Latexmk: Summary of warnings.*:
 ^ Latex failed to resolve [0-9]+ reference\(s\)
 ^ Latex failed to resolve [0-9]+ citation\(s\)
 #
@@ -20,3 +20,7 @@
 ^ =====Latex reported missing or unavailable character\(s\).
 ^=====See log file for details.
 #
+^Collected error summary \(may duplicate other messages\):
+^ pdflatex: Command for 'pdflatex' gave return code 1
+^ Refer to 'acrn.log' for details
+#
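These patterns are grep-style regular expressions that suppress known-benign noise in the doc build log; anything left after filtering is treated as a real problem. A minimal sketch of how two of the patterns above behave (the log lines and the `/tmp/doc.log` path here are invented for illustration):

```shell
# Fake log with two benign latexmk lines and one real error.
printf '%s\n' \
  'Latexmk: Summary of warnings from last run of (pdf)latex:' \
  ' Latex failed to resolve 3 reference(s)' \
  'Some other unexpected error' > /tmp/doc.log

# grep -v drops every line matching a known-benign pattern;
# only the unexpected error survives.
grep -v -E -e '^Latexmk: Summary of warnings.*:' \
           -e '^ Latex failed to resolve [0-9]+ reference\(s\)' /tmp/doc.log
```

This also shows why the `.*:` change above matters: the real latexmk line ends with `from last run of (pdf)latex:`, so the old anchored pattern `^Latexmk: Summary of warnings:` would not have matched it.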
@@ -15,6 +15,9 @@ BUILDDIR ?= _build
 SOURCEDIR = $(BUILDDIR)/rst
 LATEXMKOPTS = -silent

+# should the config option doc show hidden config options?
+XSLTPARAM ?= --stringparam showHidden 'n'
+
 # document publication assumes the folder structure is setup
 # with the acrn-hypervisor and projectacrn.github.io repos as
 # sibling folders and make is run inside the acrn-hypervisor/docs
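Make's `?=` assignment gives `XSLTPARAM` its default only when the caller has not already set it, so a user can override the hidden-options behavior on the command line. A rough shell sketch of that semantics (the variable and default value are reused from the Makefile purely for illustration):

```shell
# Shell analogue of `XSLTPARAM ?= --stringparam showHidden 'n'`:
# the default applies only when no value was supplied.
default="--stringparam showHidden n"

unset XSLTPARAM
echo "${XSLTPARAM:-$default}"   # no caller value: the default is used

XSLTPARAM="--stringparam showHidden y"
echo "${XSLTPARAM:-$default}"   # caller value wins over the default
```

In the Makefile itself the override would be spelled as something like `make XSLTPARAM="--stringparam showHidden 'y'" html`.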
@@ -54,7 +57,7 @@ content:
 $(Q)scripts/extract_content.py $(SOURCEDIR) misc
 $(Q)mkdir -p $(SOURCEDIR)/misc/config_tools/schema
 $(Q)rsync -rt ../misc/config_tools/schema/*.xsd $(SOURCEDIR)/misc/config_tools/schema
-$(Q)xsltproc -xinclude ./scripts/configdoc.xsl $(SOURCEDIR)/misc/config_tools/schema/config.xsd > $(SOURCEDIR)/reference/configdoc.txt
+$(Q)xsltproc $(XSLTPARAM) -xinclude ./scripts/configdoc.xsl $(SOURCEDIR)/misc/config_tools/schema/config.xsd > $(SOURCEDIR)/reference/configdoc.txt


 html: content doxy
@@ -69,7 +72,7 @@ singlehtml: content doxy

 pdf: html
 @echo now making $(BUILDDIR)/latex/acrn.pdf
-	-$(Q)make -silent latexpdf LATEXMKOPTS=$(LATEXMKOPTS) >> $(BUILDDIR)/doc.log 2>&1
+	$(Q)make -silent latexpdf LATEXMKOPTS=$(LATEXMKOPTS) >> $(BUILDDIR)/doc.log 2>&1
 $(Q)./scripts/filter-doc-log.sh $(BUILDDIR)/doc.log
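The dropped leading `-` in the `pdf` recipe is significant: in Make, a `-` prefix tells Make to ignore a failing command, so with this change a `latexpdf` failure now fails the `pdf` target instead of being silently tolerated. A shell sketch of the two behaviors (the echoed messages are invented):

```shell
# With "-": the failure of `false` is ignored and the recipe continues.
sh -c 'false; echo "with -: failure ignored, build continues"'

# Without "-": the failure propagates and later steps never run.
sh -c 'set -e; false; echo unreachable' \
  || echo "without -: failure stops the target"
```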
@@ -319,7 +319,7 @@ VerbatimBorderColor={HTML}{00285A}',
 # author, documentclass [howto, manual, or own class]).
 latex_documents = [
     (master_doc, 'acrn.tex', u'Project ACRN Documentation',
-     u'Project ACRN', 'manual'),
+     u'Project ACRN', 'manual', True),
 ]

 latex_logo = 'images/ACRN_Logo_PrimaryLockup_COLOR-300x300-1.png'
@@ -39,9 +39,9 @@ Configuration Tutorials

    tutorials/acrn_configuration_tool
    tutorials/board_inspector_tool
    tutorials/acrn_configurator_tool
+   tutorials/upgrading_configuration
    reference/config-options
-   reference/config-options-launch
    reference/hv-make-options
    user-guides/hv-parameters
    user-guides/kernel-parameters
@@ -57,7 +57,6 @@ Advanced Features
 .. toctree::
    :maxdepth: 1

    tutorials/nvmx_virtualization
-   tutorials/vuart_configuration
    tutorials/rdt_configuration
    tutorials/vcat_configuration
@@ -72,8 +72,9 @@ See :ref:`hardware`.
 Select Your Scenario
 ====================

-A scenario defines a specific ACRN configuration, such as the type and number of
-VMs that can be run, their attributes, and the resources they have access to.
+A scenario defines a specific ACRN configuration, such as hypervisor
+capabilities, the type and number of VMs that can be run, their attributes, and
+the resources they have access to.

 This image shows an example of an ACRN scenario to illustrate the types of VMs
 that ACRN offers:
@@ -82,13 +83,12 @@ that ACRN offers:

 ACRN offers three types of VMs:

-* **Pre-launched User VMs**: These VMs are automatically launched at boot time
-  by the hypervisor. They run independently of other VMs and own dedicated
-  hardware resources, such as a CPU core, memory, and I/O devices. Other VMs,
-  including the Service VM, may not even be aware of a pre-launched VM's
-  existence. The configuration of pre-launched VMs is static and must be defined
-  at build time. They are well-suited for safety-critical applications and where
-  very strict isolation, including from the Service VM, is desirable.
+* **Pre-launched User VMs**: These VMs run independently of other VMs and own
+  dedicated hardware resources, such as CPU cores, memory, and I/O devices.
+  Other VMs, including the Service VM, may not even be aware of a pre-launched
+  VM's existence. The configuration of pre-launched VMs is static and must be
+  defined at build time. They are well-suited for safety-critical applications
+  and where very strict isolation, including from the Service VM, is desirable.

 * **Service VM**: A special VM, required for scenarios that have post-launched
   User VMs. The Service VM can access hardware resources directly by running
@@ -116,29 +116,29 @@ meet their requirements without pre-launched VMs. Even if your application has
 stringent real-time requirements, start by testing the application on a
 post-launched VM before considering a pre-launched VM.

-Predefined Scenarios
----------------------
+Scenario Types
+---------------

-To help accelerate the configuration process, ACRN offers the following
-:ref:`predefined sample scenarios <usage-scenarios>`:
+ACRN categorizes scenarios into :ref:`three types <usage-scenarios>`:

 * **Shared scenario:** This scenario represents a traditional computing, memory,
   and device resource sharing model among VMs. It has post-launched User VMs and
   the required Service VM. There are no pre-launched VMs in this scenario.

-* **Partitioned scenario:** This scenario has pre-launched User VMs to
-  demonstrate VM partitioning: the User VMs are independent and isolated, and
-  they do not share resources. There is no need for the Service VM or Device
-  Model because all partitioned VMs run native device drivers and directly
-  access their configured resources.
+* **Partitioned scenario:** This scenario has pre-launched User VMs only. It
+  demonstrates VM partitioning: the User VMs are independent and isolated, and
+  they do not share resources. For example, a pre-launched VM may not share a
+  storage device with any other VM, so each pre-launched VM requires its own
+  boot device. There is no need for the Service VM or Device Model because all
+  partitioned VMs run native device drivers and directly access their configured
+  resources.

 * **Hybrid scenario:** This scenario simultaneously supports both sharing and
-  partitioning on the consolidated system. It has pre-launched and
+  partitioning on the consolidated system. It has pre-launched VMs and
   post-launched VMs, along with the Service VM.

-ACRN provides predefined configuration files and documentation to help you set
-up these scenarios. You can customize the files for your use case, as described
-later in :ref:`overview_dev_config_editor`.
+While designing your scenario, keep these concepts in mind as you will see them
+mentioned in ACRN components and documentation.

 |icon_host| Step 2: Prepare the Development Computer
 ****************************************************
@@ -160,7 +160,7 @@ ACRN:

 .. |icon_target| image:: ./images/icon_target.png

-The :ref:`board_inspector_tool` ``board_inspector.py``, found in the ACRN
+The :ref:`board_inspector_tool`, found in the ACRN
 hypervisor source code, enables you to generate a board configuration file on
 the target system.

@@ -181,8 +181,8 @@ You must configure all of your target's BIOS settings before running the Board
 Inspector tool, because the tool records the current BIOS settings in the board
 configuration file.

-ACRN requires the BIOS settings listed in :ref:`gsg-board-setup` of the Getting
-Started Guide.
+ACRN requires the BIOS settings listed in :ref:`gsg-board-setup` of
+the Getting Started Guide.

 Use the Board Inspector to Generate a Board Configuration File
 ==============================================================
@@ -208,67 +208,22 @@ and :ref:`overview_dev_build`.
 |icon_host| Step 4: Generate a Scenario Configuration File and Launch Scripts
 *****************************************************************************

-The :ref:`acrn_configurator_tool` ``acrn_configurator.py`` enables you to
-configure your ACRN hypervisor and VMs via a web-based user interface on your
-development computer. Using the tool, you define your scenario settings and save
-them to a scenario configuration file. For scenarios with post-launched User
-VMs, you must also configure and generate launch scripts.
+The :ref:`acrn_configurator_tool` lets you configure your scenario settings via
+a graphical user interface (GUI) on your development computer.

-The following sections provide an overview and important information to keep
-in mind when using the ACRN Configurator.
+The tool imports the board configuration file that you generated in
+:ref:`overview_dev_board_config`. Then you can configure your scenario, such as
+set hypervisor capabilities, add VMs, modify their attributes, and delete VMs.
+The tool validates your inputs against your board configuration file to ensure
+the scenario is supported by the target hardware. The tool saves your settings
+to a **scenario configuration file** in XML format. You will need this file in
+:ref:`overview_dev_build`.
-
-Generate a Scenario Configuration File
-======================================
-
-A **scenario configuration file** defines a working scenario by configuring
-hypervisor capabilities and defining some VM attributes and resources. We call
-these settings "static" because they are used to build the hypervisor. The file
-contains:
-
-* All hypervisor settings
-* All pre-launched User VM settings
-* All Service VM settings
-* Some post-launched User VM settings, while other settings are in
-  the launch script
-
-Before using the ACRN Configurator to generate a scenario configuration
-file, be sure you have the board configuration file that you generated in
-:ref:`overview_dev_board_config`. The tool needs the board configuration file to
-validate that your custom scenario is supported by the target hardware.
-
-You can use the tool to create a new scenario configuration file or modify an
-existing one, such as a predefined scenario described in
-:ref:`overview_dev_hw_scenario`. The tool's GUI enables you to edit the
-configurable items in the file, such as adding VMs, modifying VM attributes, or
-deleting VMs. The tool validates your inputs against your board configuration
-file. After validation is successful, the tool generates your custom scenario
-configuration file in XML format.
-
-Generate Launch Scripts
-=======================
-
-A **launch script** invokes the Service VM's Device Model to create a
-post-launched User VM. The launch script defines settings needed to launch the
-User VM and emulate the devices configured for sharing with that User VM. We
-call these settings "dynamic" because they are used at runtime.
-
-Before using the ACRN Configurator to generate a launch script, be sure
-you have your board configuration file and scenario configuration file. The tool
-needs both files to validate your launch script configuration.
-
-The process of generating launch scripts begins by choosing to create a new
-launch configuration or modify an existing one. You then use the GUI to
-edit the configurable settings of each post-launched User VM in your scenario.
-The tool validates your inputs against your board configuration file and
-scenario configuration file. After validation is successful, the tool generates
-your custom launch configuration file in XML format. You then use the tool to
-generate the launch scripts. The tool creates one launch script for each VM
-defined in the launch configuration file.
-
-.. note::
-   The ACRN Configurator may not show all editable
-   parameters for scenario configuration files and launch scripts. You can edit
-   the parameters manually. See :ref:`acrn_config_data`.
+
+If your scenario configuration has post-launched User VMs, the tool also
+generates a **launch script** for each of those VMs. The launch script contains
+the settings needed to launch the User VM and emulate the devices configured for
+sharing with that User VM. You will run this script in the Service VM in
+:ref:`overview_dev_install`.

 .. _overview_dev_build:
@@ -284,8 +239,7 @@ If your scenario has a Service VM, you also need to build the ACRN kernel for
 the Service VM. The ACRN kernel source code provides a predefined configuration
 file and a makefile to build the ACRN kernel binary and associated components.
 The kernel build can take 15 minutes or less on a fast computer, but could take
-an hour or more depending on the performance of your development computer. For
-more information about the kernel parameters, see :ref:`kernel-parameters`.
+an hour or more depending on the performance of your development computer.

 .. _overview_dev_install:

@@ -303,9 +257,10 @@ At a high level, you will:
 * Configure GRUB to boot the ACRN hypervisor, pre-launched VMs, and Service VM.
   Reboot the target, and launch ACRN.

-* If your scenario contains a post-launched VM, install an OS image for the
+* If your scenario contains a post-launched User VM, install an OS image for the
   post-launched VM and run the launch script you created in
-  :ref:`overview_dev_config_editor`.
+  :ref:`overview_dev_config_editor`. The script invokes the Service VM's Device
+  Model to create the User VM.

 Learn More
 **********
@@ -313,6 +268,5 @@ Learn More
 * To get ACRN up and running for the first time, see the :ref:`gsg` for
   step-by-step instructions.

-* If you have already completed the :ref:`gsg` , see the
-  :ref:`develop_acrn` for more information about complex scenarios, advanced
-  features, and debugging.
+* If you have already completed the :ref:`gsg` , see the :ref:`develop_acrn` for
+  more information about configuring and debugging ACRN.
@@ -1,92 +0,0 @@
-.. _launch-config-options:
-
-Launch Configuration Options
-##############################
-
-As explained in :ref:`acrn_configuration_tool`, launch configuration files
-define post-launched User VM settings. This document describes these option settings.
-
-``user_vm``:
-   Specify the User VM ``id`` to the Service VM.
-
-``user_vm_type``:
-   Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``,
-   ``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``YOCTO``, ``UBUNTU``,
-   ``ZEPHYR`` or ``VXWORKS``.
-
-``rtos_type``:
-   Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them.
-
-``mem_size``:
-   Specify the User VM memory size in megabytes.
-
-``vbootloader``:
-   Virtual bootloader type; only supports OVMF.
-
-``vuart0``:
-   Specify whether the Device Model emulates the vUART0 (vCOM1); refer to
-   :ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is
-   emulated by the Device Model; if set to ``Disable``, the vUART0 is
-   emulated by the hypervisor if it is configured in the scenario XML.
-
-``enable_ptm``:
-   Enable the Precision Timing Measurement (PTM) feature.
-
-``usb_xhci``:
-   USB xHCI mediator configuration. Input format:
-   ``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``.
-   Refer to :ref:`usb_virtualization` for details.
-
-``shm_regions``:
-   List of shared memory regions for inter-VM communication.
-
-``shm_region`` (a child node of ``shm_regions``):
-   Configure the shared memory regions for the current VM, input format:
-   ``[hv|dm]:/<shm name>,<shm size in MB>``. Refer to :ref:`ivshmem-hld`
-   for details.
-
-``console_vuart``:
-   Enable a PCI-based console vUART. Refer to :ref:`vuart_config` for details.
-
-``communication_vuarts``:
-   List of PCI-based communication vUARTs. Refer to :ref:`vuart_config` for
-   details.
-
-``communication_vuart`` (a child node of ``communication_vuarts``):
-   Enable a PCI-based communication vUART with its ID. Refer to
-   :ref:`vuart_config` for details.
-
-``passthrough_devices``:
-   Select the passthrough device from the PCI device list. We support:
-   ``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``,
-   ``cse``, ``wifi``, ``bluetooth``, ``sd_card``,
-   ``ethernet``, ``sata``, and ``nvme``.
-
-``network`` (a child node of ``virtio_devices``):
-   The virtio network device setting.
-   Input format: ``<device_name>[,vhost][,mac=<XX:XX:XX:XX:XX:XX>]``.
-   The ``<device_name>`` is the name of the TAP (or MacVTap) device.
-   It must include the keyword ``tap``. ``vhost`` specifies the
-   vhost backend; otherwise, the VBSU backend is used. The ``mac``
-   address is optional.
-
-``block`` (a child node of ``virtio_devices``):
-   The virtio block device setting.
-   Input format: ``[blk partition:][img path]`` e.g.: ``/dev/sda3:./a/b.img``.
-
-``console`` (a child node of ``virtio_devices``):
-   The virtio console device setting.
-   Input format:
-   ``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``.
-
-``cpu_affinity``:
-   A comma-separated list of Service VM vCPUs assigned to this VM. A Service VM
-   vCPU is identified by its lapic ID.
-
-.. note::
-
-   The ``configurable`` and ``readonly`` attributes are used to mark
-   whether the item is configurable for users. When ``configurable="n"``
-   and ``readonly="y"``, the item is not configurable from the web
-   interface. When ``configurable="n"``, the item does not appear on the
-   interface.
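The deleted file's ``network`` option uses a small comma-separated format, ``<device_name>[,vhost][,mac=<XX:XX:XX:XX:XX:XX>]``. A hypothetical parse of one such value (the `tap0` spec and field names are invented for illustration; this is not the Device Model's actual parser):

```shell
# Split a virtio network setting into its three fields.
spec="tap0,vhost,mac=00:16:3e:01:02:03"
IFS=',' read -r dev backend mac <<EOF
$spec
EOF
# Per the option description, the backend defaults to VBSU when
# "vhost" is absent, and the mac field is optional.
echo "device=$dev backend=${backend:-vbsu} $mac"
```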
@@ -37,19 +37,19 @@ String

 .. comment These images are used in generated option documentation

-.. |icon-advanced| image:: images/Advanced.svg
+.. |icon-advanced| image:: images/Advanced.png
    :alt: Find this option on the Configurator's Advanced Parameters tab
-.. |icon-basic| image:: images/Basic.svg
+.. |icon-basic| image:: images/Basic.png
    :alt: Find this option on the Configurator's Basic Parameters tab
-.. |icon-not-available| image:: images/Not-available.svg
+.. |icon-not-available| image:: images/Not-available.png
    :alt: This is a hidden option and not user-editable using the Configurator
-.. |icon-post-launched-vm| image:: images/Post-launched-VM.svg
+.. |icon-post-launched-vm| image:: images/Post-launched-VM.png
    :alt: Find this option on a Configurator Post-launched VM tab
-.. |icon-pre-launched-vm| image:: images/Pre-launched-VM.svg
+.. |icon-pre-launched-vm| image:: images/Pre-launched-VM.png
    :alt: Find this option on a Configurator Pre-launched VM tab
-.. |icon-service-vm| image:: images/Service-VM.svg
+.. |icon-service-vm| image:: images/Service-VM.png
    :alt: Find this option on the Configurator Service VM tab
-.. |icon-hypervisor| image:: images/Hypervisor.svg
+.. |icon-hypervisor| image:: images/Hypervisor.png
    :alt: Find this option on the Configurator's Hypervisor Global Settings tab

 We use icons within an option description to indicate where the option can be
BIN  doc/reference/images/Advanced.png (new file, 1.7 KiB)
BIN  doc/reference/images/Basic.png (new file, 1.5 KiB)
BIN  doc/reference/images/Hypervisor.png (new file, 1.3 KiB)
BIN  doc/reference/images/Not-available.png (new file, 1.5 KiB)
BIN  doc/reference/images/Post-launched-VM.png (new file, 1.8 KiB)
BIN  doc/reference/images/Pre-launched-VM.png (new file, 1.7 KiB)
BIN  doc/reference/images/Service-VM.png (new file, 1.3 KiB)
@@ -5,6 +5,8 @@
 <xsl:variable name="section_adornment" select="'#*=-%+@`'"/>
 <xsl:variable name="vLower" select="'abcdefghijklmnopqrstuvwxyz'"/>
 <xsl:variable name="vUpper" select="'ABCDEFGHIJKLMNOPQRSTUVWXYZ'"/>
+<!-- Default is to not show hidden options (acrn:views=''), overridden by passing - -stringparam showHidden 'y' to xsltproc -->
+<xsl:param name="showHidden" select="n" />
 <!-- xslt script to autogenerate config option documentation -->
 <!-- Get things started with the ACRNConfigType element -->
 <xsl:template match="/xs:schema">
@@ -57,7 +59,7 @@
      described as an option -->
 <xsl:choose>
   <!-- don't document elements if not viewable -->
-  <xsl:when test="xs:annotation/@acrn:views=''">
+  <xsl:when test="xs:annotation/@acrn:views='' and $showHidden='n'">
   </xsl:when>
   <xsl:when test="//xs:complexType[@name=$ty]">
     <!-- The section header -->
@@ -202,6 +202,3 @@ The ``scenario`` attribute specifies the scenario name and must match the

 The ``user_vm_launcher`` attribute specifies the number of post-launched User
 VMs in a scenario.
-
-See :ref:`launch-config-options` for a full explanation of available launch
-XML elements.
doc/tutorials/acrn_configurator_tool.rst (new file, 360 lines)
@@ -0,0 +1,360 @@
+.. _acrn_configurator_tool:
+
+ACRN Configurator Tool
+######################
+
+This guide describes all features and uses of the tool.
+
+About the ACRN Configurator Tool
+*********************************
+
+The ACRN Configurator ``acrn_configurator.py`` provides a user interface to help
+you customize your :ref:`ACRN configuration <acrn_configuration_tool>`.
+Capabilities:
+
+* Reads board information from the specified board configuration file
+* Helps you configure a scenario of hypervisor and VM settings
+* Generates a scenario configuration file that stores the configured settings in
+  XML format
+* Generates a launch script for each post-launched User VM
+
+Prerequisites
+*************
+
+This guide assumes you have a board configuration file and have successfully
+launched the ACRN Configurator. For steps, see the following Getting Started
+Guide sections:
+
+* :ref:`gsg-target-hardware`
+* :ref:`gsg-dev-computer`
+* :ref:`gsg-board-setup`
+* :ref:`gsg-dev-setup`
+
+Start with a New or Existing Configuration
+******************************************
+
+When the ACRN Configurator opens, the introduction screen appears.
+
+.. image:: images/configurator-intro.png
+   :align: center
+   :class: drop-shadow
+
+The introduction screen lets you start a new configuration or use an existing
+one by selecting a working folder.
+
+As described in :ref:`acrn_configuration_tool`, a configuration defines one
+ACRN instance, and its data is stored in a set of configuration files:
+
+* One board configuration file
+* One scenario configuration file
+* One launch script per post-launched VM
+
+When you use the ACRN Configurator, it saves these files in the selected working
+folder.
+
+Each configuration must have a unique working folder. For example, imagine you
+want to create three configurations. Perhaps you want to create a configuration
+for three different boards, or you have one board but want to create three sets
+of hypervisor settings to test on it. You would need to select a different
+working folder for each configuration. After you have selected the working
+folder in the ACRN Configurator, it saves the configuration files there. The
+following figure shows an example file system consisting of a top-level folder,
+``acrn-work``, and a working folder for each configuration, ``ConfigA``,
+``ConfigB``, and ``ConfigC``.
+
+.. image:: images/config-file.png
+   :align: center
+
+Start a New Configuration
+==========================
+
+You can start by selecting a new working folder. The tool assumes you are
+starting from scratch. It checks the folder for existing configuration files,
+such as a board configuration file, scenario configuration file, and launch
+scripts. If it finds any, it will delete them.
+
+1. Under **Start a new configuration**, use the displayed working folder or
+   select a different folder by clicking **Browse for folder**.
+
+   .. image:: images/configurator-newconfig.png
+      :align: center
+      :class: drop-shadow
+
+#. If the folder contains configuration files, the tool displays a message about
+   deleting the files. Click **OK** to delete the files.
+
+#. Click **Use This Folder**.
+
+Use an Existing Configuration
+=============================
+
+You can use an existing configuration by selecting a working folder that has one
+or more configuration files in it. For example, the folder can contain a board
+configuration file alone, or a board configuration file and scenario
+configuration file. The tool uses the information in the files to populate the
+UI, so that you can continue working on the configuration where you left off.
+
+1. Under **Use an existing configuration**, use the displayed working folder or
+   select a different folder by clicking **Browse for folder**.
+
+   .. image:: images/configurator-exconfig.png
+      :align: center
+      :class: drop-shadow
+
+#. Click **Open Folder**.
+
+Navigate the Configuration Screen
+*********************************
+
+After you have selected a working folder, the tool opens the second (and final)
+screen, where you can customize your configuration. The following figure shows
+an example:
+
+.. image:: images/configurator-configscreen.png
+   :align: center
+   :class: drop-shadow
+
+At the top of the screen, the tool shows the selected working folder. To return
+to the introduction screen, click the arrow next to the working folder path:
+
+.. image:: images/configurator-backintro.png
+   :align: center
+   :class: drop-shadow
+
+The rest of the configuration screen is divided into three panels:
+
+1. Import a board configuration file
+#. Create new or import an existing scenario
+#. Configure settings for scenario and launch scripts
+
+The panels are labeled 1, 2, and 3 to guide you through the configuration steps.
+The tool also enforces this order of operation by enabling each panel only after
+you have completed the preceding panel.
+
+The title bar of each panel has an arrow icon. Click the icon to expand
+or collapse the panel.
+
+.. image:: images/configurator-expand.png
+   :align: center
+   :class: drop-shadow
+
+Import a Board Configuration File
+**********************************
+
+The first step in the configuration process is to import the board configuration
+file generated via the :ref:`board_inspector_tool`. You can import a board
+configuration file for the first time, or replace the existing file.
+
+Import a Board Configuration File for the First Time
+====================================================
+
+If the working folder doesn't have a board configuration file, the tool shows
+that no board information has been imported yet.
+
+To import a board configuration file for the first time:
+
+1. Under **Import a board configuration file**, select a board configuration
+   file from the dropdown menu or click **Browse for file** to select a
+   different file.
+
+   .. image:: images/configurator-board01.png
+      :align: center
+      :class: drop-shadow
+
+#. Click **Import Board File**.
+
+   The tool makes a copy of your board configuration file, changes the
+   file extension to ``.board.xml``, and saves the file in the working folder.
+
+   The tool displays the current board information. Example:
+
+   .. image:: images/configurator-board02.png
+      :align: center
+      :class: drop-shadow
+
+Replace an Existing Board Configuration File
+============================================
+
+After a board configuration file has been imported, you can choose to replace it
+at any time. This option is useful, for example, when you need to iterate your
+board's configuration while you are customizing your hypervisor settings.
+Whenever you change the configuration of your board, you must generate a new
+board configuration file via the :ref:`board_inspector_tool`. Examples include
+changing any BIOS setting such as hyper-threading, adding or removing a physical
+device, or adding or removing memory. If this happens after you've started
+customizing your hypervisor in the ACRN Configurator, you can import the new
+board file into your existing configuration and continue editing.
+
+To replace an existing board configuration file:
+
+1. Under **Import a board configuration file**, click **Use a Different Board**.
+
+   .. image:: images/configurator-board03.png
+      :align: center
+      :class: drop-shadow
+
+#. Browse to the board configuration file and click **Open**.
+
+#. The tool displays a warning message about overwriting the existing file.
+   Click **Ok** to proceed.
+
+   The tool replaces the file and displays the new board information.
+
+Create New or Import an Existing Scenario
+*******************************************
+
+After importing the board configuration file, the next step is to specify an
+initial scenario. You can create a new scenario, or import an existing scenario
+configuration file. In both cases, this step is a starting point for configuring
+your hypervisor and VMs. Later, you can choose to change the configuration, such
+as adding or deleting VMs.
+
+Create a Scenario
+=================
+
+You can create a scenario by specifying an initial number of VMs.
+
+1. Under **Create new or import an existing scenario**, click **Create
+   Scenario**.
+
+   .. image:: images/configurator-newscenario01.png
+      :align: center
+      :class: drop-shadow
+
+#. In the dialog box, select a scenario type and number of VMs. The tool
+   enforces dependencies. For example, a scenario with post-launched VMs must
+   have a Service VM, so the tool adds a Service VM and doesn't allow you to
+   delete it here.
+
+   .. image:: images/configurator-newscenario02.png
+      :align: center
+      :class: drop-shadow
+
+#. Click **Ok**.
+
+   The tool displays the name of the scenario configuration file, but it doesn't
+   save it to the working folder until you click **Save Scenario And Launch
+   Scripts** in the third panel.
+
+Import a Scenario Configuration File
+====================================
+
+You can import an existing scenario configuration file. The tool uses the
+information in the file to populate the UI, so that you can continue working on
|
||||
the configuration where you left off.
|
||||
|
||||
1. Due to the strict validation ACRN adopts, scenario configuration files for a
|
||||
former release may not work for a latter if they are not upgraded. Starting
|
||||
from v3.0, upgrade an older scenario XML per the steps in
|
||||
:ref:`upgrading_configuration` then import the upgraded file into the tool in
|
||||
the next step.
|
||||
|
||||
#. Under **Create new or import an existing scenario**, go to the right side of
|
||||
the screen and select a scenario configuration file from the dropdown menu or
|
||||
click **Browse for scenario file** to select a different file.
|
||||
|
||||
.. image:: images/configurator-exscenario.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
#. Click **Import Scenario**.
|
||||
|
||||
The tool displays the name of the scenario configuration file, but it doesn't
|
||||
save it to the working folder until you click **Save Scenario And Launch
|
||||
Scripts** in the third panel.
|
||||
|
||||
Configure Settings for Scenario and Launch Scripts
|
||||
**************************************************
|
||||
|
||||
After creating a scenario or importing an existing one, you can configure
|
||||
hypervisor and VM parameters, as well as add or delete VMs.
|
||||
|
||||
Configure the Hypervisor and VM Parameters
|
||||
==========================================
|
||||
|
||||
1. Click the hypervisor or VM tab in the selector menu. The selected tab is
|
||||
darker in color.
|
||||
|
||||
.. image:: images/configurator-selecthypervisor.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
#. Click the Basic Parameters tab or Advanced Parameters tab and make updates.
|
||||
To learn more about each parameter, hover over the |tooltip| icon for a short
|
||||
description or go to :ref:`scenario-config-options` for documentation.
|
||||
|
||||
.. |tooltip| image:: images/tooltip.png
|
||||
|
||||
Basic parameters are generally defined as:
|
||||
|
||||
* Parameters that are necessary for ACRN configuration, compilation, and
|
||||
execution.
|
||||
|
||||
* Parameters that are common for software like ACRN.
|
||||
|
||||
Advanced parameters are generally defined as:
|
||||
|
||||
* Parameters that are optional for ACRN configuration, compilation, and
|
||||
execution.
|
||||
|
||||
* Parameters that are used for fine-grained tuning, such as reducing code
|
||||
lines or optimizing performance. Default values cover most use cases.
|
||||
|
||||
Add a VM
|
||||
=========
|
||||
|
||||
In the selector menu, click **+** to add a pre-launched VM or post-launched VM.
|
||||
|
||||
.. image:: images/configurator-addvm.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
Delete a VM
|
||||
============
|
||||
|
||||
1. In the selector menu, click the VM tab. The selected tab is darker in color.
|
||||
|
||||
#. Click **Delete VM**.
|
||||
|
||||
.. image:: images/configurator-deletevm.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
Save and Check for Errors
|
||||
=========================
|
||||
|
||||
#. To save your configuration, click **Save Scenario And Launch Scripts** at the
|
||||
top of the panel.
|
||||
|
||||
.. image:: images/configurator-save.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
The tool saves your configuration data in a set of files in the working folder:
|
||||
|
||||
* Scenario configuration file (``scenario.xml``): Raw format of all
|
||||
hypervisor and VM settings. You will need this file to build ACRN.
|
||||
|
||||
* One launch script per post-launched VM (``launch_user_vm_id*.sh``): This
|
||||
file is used to start the post-launched VM in the Service VM. You can find
|
||||
the VM's name inside the script:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
# Launch script for VM name: <name>
|
||||
|
||||
The tool validates hypervisor and VM settings whenever you save. If an error
|
||||
occurs, such as an empty required field, the tool saves the changes to the
|
||||
files, but prompts you to correct the error. Error messages appear below the
|
||||
applicable settings. Example:
|
||||
|
||||
.. image:: images/configurator-rederror.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
#. Fix the errors and save again to generate a valid configuration.
|
||||
|
||||
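
Because each generated launch script records its VM in a ``# Launch script for
VM name:`` comment, a small helper can recover the name from a script. This is
an illustrative sketch only (the ``vm_name`` function is hypothetical, not part
of ACRN):

.. code-block:: bash

   # Hypothetical helper: print the VM name recorded in a generated
   # launch script (launch_user_vm_id*.sh).
   vm_name() {
     sed -n 's/^# Launch script for VM name: //p' "$1"
   }

For example, ``vm_name launch_user_vm_id1.sh`` prints the name of the first
post-launched VM.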

Next Steps
==========

After generating a valid scenario configuration file, you can build ACRN. See
:ref:`gsg_build`.
.. _nested_virt:

Enable Nested Virtualization
############################

With nested virtualization enabled in ACRN, you can run virtual machine
instances inside a guest VM (also called a User VM) running on the ACRN
hypervisor. Although both "level 1" guest VMs and nested guest VMs can be
launched from the Service VM, the following distinction is worth noting:

* The VMX feature (``CPUID01.01H:ECX[5]``) does not need to be visible to the
  Service VM in order to launch guest VMs. A guest VM not running on top of the
  Service VM is considered a level 1 (L1) guest.

* The VMX feature must be visible to an L1 guest to launch a nested VM. An
  instance of a guest hypervisor (KVM) runs on the L1 guest and works with the
  L0 ACRN hypervisor to run the nested VM.

Conventional single-level virtualization has two levels: the L0 host
(ACRN hypervisor) and the L1 guest VMs. With nested virtualization enabled,
ACRN can run guest VMs with their associated virtual machines that define a
third level:

* The host (ACRN hypervisor), which we call the L0 hypervisor
* The guest hypervisor (KVM), which we call the L1 hypervisor
* The nested guest VMs, which we call the L2 guest VMs

.. figure:: images/nvmx_1.png
   :width: 700px
   :align: center

   Generic Nested Virtualization


High-Level ACRN Nested Virtualization Design
********************************************

The high-level design of nested virtualization in ACRN is shown in
:numref:`nested_virt_hld`. Nested VMX is enabled by allowing a guest VM to use
VMX instructions, and emulating them using the single level of VMX available in
the hardware.

In x86, a logical processor uses a VM control structure (named VMCS in Intel
processors) to manage the state for each vCPU of its guest VMs. These VMCSs
manage VM entries and VM exits as well as processor behavior in VMX non-root
operation. We suffix each VMCS with two digits: the hypervisor level managing
it, and the VM level it represents. For example, L0 stores the state of L1 in
VMCS01. The trick of nVMX emulation is that ACRN builds a VMCS02 out of VMCS01,
which is the VMCS ACRN uses to run the L1 VM, and VMCS12, which is built by the
L1 hypervisor to actually run the L2 guest.

.. figure:: images/nvmx_arch_1.png
   :width: 400px
   :align: center
   :name: nested_virt_hld

   Nested Virtualization in ACRN

#. L0 hypervisor (ACRN) runs the L1 guest with VMCS01

#. L1 hypervisor (KVM) creates VMCS12 to run an L2 guest

#. VMX instructions from the L1 hypervisor trigger VMExits to the L0 hypervisor:

#. L0 hypervisor runs an L2 guest with VMCS02

   a. L0 caches VMCS12 in host memory
   #. L0 merges VMCS01 and VMCS12 to create VMCS02

#. L2 guest runs until triggering VMExits to L0

   a. L0 reflects most VMExits to the L1 hypervisor
   #. L0 runs the L1 guest with VMCS01 and VMCS02 as the shadow VMCS


Restrictions and Constraints
****************************

Nested virtualization is considered an experimental feature, and is only tested
on Tiger Lake and Kaby Lake platforms (see :ref:`hardware`).

L1 VMs have the following restrictions:

* KVM is the only L1 hypervisor supported by ACRN
* KVM runs in 64-bit mode
* KVM enables EPT for L2 guests
* QEMU is used to launch L2 guests

Constraints on the L1 guest configuration:

* Local APIC passthrough must be enabled
* Only the ``SCHED_NOOP`` scheduler is supported. ACRN can't receive timer
  interrupts on LAPIC passthrough pCPUs

VPID Allocation
===============

ACRN doesn't emulate L2 VPIDs and allocates VPIDs for L1 VMs from the reserved
top of the 16-bit VPID range (``0x10000U - CONFIG_MAX_VM_NUM *
MAX_VCPUS_PER_VM`` and up). If the L1 hypervisor enables VPID for L2 VMs and
allocates L2 VPIDs outside this range, ACRN doesn't need to flush L2 VPIDs
during L2 VMX transitions.

This is the expected behavior most of the time. But in the special case where an
L2 VPID allocated by the L1 hypervisor falls within this reserved range, it may
conflict with an L1 VPID. ACRN then flushes the VPID on every L2 VMExit/VMEntry
associated with that L2 VPID, which may significantly degrade the performance of
that L2 VM.
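
To make the arithmetic concrete, the sketch below computes where the reserved
range begins. The VM and vCPU counts are illustrative example values, not ACRN
defaults:

.. code-block:: bash

   # Reserved L1 VPID range starts at 0x10000 - CONFIG_MAX_VM_NUM * MAX_VCPUS_PER_VM.
   # Example: 16 VMs with up to 8 vCPUs each (illustrative values only).
   printf 'reserved VPID base: 0x%X\n' $(( 0x10000 - 16 * 8 ))

With these example values, L1 VPIDs would be allocated from ``0xFF80`` up, so an
L1 hypervisor assigning L2 VPIDs below that base avoids the conflict described
above.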

Service VM Configuration
************************

ACRN supports enabling the nested virtualization feature on the Service VM only,
not on pre-launched VMs.

The nested virtualization feature is disabled by default in ACRN. You can
enable it using the :ref:`ACRN Configurator <acrn_configurator_tool>`
with these settings:

.. note:: Normally you'd use the ACRN Configurator GUI to edit the scenario XML
   file. The tool wasn't updated in time for the v2.5 release, so you'll need to
   manually edit the ACRN scenario XML configuration file to edit the
   ``SCHEDULER``, ``pcpu_id``, ``guest_flags``, ``legacy_vuart``, and
   ``console_vuart`` settings for the Service VM, as shown below.

#. Configure system-level features:

   - Set ``hv.features.scheduler`` to ``SCHED_NOOP`` to disable CPU sharing

   .. code-block:: xml
      :emphasize-lines: 3,18

      <FEATURES>
          <RELOC>y</RELOC>
          <SCHEDULER>SCHED_NOOP</SCHEDULER>
          <MULTIBOOT2>y</MULTIBOOT2>
          <ENFORCE_TURNOFF_AC>y</ENFORCE_TURNOFF_AC>
          <RDT>
              <RDT_ENABLED>n</RDT_ENABLED>
              <CDP_ENABLED>y</CDP_ENABLED>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
              <CLOS_MASK>0xfff</CLOS_MASK>
          </RDT>
          <HYPERV_ENABLED>y</HYPERV_ENABLED>

#. In each guest VM configuration:

   - Set ``vm.nested_virtualization_support`` in the Service VM section to ``y``
     to enable the nested virtualization feature on the Service VM.
   - Set ``vm.lapic_passthrough`` to ``y`` to enable local APIC passthrough on
     the Service VM.
   - Set ``vm.cpu_affinity.pcpu_id`` to assign pCPU IDs to run the Service VM.
     If you are using a debug build and need the hypervisor console, don't
     assign ``pCPU0`` to the Service VM.

   .. code-block:: xml
      :emphasize-lines: 5,6,7,10,11

      <vm id="1">
          <vm_type>SERVICE_VM</vm_type>
          <name>ACRN_Service_VM</name>
          <cpu_affinity>
              <pcpu_id>1</pcpu_id>
              <pcpu_id>2</pcpu_id>
              <pcpu_id>3</pcpu_id>
          </cpu_affinity>
          <guest_flags>
              <guest_flag>GUEST_FLAG_NVMX_ENABLED</guest_flag>
              <guest_flag>GUEST_FLAG_LAPIC_PASSTHROUGH</guest_flag>
          </guest_flags>

   The Service VM's virtual legacy UART interrupt doesn't work with LAPIC
   passthrough, which may prevent the Service VM from booting. Instead, use
   PCI-vUART for the Service VM. Refer to
   :ref:`Enable vUART Configurations <vuart_config>` for more details about
   vUART configuration.

   - Set ``vm.console_vuart`` to ``PCI``

   .. code-block:: xml
      :emphasize-lines: 1

      <console_vuart>PCI</console_vuart>

#. Remove CPU sharing VMs

   Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and
   ``KATA_VM`` VMs from the scenario configuration file, as they may share a
   pCPU with the Service VM.

#. Follow the instructions in :ref:`gsg` and build with this XML configuration.


Prepare the Service VM Kernel and rootfs
****************************************

The Service VM can run Ubuntu or other Linux distributions.
Instructions on how to boot Ubuntu as the Service VM can be found in
:ref:`gsg`.

The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some
changes to the kernel ``.config`` are needed.
Instructions on how to build and install the Service VM kernel can be found
in :ref:`gsg`.

Here is a summary of how to modify and build the kernel:

.. code-block:: none

   git clone https://github.com/projectacrn/acrn-kernel
   cd acrn-kernel
   cp kernel_config_service_vm .config
   make olddefconfig

The following configuration entries are needed to launch nested
guests on the Service VM:

.. code-block:: none

   CONFIG_KVM=y
   CONFIG_KVM_INTEL=y
   CONFIG_ACRN_GUEST=y

After you make these configuration modifications, build and install the kernel
as described in :ref:`gsg`.
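
Before rebuilding, you can sanity-check that these three options survived
``make olddefconfig``. The helper below is an illustrative sketch (the
``check_nested_cfg`` function is hypothetical, not part of the ACRN
instructions); pass it the path to the kernel ``.config``:

.. code-block:: bash

   # Print the first missing option and return nonzero, or report success.
   # Argument: path to the kernel .config (normally acrn-kernel/.config).
   check_nested_cfg() {
     for opt in CONFIG_KVM CONFIG_KVM_INTEL CONFIG_ACRN_GUEST; do
       grep -q "^${opt}=y" "$1" || { echo "missing: ${opt}"; return 1; }
     done
     echo "all nested-guest options enabled"
   }

For example, ``check_nested_cfg .config`` from the ``acrn-kernel`` tree reports
whether the nested-guest prerequisites are enabled.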

Launch a Nested Guest VM
************************

Create an Ubuntu KVM Image
==========================

Refer to :ref:`Build the Ubuntu KVM Image <build-the-ubuntu-kvm-image>`
for how to create an Ubuntu KVM image as the nested guest VM's root filesystem.
There is no particular requirement for this image; for example, it can be in
either qcow2 or raw format.

Prepare the Launch Scripts
==========================

Install QEMU on the Service VM that will launch the nested guest VM:

.. code-block:: none

   sudo apt-get install qemu-kvm qemu virt-manager virt-viewer libvirt-bin

.. important:: The QEMU ``-cpu host`` option is needed to launch a nested guest
   VM, and ``-nographic`` is required to run nested guest VMs reliably.
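
Since KVM can only start an L2 guest when VMX is visible inside the Service VM,
you may want to confirm that first. The check below is an illustrative sketch
(the ``vmx_cpu_count`` helper is hypothetical); it takes a ``cpuinfo`` path as
an argument so it can be exercised anywhere, but on the Service VM you would
pass ``/proc/cpuinfo``:

.. code-block:: bash

   # Count logical CPUs that advertise the vmx flag in a cpuinfo dump.
   # Prints 0 if VMX is not exposed to the guest.
   vmx_cpu_count() {
     grep -c -w vmx "$1" || true
   }

A nonzero count from ``vmx_cpu_count /proc/cpuinfo`` indicates the
``GUEST_FLAG_NVMX_ENABLED`` configuration took effect.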

You can prepare the script just like the one you use to launch a VM
on native Linux. For example, instead of ``-hda``, you can use the following
option to launch a virtio-block-based raw image::

   -drive format=raw,file=/root/ubuntu-20.04.img,if=virtio

Use the following option to enable Ethernet on the guest VM::

   -netdev tap,id=net0 -device virtio-net-pci,netdev=net0,mac=a6:cd:47:5f:20:dc

The following is a simple example of a script to launch a nested guest VM:

.. code-block:: bash
   :emphasize-lines: 2-4

   sudo qemu-system-x86_64 \
     -enable-kvm \
     -cpu host \
     -nographic \
     -m 2G -smp 2 -hda /root/ubuntu-20.04.qcow2 \
     -net nic,macaddr=00:16:3d:60:0a:80 -net tap,script=/etc/qemu-ifup

Launch the Guest VM
===================

You can launch the nested guest VM from the Service VM's virtual serial console
or from an SSH remote login.

If the nested VM is launched successfully, you should see the nested
VM's login prompt:

.. code-block:: console

   [  OK  ] Started Terminate Plymouth Boot Screen.
   [  OK  ] Started Hold until boot process finishes up.
   [  OK  ] Starting Set console scheme...
   [  OK  ] Started Serial Getty on ttyS0.
   [  OK  ] Started LXD - container startup/shutdown.
   [  OK  ] Started Set console scheme.
   [  OK  ] Started Getty on tty1.
   [  OK  ] Reached target Login Prompts.
   [  OK  ] Reached target Multi-User System.
   [  OK  ] Started Update UTMP about System Runlevel Changes.

   Ubuntu 20.04 LTS ubuntu_vm ttyS0

   ubuntu_vm login:

You won't see the nested guest in the output of a ``vcpu_list`` or ``vm_list``
command on the ACRN hypervisor console because these commands only show level 1
VMs.

.. code-block:: console

   ACRN:\>vm_list

   VM_UUID                          VM_ID VM_NAME          VM_STATE
   ================================ ===== ================ ========
   dbbbd4347a574216a12c2201f1ab0240   0   ACRN_Service_VM  Running
   ACRN:\>vcpu_list

   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
   =====    =======    =======    =========    ==========    ============
     0        1          0        PRIMARY      Running       RUNNING
     0        2          1        SECONDARY    Running       RUNNING
     0        3          2        SECONDARY    Running       RUNNING

On the nested guest VM console, run an ``lshw`` or ``dmidecode`` command
and you'll see that this is a QEMU-managed virtual machine:

.. code-block:: console
   :emphasize-lines: 4,5

   $ sudo lshw -c system
   ubuntu_vm
       description: Computer
       product: Standard PC (i440FX + PIIX, 1996)
       vendor: QEMU
       version: pc-i440fx-5.2
       width: 64 bits
       capabilities: smbios-2.8 dmi-2.8 smp vsyscall32
       configuration: boot=normal

For example, compare this to the same command run on the L1 guest (Service VM):

.. code-block:: console
   :emphasize-lines: 4,5

   $ sudo lshw -c system
   localhost.localdomain
       description: Computer
       product: NUC7i5DNHE
       vendor: Intel Corporation
       version: J57828-507
       serial: DW1710099900081
       width: 64 bits
       capabilities: smbios-3.1 dmi-3.1 smp vsyscall32
       configuration: boot=normal family=Intel NUC uuid=36711CA2-A784-AD49-B0DC-54B2030B16AB