doc: update release_3.0 with changes from master
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
@@ -23,6 +23,8 @@ User VM Tutorials

.. toctree::
   :maxdepth: 1

   tutorials/user_vm_guide
   tutorials/using_ubuntu_as_user_vm
   tutorials/using_windows_as_user_vm
   tutorials/using_xenomai_as_user_vm
   tutorials/using_vxworks_as_user_vm

@@ -59,7 +61,6 @@ Advanced Features

   tutorials/vuart_configuration
   tutorials/rdt_configuration
   tutorials/vcat_configuration
   tutorials/waag-secure-boot
   tutorials/enable_s5
   tutorials/cpu_sharing

@@ -67,7 +68,6 @@ Advanced Features

   tutorials/gpu-passthru
   tutorials/run_kata_containers
   tutorials/rtvm_workload_design_guideline
   tutorials/setup_openstack_libvirt
   tutorials/acrn_on_qemu
   tutorials/using_grub
   tutorials/acrn-secure-boot-with-grub

@@ -1,7 +1,7 @@
.. _hv_vcat:

Enable vCAT
###########
Virtual Cache Allocation Technology (vCAT)
###########################################

vCAT refers to the virtualization of Cache Allocation Technology (CAT), one of the
RDT (Resource Director Technology) technologies.

@@ -26,7 +26,7 @@ When assigning cache ways, however, the VM can be given exclusive, shared, or mi
ways depending on particular performance needs. For example, use dedicated cache ways for RTVM, and use
shared cache ways between low priority VMs.

In ACRN, the CAT resources allocated for vCAT VMs are determined in :ref:`vcat_configuration`.
In ACRN, the CAT resources allocated for vCAT VMs are determined in :ref:`rdt_configuration`.

For further details on the RDT, refer to the ACRN RDT high-level design :ref:`hv_rdt`.

@@ -135,7 +135,7 @@ To set up the ACRN build environment on the development computer:

   .. code-block:: bash

      sudo pip3 install lxml xmlschema defusedxml tqdm
      sudo pip3 install "elementpath<=2.5.0" lxml xmlschema defusedxml tqdm

#. Create a working directory:

@@ -155,19 +155,19 @@ To set up the ACRN build environment on the development computer:
      make clean && make iasl
      sudo cp ./generate/unix/bin/iasl /usr/sbin

#. Get the ACRN hypervisor and kernel source code. (Because the ``acrn-kernel`` repo
   has a lot of Linux kernel history, you can clone the relevant release branch
   with minimal history, as shown here.)
#. Get the ACRN hypervisor and kernel source code.

   .. code-block:: bash

      cd ~/acrn-work
      git clone https://github.com/projectacrn/acrn-hypervisor.git
      cd acrn-hypervisor
      git checkout release_3.0
      git checkout v3.0

      cd ..
      git clone --depth 1 --branch release_3.0 https://github.com/projectacrn/acrn-kernel.git
      git clone https://github.com/projectacrn/acrn-kernel.git
      cd acrn-kernel
      git checkout acrn-v3.0

.. _gsg-board-setup:

@@ -207,8 +207,8 @@ To set up the target hardware environment:

   Example of a target system with cables connected:

   .. image:: ./images/gsg_nuc.png
      :scale: 25%
   .. image:: ./images/gsg_vecow.png
      :align: center

Install OS on the Target
============================

@@ -231,12 +231,14 @@ To install Ubuntu 20.04:
      updates requires the target to have an Internet connection).

   .. image:: ./images/gsg_ubuntu_install_01.png
      :align: center

#. Use the check boxes to choose whether you'd like to install Ubuntu alongside
   another operating system, or delete your existing operating system and
   replace it with Ubuntu:

   .. image:: ./images/gsg_ubuntu_install_02.png
      :align: center

#. Complete the Ubuntu installation and create a new user account ``acrn`` and
   set a password.

@@ -382,8 +384,9 @@ Generate a Board Configuration File

Generate a Scenario Configuration File and Launch Script
********************************************************

In this step, you will use the **ACRN Configurator** to generate a scenario
configuration file and launch script.
In this step, you will download, install, and use the `ACRN Configurator
<https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.0/acrn-configurator-3.0.deb>`__
to generate a scenario configuration file and launch script.

A **scenario configuration file** is an XML file that holds the parameters of
a specific ACRN configuration, such as the number of VMs that can be run,

@@ -392,11 +395,26 @@ their attributes, and the resources they have access to.

A **launch script** is a shell script that is used to configure and create a
post-launched User VM. Each User VM has its own launch script.

#. On the development computer, install the ACRN Configurator:
#. On the development computer, download and install the ACRN Configurator
   Debian package:

   .. code-block:: bash

      sudo apt install -y ~/acrn-work/acrn-hypervisor/build/acrn-configurator_*_amd64.deb # TODO update file path
      cd ~/acrn-work
      wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.0/acrn-configurator-3.0.deb

   If you already have a previous version of the acrn-configurator installed,
   you should first remove it:

   .. code-block:: bash

      sudo apt purge acrn-configurator

   Then you can install this new version:

   .. code-block:: bash

      sudo apt install -y ./acrn-configurator-3.0.deb

#. Launch the ACRN Configurator:

@@ -475,7 +493,7 @@ post-launched User VM. Each User VM has its own launch script.
#. Click the **VM1 Post-launched > Basic Parameters** tab and change the VM
   name to ``POST_STD_VM1`` for this example.

#. Confirm that the **OS type** is ``Standard``. In the previous step,
#. Confirm that the **VM type** is ``Standard``. In the previous step,
   ``STD`` in the VM name is short for Standard.

#. Scroll down to **Memory size (MB)** and change the value to ``1024``. For

@@ -485,6 +503,10 @@ post-launched User VM. Each User VM has its own launch script.
#. For **Physical CPU affinity**, select pCPU ID ``0``, then click **+** and
   select pCPU ID ``1`` to affine the VM to CPU cores 0 and 1.

#. For **Virtio console device**, click **+** to add a device and keep the
   default options. This parameter specifies the console that you will use to
   log in to the User VM later in this guide.

#. For **Virtio block device**, click **+** and enter
   ``~/acrn-work/ubuntu-20.04.4-desktop-amd64.iso``. This parameter
   specifies the VM's OS image and its location on the target system. Later

@@ -531,10 +553,11 @@ Build ACRN

      cd ./build
      ls *.deb
      acrn-my_board-shared-2.7.deb # TODO update file name
      acrn-my_board-MyConfiguration*.deb

   The Debian package contains the ACRN hypervisor and tools to ease installing
   ACRN on the target.
   ACRN on the target. The Debian file name contains the board name (``my_board``)
   and the working folder name (``MyConfiguration``).

#. Build the ACRN kernel for the Service VM:

@@ -564,10 +587,10 @@ Build ACRN

      cd ..
      ls *.deb
      linux-headers-5.10.78-acrn-service-vm_5.10.78-acrn-service-vm-1_amd64.deb
      linux-image-5.10.78-acrn-service-vm_5.10.78-acrn-service-vm-1_amd64.deb
      linux-image-5.10.78-acrn-service-vm-dbg_5.10.78-acrn-service-vm-1_amd64.deb
      linux-libc-dev_5.10.78-acrn-service-vm-1_amd64.deb
      linux-headers-5.10.115-acrn-service-vm_5.10.115-acrn-service-vm-1_amd64.deb
      linux-image-5.10.115-acrn-service-vm_5.10.115-acrn-service-vm-1_amd64.deb
      linux-image-5.10.115-acrn-service-vm-dbg_5.10.115-acrn-service-vm-1_amd64.deb
      linux-libc-dev_5.10.115-acrn-service-vm-1_amd64.deb

#. Copy all the necessary files generated on the development computer to the
   target system by USB disk as follows:

@@ -577,9 +600,9 @@ Build ACRN

      .. code-block:: bash

         disk="/media/$USER/"$(ls /media/$USER)
         cp ~/acrn-work/acrn-hypervisor/build/acrn-my_board-shared-2.7.deb "$disk"/ # TODO update file name
         cp ~/acrn-work/acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb "$disk"/
         cp ~/acrn-work/*acrn-service-vm*.deb "$disk"/
         cp ~/acrn-work/my_board/output/launch_user_vm_id3.sh "$disk"/
         cp ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh "$disk"/
         cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl "$disk"/
         sync && sudo umount "$disk"

@@ -592,9 +615,9 @@ Build ACRN

      .. code-block:: bash

         disk="/media/$USER/"$(ls /media/$USER)
         cp "$disk"/acrn-my_board-shared-2.7.deb ~/acrn-work # TODO update file name
         cp "$disk"/acrn-my_board-MyConfiguration*.deb ~/acrn-work
         cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
         cp "$disk"/launch_user_vm_id3.sh ~/acrn-work
         cp "$disk"/launch_user_vm_id1.sh ~/acrn-work
         sudo cp "$disk"/iasl /usr/sbin/
         sync && sudo umount "$disk"

@@ -611,7 +634,7 @@ Install ACRN

   .. code-block:: bash

      cd ~/acrn-work
      sudo apt install ./acrn-my_board-shared-2.7.deb # TODO update file name
      sudo apt install ./acrn-my_board-MyConfiguration*.deb
      sudo apt install ./*acrn-service-vm*.deb

#. Reboot the system:

@@ -685,8 +708,8 @@ Launch the User VM

   .. code-block:: bash

      sudo chmod +x ~/acrn-work/launch_user_vm_id3.sh # TODO update file name
      sudo ~/acrn-work/launch_user_vm_id3.sh # TODO update file name
      sudo chmod +x ~/acrn-work/launch_user_vm_id1.sh
      sudo ~/acrn-work/launch_user_vm_id1.sh

#. It may take about one minute for the User VM to boot and start running the
   Ubuntu image. You will see a lot of output, then the console of the User VM

@@ -705,7 +728,7 @@ Launch the User VM

   .. code-block:: console

      Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.11.0-27-generic x86_64)
      Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-30-generic x86_64)

      * Documentation: https://help.ubuntu.com
      * Management: https://landscape.canonical.com

@@ -734,7 +757,7 @@ Launch the User VM

   .. code-block:: console

      ubuntu@ubuntu:~$ uname -r
      5.11.0-27-generic
      5.13.0-30-generic

   Then open a new terminal window and use the command to see that the Service
   VM is running the ``acrn-kernel`` Service VM image:

@@ -742,7 +765,7 @@ Launch the User VM

   .. code-block:: console

      acrn@vecow:~$ uname -r
      5.10.78-acrn-service-vm
      5.10.115-acrn-service-vm

The User VM has launched successfully. You have completed this ACRN setup.

Binary image changes: three images updated (48 KiB to 45 KiB, 138 KiB to 158 KiB, 24 KiB to 21 KiB); one image removed (1.9 MiB); new file doc/getting-started/images/gsg_vecow.png added (228 KiB).

@@ -101,20 +101,21 @@ ACRN offers three types of VMs:

* **Post-launched User VMs**: These VMs typically share hardware resources via
  the Service VM and Device Model. They can also access hardware devices
  directly if they've been configured as passthrough devices. Unlike
  pre-launched VMs, you can change the configuration at runtime. They are
  well-suited for non-safety applications, including human machine interface
  (HMI), artificial intelligence (AI), computer vision, real-time, and others.
  directly if they've been configured as passthrough devices. The configuration
  of a post-launched VM can be static (defined at build time) or dynamic
  (defined at runtime without rebuilding ACRN). They are well-suited for
  non-safety applications, including human machine interface (HMI), artificial
  intelligence (AI), computer vision, real-time, and others.

The names "pre-launched" and "post-launched" refer to the boot order of these
VMs. The ACRN hypervisor launches the pre-launched VMs first, then launches the
Service VM. The Service VM launches the post-launched VMs.

Due to the static configuration of pre-launched VMs, they are recommended only
if you need complete isolation from the rest of the system. Most use cases can
meet their requirements without pre-launched VMs. Even if your application has
stringent real-time requirements, start by testing the application on a
post-launched VM before considering a pre-launched VM.
Pre-launched VMs are recommended only if you need complete isolation from the
rest of the system. Most use cases can meet their requirements without
pre-launched VMs. Even if your application has stringent real-time requirements,
start by testing the application on a post-launched VM before considering a
pre-launched VM.

Scenario Types
---------------

@@ -170,6 +170,130 @@ stopping, and pausing a VM, and pausing or resuming a virtual CPU.

See the :ref:`hld-overview` developer reference material for more in-depth
information.

.. _static-configuration-scenarios:

Static Configuration Based on Scenarios
***************************************

Scenarios are a way to describe the system configuration settings of the ACRN
hypervisor, VMs, and resources they have access to that meet your specific
application's needs such as compute, memory, storage, graphics, networking, and
other devices. Scenario configurations are stored in an XML format file and
edited using the ACRN Configurator.

Following a general embedded-system programming model, the ACRN hypervisor is
designed to be statically customized at build time per hardware and scenario,
rather than providing one binary for all scenarios. Dynamic configuration
parsing is not used in the ACRN hypervisor for these reasons:

* **Reduce complexity**. ACRN is a lightweight reference hypervisor, built for
  embedded IoT and Edge. As new platforms for embedded systems are rapidly introduced,
  support for one binary could require more and more complexity in the
  hypervisor, which is something we strive to avoid.
* **Maintain small footprint**. Implementing dynamic parsing introduces hundreds or
  thousands of lines of code. Avoiding dynamic parsing helps keep the
  hypervisor's Lines of Code (LOC) in a desirable range (less than 40K).
* **Improve boot time**. Dynamic parsing at runtime increases the boot time. Using a
  static build-time configuration and not dynamic parsing helps improve the boot
  time of the hypervisor.

The scenario XML file together with a target board XML file are used to build
the ACRN hypervisor image tailored to your hardware and application needs. The ACRN
project provides the Board Inspector tool to automatically create the board XML
file by inspecting the target hardware. ACRN also provides the
:ref:`ACRN Configurator tool <acrn_configuration_tool>`
to create and edit a tailored scenario XML file based on predefined sample
scenario configurations.

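For illustration only, here is a minimal sketch of how the two XML files
typically feed the hypervisor build. The file names and paths are placeholders
for your own board name and working folder; see the :ref:`gsg` for the
authoritative workflow.

.. code-block:: bash

   # Illustrative sketch: build ACRN from a board XML (from the Board Inspector)
   # and a scenario XML (from the Configurator); paths are placeholders.
   cd ~/acrn-work/acrn-hypervisor
   make clean
   make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/MyConfiguration/scenario.xml
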
.. _usage-scenarios:

Scenario Types
**************

Here are three sample scenario types and diagrams to illustrate how you
can define your own configuration scenarios.

* **Shared** is a traditional
  computing, memory, and device resource sharing
  model among VMs. The ACRN hypervisor launches the Service VM. The Service VM
  then launches any post-launched User VMs and provides device and resource
  sharing mediation through the Device Model. The Service VM runs the native
  device drivers to access the hardware and provides I/O mediation to the User
  VMs.

  .. figure:: images/ACRN-industry-example-1-0.75x.png
     :align: center
     :name: arch-shared-example

     ACRN High-Level Architecture Shared Example

  Virtualization is especially important in industrial environments because of
  device and application longevity. Virtualization enables factories to
  modernize their control system hardware by using VMs to run older control
  systems and operating systems far beyond their intended retirement dates.

  The ACRN hypervisor needs to run different workloads with little-to-no
  interference, increase security functions that safeguard the system, run hard
  real-time sensitive workloads together with general computing workloads, and
  conduct data analytics for timely actions and predictive maintenance.

  In this example, one post-launched User VM provides Human Machine Interface
  (HMI) capability, another provides Artificial Intelligence (AI) capability,
  some compute function is run in the Kata Container, and the RTVM runs the soft
  Programmable Logic Controller (PLC) that requires hard real-time
  characteristics.

  - The Service VM provides device sharing functionalities, such as disk and
    network mediation, to other virtual machines. It can also run an
    orchestration agent allowing User VM orchestration with tools such as
    Kubernetes.
  - The HMI Application OS can be Windows or Linux. Windows is dominant in
    Industrial HMI environments.
  - ACRN can support a soft real-time OS such as preempt-rt Linux for soft-PLC
    control, or a hard real-time OS that offers less jitter.

* **Partitioned** is a VM resource partitioning model when a User VM requires
  independence and isolation from other VMs. A partitioned VM's resources are
  statically configured and are not shared with other VMs. Partitioned User VMs
  can be Real-Time VMs, Safety VMs, or standard VMs and are launched at boot
  time by the hypervisor. There is no need for the Service VM or Device Model
  since all partitioned VMs run native device drivers and directly access their
  configured resources.

  .. figure:: images/ACRN-partitioned-example-1-0.75x.png
     :align: center
     :name: arch-partitioned-example

     ACRN High-Level Architecture Partitioned Example

  This scenario is a simplified configuration showing VM partitioning: both
  User VMs are independent and isolated, they do not share resources, and both
  are automatically launched at boot time by the hypervisor. The User VMs can
  be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs.

* **Hybrid** scenario simultaneously supports both sharing and partitioning on
  the consolidated system. The pre-launched (partitioned) User VMs, with their
  statically configured and unshared resources, are started by the hypervisor.
  The hypervisor then launches the Service VM. The post-launched (shared) User
  VMs are started by the Device Model in the Service VM and share the remaining
  resources.

  .. figure:: images/ACRN-hybrid-rt-example-1-0.75x.png
     :align: center
     :name: arch-hybrid-rt-example

     ACRN High-Level Architecture Hybrid-RT Example

  In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
  hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
  non-real-time tasks.

The :ref:`acrn_configuration_tool` tutorial explains how to use the ACRN
Configurator to create your own scenario, or to view and modify an existing one.

.. _dm_architecture_intro:

ACRN Device Model Architecture
******************************

@@ -278,129 +402,7 @@ including Xen, KVM, and ACRN hypervisor. In most cases, the User VM OS
must be compiled to support passthrough by using kernel
build-time options.

.. _static-configuration-scenarios:

Static Configuration Based on Scenarios
***************************************

Scenarios are a way to describe the system configuration settings of the ACRN
hypervisor, VMs, and resources they have access to that meet your specific
application's needs such as compute, memory, storage, graphics, networking, and
other devices. Scenario configurations are stored in an XML format file and
edited using the ACRN Configurator.

Following a general embedded-system programming model, the ACRN hypervisor is
designed to be statically customized at build time per hardware and scenario,
rather than providing one binary for all scenarios. Dynamic configuration
parsing is not used in the ACRN hypervisor for these reasons:

* **Reduce complexity**. ACRN is a lightweight reference hypervisor, built for
  embedded IoT and Edge. As new platforms for embedded systems are rapidly introduced,
  support for one binary could require more and more complexity in the
  hypervisor, which is something we strive to avoid.
* **Maintain small footprint**. Implementing dynamic parsing introduces hundreds or
  thousands of lines of code. Avoiding dynamic parsing helps keep the
  hypervisor's Lines of Code (LOC) in a desirable range (less than 40K).
* **Improve boot time**. Dynamic parsing at runtime increases the boot time. Using a
  static build-time configuration and not dynamic parsing helps improve the boot
  time of the hypervisor.

The scenario XML file together with a target board XML file are used to build
the ACRN hypervisor image tailored to your hardware and application needs. The ACRN
project provides the Board Inspector tool to automatically create the board XML
file by inspecting the target hardware. ACRN also provides the
:ref:`ACRN Configurator tool <acrn_configuration_tool>`
to create and edit a tailored scenario XML file based on predefined sample
scenario configurations.

.. _usage-scenarios:

Predefined Sample Scenarios
***************************

Project ACRN provides some predefined sample scenarios to illustrate how you
can define your own configuration scenarios.

* **Shared** (called **Industry** in previous releases) is a traditional
  computing, memory, and device resource sharing
  model among VMs. The ACRN hypervisor launches the Service VM. The Service VM
  then launches any post-launched User VMs and provides device and resource
  sharing mediation through the Device Model. The Service VM runs the native
  device drivers to access the hardware and provides I/O mediation to the User
  VMs.

  .. figure:: images/ACRN-industry-example-1-0.75x.png
     :align: center
     :name: arch-shared-example

     ACRN High-Level Architecture Shared Example

  Virtualization is especially important in industrial environments because of
  device and application longevity. Virtualization enables factories to
  modernize their control system hardware by using VMs to run older control
  systems and operating systems far beyond their intended retirement dates.

  The ACRN hypervisor needs to run different workloads with little-to-no
  interference, increase security functions that safeguard the system, run hard
  real-time sensitive workloads together with general computing workloads, and
  conduct data analytics for timely actions and predictive maintenance.

  In this example, one post-launched User VM provides Human Machine Interface
  (HMI) capability, another provides Artificial Intelligence (AI) capability,
  some compute function is run in the Kata Container, and the RTVM runs the soft
  Programmable Logic Controller (PLC) that requires hard real-time
  characteristics.

  - The Service VM provides device sharing functionalities, such as disk and
    network mediation, to other virtual machines. It can also run an
    orchestration agent allowing User VM orchestration with tools such as
    Kubernetes.
  - The HMI Application OS can be Windows or Linux. Windows is dominant in
    Industrial HMI environments.
  - ACRN can support a soft real-time OS such as preempt-rt Linux for soft-PLC
    control, or a hard real-time OS that offers less jitter.

* **Partitioned** is a VM resource partitioning model when a User VM requires
  independence and isolation from other VMs. A partitioned VM's resources are
  statically configured and are not shared with other VMs. Partitioned User VMs
  can be Real-Time VMs, Safety VMs, or standard VMs and are launched at boot
  time by the hypervisor. There is no need for the Service VM or Device Model
  since all partitioned VMs run native device drivers and directly access their
  configured resources.

  .. figure:: images/ACRN-partitioned-example-1-0.75x.png
     :align: center
     :name: arch-partitioned-example

     ACRN High-Level Architecture Partitioned Example

  This scenario is a simplified configuration showing VM partitioning: both
  User VMs are independent and isolated, they do not share resources, and both
  are automatically launched at boot time by the hypervisor. The User VMs can
  be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs.

* **Hybrid** scenario simultaneously supports both sharing and partitioning on
  the consolidated system. The pre-launched (partitioned) User VMs, with their
  statically configured and unshared resources, are started by the hypervisor.
  The hypervisor then launches the Service VM. The post-launched (shared) User
  VMs are started by the Device Model in the Service VM and share the remaining
  resources.

  .. figure:: images/ACRN-hybrid-rt-example-1-0.75x.png
     :align: center
     :name: arch-hybrid-rt-example

     ACRN High-Level Architecture Hybrid-RT Example

  In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the
  hypervisor. The Service VM runs a post-launched User VM that runs non-safety or
  non-real-time tasks.

You can find the predefined scenario XML files in the
:acrn_file:`misc/config_tools/data` folder in the hypervisor source code. The
:ref:`acrn_configuration_tool` tutorial explains how to use the ACRN
Configurator to create your own scenario, or to view and modify an existing one.

Boot Sequence
*************

@@ -51,42 +51,51 @@ level includes the activities described in the lower levels.

.. _WHL-IPC-I5:
   http://www.maxtangpc.com/industrialmotherboards/142.html#parameters

.. _Vecow SPC-7100:
   https://marketplace.intel.com/s/offering/a5b3b000000PReMAAW/vecow-spc7100-series-11th-gen-intel-core-i7i5i3-processor-ultracompact-f

.. _UP2-N3350:
.. _UP2-N4200:
.. _UP2-x5-E3940:
.. _UP2 Shop:
   https://up-shop.org/home/270-up-squared.html

+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| | | .. rst-class:: centered |
| | | |
| | | ACRN Version |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Intel Processor Family | Tested Product | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | centered | centered | centered | centered | centered | centered |
| | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 |
+========================+====================================+========================+========================+========================+========================+========================+========================+
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: |
| | | | | | centered | centered |
| | | | | | | |
| | | | | | Release | Maintenance |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | centered | centered | centered |
| | | | | | | |
| | | | | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+-------------------------------------------------+-------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | `UP2-N3350`_, | centered | centered | centered |
| | | `UP2-N4200`_, | | | |
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
+------------------------+------------------------------------+------------------------+------------------------+---------------------------------------------------------------------------------------------------+
+------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| | | .. rst-class:: |
| | | centered |
| | | |
| | | ACRN Version |
| | +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Intel Processor Family | Tested Products | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | centered | centered | centered | centered | centered | centered | centered |
| | | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 | v3.0 |
+========================+======================+===================+===================+===================+===================+===================+===================+===================+
| Tiger Lake | `Vecow SPC-7100`_ | | .. rst-class:: |
| | | | centered |
| | | | |
| | | | Maintenance |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | | centered | centered | centered |
| | | | | | | | |
| | | | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | centered | centered | centered |
| | | | | | | |
| | | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+---------------------------------------+-----------------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | `UP2-N3350`_, | centered | centered | centered |
| | | `UP2-N4200`_, | | | |
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+---------------------------------------------------------------------------------------------------+
* **Release**: New ACRN features are complete and tested for the listed product.
  This product is recommended for this ACRN version. Support for older products

@@ -97,7 +106,7 @@ level includes the activities described in the lower levels.
  verify our :ref:`gsg` instructions to ensure the baseline development workflow
  works and the hypervisor will boot on the listed products. While we don't
  verify that all new features will work on this product, we will do best-effort
  support on reported issues. Maintenance support for a hardware product
  support on reported issues. Maintenance-level support for a hardware product
  is typically done for two subsequent ACRN releases (about six months).

* **Community**: Community responds with best-effort support for that

@@ -1,7 +1,7 @@
.. _release_notes_3.0:

ACRN v3.0 (DRAFT)
#################
ACRN v3.0 (Jun 2022)
####################

We are pleased to announce the release of the Project ACRN hypervisor
version 3.0.

@@ -35,47 +35,383 @@ ACRN v3.0 requires Ubuntu 20.04. Follow the instructions in the

What's New in v3.0
******************

Redesigned ACRN Configuration
We heard your feedback: ACRN configuration is difficult, confusing, and had
too many parameters that were not easy to understand. Release v3.0 features a
new ACRN Configurator UI tool with a more intuitive design and workflow that
simplifies getting the setup for the ACRN hypervisor right. You'll also see
changes for configuring individual VMs. We've greatly reduced the number of
parameters needing your attention, organized them into basic and advanced
categories, provided practical defaults, and added error checking so you can
be much more confident in your configuration before building ACRN. We've also
integrated the previously separated scenario and launch options into a merged
scenario XML configuration file managed by the new Configurator. Read more
in the :ref:`acrn_configurator_tool` page.

Upgrading to v3.0 From Previous Releases
This is our first major step of continued ACRN user experience improvements.
If you have feedback on this, or other aspects of ACRN, please share them on
the `ACRN users mailing list <https://lists.projectacrn.org/g/acrn-users>`_.

We've also simplified installation of the Configurator by providing a Debian
package that you can download and install. See the :ref:`gsg` for more
information.

Improved Board Inspector Collection and Reporting
You run the ACRN Board Inspector tool to collect information about your target
system's processors, memory, devices, and more. The generated board XML file
is used by the ACRN Configurator to determine which ACRN configuration options
are possible, as well as possible values for target system resources. The v3.0
Board Inspector has improved scanning and provides more messages about
potential issues or limitations of your target system that could impact ACRN
configuration options. Read more in :ref:`board_inspector_tool`.

.. _Vecow SPC-7100:
   https://marketplace.intel.com/s/offering/a5b3b000000PReMAAW/vecow-spc7100-series-11th-gen-intel-core-i7i5i3-processor-ultracompact-f

Commercial off-the-shelf Tiger Lake machine support
The `Vecow SPC-7100`_ system is validated and supported by ACRN. This is a
commercially available 11th Generation Intel® Core™ Processor (codenamed Tiger
Lake) from Vecow. Read more in the :ref:`hardware` documentation.

Refined shutdown & reset sequence
A Windows User VM can now shut down or reset the system gracefully. This
supports a user model where a Windows-based VM provides a system management
interface. This shutdown capability is achieved by lifecycle managers in each
VM that talk to each other via a virtual UART channel.

Hypervisor Real Time Clock (RTC)
Each VM now has its own PC/AT-compatible RTC/CMOS device emulated by the
hypervisor. With this, we can avoid any sudden jump in a VM's system clock
that may confuse certain applications.

ACRN Debianization
We appreciate a big contribution from the ACRN community! Helmut Buchsbaum from
TTTech Industrial submitted a "debianization" feature that lets developers
build and package ACRN into several Debian packages, install them on the target Ubuntu
or Debian OS, and reboot the machine with ACRN running. Read more in
:acrn_file:`debian/README.rst`.

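As a hedged illustration of the end result, once the Debian packages have been
built, installing them on a target running Ubuntu or Debian follows the usual
``apt`` flow. The package names below are placeholders that depend on your
board and configuration; see :acrn_file:`debian/README.rst` for specifics.

.. code-block:: bash

   # Illustrative sketch: install locally built ACRN Debian packages, then
   # reboot into ACRN (file names are placeholders).
   sudo apt install ./acrn-my_board-MyConfiguration*.deb
   sudo apt install ./*acrn-service-vm*.deb
   sudo reboot
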
Upgrading to v3.0 from Previous Releases
****************************************

We highly recommend that you follow these instructions to
upgrade to v3.0 from previous ACRN releases.
With the introduction of the Configurator UI tool, the need for manually editing
XML files is gone. While working on this improved Configurator, we've also made
many adjustments to available options in the underlying XML files, including
merging the previous scenario and launch XML files into a combined scenario XML
file. The board XML file generated by the v3.0 Board Inspector tool includes
more information about the target system that is needed by the v3.0
Configurator.

We recommend you generate a new board XML for your target system with the v3.0
Board Inspector. You should also use the v3.0 Configurator to generate a new
scenario XML file and launch scripts. Scenario XML files and launch scripts
created by previous ACRN versions will not work with the v3.0 ACRN hypervisor
build process and could produce unexpected errors during the build.

Given the scope of changes for the v3.0 release, we have recommendations for how
to upgrade from prior ACRN versions:

1. Start fresh from our :ref:`gsg`. This is the best way to ensure you have a
   v3.0-ready board XML file from your target system and generate a new scenario
   XML and launch scripts from the new ACRN Configurator that are consistent and
   will work for the v3.0 build system.
#. Use the :ref:`upgrade tool <upgrading_configuration>` to attempt
   upgrading configuration files that worked with a release before v3.0. You'll
   need the matched pair of scenario XML and launch XML files from a prior
   configuration, and use them to create a new merged scenario XML file. See
   :ref:`upgrading_configuration` for details.
#. Manually edit your prior scenario XML and launch XML files to make them
   compatible with v3.0. This is not our recommended approach.

Here are some additional details about upgrading to the v3.0 release.

Generate New Board XML
======================

Board XML files, generated by ACRN board inspector, contain board information
that is essential to build ACRN. Compared to previous versions, ACRN v3.0 adds
the following hardware information to board XMLs to support new features and
fixes.
that is essential for building the ACRN hypervisor and setting up User VMs.
Compared to previous versions, ACRN v3.0 adds the following information to the board
XML file for supporting new features and fixes:

- TBD
- Add ``--add-llc-cat`` to Board Inspector command line options to manually
  provide Cache Allocation Technology (CAT) to the generated board XML when
  the target hardware does not report availability of this feature. See
  :ref:`Board Inspector Command-Line Options <board_inspector_cl>` and PR `#7331
  <https://github.com/projectacrn/acrn-hypervisor/pull/7331>`_.
- Collect all information about SR-IOV devices: see PR `#7302 <https://github.com/projectacrn/acrn-hypervisor/pull/7302>`_.
- Extract all serial TTYs and virtio input devices: see PR `#7219 <https://github.com/projectacrn/acrn-hypervisor/pull/7219>`_.
- Extract common ioapic information such as ioapic id, address, gsi base, and gsi num:
  see PR `#6987 <https://github.com/projectacrn/acrn-hypervisor/pull/6987>`_.
- Add another level of `die` node even though the hardware reports die topology in CPUID:
  see PR `#7080 <https://github.com/projectacrn/acrn-hypervisor/pull/7080>`_.
- Bring up all cores online so Board Inspector can run cpuid to extract all available cores'
  information: see PR `#7120 <https://github.com/projectacrn/acrn-hypervisor/pull/7120>`_.
- Add CPU capability and BIOS invalid setting checks: see PR `#7216 <https://github.com/projectacrn/acrn-hypervisor/pull/7216>`_.
- Improve Board Inspector summary and logging based on log levels option: see PR
  `#7429 <https://github.com/projectacrn/acrn-hypervisor/pull/7429>`_.

The new board XML can be generated using the ACRN board inspector in the same
way as ACRN v2.7. Refer to :ref:`acrn_config_workflow` for a complete list of
steps to deploy and run the tool.
See the :ref:`board_inspector_tool` documentation for a complete list of steps
to install and run the tool.

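For reference, here is a minimal sketch of regenerating the board XML on the
target, assuming the Board Inspector has been copied to
``~/acrn-work/board_inspector`` and ``my_board`` is a board name you choose
(both are placeholders); the authoritative steps are in the
:ref:`board_inspector_tool` documentation.

.. code-block:: bash

   # Illustrative sketch: run the Board Inspector on the target to produce my_board.xml
   cd ~/acrn-work/board_inspector
   sudo python3 board_inspector.py my_board
   ls ./my_board.xml
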
Update Configuration Options
============================

Complete overhaul of configurator in v3.0...
In v3.0, data in a launch XML are now merged into the scenario XML for the new
Configurator. When practical, we recommend generating a new scenario and launch
scripts by using the Configurator.

As explained in the :ref:`upgrading_configuration` document, we do provide a
tool that can assist in upgrading your existing pre-v3.0 scenario and launch XML
files to the new merged v3.0 format. From there, you can use the v3.0 ACRN
Configurator to open the upgraded scenario file for viewing and further editing in case the
upgrader tool lost meaningful data during the conversion.

As part of the developer experience improvements to ACRN configuration, the following XML elements
were refined in the scenario XML file:

.. rst-class:: rst-columns3

- ``RDT``
- ``vUART``
- ``IVSHMEM``
- ``Memory``
- ``virtio devices``

The following elements are added to scenario XML files.

.. rst-class:: rst-columns3

- ``vm.lapic_passthrough``
- ``vm.io_completion_polling``
- ``vm.nested_virtualization_support``
- ``vm.virtual_cat_support``
- ``vm.secure_world_support``
- ``vm.hide_mtrr_support``
- ``vm.security_vm``

The following elements were removed.

.. rst-class:: rst-columns3

- ``hv.FEATURES.NVMX_ENABLED``
- ``hv.DEBUG_OPTIONS.LOG_BUF_SIZE``
- ``hv.MEMORY.PLATFORM_RAM_SIZE``
- ``hv.MEMORY.LOW_RAM_SIZE``
- ``hv.CAPACITIES.MAX_IR_ENTRIES``
- ``hv.CAPACITIES.IOMMU_BUS_NUM``
- ``vm.guest_flags``
- ``vm.board_private``

See the :ref:`scenario-config-options` documentation for details about all the
available configuration options in the new Configurator.

In v3.0, we refine the structure of the generated scripts so that PCI functions
are identified only by their BDF. This change serves as a mandatory step to align
how passthrough devices are configured for pre-launched and post-launched VMs.
This allows us to present a unified view in the ACRN Configurator for
assigning passthrough devices. We removed some obsolete dynamic parameters and updated the
usage of the Device Model (``acrn-dm``) ``--cpu_affinity`` parameter in launch script generation logic to use the lapic ID
instead of pCPU ID. See :ref:`acrn-dm_parameters-and-launch-script` for details.

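As a hedged example of what this change looks like, a generated launch script
now passes lapic IDs (not pCPU IDs) to ``--cpu_affinity``; the fragment below is
illustrative only, with placeholder values, and is not a complete invocation.

.. code-block:: bash

   # Illustrative fragment of a generated launch script (values are placeholders;
   # a real script adds many more options). --cpu_affinity takes lapic IDs.
   acrn-dm -m 1024M --cpu_affinity 0,2 POST_STD_VM1
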
Document Updates
****************

With the introduction of the improved Configurator, we could improve our
:ref:`gsg` documentation and let you quickly build a simple ACRN hypervisor and
User VM configuration from scratch instead of using a contrived pre-defined scenario
configuration. That also let us reorganize and change configuration option
documentation to use the newly defined developer-friendly names for
configuration options.

Check out our improved Getting Started and Configuration documents:

.. rst-class:: rst-columns2

* :ref:`introduction`
* :ref:`gsg`
* :ref:`overview_dev`
* :ref:`scenario-config-options`
* :ref:`acrn_configuration_tool`
* :ref:`board_inspector_tool`
* :ref:`acrn_configurator_tool`
* :ref:`upgrading_configuration`
* :ref:`user_vm_guide`
* :ref:`acrn-dm_parameters-and-launch-script`

Here are some of the high-level design documents that were updated since the
v2.7 release:

.. rst-class:: rst-columns2

* :ref:`hld-overview`
* :ref:`atkbdc_virt_hld`
* :ref:`hld-devicemodel`
* :ref:`hld-emulated-devices`
* :ref:`hld-power-management`
* :ref:`hld-security`
* :ref:`hld-virtio-devices`
* :ref:`hostbridge_virt_hld`
* :ref:`hv-cpu-virt`
* :ref:`hv-device-passthrough`
* :ref:`hv-hypercall`
* :ref:`interrupt-hld`
* :ref:`hld-io-emulation`
* :ref:`IOC_virtualization_hld`
* :ref:`virtual-interrupt-hld`
* :ref:`ivshmem-hld`
* :ref:`system-timer-hld`
* :ref:`uart_virtualization`
* :ref:`virtio-blk`
* :ref:`virtio-console`
* :ref:`virtio-input`
* :ref:`virtio-net`
* :ref:`vuart_virtualization`
* :ref:`l1tf`
* :ref:`trusty_tee`

We've also made edits throughout the documentation to improve clarity,
formatting, and presentation.
formatting, and presentation. We started updating feature enabling tutorials
based on the new Configurator, and will continue updating them after the v3.0
release (in the `latest documentation <https://docs.projectacrn.org>`_).

.. rst-class:: rst-columns2

* :ref:`develop_acrn`
* :ref:`doc_guidelines`
* :ref:`acrn_doc`
* :ref:`hardware`
* :ref:`acrn_on_qemu`
* :ref:`cpu_sharing`
* :ref:`enable_ivshmem`
* :ref:`enable-s5`
* :ref:`gpu-passthrough`
* :ref:`inter-vm_communication`
* :ref:`rdt_configuration`
* :ref:`rt_performance_tuning`
* :ref:`rt_perf_tips_rtvm`
* :ref:`using_hybrid_mode_on_nuc`
* :ref:`using_ubuntu_as_user_vm`
* :ref:`using_windows_as_uos`
* :ref:`vuart_config`
* :ref:`acrnshell`
* :ref:`hv-parameters`
* :ref:`debian_packaging`
* :ref:`acrnctl`

Some obsolete documents were removed from the v3.0 documentation, but can still
be found in the archived versions of previous release documentation, such as for
`v2.7 <https://docs.projectacrn.org/2.7/>`_.

Fixed Issues Details
********************

.. comment example item
   - :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected
   - :acrn-issue:`5626` - Host Call Trace once detected

- :acrn-issue:`7712` - [config_tool] Make Basic tab the default view
- :acrn-issue:`7657` - [acrn-configuration-tool] Failed to build acrn with make BOARD=xx SCENARIO=shared RELEASE=y
- :acrn-issue:`7641` - config-tools: No launch scripts is generated when clicking save
- :acrn-issue:`7637` - config-tools: vsock refine
- :acrn-issue:`7634` - [DX][TeamFooding][ADL-S] Failed to build acrn when disable multiboot2 in configurator
- :acrn-issue:`7623` - [config_tool] igd-vf para is no longer needed in launch script
- :acrn-issue:`7609` - [Config-Tool][UI]build acrn failed after delete VMs via UI
- :acrn-issue:`7606` - uninitialized variables are used in hpet.c
- :acrn-issue:`7597` - [config_tool] Use Different Board - delete old board file
- :acrn-issue:`7592` - [acrn-configuration-tool] Hide PCI option for Console virtual UART type
- :acrn-issue:`7581` - [ADL-S][shared]The SOS cmdline parameter shouldn't be added manually and should be changed in Debian package.
- :acrn-issue:`7571` - [config_tool] Working folder with my_board.xml behavior
- :acrn-issue:`7563` - [ADL-S][SSRAM]RTCM Unit run failed with 2G memory size
- :acrn-issue:`7556` - [config_tool] VUART is configured to PCI and generate launch script without relevant parameters
- :acrn-issue:`7546` - [acrn-configuration-tool]Scenario files generated with acrn-configurator for boards without serial ports ACRN debug build fails
- :acrn-issue:`7540` - [config_tool]: rename virtio console as virtio serial port(as console)
- :acrn-issue:`7538` - configurator: CPU Affinity should be hidden from service vm
- :acrn-issue:`7535` - config-tools: add vhost vsock in v3.0
- :acrn-issue:`7532` - [config_tool] dialog box and deleting VM related issues
- :acrn-issue:`7530` - [configurator] Maximum Virtual CLOS configuration value should not allow negative numbers
- :acrn-issue:`7526` - [configurator] Assigning one cpu to multiple Pre-launched VMs is not reported as error
- :acrn-issue:`7519` - [config_tool] Duplicate VM name
- :acrn-issue:`7514` - fix FEATURES to restore basic view
- :acrn-issue:`7506` - config-tools: configurator widget vuart connection needs validation
- :acrn-issue:`7500` - [config_tool] Failed to delete post-launched VM due to IVSHMEM
- :acrn-issue:`7498` - board_inspector.py fails to run on target with clean Ubuntu installation
- :acrn-issue:`7495` - [config_tool] Starting new configuration deletes all files in existing working folder
- :acrn-issue:`7492` - configurator: fix configurator build issue
- :acrn-issue:`7488` - Configurator version confusion
- :acrn-issue:`7486` - [config_tool] Duplicate VM name
- :acrn-issue:`7484` - configurator: User-input working directory not working
- :acrn-issue:`7481` - [config_tool] No validation for required fields in widgets
- :acrn-issue:`7470` - [config_tool][UI] scenario.xml is still generated even though there are setting errors
- :acrn-issue:`7469` - [config_tool][UI] No promption on save button if there are wrong settings
- :acrn-issue:`7455` - configurator: vUART widget not working
- :acrn-issue:`7450` - config-tools: bugfix for file related issues in UI
- :acrn-issue:`7445` - [Config-Tool][UI]The shared VMs_name for IVSHMEM is not consistent with the VM_name modification
- :acrn-issue:`7442` - [config_tool] Tooltip runs off screen
- :acrn-issue:`7435` - configurator: Steps should be inactive until prior step complete
- :acrn-issue:`7425` - Cache was not locked after post-RTVM power off and restart
- :acrn-issue:`7424` - [config_tool] Virtual USB HCI should be a dropdown menu
- :acrn-issue:`7421` - configurator: Unable to display PCI devices droplist
- :acrn-issue:`7420` - configurator: Unable to set physical CPU affinity
- :acrn-issue:`7419` - configurator: prelaunched VM assigned wrong VMID
- :acrn-issue:`7418` - configurator: open folder path incorrect
- :acrn-issue:`7413` - config-tools: bugfix for UI
- :acrn-issue:`7402` - [acrn-deb] install board_inspector overwrites grub cmdline
- :acrn-issue:`7401` - Post-RTVM boot failure with SSRAM enabled
- :acrn-issue:`7400` - [acrn-configuration-tool][acrn-deb] grub is not update correctly after install the acrn-deb
- :acrn-issue:`7392` - There is no virtio_devices node in generic scenario xml
- :acrn-issue:`7383` - [acrn-configuration tool] make scenario shared file error cp error
- :acrn-issue:`7376` - Virtio-GPU in guest_vm fails to get the EDID
- :acrn-issue:`7370` - [acrn-deb] install_compile_package function is not consistent with gsg
- :acrn-issue:`7366` - [acrn-configuration tool] make scenario shared file error cp error
- :acrn-issue:`7365` - [config_tool] Get errors running board_inspector
- :acrn-issue:`7361` - config_tool: Add check for RTVM pCPU assignment
- :acrn-issue:`7356` - [UI] Board info not updated when user changed the board XML
- :acrn-issue:`7349` - [UI]Not delete all VMs while delete Service_VM on UI
- :acrn-issue:`7345` - Build will fail when using absolute path
- :acrn-issue:`7337` - Memory leak after creating udmabuf for virtio-gpu zero_copy
- :acrn-issue:`7330` - [PCI UART] Fail to build hypervisor without pci uart bdf value
- :acrn-issue:`7327` - refine pgentry_present field in struct pgtable
- :acrn-issue:`7301` - [Virtio-GPU]Not enough free memory reserved in SOS
- :acrn-issue:`7298` - boot time issue for acrn-dm
- :acrn-issue:`7297` - update parameter in schema
- :acrn-issue:`7296` - Segment fault is triggered in course of Virtio-gpu rebooting test
- :acrn-issue:`7270` - combined cpu_affinity warning for service vm
- :acrn-issue:`7267` - service vm cpu affinity issue
- :acrn-issue:`7265` - About 20s after booting uaag with usb mediator,usb disk isn't recognized
- :acrn-issue:`7261` - Hide PTM in Configurator UI
- :acrn-issue:`7256` - Remove SCHED_IORR and KERNEL_RAWIMAGE
- :acrn-issue:`7249` - doc: Exception in Sphinx processing doesn't display error message
- :acrn-issue:`7248` - restore copyright notice of original author of some files.
- :acrn-issue:`7246` - Can't fully parse the xml content that was saved by the same version configurator
- :acrn-issue:`7241` - need copyright notice and license in virtio_gpu.c
- :acrn-issue:`7212` - Exception"not enough space in guest VE820 SSRAM area" showed when built ACRN with RTCT table
- :acrn-issue:`7208` - iasl segfault when reboot user VM with virtio-i2c devices
- :acrn-issue:`7197` - The error message is found after adding the specific mac address in the launch script
- :acrn-issue:`7172` - [acrn-configuration-tool] offline_cpus won't be executed in NOOP mode
- :acrn-issue:`7171` - Link to instead of including old release notes in the current release
- :acrn-issue:`7159` - acrn-config: config tool get_node apic_id failed
- :acrn-issue:`7136` - [acrn-configuration-tool] Share memory should never support 512M since the HV_RAM_SIZE_MAX is limited to 0x40000000 but not a platform specific problem
- :acrn-issue:`7133` - Guest VM system reset may fail and ACRN DM program hang
- :acrn-issue:`7127` - config-tools: remove SERIAL_CONSOLE extracion for bootargs of SOS
- :acrn-issue:`7124` - [DM]: Fail to boot the Laag guest if the boot option of "pci=nomsi" is added for Guest kernel
- :acrn-issue:`7119` - config-tools: bring all cores online
- :acrn-issue:`7109` - Python traceback if the 'dpkg' tool is not available
- :acrn-issue:`7098` - Memory leakage bug induced by opendir() in ACRN applications
- :acrn-issue:`7084` - config_tools: append passthrough gpu bdf in hexadecimal format
- :acrn-issue:`7077` - config-tools: find pci hole based on all pci hostbridge
- :acrn-issue:`7058` - config_tools: board_inspector cannot generate compliable xml for Qemu
- :acrn-issue:`7045` - Segmentation fault when passthrough TSN to post_launched VM with enable_ptm option
- :acrn-issue:`7022` - ACRN debian package not complete when source is not cloned to standard folder
- :acrn-issue:`7018` - No expected exception generated on some platform.

Known Issues
************

- :acrn-issue:`6631` - [KATA] Kata support is broken since v2.7
- :acrn-issue:`6978` - openstack failed since ACRN v2.7
- :acrn-issue:`7827` - [Configurator] Pre_launched standard VMs cannot share CPU with Service VM
- :acrn-issue:`7831` - [Configurator] Need to save twice to generate vUART and IVSHMEM addresses

@ -8,16 +8,19 @@ This guide describes all features and uses of the tool.
|
||||
About the ACRN Configurator Tool
|
||||
*********************************
|
||||
|
||||
The ACRN Configurator ``acrn_configurator.py`` provides a user interface to help
|
||||
The ACRN Configurator provides a user interface to help
|
||||
you customize your :ref:`ACRN configuration <acrn_configuration_tool>`.
|
||||
Capabilities:
|
||||
|
||||
* Reads board information from the specified board configuration file
|
||||
* Reads board information from the board configuration file generated by the
|
||||
:ref:`board_inspector_tool`
|
||||
* Helps you configure a scenario of hypervisor and VM settings
|
||||
* Generates a scenario configuration file that stores the configured settings in
|
||||
XML format
|
||||
* Generates a launch script for each post-launched User VM
|
||||
|
||||
.. _acrn_configurator_tool_prerequisites:
|
||||
|
||||
Prerequisites
|
||||
*************
|
||||
|
||||
@ -29,6 +32,10 @@ Guide sections:
|
||||
* :ref:`gsg-board-setup`
|
||||
* :ref:`gsg-dev-setup`
|
||||
|
||||
The above Getting Started Guide steps use a prebuilt Debian package to install
|
||||
the ACRN Configurator. :ref:`acrn_configurator_tool_source` describes how to
|
||||
build the Debian package.
|
||||
|
||||
Start with a New or Existing Configuration
|
||||
******************************************
|
||||
|
||||
@ -73,7 +80,8 @@ such as a board configuration file, scenario configuration file, and launch
|
||||
scripts. If it finds any, it will delete them.
|
||||
|
||||
1. Under **Start a new configuration**, use the displayed working folder or
|
||||
select a different folder by clicking **Browse for folder**.
|
||||
select a different folder by clicking **Browse for folder**. Use a
|
||||
folder name that is meaningful to you.
|
||||
|
||||
.. image:: images/configurator-newconfig.png
|
||||
:align: center
|
||||
@ -151,7 +159,7 @@ that no board information has been imported yet.
|
||||
|
||||
To import a board configuration file for the first time:
|
||||
|
||||
1. Under **Import a board configuration file**, select a scenario configuration
|
||||
1. Under **Import a board configuration file**, select a
|
||||
file from the dropdown menu or click **Browse for file** to select a
|
||||
different file.
|
||||
|
||||
@ -174,7 +182,7 @@ Replace an Existing Board Configuration File
|
||||
============================================
|
||||
|
||||
After a board configuration file has been imported, you can choose to replace it
|
||||
at any time. This option is useful, for example, when you need to iterate your
|
||||
at any time. This option is useful, for example, when you need to change your
|
||||
board's configuration while you are customizing your hypervisor settings.
|
||||
Whenever you change the configuration of your board, you must generate a new
|
||||
board configuration file via the :ref:`board_inspector_tool`. Examples include
|
||||
@ -242,8 +250,8 @@ information in the file to populate the UI, so that you can continue working on
|
||||
the configuration where you left off.
|
||||
|
||||
1. Due to the strict validation ACRN adopts, scenario configuration files for a
|
||||
former release may not work for a latter if they are not upgraded. Starting
|
||||
from v3.0, upgrade an older scenario XML per the steps in
|
||||
former release may not work in the current release unless they are upgraded.
|
||||
Starting from v3.0, upgrade an older scenario XML per the steps in
|
||||
:ref:`upgrading_configuration` then import the upgraded file into the tool in
|
||||
the next step.
|
||||
|
||||
@ -290,13 +298,16 @@ Basic parameters are generally defined as:
|
||||
|
||||
* Parameters that are common for software like ACRN.
|
||||
|
||||
* Parameters that are anticipated to be commonly used for typical ACRN use
|
||||
cases.
|
||||
|
||||
Advanced parameters are generally defined as:
|
||||
|
||||
* Parameters that are optional for ACRN configuration, compilation, and
|
||||
execution.
|
||||
execution. Default values cover most use cases.
|
||||
|
||||
* Parameters that are used for fine-grained tuning, such as reducing code
|
||||
lines or optimizing performance. Default values cover most use cases.
|
||||
lines or optimizing performance.
|
||||
|
||||
Add a VM
|
||||
=========
|
||||
@ -328,7 +339,10 @@ Save and Check for Errors
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
The tool saves your configuration data in a set of files in the working folder:
|
||||
The tool validates hypervisor and VM settings whenever you save.
|
||||
|
||||
If no errors occur, the tool saves your configuration data in a set of files
|
||||
in the working folder:
|
||||
|
||||
* Scenario configuration file (``scenario.xml``): Raw format of all
|
||||
hypervisor and VM settings. You will need this file to build ACRN.
|
||||
@ -341,19 +355,125 @@ Save and Check for Errors
|
||||
|
||||
# Launch script for VM name: <name>
|
||||
|
||||
The tool validates hypervisor and VM settings whenever you save. If an error
|
||||
occurs, such as an empty required field, the tool saves the changes to the
|
||||
files, but prompts you to correct the error. Error messages appear below the
|
||||
applicable settings. Example:
|
||||
If an error occurs, such as an empty required field, the tool saves the
|
||||
changes to the scenario configuration file, but prompts you to correct the
|
||||
error.
|
||||
|
||||
.. image:: images/configurator-rederror.png
|
||||
#. On the selector menu, check for error messages on all tabs that have an error
|
||||
icon. The following figure shows that the Hypervisor tab and the VM1 tab
|
||||
contain errors.
|
||||
|
||||
.. image:: images/configurator-erroricon.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
#. Fix the errors and save again to generate a valid configuration.
|
||||
Error messages appear below the selector menu or below the applicable
|
||||
parameter.
|
||||
|
||||
#. Fix all errors and save again to generate a valid configuration.
|
||||
|
||||
#. Click the **x** in the upper-right corner to close the ACRN Configurator.
|
||||
|
||||
Next Steps
|
||||
==========
|
||||
|
||||
After generating a valid scenario configuration file, you can build ACRN. See
|
||||
:ref:`gsg_build`.
|
||||
:ref:`gsg_build`.
|
||||
|
||||
.. _acrn_configurator_tool_source:
|
||||
|
||||
Build ACRN Configurator From Source Code
|
||||
*****************************************
|
||||
|
||||
The :ref:`prerequisites<acrn_configurator_tool_prerequisites>` use a prebuilt
|
||||
Debian package to install the ACRN Configurator. The following steps describe
|
||||
how to build the Debian package from source code.
|
||||
|
||||
#. On the development computer, complete the steps in :ref:`gsg-dev-computer`.
|
||||
|
||||
#. Install the ACRN Configurator build tools:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo apt install -y libwebkit2gtk-4.0-dev \
|
||||
build-essential \
|
||||
curl \
|
||||
wget \
|
||||
libssl-dev \
|
||||
libgtk-3-dev \
|
||||
libappindicator3-dev \
|
||||
librsvg2-dev \
|
||||
python3-venv
|
||||
|
||||
#. Install Node.js (npm included) as follows:
|
||||
|
||||
a. We recommend using nvm to manage your Node.js runtime. It allows you to
|
||||
switch versions and update Node.js easily.
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.35.2/install.sh | bash
|
||||
|
||||
#. Rerun your ``.bashrc`` initialization script and then install the latest
|
||||
version of Node.js and npm:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
source ~/.bashrc
|
||||
nvm install node --latest-npm
|
||||
nvm use node
|
||||
|
||||
#. Install and upgrade Yarn:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
npm install --global yarn
|
||||
|
||||
#. Install rustup, the official installer for Rust:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
|
||||
|
||||
When prompted by the Rust installation script, type ``1`` and press
|
||||
:kbd:`Enter`.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
1) Proceed with installation (default)
|
||||
2) Customize installation
|
||||
3) Cancel installation
|
||||
>1
|
||||
|
||||
#. Configure the current shell:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
source $HOME/.cargo/env
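
   If you want to confirm the toolchain is in place before continuing, you can
   check the installed versions (the exact version numbers will differ on your
   system):

   .. code-block:: bash

      node --version
      npm --version
      yarn --version
      rustc --version
      cargo --version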
|
||||
|
||||
#. Install additional ACRN Configurator dependencies:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd ~/acrn-work/acrn-hypervisor/misc/config_tools/configurator
|
||||
python3 -m pip install -r requirements.txt
|
||||
yarn
|
||||
|
||||
#. Build the ACRN Configurator Debian package:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
cd ~/acrn-work/acrn-hypervisor
|
||||
make configurator
|
||||
|
||||
#. Install the ACRN Configurator:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
sudo apt install -y ~/acrn-work/acrn-hypervisor/build/acrn-configurator*.deb
|
||||
|
||||
#. Launch the ACRN Configurator:
|
||||
|
||||
.. code-block:: bash
|
||||
|
||||
acrn-configurator
|
@ -11,8 +11,8 @@ configuration.
|
||||
|
||||
This setup was tested with the following configuration:
|
||||
|
||||
- ACRN hypervisor: ``v2.7`` tag
|
||||
- ACRN kernel: ``v2.7`` tag
|
||||
- ACRN hypervisor: ``v3.0`` tag
|
||||
- ACRN kernel: ``acrn-v3.0`` tag
|
||||
- QEMU emulator version: 4.2.1
|
||||
- Host OS: Ubuntu 20.04
|
||||
- Service VM/User VM OS: Ubuntu 20.04
|
||||
@ -67,11 +67,11 @@ Prepare Service VM (L1 Guest)
|
||||
--vcpus 4 \
|
||||
--virt-type kvm \
|
||||
--os-type linux \
|
||||
--os-variant ubuntu18.04 \
|
||||
--os-variant ubuntu20.04 \
|
||||
--graphics none \
|
||||
--clock offset=utc,tsc_present=yes,kvmclock_present=no \
|
||||
--qemu-commandline="-machine kernel-irqchip=split -cpu Denverton,+invtsc,+lm,+nx,+smep,+smap,+mtrr,+clflushopt,+vmx,+x2apic,+popcnt,-xsave,+sse,+rdrand,-vmx-apicv-vid,+vmx-apicv-xapic,+vmx-apicv-x2apic,+vmx-flexpriority,+tsc-deadline,+pdpe1gb -device intel-iommu,intremap=on,caching-mode=on,aw-bits=48" \
|
||||
--location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
|
||||
--location 'http://archive.ubuntu.com/ubuntu/dists/focal/main/installer-amd64/' \
|
||||
--extra-args "console=tty0 console=ttyS0,115200n8"
|
||||
|
||||
#. Walk through the installation steps as prompted. Here are a few things to note:
|
||||
@ -118,6 +118,8 @@ Prepare Service VM (L1 Guest)
|
||||
GRUB_CMDLINE_LINUX_DEFAULT=""
|
||||
GRUB_GFXMODE=text
|
||||
|
||||
#. Check the rootfs partition with ``lsblk``; it is ``vda5`` in this example.
|
||||
|
||||
#. The Service VM guest can also be launched again later using
|
||||
``virsh start ServiceVM --console``. Make sure to use the domain name you
|
||||
used while creating the VM in case it is different than ``ServiceVM``.
|
||||
@ -136,19 +138,24 @@ Install ACRN Hypervisor
|
||||
.. important:: All the steps below are performed **inside** the Service VM
|
||||
guest that we built in the previous section.
|
||||
|
||||
#. Install the ACRN build tools and dependencies following the :ref:`gsg`.
|
||||
|
||||
#. Switch to the ACRN hypervisor ``v2.7`` tag.
|
||||
#. Install the ACRN build tools and dependencies following the :ref:`gsg`. Note
|
||||
again that we're doing these steps within the Service VM and not on a development
|
||||
system as described in the Getting Started Guide.
|
||||
#. Switch to the ACRN hypervisor ``v3.0`` tag.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
cd ~
|
||||
git clone https://github.com/projectacrn/acrn-hypervisor.git
|
||||
cd acrn-hypervisor
|
||||
git checkout v2.7
|
||||
git checkout v3.0
|
||||
|
||||
#. Build ACRN for QEMU:
|
||||
|
||||
We're using the qemu board XML and shared scenario XML files supplied in the repo (``misc/config_tools/data/qemu``), not files generated by the board inspector or configurator tools.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
make BOARD=qemu SCENARIO=shared
|
||||
@ -168,9 +175,9 @@ Install ACRN Hypervisor
|
||||
sudo cp build/hypervisor/acrn.32.out /boot
|
||||
|
||||
#. Clone and configure the Service VM kernel repository following the
|
||||
instructions in the :ref:`gsg` and using the ``v2.7`` tag. The User VM (L2
|
||||
instructions in the :ref:`gsg` and using the ``acrn-v3.0`` tag. The User VM (L2
|
||||
guest) uses the ``virtio-blk`` driver to mount the rootfs. This driver is
|
||||
included in the default kernel configuration as of the ``v2.7`` tag.
|
||||
included in the default kernel configuration as of the ``acrn-v3.0`` tag.
|
||||
|
||||
#. Update GRUB to boot the ACRN hypervisor and load the Service VM kernel.
|
||||
Append the following configuration to the :file:`/etc/grub.d/40_custom`.
|
||||
@ -186,10 +193,14 @@ Install ACRN Hypervisor
|
||||
insmod ext2
|
||||
|
||||
echo 'Loading ACRN hypervisor ...'
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out
|
||||
module /boot/bzImage Linux_bzImage
|
||||
multiboot --quirk-modules-after-kernel /boot/acrn.32.out root=/dev/vda5
|
||||
module /boot/vmlinuz-5.10.115-acrn-service-vm Linux_bzImage
|
||||
}
|
||||
|
||||
.. note::
|
||||
If your rootfs partition isn't ``vda5``, change it to match yours.
``vmlinuz-5.10.115-acrn-service-vm`` is the Service VM kernel image.
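
   If you are not sure of the exact kernel file name on your system, list the
   installed ACRN Service VM kernels (this assumes the kernel was installed
   under ``/boot`` as described in the :ref:`gsg`):

   .. code-block:: none

      ls /boot/vmlinuz-*acrn*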
|
||||
|
||||
#. Update GRUB:
|
||||
|
||||
.. code-block:: none
|
||||
|
@ -1,229 +1,97 @@
|
||||
.. _enable_ivshmem:
|
||||
|
||||
Enable Inter-VM Communication Based on Ivshmem
|
||||
##############################################
|
||||
Enable Inter-VM Shared Memory Communication (IVSHMEM)
|
||||
#####################################################
|
||||
|
||||
You can use inter-VM communication based on the ``ivshmem`` dm-land
|
||||
solution or hv-land solution, according to the usage scenario needs.
|
||||
(See :ref:`ivshmem-hld` for a high-level description of these solutions.)
|
||||
While both solutions can be used at the same time, VMs using different
|
||||
solutions cannot communicate with each other.
|
||||
About Inter-VM Shared Memory Communication (IVSHMEM)
|
||||
****************************************************
|
||||
|
||||
Enable Ivshmem Support
|
||||
Inter-VM shared memory communication allows VMs to communicate with each other
|
||||
via a shared memory mechanism.
|
||||
|
||||
As an example, users in the industrial segment can use a shared memory region to
|
||||
exchange commands and responses between a Windows VM that is taking inputs from
|
||||
operators and a real-time VM that is running real-time tasks.
|
||||
|
||||
The ACRN Device Model or hypervisor emulates a virtual PCI device (called an
|
||||
IVSHMEM device) to expose this shared memory's base address and size.
|
||||
|
||||
* Device Model: The IVSHMEM device is emulated in the ACRN Device Model, and the
|
||||
shared memory regions are reserved in the Service VM's memory space. This
|
||||
solution only supports communication between post-launched User VMs.
|
||||
|
||||
* Hypervisor: The IVSHMEM device is emulated in the hypervisor, and the shared
|
||||
memory regions are reserved in the hypervisor's memory space. This solution
|
||||
works for both pre-launched and post-launched User VMs.
|
||||
|
||||
While both solutions can be used in the same ACRN configuration, VMs using
|
||||
different solutions cannot communicate with each other.
|
||||
|
||||
Dependencies and Constraints
|
||||
****************************
|
||||
|
||||
Consider the following dependencies and constraints:
|
||||
|
||||
* Inter-VM shared memory communication is a hardware-neutral feature.
|
||||
|
||||
* Guest OSes are required to have either of the following:
|
||||
|
||||
- An IVSHMEM driver, such as `virtio-WIN
|
||||
<https://github.com/virtio-win/kvm-guest-drivers-windows>`__ for Windows and
|
||||
`ivshmem APIs
|
||||
<https://docs.zephyrproject.org/apidoc/latest/group__ivshmem.html>`__ in
|
||||
Zephyr
|
||||
|
||||
- A mechanism granting user-space applications access to a PCI device, such as
|
||||
the `Userspace I/O (UIO) driver
|
||||
<https://www.kernel.org/doc/html/latest/driver-api/uio-howto.html>`__ in
|
||||
Linux
|
||||
|
||||
Configuration Overview
|
||||
**********************
|
||||
|
||||
The ``ivshmem`` solution is disabled by default in ACRN. You can enable
|
||||
it using the :ref:`ACRN Configurator <acrn_configurator_tool>` with these
|
||||
steps:
|
||||
The :ref:`acrn_configurator_tool` lets you configure inter-VM shared memory
|
||||
communication among VMs. The following documentation is a general overview of
|
||||
the configuration process.
|
||||
|
||||
- Enable ``ivshmem`` via ACRN Configurator GUI.
|
||||
To configure inter-VM shared memory communication among VMs, go to the
|
||||
**Hypervisor Global Settings > Basic Parameters > InterVM shared memory**. Click
|
||||
**+** to add the first shared memory region.
|
||||
|
||||
- Set ``hv.FEATURES.IVSHMEM.IVSHMEM_ENABLED`` to ``y``
|
||||
.. image:: images/configurator-ivshmem01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
- Edit ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` to specify the shared
|
||||
memory name, size and
|
||||
communication VMs. The ``IVSHMEM_REGION`` format is ``shm_name,shm_size,VM IDs``:
|
||||
For the shared memory region:
|
||||
|
||||
- ``shm_name`` - Specify a shared memory name. The name needs to start
|
||||
with the ``hv:/`` prefix for hv-land, or ``dm:/`` for dm-land.
|
||||
For example, ``hv:/shm_region_0`` for hv-land and ``dm:/shm_region_0``
|
||||
for dm-land.
|
||||
#. Enter a name for the shared memory region.
|
||||
#. Select the source of the emulation, either Hypervisor or Device Model.
|
||||
#. Select the size of the shared memory region.
|
||||
#. Select at least two VMs that can use the shared memory region.
|
||||
#. Enter a virtual Bus:Device.Function (BDF) address for each VM or leave it
|
||||
blank. If the field is blank, the tool provides an address when the
|
||||
configuration is saved.
|
||||
|
||||
- ``shm_size`` - Specify a shared memory size. The unit is megabyte. The
|
||||
size ranges from 2 megabytes to 512 megabytes and must be a power of 2 megabytes.
|
||||
For example, to set up a shared memory of 2 megabytes, use ``2``
|
||||
instead of ``shm_size``.
|
||||
.. note::
|
||||
|
||||
- ``VM IDs`` - Specify the VM IDs to use the same shared memory
|
||||
communication and separate it with ``:``. For example, the
|
||||
communication between VM0 and VM2, it can be written as ``0:2``
|
||||
The release v3.0 ACRN Configurator has an issue where you need to save the
|
||||
configuration twice to see the generated BDF address in the shared memory
|
||||
setting. (:acrn-issue:`7831`)
|
||||
|
||||
- Build with the XML configuration, refer to :ref:`gsg`.
|
||||
#. Add more VMs to the shared memory region by clicking **+** on the right
|
||||
side of an existing VM. Or click **-** to delete a VM.
|
||||
|
||||
Ivshmem DM-Land Usage
|
||||
*********************
|
||||
To add another shared memory region, click **+** on the right side of an
|
||||
existing region. Or click **-** to delete a region.
|
||||
|
||||
Follow `Enable Ivshmem Support`_ and
|
||||
add below line as an ``acrn-dm`` boot parameter::
|
||||
.. image:: images/configurator-ivshmem02.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
-s slot,ivshmem,shm_name,shm_size
|
||||
Learn More
|
||||
**********
|
||||
|
||||
where
|
||||
ACRN supports multiple inter-VM communication methods. For a comparison, see
|
||||
:ref:`inter-vm_communication`.
|
||||
|
||||
- ``-s slot`` - Specify the virtual PCI slot number
|
||||
|
||||
- ``ivshmem`` - Virtual PCI device emulating the Shared Memory
|
||||
|
||||
- ``shm_name`` - Specify a shared memory name. This ``shm_name`` must be listed
|
||||
in ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` in `Enable Ivshmem Support`_ section and needs to start
|
||||
with ``dm:/`` prefix.
|
||||
|
||||
- ``shm_size`` - Shared memory size of selected ``shm_name``.
|
||||
|
||||
There are two ways to insert the above boot parameter for ``acrn-dm``:
|
||||
|
||||
- Manually edit the launch script file. In this case, ensure that both
|
||||
``shm_name`` and ``shm_size`` match those defined via the ACRN Configurator
|
||||
tool.
|
||||
|
||||
- Use the following command to create a launch script, when IVSHMEM is enabled
|
||||
and ``hv.FEATURES.IVSHMEM.IVSHMEM_REGION`` is properly configured via
|
||||
the ACRN Configurator.
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5
|
||||
|
||||
python3 misc/config_tools/launch_config/launch_cfg_gen.py \
|
||||
--board <path_to_your_board_xml> \
|
||||
--scenario <path_to_your_scenario_xml> \
|
||||
--launch <path_to_your_launch_script_xml> \
|
||||
--user_vmid <desired_single_vmid_or_0_for_all_vmids>
|
||||
|
||||
.. note:: This device can be used with real-time VM (RTVM) as well.
|
||||
|
||||
.. _ivshmem-hv:
|
||||
|
||||
Ivshmem HV-Land Usage
|
||||
*********************
|
||||
|
||||
Follow `Enable Ivshmem Support`_ to setup HV-Land Ivshmem support.
|
||||
|
||||
Ivshmem Notification Mechanism
|
||||
******************************
|
||||
|
||||
Notification (doorbell) of ivshmem device allows VMs with ivshmem
|
||||
devices enabled to notify (interrupt) each other following this flow:
|
||||
|
||||
Notification Sender (VM):
|
||||
The sending VM triggers a notification to the target VM by writing the target Peer ID (equal to the VM ID of the target VM) and the vector index to the doorbell register of its ivshmem device. The layout of the doorbell register is described in :ref:`ivshmem-hld`.
|
||||
|
||||
Hypervisor:
|
||||
When the doorbell register is programmed, the hypervisor looks up the target VM by the Peer ID and injects an MSI interrupt into that VM.
|
||||
|
||||
Notification Receiver (VM):
|
||||
The receiving VM gets the MSI interrupt and forwards it to the related application.
|
||||
|
||||
ACRN supports up to 8 MSI-X interrupt vectors for the ivshmem device. Guest VMs must implement their own mechanism to forward MSI interrupts to applications.
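
As a rough illustration of the sender side, the following shell sketch rings the
doorbell of an hv-land ivshmem device from a Linux guest. It assumes the
conventional ivshmem register layout (doorbell register at offset 0x0C of BAR0,
with the target Peer ID in bits 31:16 and the vector index in bits 15:0; see
:ref:`ivshmem-hld` for the authoritative layout), that BusyBox ``devmem`` is
available in the guest, and that the kernel permits ``/dev/mem`` access. The
BDF, peer ID, and vector values are placeholders.

.. code-block:: bash

   # Sketch only: notify peer VM 2 on vector 0 through the ivshmem doorbell.
   BDF=00:06.0    # ivshmem device BDF in this guest (placeholder)
   PEER=2         # VM ID of the VM to notify (placeholder)
   VECTOR=0       # MSI-X vector index, 0..7 (placeholder)
   BAR0=0x$(lspci -v -s "$BDF" | awk '/Memory at/ {print $3; exit}')
   sudo busybox devmem $(( BAR0 + 0x0C )) 32 $(( (PEER << 16) | VECTOR ))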
|
||||
|
||||
.. note:: Notification is supported only for HV-land ivshmem devices. (Future
|
||||
support may include notification for DM-land ivshmem devices.)
|
||||
|
||||
Inter-VM Communication Examples
|
||||
*******************************
|
||||
|
||||
DM-Land Example
|
||||
===============
|
||||
|
||||
This example uses dm-land inter-VM communication between two
|
||||
Linux-based post-launched VMs (VM1 and VM2).
|
||||
|
||||
.. note:: An ``ivshmem`` Windows driver exists and can be found
|
||||
`here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_.
|
||||
|
||||
1. Add a new virtual PCI device for both VMs: the device type is
|
||||
``ivshmem``, shared memory name is ``dm:/test``, and shared memory
|
||||
size is 2MB. Both VMs must have the same shared memory name and size:
|
||||
|
||||
- VM1 Launch Script Sample
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 6
|
||||
|
||||
acrn-dm -m $mem_size -s 0:0,hostbridge \
|
||||
-s 5,virtio-console,@stdio:stdio_port \
|
||||
-s 6,virtio-hyper_dmabuf \
|
||||
-s 3,virtio-blk,/home/acrn/UserVM1.img \
|
||||
-s 4,virtio-net,tap=tap0 \
|
||||
-s 6,ivshmem,dm:/test,2 \
|
||||
-s 7,virtio-rnd \
|
||||
--ovmf /usr/share/acrn/bios/OVMF.fd \
|
||||
$vm_name
|
||||
|
||||
|
||||
- VM2 Launch Script Sample
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 4
|
||||
|
||||
acrn-dm -m $mem_size -s 0:0,hostbridge \
|
||||
-s 3,virtio-blk,/home/acrn/UserVM2.img \
|
||||
-s 4,virtio-net,tap=tap0 \
|
||||
-s 5,ivshmem,dm:/test,2 \
|
||||
--ovmf /usr/share/acrn/bios/OVMF.fd \
|
||||
$vm_name
|
||||
|
||||
2. Boot two VMs and use ``lspci | grep "shared memory"`` to verify that the virtual device is ready for each VM.
|
||||
|
||||
- For VM1, it shows ``00:06.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
|
||||
- For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``
|
||||
|
||||
3. As recorded in the `PCI ID Repository <https://pci-ids.ucw.cz/read/PC/1af4>`_,
|
||||
the ``ivshmem`` device vendor ID is ``1af4`` (Red Hat) and device ID is ``1110``
|
||||
(Inter-VM shared memory). Use these commands to probe the device::
|
||||
|
||||
sudo modprobe uio
|
||||
sudo modprobe uio_pci_generic
|
||||
echo "1af4 1110" | sudo tee /sys/bus/pci/drivers/uio_pci_generic/new_id
|
||||
|
||||
.. note:: These commands are applicable to Linux-based guests with ``CONFIG_UIO`` and ``CONFIG_UIO_PCI_GENERIC`` enabled.
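
   If you are not sure whether your guest kernel has these options, a quick
   check (assuming the running kernel's config is available under ``/boot``,
   as it is on Ubuntu) is:

   .. code-block:: bash

      grep -E 'CONFIG_UIO(_PCI_GENERIC)?=' /boot/config-"$(uname -r)"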
|
||||
|
||||
4. Finally, a user application can get the shared memory base address from
|
||||
the ``ivshmem`` device BAR resource
|
||||
(``/sys/class/uio/uioX/device/resource2``) and the shared memory size from
|
||||
the ``ivshmem`` device config resource
|
||||
(``/sys/class/uio/uioX/device/config``).
|
||||
|
||||
The ``X`` in ``uioX`` above is a number that can be retrieved using the ``ls`` command; a short sketch for reading the base address and size follows the list below:
|
||||
|
||||
- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
|
||||
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
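
   The following sketch shows one way to read those values from the guest
   shell. It assumes the ivshmem device is bound as ``uio0`` and that BAR2
   holds the shared memory, as described above; adjust the ``uio0`` name to
   match your system.

   .. code-block:: bash

      UIO=uio0
      RES=/sys/class/uio/$UIO/device/resource
      # Line 3 of the resource file describes BAR2: "<start> <end> <flags>".
      read -r START END FLAGS < <(sed -n '3p' "$RES")
      printf 'BAR2 base: %s  size: %d bytes\n' "$START" $(( END - START + 1 ))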
|
||||
|
||||
HV-Land Example
|
||||
===============
|
||||
|
||||
This example uses hv-land inter-VM communication between two
|
||||
Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
|
||||
|
||||
1. Make a copy of the predefined hybrid_rt scenario on whl-ipc-i5 (available at
|
||||
``acrn-hypervisor/misc/config_tools/data/whl-ipc-i5/hybrid_rt.xml``) and
|
||||
configure shared memory for the communication between VM0 and VM2. The shared
|
||||
memory name is ``hv:/shm_region_0``, and shared memory size is 2M bytes. The
|
||||
resulting scenario XML should look like this:
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 2,3
|
||||
|
||||
<IVSHMEM>
|
||||
<IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
|
||||
<IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
|
||||
</IVSHMEM>
|
||||
|
||||
2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board::
|
||||
|
||||
make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml>
|
||||
|
||||
3. Add a new virtual PCI device for VM2 (post-launched VM): the device type is
|
||||
``ivshmem``, shared memory name is ``hv:/shm_region_0``, and shared memory
|
||||
size is 2MB.
|
||||
|
||||
- VM2 Launch Script Sample
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 4
|
||||
|
||||
acrn-dm -m $mem_size -s 0:0,hostbridge \
|
||||
-s 3,virtio-blk,/home/acrn/UserVM2.img \
|
||||
-s 4,virtio-net,tap=tap0 \
|
||||
-s 5,ivshmem,hv:/shm_region_0,2 \
|
||||
--ovmf /usr/share/acrn/bios/OVMF.fd \
|
||||
$vm_name
|
||||
|
||||
4. Continue following the dm-land steps 2-4; note that the ``ivshmem`` device BDF may differ depending on the configuration.
|
||||
For details on ACRN IVSHMEM high-level design, see :ref:`ivshmem-hld`.
|
@ -1,99 +1,93 @@
|
||||
.. _gpu-passthrough:
|
||||
|
||||
Enable GVT-d in ACRN
|
||||
####################
|
||||
Enable GPU Passthrough (GVT-d)
|
||||
##############################
|
||||
|
||||
This tutorial describes how to enable GVT-d in ACRN.
|
||||
About GVT-d
|
||||
************
|
||||
|
||||
GVT-d is a graphics virtualization approach that is also known as the
|
||||
Intel-Graphics-Device passthrough feature. It allows for direct assignment of a
|
||||
GPU to a single VM, passing the native driver capabilities through to the
|
||||
hypervisor without any limitations. For example, you can pass through a VGA
|
||||
controller to a VM, allowing users to access a Windows or Ubuntu desktop.
|
||||
|
||||
A typical use of GVT-d is to give a post-launched VM full control to the
|
||||
graphics card when that VM serves as the main user interface while all the other
|
||||
VMs are in the background and headless.
|
||||
|
||||
Dependencies and Constraints
|
||||
****************************
|
||||
|
||||
Consider the following dependencies and constraints:
|
||||
|
||||
* GVT-d applies only to an Intel integrated graphics card. Discrete graphics
|
||||
cards are passed through to VMs as a standard PCI device.
|
||||
|
||||
* When a device is assigned to a VM via GVT-d, no other VMs can use it.
|
||||
|
||||
.. note:: After GVT-d is enabled, have either a serial port
|
||||
or SSH session open in the Service VM to interact with it.
|
||||
|
||||
Introduction
|
||||
************
|
||||
Configuration Overview
|
||||
**********************
|
||||
|
||||
Intel GVT-d is a graphics virtualization approach that is also known as
|
||||
the Intel-Graphics-Device passthrough feature. Based on Intel VT-d
|
||||
technology, it offers useful special graphics-related configurations.
|
||||
It allows for direct assignment of an entire GPU's prowess to a single
|
||||
user, passing the native driver capabilities through to the hypervisor
|
||||
without any limitations.
|
||||
The :ref:`acrn_configurator_tool` lets you select PCI devices, such as VGA
|
||||
controllers, as passthrough devices for a VM. The following documentation is a
|
||||
general overview of the configuration process.
|
||||
|
||||
Verified Version
|
||||
*****************
|
||||
To select a passthrough device for a VM, select the VM and go to **Basic
|
||||
Parameters > PCI devices**. Click **+** to add a device.
|
||||
|
||||
- ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
|
||||
- ACRN-Kernel (Service VM kernel): **master** branch, commit ID **095509221660daf82584ebdd8c50ea0078da3c2d**
|
||||
- ACRN-EDK2 (OVMF): **ovmf-acrn** branch, commit ID **0ff86f6b9a3500e4c7ea0c1064e77d98e9745947**
|
||||
.. image:: images/configurator-gvtd02.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
Prerequisites
|
||||
*************
|
||||
Select the device from the list.
|
||||
|
||||
Follow :ref:`these instructions <gsg>` to set up
|
||||
Ubuntu as the ACRN Service VM.
|
||||
.. image:: images/configurator-gvtd01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
Supported Hardware Platform
|
||||
***************************
|
||||
To add another passthrough device, click **+**. Or click **x** to delete a
|
||||
device.
|
||||
|
||||
ACRN has enabled GVT-d on the following platforms:
|
||||
Example Configuration
|
||||
*********************
|
||||
|
||||
* Kaby Lake
|
||||
* Whiskey Lake
|
||||
* Elkhart Lake
|
||||
The following steps show how to select and verify a passthrough VGA controller.
|
||||
The example extends the information provided in the :ref:`gsg`.
|
||||
|
||||
BIOS Settings
|
||||
*************
|
||||
#. In the ACRN Configurator, create a shared scenario with a Service VM and one
|
||||
post-launched User VM.
|
||||
|
||||
Kaby Lake Platform
|
||||
==================
|
||||
#. Select the post-launched User VM and go to **Basic Parameters > PCI
|
||||
devices**.
|
||||
|
||||
* Set **IGD Minimum Memory** to **64MB** in **Devices** |rarr|
|
||||
**Video** |rarr| **IGD Minimum Memory**.
|
||||
#. Click **+** to add a device, and select the VGA controller.
|
||||
|
||||
Whiskey Lake Platform
|
||||
=====================
|
||||
.. image:: images/configurator-gvtd01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
* Set **PM Support** to **Enabled** in **Chipset** |rarr| **System
|
||||
Agent (SA) Configuration** |rarr| **Graphics Configuration** |rarr|
|
||||
**PM support**.
|
||||
* Set **DVMT Pre-Allocated** to **64MB** in **Chipset** |rarr|
|
||||
**System Agent (SA) Configuration**
|
||||
|rarr| **Graphics Configuration** |rarr| **DVMT Pre-Allocated**.
|
||||
#. Save the scenario and launch script.
|
||||
|
||||
Elkhart Lake Platform
|
||||
=====================
|
||||
#. Build ACRN, copy all the necessary files from the development computer to the
|
||||
target system, and launch the Service VM and post-launched User VM.
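
   Optionally, before checking inside the User VM, you can confirm from the
   Service VM that the VGA controller has been detached from its native
   graphics driver. The generated launch scripts typically bind passthrough
   devices to ``pci-stub``, but the exact driver name may differ on your setup:

   .. code-block:: bash

      lspci -ks 00:02.0   # look for "Kernel driver in use: pci-stub"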
|
||||
|
||||
* Set **DMVT Pre-Allocated** to **64MB** in **Intel Advanced Menu**
|
||||
|rarr| **System Agent(SA) Configuration** |rarr|
|
||||
**Graphics Configuration** |rarr| **DMVT Pre-Allocated**.
|
||||
#. Verify that the VM can access the VGA controller: Run the following command
|
||||
in the post-launched User VM:
|
||||
|
||||
Passthrough the GPU to Guest
|
||||
****************************
|
||||
.. code-block:: console
|
||||
|
||||
1. Copy ``/usr/share/acrn/samples/nuc/launch_win.sh`` to ``install_win.sh``
|
||||
root@acrn-Standard-PC-i440FX-PIIX-1996:~# lspci |grep VGA
|
||||
00:02.0 VGA compatible controller: Intel Corporation Device 4680 (rev 0c)
|
||||
|
||||
::
|
||||
|
||||
cp /usr/share/acrn/samples/nuc/launch_win.sh ~/install_win.sh
|
||||
|
||||
2. Modify the ``install_win.sh`` script to specify the Windows image you use.
|
||||
|
||||
3. Modify the ``install_win.sh`` script to enable GVT-d:
|
||||
|
||||
Add the following commands before ``acrn-dm -m $mem_size -s 0:0,hostbridge \``
|
||||
|
||||
::
|
||||
|
||||
gpudevice=`cat /sys/bus/pci/devices/0000:00:02.0/device`
|
||||
|
||||
echo "8086 $gpudevice" > /sys/bus/pci/drivers/pci-stub/new_id
|
||||
echo "0000:00:02.0" > /sys/bus/pci/devices/0000:00:02.0/driver/unbind
|
||||
echo "0000:00:02.0" > /sys/bus/pci/drivers/pci-stub/bind
|
||||
|
||||
|
||||
4. Run ``launch_win.sh``.
|
||||
Troubleshooting
|
||||
***************
|
||||
|
||||
Enable the GVT-d GOP Driver
|
||||
***************************
|
||||
===========================
|
||||
|
||||
When enabling GVT-d, the Guest OS cannot light up the physical screen
|
||||
before the OS driver loads. As a result, the Guest BIOS and the Grub UI
|
||||
@ -107,7 +101,7 @@ passthrough environment. The physical display can be initialized
|
||||
by the GOP and used by the Guest BIOS and Guest Grub.
|
||||
|
||||
Steps
|
||||
=====
|
||||
-----
|
||||
|
||||
1. Fetch the ACRN OVMF:
|
||||
|
||||
@ -166,7 +160,7 @@ Keep in mind the following:
|
||||
- Modify the launch script to specify the OVMF you built just now.
|
||||
|
||||
Script
|
||||
======
|
||||
------
|
||||
|
||||
Once you've installed the Docker environment, you can use this
|
||||
`script <../_static/downloads/build_acrn_ovmf.sh>`_ to build ACRN OVMF
|
||||
|
New image files added under ``doc/tutorials/images/``:

- configurator-cache01.png (29 KiB)
- configurator-cache02.png (29 KiB)
- configurator-cache03.png (44 KiB)
- configurator-cache04.png (46 KiB)
- configurator-cache05.png (57 KiB)
- configurator-erroricon.png (23 KiB)
- configurator-gvtd01.png (21 KiB)
- configurator-gvtd02.png (11 KiB)
- configurator-ivshmem01.png (11 KiB)
- configurator-ivshmem02.png (49 KiB)
- configurator-rt01.png (46 KiB)
- configurator-vcat01.png (9.0 KiB)
- configurator-vuartconn01.png (60 KiB)
- configurator-vuartconn02.png (6.7 KiB)
- configurator-vuartconn03.png (18 KiB)
- ubuntu-uservm-0.png (53 KiB)
- ubuntu-uservm-1.png (36 KiB)
- ubuntu-uservm-2.png (34 KiB)
- ubuntu-uservm-3.png (90 KiB)
- ubuntu-uservm-4.png (225 KiB)
- ubuntu-uservm-5.png (69 KiB)
- ubuntu_uservm_01.png (15 KiB)
- ubuntu_uservm_02.png (36 KiB)
- ubuntu_uservm_03.png (34 KiB)
- ubuntu_uservm_begin_install.png (103 KiB)
- ubuntu_uservm_customize.png (39 KiB)
- ubuntu_uservm_storage.png (31 KiB)
- ubuntu_uservm_virtioblock.png (13 KiB)
- vm-guide-create-new-scenario.png (28 KiB)
- vm-guide-image-virtio-block.png (12 KiB)
- vm-guide-set-vm-basic-parameters.png (31 KiB)
@ -86,6 +86,7 @@ background introductions).
|
||||
- Applications need to implement protocols such as a handshake, data transfer, and data
|
||||
integrity.
|
||||
|
||||
.. _inter-vm_communication_ivshmem_app:
|
||||
|
||||
How to implement an Ivshmem application on ACRN
|
||||
***********************************************
|
||||
|
@ -1,254 +1,229 @@
|
||||
.. _rdt_configuration:
|
||||
|
||||
Enable RDT Configuration
|
||||
########################
|
||||
Enable Intel Resource Director Technology (RDT) Configurations
|
||||
###############################################################
|
||||
|
||||
About Intel Resource Director Technology (RDT)
|
||||
**********************************************
|
||||
|
||||
On x86 platforms that support Intel Resource Director Technology (RDT)
|
||||
allocation features such as Cache Allocation Technology (CAT) and Memory
|
||||
Bandwidth Allocation (MBA), the ACRN hypervisor can be used to limit regular
|
||||
VMs that may be over-utilizing common resources such as cache and memory
|
||||
bandwidth relative to their priorities so that the performance of other
|
||||
higher priority VMs (such as RTVMs) is not impacted.
|
||||
allocation features, the ACRN hypervisor can partition the shared cache among
|
||||
VMs to minimize performance impacts on higher-priority VMs, such as real-time
|
||||
VMs (RTVMs). “Shared cache” refers to cache that is shared among multiple CPU
|
||||
cores. By default, VMs running on these cores are configured to use the entire
|
||||
cache, effectively sharing the cache among all VMs without any partitions. This
|
||||
design may cause too many cache misses for applications running in
|
||||
higher-priority VMs, negatively affecting their performance. The ACRN hypervisor
|
||||
can help minimize cache misses by isolating a portion of the shared cache for
|
||||
a specific VM.
|
||||
|
||||
Using RDT includes three steps:
|
||||
ACRN supports the following features:
|
||||
|
||||
1. Detect and enumerate RDT allocation capabilities on supported
|
||||
resources such as cache and memory bandwidth.
|
||||
#. Set up resource mask array MSRs (model-specific registers) for each
|
||||
CLOS (class of service, which is a resource allocation), basically to
|
||||
limit or allow access to resource usage.
|
||||
#. Select the CLOS for the CPU associated with the VM that will apply
|
||||
the resource mask on the CP.
|
||||
* Cache Allocation Technology (CAT)
|
||||
* Code and Data Prioritization (CDP)
|
||||
* Virtual Cache Allocation Technology (vCAT)
|
||||
|
||||
Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
|
||||
The CAT support in the hypervisor isolates a portion of the cache for a VM from
|
||||
other VMs. Generally, certain cache resources are allocated for the RTVMs to
|
||||
reduce performance interference by other VMs attempting to use the same cache.
|
||||
|
||||
* Using an HV debug shell (See `Tuning RDT resources in HV debug shell`_)
|
||||
* Using a VM configuration (See `Configure RDT for VM using VM Configuration`_)
|
||||
The CDP feature in RDT is an extension of CAT that enables separate control over
|
||||
code and data placement in the cache. The CDP support in the hypervisor isolates
|
||||
a portion of the cache for code and another portion for data for the same VM.
|
||||
|
||||
The following sections discuss how to detect, enumerate capabilities, and
|
||||
configure RDT resources for VMs in the ACRN hypervisor.
|
||||
ACRN also supports the virtualization of CAT, referred to as vCAT. With
|
||||
vCAT enabled, the hypervisor presents CAT to a selected set of VMs to allow the
|
||||
guest OSes to further isolate the cache used by higher-priority processes in
|
||||
those VMs.
|
||||
|
||||
For further details, refer to the ACRN RDT high-level design :ref:`hv_rdt` and
|
||||
Dependencies and Constraints
|
||||
*****************************
|
||||
|
||||
Consider the following dependencies and constraints:
|
||||
|
||||
* The hardware must support RDT in order for ACRN to enable RDT support in the
|
||||
hypervisor.
|
||||
|
||||
* The cache must be shared cache (cache shared across multiple CPU cores), as
|
||||
opposed to private cache (cache that is owned by only one CPU core). If the
|
||||
cache is private, CAT, CDP, and vCAT have no benefit because the cache is
|
||||
already exclusively used by one core. For this reason, the ACRN Configurator
|
||||
will not allow you to configure private cache.
|
||||
|
||||
* The ACRN Configurator relies on the board configuration file to provide CAT
|
||||
information that it can use to display configuration parameters. On Tiger Lake
|
||||
systems, L3 CAT, also known as LLC CAT, is model specific and
|
||||
non-architectural. For these reasons, the Board Inspector doesn't detect LLC
|
||||
CAT, and therefore doesn't provide LLC CAT information in the board
|
||||
configuration file even if the board has LLC CAT capabilities. The Board
|
||||
Inspector offers a way to manually add LLC CAT information to the board
|
||||
configuration file via a command-line option described in
|
||||
:ref:`board_inspector_tool`. Run the Board Inspector with the command-line
|
||||
option, then import the board configuration file into the ACRN Configurator.
|
||||
|
||||
* The guest OS in a VM with vCAT enabled requires utilities in that OS for
|
||||
further cache allocation configurations. An example is the `resctrl
|
||||
<https://docs.kernel.org/x86/resctrl.html>`__ framework in Linux; a brief usage sketch follows this list.
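
As a loose illustration of such in-guest tuning, the following sketch assumes a
Linux guest whose kernel has resctrl support enabled and a vCAT-enabled VM that
exposes L2 CAT; the group name, domain ID, and mask are placeholders:

.. code-block:: bash

   # Mount the resctrl filesystem and inspect what the guest sees.
   sudo mount -t resctrl resctrl /sys/fs/resctrl
   cat /sys/fs/resctrl/info/L2/cbm_mask        # maximum capacity bitmask exposed to the guest
   # Create an allocation group and assign it a cache mask (placeholder values;
   # adjust the resource name, domain ID, and mask to what your guest reports).
   sudo mkdir /sys/fs/resctrl/rt_group
   echo "L2:0=f" | sudo tee /sys/fs/resctrl/rt_group/schemata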
|
||||
|
||||
Configuration Overview
|
||||
**********************
|
||||
|
||||
You can allocate cache to each VM at the virtual CPU (vCPU) level. For example,
|
||||
you can create a post-launched real-time VM and assign three physical CPU
|
||||
cores to it. ACRN assigns a vCPU ID to each physical CPU. Furthermore, you can
|
||||
specify a vCPU as a real-time vCPU. Then you can allocate a portion of the cache
|
||||
to the real-time vCPU and allocate the rest of the cache to be shared among the
|
||||
other vCPUs. This type of configuration allows the real-time vCPU to use its
|
||||
assigned cache without interference from the other vCPUs, thus improving the
|
||||
performance of applications running on the real-time vCPU. The following
|
||||
documentation is a general overview of the configuration process.
|
||||
|
||||
The :ref:`acrn_configurator_tool` provides a user interface to help you allocate
|
||||
cache to vCPUs. The configuration process requires setting VM parameters, then
|
||||
allocating cache to the VMs via an interface in the hypervisor parameters. This
|
||||
documentation presents the configuration process as a linear flow, but in
|
||||
reality you may find yourself moving back and forth between setting the
|
||||
hypervisor parameters and the VM parameters until you are satisfied with the
|
||||
entire configuration.
|
||||
|
||||
For a real-time VM, you must set the following parameters in the VM's **Basic
|
||||
Parameters**:
|
||||
|
||||
* **VM type**: Select **Real-time**.
|
||||
|
||||
* **pCPU ID**: Select the physical CPU affinity for the VM.
|
||||
|
||||
* **Virtual CPU ID**: Note the vCPU ID that the tool assigns to each physical
|
||||
CPU. You will need to know the ID when you are ready to allocate cache.
|
||||
|
||||
* **Real-time vCPU**: Select the Real-time vCPU check box next to each real-time
|
||||
vCPU. The ACRN Configurator uses this information to create default cache
|
||||
configurations, as you will see later in this documentation. If you change the
|
||||
VM type from Real-time to Standard, the ACRN Configurator disables the
|
||||
Real-time vCPU check box.
|
||||
|
||||
.. image:: images/configurator-rt01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
To use vCAT for the VM, you must also set the following parameters in the VM's
|
||||
**Advanced Parameters**:
|
||||
|
||||
* **Maximum virtual CLOS**: Select the maximum number of virtual CLOS masks.
|
||||
This parameter defines the number of cache chunks that you will see in the
|
||||
hypervisor parameters.
|
||||
|
||||
* Select **VM Virtual Cache Allocation Tech**.
|
||||
|
||||
.. image:: images/configurator-vcat01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
Next, you can enable Intel RDT features in **Hypervisor Global Settings >
|
||||
Advanced Parameters > Memory Isolation for Performance**. You can enable one of
|
||||
the following combinations of features:
|
||||
|
||||
* Cache Allocation Technology (CAT) alone
|
||||
|
||||
* Cache Allocation Technology plus Code and Data Prioritization (CDP)
|
||||
|
||||
* Cache Allocation Technology plus Virtual Cache Allocation Technology (vCAT)
|
||||
|
||||
The following figure shows Cache Allocation Technology enabled:
|
||||
|
||||
.. image:: images/configurator-cache01.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
When CDP or vCAT is enabled, CAT must be enabled too. The tool selects CAT if it's not already selected.
|
||||
|
||||
.. image:: images/configurator-cache02.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
CDP and vCAT can't be enabled at the same time, so the tool clears the vCAT check box when CDP is selected and vice versa.
|
||||
|
||||
Based on your selection, the tool displays the available cache in tables.
|
||||
Example:
|
||||
|
||||
.. image:: images/configurator-cache03.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
The table title shows important information:
|
||||
|
||||
* Cache level, such as Level 3 (L3) or Level 2 (L2)
|
||||
|
||||
* Physical CPU cores that can access the cache
|
||||
|
||||
The above example shows an L2 cache table. VMs assigned to any of CPU cores 2-6 can have cache allocated to them.
|
||||
|
||||
The table's y-axis shows the names of all VMs that are assigned to the CPU cores
|
||||
noted in the table title, as well as their vCPU IDs. The table categorizes the
|
||||
vCPUs as either standard or real-time. The real-time vCPUs are those that are
|
||||
set as real-time in the VM's parameters. All other vCPUs are considered
|
||||
standard. The above example shows one real-time vCPU (VM1 vCPU 2) and two
|
||||
standard vCPUs (VM0 vCPU 2 and 6).
|
||||
|
||||
.. note::
|
||||
|
||||
The Service VM is automatically assigned to all CPUs, so it appears in the standard category in all cache tables.
|
||||
|
||||
The table's x-axis shows the number of available cache chunks. You can see the
|
||||
size of each cache chunk in the note below the table. In the above example, 20
|
||||
cache chunks are available to allocate to the VMs, and each cache chunk is 64KB.
|
||||
All cache chunks are yellow, which means all of them are allocated to all VMs.
|
||||
All VMs share the entire cache.
|
||||
|
||||
The **Apply basic real-time defaults** button creates a basic real-time
|
||||
configuration if real-time vCPUs exist. If there are no real-time vCPUs, the
|
||||
button will not do anything.
|
||||
|
||||
If you select Cache Allocation Technology (CAT) alone, the **Apply basic
|
||||
real-time defaults** button allocates a different cache chunk to each real-time
|
||||
vCPU, making sure it doesn't overlap the cache of any other vCPU. The rest of
|
||||
the cache is shared among the standard vCPUs. In the following example, only VM1
|
||||
vCPU 2 can use cache chunk19, while all other vCPUs share the rest of the cache.
|
||||
|
||||
.. image:: images/configurator-cache04.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
If you select CAT with Code and Data Prioritization, you can allocate different
|
||||
cache chunks to code or data on the same vCPU. The **Apply basic real-time
|
||||
defaults** button allocates one cache chunk to code on the real-time vCPU and a
|
||||
different cache chunk to data on the same vCPU, making sure the cache chunks
|
||||
don't overlap any others. In the following example, VM1 vCPU 2 can use cache
|
||||
chunk19 for code and chunk18 for data, while all other vCPUs share the rest of
|
||||
the cache.
|
||||
|
||||
.. image:: images/configurator-cache05.png
|
||||
:align: center
|
||||
:class: drop-shadow
|
||||
|
||||
To further customize the cache allocation, you can drag the right or left edges
|
||||
of the yellow boxes to cover the cache chunks that you want to allocate to
|
||||
specific VMs.
|
||||
|
||||
.. note::
|
||||
|
||||
If you have a real-time VM, ensure its cache chunks do not overlap with any
|
||||
other VM's cache chunks.
|
||||
|
||||
The tool helps you create valid configurations based on the underlying platform
|
||||
architecture. For example, it is only possible to assign consecutive cache
|
||||
chunks to a vCPU; there can be no gaps. Also, a vCPU must have access to at
|
||||
least one cache chunk.
|
||||
|
||||
Learn More
|
||||
**********
|
||||
|
||||
For details on the ACRN RDT high-level design, see :ref:`hv_rdt`.
|
||||
|
||||
For details about RDT, see
|
||||
`Intel 64 and IA-32 Architectures Software Developer's Manual (SDM), Volume 3,
|
||||
(Section 17.19 Intel Resource Director Technology Allocation Features)
|
||||
<https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html>`_.
|
||||
|
||||
.. _rdt_detection_capabilities:
|
||||
|
||||
RDT Detection and Resource Capabilities
|
||||
***************************************
|
||||
From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
|
||||
resource capabilities. Use the platform's serial port for the HV shell.
|
||||
|
||||
Check if the platform supports RDT with ``cpuid``. First, run
|
||||
``cpuid 0x7 0x0``; the return value EBX [bit 15] is set to 1 if the
|
||||
platform supports RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX
|
||||
[3-1] bits. EBX [bit 1] indicates that L3 CAT is supported. EBX [bit 2]
|
||||
indicates that L2 CAT is supported. EBX [bit 3] indicates that MBA is
|
||||
supported. To query the capabilities of the supported resources, use the
|
||||
bit position as a subleaf index. For example, run ``cpuid 0x10 0x2`` to
|
||||
query the L2 CAT capability.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>cpuid 0x7 0x0
|
||||
cpuid leaf: 0x7, subleaf: 0x0, 0x0:0xd39ffffb:0x00000818:0xbc000400
|
||||
|
||||
L3/L2 bit encoding:
|
||||
|
||||
* EAX [bit 4:0] reports the length of the cache mask minus one. For
|
||||
example, a value 0xa means the cache mask is 0x7ff.
|
||||
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the
|
||||
corresponding unit of the cache allocation that can be used by other
|
||||
entities in the platform (e.g. integrated graphics engine).
|
||||
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization
|
||||
Technology is supported.
|
||||
* EDX [bit 15:0] reports the maximum CLOS supported for the resource
|
||||
minus one. For example, a value of 0xf means the max CLOS supported
|
||||
is 0x10.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>cpuid 0x10 0x0
|
||||
cpuid leaf: 0x10, subleaf: 0x0, 0x0:0xa:0x0:0x0
|
||||
|
||||
ACRN:\>cpuid 0x10 0x1
|
||||
cpuid leaf: 0x10, subleaf: 0x1, 0xa:0x600:0x4:0xf
|
||||
|
||||
MBA bit encoding:
|
||||
|
||||
* EAX [bit 11:0] reports the maximum MBA throttling value minus one. For example, a value of 0x59 means the max delay value is 0x5a (90 decimal).
|
||||
* EBX [bit 31:0] reserved.
|
||||
* ECX [bit 2] reports whether the response of the delay values is linear.
|
||||
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0x7 means the max CLOS supported is 0x8.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>cpuid 0x10 0x0
|
||||
cpuid leaf: 0x10, subleaf: 0x0, 0x0:0xa:0x0:0x0
|
||||
|
||||
ACRN:\>cpuid 0x10 **0x3**
|
||||
cpuid leaf: 0x10, subleaf: 0x3, 0x59:0x0:0x4:0x7
|
||||
|
||||
.. note::
|
||||
ACRN takes the lowest common CLOS max value between the supported
|
||||
resources as maximum supported CLOS ID. For example, if max CLOS
|
||||
supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
|
||||
to 8. ACRN recommends having consistent capabilities across all RDT
|
||||
resources by using a common subset CLOS. This is done in order to minimize
|
||||
misconfiguration errors.
|
||||
|
||||
Tuning RDT Resources in HV Debug Shell
|
||||
**************************************
|
||||
This section explains how to configure the RDT resources from the HV debug
|
||||
shell.
|
||||
|
||||
#. Check the pCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
|
||||
running on pCPU0, and VM1 is running on pCPU1:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>vcpu_list
|
||||
|
||||
VM ID pCPU ID VCPU ID VCPU ROLE VCPU STATE
|
||||
===== ======= ======= ========= ==========
|
||||
0 0 0 PRIMARY Running
|
||||
1 1 0 PRIMARY Running
|
||||
|
||||
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``.
|
||||
For example, suppose you want to restrict VM1 to the lower 4 ways of the LLC and allocate the upper 7 ways of the LLC to VM0. First assign a CLOS to each VM (e.g., VM0 is assigned CLOS0 and VM1 is assigned CLOS1). Next, program the resource mask MSR that corresponds to CLOS0; in this example, IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, program the resource mask MSR that corresponds to CLOS1; in this example, IA32_L3_MASK_BASE + 1 is set to 0xf.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>wrmsr -p1 0xc90 0x7f0
|
||||
ACRN:\>wrmsr -p1 0xc91 0xf
|
||||
|
||||
#. Assign CLOS1 to pCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
(0xc8f) to 0x100000000 to use CLOS1, and assign CLOS0 to pCPU0 by
programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that
IA32_PQR_ASSOC is a per-LP MSR, so the CLOS must be programmed on each LP.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
ACRN:\>wrmsr -p0 0xc8f 0x000000000 (this is default and can be skipped)
|
||||
ACRN:\>wrmsr -p1 0xc8f 0x100000000
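
You can read the MSRs back from the HV debug shell to confirm the settings
took effect. This is a minimal check and assumes the shell's ``rdmsr``
command is available in your build:

.. code-block:: none

   ACRN:\>rdmsr -p1 0xc90     (expect 0x7f0, the CLOS0 cache mask)
   ACRN:\>rdmsr -p1 0xc91     (expect 0xf, the CLOS1 cache mask)
   ACRN:\>rdmsr -p1 0xc8f     (expect 0x100000000, CLOS1 selected on pCPU1)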
|
||||
|
||||
.. _rdt_vm_configuration:
|
||||
|
||||
Configure RDT for VM Using VM Configuration
|
||||
*******************************************
|
||||
|
||||
#. The RDT hardware feature is enabled by default on supported platforms. You
can confirm support in the platform-specific XML file generated by the
offline board inspector tool, which helps ACRN identify RDT-capable
platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
section of the scenario XML file, as shown in the example below. For
details on building ACRN with a scenario, refer to :ref:`gsg`.
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 6
|
||||
|
||||
<FEATURES>
|
||||
<RELOC>y</RELOC>
|
||||
<SCHEDULER>SCHED_BVT</SCHEDULER>
|
||||
<MULTIBOOT2>y</MULTIBOOT2>
|
||||
<RDT>
|
||||
<RDT_ENABLED>y</RDT_ENABLED>
|
||||
<CDP_ENABLED>n</CDP_ENABLED>
|
||||
<CLOS_MASK></CLOS_MASK>
|
||||
<MBA_DELAY></MBA_DELAY>
|
||||
</RDT>
</FEATURES>
|
||||
|
||||
#. Once RDT is enabled in the scenario XML file, the next step is to program
the desired cache masks and/or MBA delay values in the scenario file.
Each cache mask or MBA delay configuration corresponds to a CLOS ID. For
example, if the maximum supported CLOS ID is 4, then 4 cache mask settings
need to be in place, each corresponding to a CLOS ID starting from 0. The
example below sets the cache masks for 4 CLOS IDs and uses the default
delay value for MBA.
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 8,9,10,11,12
|
||||
|
||||
<FEATURES>
|
||||
<RELOC>y</RELOC>
|
||||
<SCHEDULER>SCHED_BVT</SCHEDULER>
|
||||
<MULTIBOOT2>y</MULTIBOOT2>
|
||||
<RDT>
|
||||
<RDT_ENABLED>y</RDT_ENABLED>
|
||||
<CDP_ENABLED>n</CDP_ENABLED>
|
||||
<CLOS_MASK>0xff</CLOS_MASK>
|
||||
<CLOS_MASK>0x3f</CLOS_MASK>
|
||||
<CLOS_MASK>0xf</CLOS_MASK>
|
||||
<CLOS_MASK>0x3</CLOS_MASK>
|
||||
<MBA_DELAY>0</MBA_DELAY>
|
||||
</RDT>
</FEATURES>
|
||||
|
||||
.. note::
|
||||
Users can change the mask values, but the cache mask must have
**contiguous bits** or a #GP fault can be triggered. Similarly, when
programming an MBA delay value, be sure to set the value to less than or
equal to the MAX delay value.
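
For example, with illustrative 8-bit masks (the usable mask width depends on
your platform's CAT capability):

.. code-block:: none

   0x0f  (0000 1111)  valid    - set bits are contiguous
   0xf0  (1111 0000)  valid    - set bits are contiguous
   0xa5  (1010 0101)  invalid  - set bits are not contiguous, may trigger #GP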
|
||||
|
||||
#. Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the
|
||||
scenario file. Follow `RDT detection and resource capabilities`_
|
||||
to identify the maximum supported CLOS ID that can be used. ACRN uses
|
||||
**the lowest common MAX CLOS** value among all RDT resources to avoid
|
||||
resource misconfigurations.
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 5,6,7,8
|
||||
|
||||
<vm id="0">
|
||||
<vm_type readonly="true">PRE_STD_VM</vm_type>
|
||||
<name>ACRN PRE-LAUNCHED VM0</name>
|
||||
<clos>
|
||||
<vcpu_clos>0</vcpu_clos>
|
||||
<vcpu_clos>1</vcpu_clos>
|
||||
</clos>
|
||||
</vm>
|
||||
|
||||
.. note::
|
||||
In ACRN, a lower CLOS always means higher priority (CLOS 0 > CLOS 1 > CLOS 2 > ... > CLOS n),
so carefully program each VM's CLOS accordingly.
|
||||
|
||||
#. Consider vCPU affinity carefully. In a cache isolation configuration, in
addition to isolating the CAT-capable caches, you must also isolate
lower-level caches. In the following example, logical processors #0 and #2
share the L1 and L2 caches, so do not assign LP #0 and LP #2 to different
VMs that need cache isolation. Treat LP #1 and LP #3 with similar
consideration:
|
||||
|
||||
.. code-block:: none
|
||||
:emphasize-lines: 3
|
||||
|
||||
# lstopo-no-graphics -v
|
||||
Package L#0 (P#0 CPUVendor=GenuineIntel CPUFamilyNumber=6 CPUModelNumber=142)
|
||||
L3Cache L#0 (size=3072KB linesize=64 ways=12 Inclusive=1)
|
||||
L2Cache L#0 (size=256KB linesize=64 ways=4 Inclusive=0)
|
||||
L1dCache L#0 (size=32KB linesize=64 ways=8 Inclusive=0)
|
||||
L1iCache L#0 (size=32KB linesize=64 ways=8 Inclusive=0)
|
||||
Core L#0 (P#0)
|
||||
PU L#0 (P#0)
|
||||
PU L#1 (P#2)
|
||||
L2Cache L#1 (size=256KB linesize=64 ways=4 Inclusive=0)
|
||||
L1dCache L#1 (size=32KB linesize=64 ways=8 Inclusive=0)
|
||||
L1iCache L#1 (size=32KB linesize=64 ways=8 Inclusive=0)
|
||||
Core L#1 (P#1)
|
||||
PU L#2 (P#1)
|
||||
PU L#3 (P#3)
|
||||
|
||||
#. Bandwidth control is per core (not per LP), so the max delay value of a
per-LP CLOS is applied to the whole core. If hyper-threading (HT) is turned
on, don't place high-priority threads on sibling LPs running lower-priority
threads.
|
||||
|
||||
#. Based on your scenario, build and install ACRN. See :ref:`gsg`
for build and installation instructions.
|
||||
|
||||
#. Restart the platform.
|
||||
For details on the ACRN vCAT high-level design, see :ref:`hv_vcat`.
|
@ -1,627 +0,0 @@
|
||||
.. _setup_openstack_libvirt:
|
||||
|
||||
Configure ACRN Using OpenStack and Libvirt
|
||||
##########################################
|
||||
|
||||
Introduction
|
||||
************
|
||||
|
||||
This document provides instructions for setting up libvirt to configure
|
||||
ACRN. We use OpenStack to manage libvirt. We'll show how to install OpenStack in a
|
||||
container to avoid crashing your system and to take advantage of easy
|
||||
snapshots and restores so that you can quickly roll back your system in the
|
||||
event of setup failure. (You should only install OpenStack directly on Ubuntu if
|
||||
you have a dedicated testing machine.) This setup utilizes LXC/LXD on
|
||||
Ubuntu 20.04.
|
||||
|
||||
Install ACRN
|
||||
************
|
||||
|
||||
#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to
|
||||
:ref:`gsg`.
|
||||
|
||||
#. Make the acrn-kernel using the `kernel_config_service_vm
|
||||
<https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_service_vm>`_
|
||||
configuration file (from the ``acrn-kernel`` repo).
|
||||
|
||||
#. Append the following kernel boot arguments to the ``multiboot2`` line in
|
||||
:file:`/etc/grub.d/40_custom` and run ``sudo update-grub`` before rebooting the system.
|
||||
It will give the Service VM more memory and more loop devices::
|
||||
|
||||
hugepagesz=1G hugepages=10 max_loop=16
|
||||
|
||||
#. Boot the Service VM with this new ``acrn-kernel`` using the ACRN
|
||||
hypervisor.
|
||||
#. Use the command: ``losetup -a`` to verify that Ubuntu's snap service is **not**
|
||||
using all available loop devices. Typically, OpenStack needs at least 4
|
||||
available loop devices. Follow the `snaps guide
|
||||
<https://maslosoft.com/kb/how-to-clean-old-snaps/>`_ to clean up old
|
||||
snap revisions if you're running out of loop devices.
|
||||
#. Make sure the networking bridge ``acrn-br0`` is created. See
|
||||
:ref:`hostbridge_virt_hld` for more information.
|
||||
|
||||
Set Up and Launch LXC/LXD
|
||||
*************************
|
||||
|
||||
1. Set up the LXC/LXD Linux container engine::
|
||||
|
||||
$ sudo snap install lxd
|
||||
$ lxd init --auto
|
||||
|
||||
Use all default values if running ``lxd init`` in interactive mode.
|
||||
|
||||
2. Create an Ubuntu 18.04 container named ``openstack``::
|
||||
|
||||
$ lxc init ubuntu:18.04 openstack
|
||||
|
||||
3. Export the kernel interfaces necessary to launch a Service VM in the
|
||||
``openstack`` container:
|
||||
|
||||
a. Edit the ``openstack`` config file using the command::
|
||||
|
||||
$ lxc config edit openstack
|
||||
|
||||
In the editor, add the following lines in the **config** section:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
linux.kernel_modules: iptable_nat, ip6table_nat, ebtables, openvswitch
|
||||
raw.lxc: |-
|
||||
lxc.cgroup.devices.allow = c 10:237 rwm
|
||||
lxc.cgroup.devices.allow = b 7:* rwm
|
||||
lxc.cgroup.devices.allow = c 243:0 rwm
|
||||
lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0
|
||||
lxc.mount.auto=proc:rw sys:rw cgroup:rw
|
||||
lxc.apparmor.profile=unconfined
|
||||
security.nesting: "true"
|
||||
security.privileged: "true"
|
||||
|
||||
Save and exit the editor.
|
||||
|
||||
.. note::
|
||||
|
||||
Make sure to respect the indentation as to keep these options within
|
||||
the **config** section. After saving your changes,
|
||||
check that they have been correctly recorded (``lxc config show openstack``).
|
||||
|
||||
b. Run the following commands to configure ``openstack``::
|
||||
|
||||
$ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
|
||||
$ lxc config device add openstack acrn_hsm unix-char path=/dev/acrn_hsm
|
||||
$ lxc config device add openstack loop-control unix-char path=/dev/loop-control
|
||||
$ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;
|
||||
|
||||
4. Launch the ``openstack`` container::
|
||||
|
||||
$ lxc start openstack
|
||||
|
||||
5. Log in to the ``openstack`` container::
|
||||
|
||||
$ lxc exec openstack -- su -l
|
||||
|
||||
6. Let ``systemd`` manage **eth1** in the container, with **eth0** as the
|
||||
default route:
|
||||
|
||||
Edit ``/etc/netplan/50-cloud-init.yaml`` as follows:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
network:
|
||||
version: 2
|
||||
ethernets:
|
||||
eth0:
|
||||
dhcp4: true
|
||||
eth1:
|
||||
dhcp4: true
|
||||
dhcp4-overrides:
|
||||
route-metric: 200
|
||||
|
||||
|
||||
7. Log off and restart the ``openstack`` container::
|
||||
|
||||
$ lxc restart openstack
|
||||
|
||||
8. Log in to the ``openstack`` container again::
|
||||
|
||||
$ lxc exec openstack -- su -l
|
||||
|
||||
9. If needed, set up the proxy inside the ``openstack`` container via
|
||||
``/etc/environment`` and make sure ``no_proxy`` is properly set up.
|
||||
Both IP addresses assigned to **eth0** and
|
||||
**eth1** and their subnets must be included. For example::
|
||||
|
||||
no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16
|
||||
|
||||
10. Add a new user named **stack** and set permissions:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
# useradd -s /bin/bash -d /opt/stack -m stack
|
||||
# echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
|
||||
|
||||
11. Log off and restart the ``openstack`` container::
|
||||
|
||||
$ lxc restart openstack
|
||||
|
||||
The ``openstack`` container is now properly configured for OpenStack.
|
||||
Use the ``lxc list`` command to verify that both **eth0** and **eth1**
|
||||
appear in the container.
|
||||
|
||||
Set Up ACRN Prerequisites Inside the Container
|
||||
**********************************************
|
||||
|
||||
1. Log in to the ``openstack`` container as the **stack** user::
|
||||
|
||||
$ lxc exec openstack -- su -l stack
|
||||
|
||||
2. Download and compile ACRN's source code. Refer to :ref:`gsg`.
|
||||
|
||||
.. note::
|
||||
All tools and build dependencies must be installed before you run the first ``make`` command.
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
$ cd ~
|
||||
$ git clone https://github.com/projectacrn/acrn-hypervisor
|
||||
$ cd acrn-hypervisor
|
||||
$ make
|
||||
$ sudo make devicemodel-install
|
||||
|
||||
Install only the user-space component: ``acrn-dm`` as shown above.
|
||||
|
||||
.. note:: Use the tag that matches the version of the ACRN hypervisor (``acrn.bin``)
|
||||
that runs on your system.
|
||||
|
||||
Set Up Libvirt
|
||||
**************
|
||||
|
||||
1. Install the required packages::
|
||||
|
||||
$ sudo apt install libdevmapper-dev libnl-route-3-dev libnl-3-dev python \
|
||||
automake autoconf autopoint libtool xsltproc libxml2-utils gettext \
|
||||
libxml2-dev libpciaccess-dev gnutls-dev python3-docutils libyajl-dev
|
||||
|
||||
|
||||
2. Download libvirt/ACRN::
|
||||
|
||||
$ cd ~
|
||||
$ git clone https://github.com/projectacrn/acrn-libvirt.git
|
||||
|
||||
3. Build and install libvirt::
|
||||
|
||||
$ cd acrn-libvirt
|
||||
$ mkdir build
|
||||
$ cd build
|
||||
$ ../autogen.sh --prefix=/usr --disable-werror --with-test-suite=no \
|
||||
--with-qemu=no --with-openvz=no --with-vmware=no --with-phyp=no \
|
||||
--with-vbox=no --with-lxc=no --with-uml=no --with-esx=no --with-yajl
|
||||
|
||||
$ make
|
||||
$ sudo make install
|
||||
|
||||
.. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial and is
|
||||
the default branch.
|
||||
|
||||
4. Edit and enable these options in ``/etc/libvirt/libvirtd.conf``::
|
||||
|
||||
unix_sock_ro_perms = "0777"
|
||||
unix_sock_rw_perms = "0777"
|
||||
unix_sock_admin_perms = "0777"
|
||||
|
||||
5. Restart the libvirt daemon::
|
||||
|
||||
$ sudo systemctl daemon-reload
|
||||
|
||||
|
||||
Set Up OpenStack
|
||||
****************
|
||||
|
||||
Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://docs.openstack.org/devstack/>`_.
|
||||
|
||||
1. Use the latest maintenance branch **stable/train** to ensure OpenStack
|
||||
stability::
|
||||
|
||||
$ cd ~
|
||||
$ git clone https://opendev.org/openstack/devstack.git -b stable/train
|
||||
|
||||
2. Go into the ``devstack`` directory, and apply the
|
||||
:file:`doc/tutorials/0001-devstack-installation-for-acrn.patch`::
|
||||
|
||||
$ cd devstack
|
||||
$ git apply ~/acrn-hypervisor/doc/tutorials/0001-devstack-installation-for-acrn.patch
|
||||
|
||||
3. Edit ``lib/nova_plugins/hypervisor-libvirt``:
|
||||
|
||||
Change ``xen_hvmloader_path`` to the location of your OVMF image
|
||||
file: ``/usr/share/acrn/bios/OVMF.fd``. Or use the stock image that is included
|
||||
in the ACRN source tree (``devicemodel/bios/OVMF.fd``).
|
||||
|
||||
4. Create a ``devstack/local.conf`` file as shown below (setting the
|
||||
passwords as appropriate):
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
||||
[[local|localrc]]
|
||||
PUBLIC_INTERFACE=eth1
|
||||
|
||||
ADMIN_PASSWORD=<password>
|
||||
DATABASE_PASSWORD=<password>
|
||||
RABBIT_PASSWORD=<password>
|
||||
SERVICE_PASSWORD=<password>
|
||||
|
||||
ENABLE_KSM=False
|
||||
VIRT_DRIVER=libvirt
|
||||
LIBVIRT_TYPE=acrn
|
||||
DEBUG_LIBVIRT=True
|
||||
DEBUG_LIBVIRT_COREDUMPS=True
|
||||
USE_PYTHON3=True
|
||||
|
||||
.. note::
|
||||
|
||||
Now is a great time to take a snapshot of the container using ``lxc
|
||||
snapshot``. If the OpenStack installation fails, manually rolling back
|
||||
to the previous state can be difficult. No step exists to
|
||||
reliably restart OpenStack after restarting the container.
|
||||
|
||||
5. Install OpenStack::
|
||||
|
||||
$ ./stack.sh
|
||||
|
||||
The installation should take about 20-30 minutes. Upon successful
|
||||
installation, the installer reports the URL of OpenStack's management
|
||||
interface. This URL is accessible from the native Ubuntu.
|
||||
|
||||
.. code-block:: console
|
||||
|
||||
...
|
||||
|
||||
Horizon is now available at http://<IP_address>/dashboard
|
||||
|
||||
...
|
||||
|
||||
2020-04-09 01:21:37.504 | stack.sh completed in 1755 seconds.
|
||||
|
||||
6. Verify using the command ``systemctl status libvirtd.service`` that libvirtd is active
|
||||
and running.
|
||||
|
||||
7. Set up SNAT for OpenStack instances to connect to the external network.
|
||||
|
||||
a. Inside the container, use the command ``ip a`` to identify the ``br-ex`` bridge
|
||||
interface. ``br-ex`` should have two IPs. One should be visible to
|
||||
the native Ubuntu's ``acrn-br0`` interface (for example, iNet 192.168.1.104/24).
|
||||
The other one is internal to OpenStack (for example, iNet 172.24.4.1/24). The
|
||||
latter corresponds to the public network in OpenStack.
|
||||
|
||||
b. Set up SNAT to establish a link between ``acrn-br0`` and OpenStack.
|
||||
For example::
|
||||
|
||||
$ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104
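
   You can list the POSTROUTING chain to confirm the rule was added
   (the addresses shown are the example values used above)::

      $ sudo iptables -t nat -L POSTROUTING -n -v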
|
||||
|
||||
Configure and Create OpenStack Instance
|
||||
***************************************
|
||||
|
||||
We'll be using the Ubuntu 20.04 (Focal) Cloud image as the OS image (qcow2
|
||||
format). Download the Cloud image from https://cloud-images.ubuntu.com/releases/focal,
|
||||
for example::
|
||||
|
||||
$ wget https://cloud-images.ubuntu.com/releases/focal/release-20210201/ubuntu-20.04-server-cloudimg-amd64.img
|
||||
|
||||
Use the OpenStack management interface URL reported in a previous step
|
||||
to finish setting up the network and configure and create an OpenStack
|
||||
instance.
|
||||
|
||||
1. Begin by using your browser to log in as **admin** to the OpenStack management
|
||||
dashboard (using the URL reported previously). Use the admin
|
||||
password you set in the ``devstack/local.conf`` file:
|
||||
|
||||
.. figure:: images/OpenStack-01-login.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-01-login
|
||||
|
||||
Click **Project / Network Topology** and then the **Topology** tab
|
||||
to view the existing **public** (external) and **shared** (internal) networks:
|
||||
|
||||
.. figure:: images/OpenStack-02-topology.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-02-topology
|
||||
|
||||
#. A **router** acts as a bridge between the internal and external
|
||||
networks. Create a router using **Project / Network / Routers /
|
||||
+Create Router**:
|
||||
|
||||
.. figure:: images/OpenStack-03-create-router.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-03-router
|
||||
|
||||
Give it a name (**acrn_router**), select **public** for the external network,
|
||||
and select **Create Router**:
|
||||
|
||||
.. figure:: images/OpenStack-03a-create-router.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-03a-router
|
||||
|
||||
That added the external network to the router. Now add
|
||||
the internal network too. Click the acrn_router name:
|
||||
|
||||
.. figure:: images/OpenStack-03b-created-router.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-03b-router
|
||||
|
||||
Go to the **Interfaces** tab, and click **+Add interface**:
|
||||
|
||||
.. figure:: images/OpenStack-04a-add-interface.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-04a-add-interface
|
||||
|
||||
Select the subnet of the shared (private) network and click **Submit**:
|
||||
|
||||
.. figure:: images/OpenStack-04b-add-interface.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-04b-add-interface
|
||||
|
||||
The router now has interfaces between the external and internal
|
||||
networks:
|
||||
|
||||
.. figure:: images/OpenStack-04c-add-interface.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-04c-add-interface
|
||||
|
||||
View the router graphically by clicking the **Network Topology** tab:
|
||||
|
||||
.. figure:: images/OpenStack-05-topology.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-05-topology
|
||||
|
||||
With the router set up, we've completed configuring the OpenStack
|
||||
networking.
|
||||
|
||||
#. Next, we'll prepare for launching an OpenStack instance.
|
||||
Click the **Admin / Compute / Image** tab and then the **+Create
|
||||
Image** button:
|
||||
|
||||
.. figure:: images/OpenStack-06-create-image.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-06-create-image
|
||||
|
||||
Browse for and select the Ubuntu Cloud image file we
|
||||
downloaded earlier:
|
||||
|
||||
.. figure:: images/OpenStack-06a-create-image-browse.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-06a-create-image
|
||||
|
||||
.. figure:: images/OpenStack-06b-create-image-select.png
|
||||
:align: center
|
||||
:name: os-06b-create-image
|
||||
|
||||
Give the image a name (**Ubuntu20.04**), select the **QCOW2 - QEMU
|
||||
Emulator** format, and click **Create Image**:
|
||||
|
||||
.. figure:: images/OpenStack-06e-create-image.png
|
||||
:align: center
|
||||
:width: 900px
|
||||
:name: os-063-create-image
|
||||
|
||||
This task will take a few minutes to complete.
|
||||
|
||||
#. Next, click the **Admin / Compute / Flavors** tab and then the
|
||||
**+Create Flavor** button. Define a machine flavor name
|
||||
(**UbuntuCloud**), and specify its resource requirements: the number of vCPUs (**2**), RAM size
|
||||
(**512MB**), and root disk size (**4GB**):
|
||||
|
||||
.. figure:: images/OpenStack-07a-create-flavor.png
|
||||
:align: center
|
||||
:width: 700px
|
||||
:name: os-07a-create-flavor
|
||||
|
||||
Click **Create Flavor** and you'll return to see a list of
|
||||
available flavors plus the new one you created (**UbuntuCloud**):
|
||||
|
||||
.. figure:: images/OpenStack-07b-flavor-created.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-07b-create-flavor
|
||||
|
||||
#. OpenStack security groups act as a virtual firewall controlling
|
||||
connections between instances, allowing connections such as SSH and
|
||||
HTTPS. These next steps create a security group allowing SSH and ICMP
|
||||
connections.
|
||||
|
||||
Go to **Project / Network / Security Groups** and click the **+Create
|
||||
Security Group** button:
|
||||
|
||||
.. figure:: images/OpenStack-08-security-group.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-08-security-group
|
||||
|
||||
Name this security group (**acrnSecuGroup**) and click **Create
|
||||
Security Group**:
|
||||
|
||||
.. figure:: images/OpenStack-08a-create-security-group.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-08a-security-group
|
||||
|
||||
You'll return to a rule management screen for this new group. Click
|
||||
the **+Add Rule** button:
|
||||
|
||||
.. figure:: images/OpenStack-08b-add-rule.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-08b-security-group
|
||||
|
||||
Select **SSH** from the Rule list and click **Add**:
|
||||
|
||||
.. figure:: images/OpenStack-08c-add-SSH-rule.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-08c-security-group
|
||||
|
||||
Similarly, add another rule to add an **All ICMP** rule too:
|
||||
|
||||
.. figure:: images/OpenStack-08d-add-All-ICMP-rule.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-08d-security-group
|
||||
|
||||
#. Create a public/private keypair used to access the created instance.
|
||||
Go to **Project / Compute / Key Pairs** and click **+Create Key
|
||||
Pair**, give the keypair a name (**acrnKeyPair**) and Key Type
|
||||
(**SSH Key**) and click **Create Key Pair**:
|
||||
|
||||
.. figure:: images/OpenStack-09a-create-key-pair.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-09a-key-pair
|
||||
|
||||
Save the **private** keypair file safely,
|
||||
for future use:
|
||||
|
||||
.. figure:: images/OpenStack-09c-key-pair-private-key.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-09c-key-pair
|
||||
|
||||
#. Now we're ready to launch an instance. Go to **Project / Compute /
|
||||
Instance**, click the **Launch Instance** button, give it a name
|
||||
(**UbuntuOnACRN**) and click **Next**:
|
||||
|
||||
.. figure:: images/OpenStack-10a-launch-instance-name.png
|
||||
:align: center
|
||||
:width: 900px
|
||||
:name: os-10a-launch
|
||||
|
||||
Select **No** for "Create New Volume", and click the up-arrow button
|
||||
for uploaded (**Ubuntu20.04**) image as the "Available source" for this
|
||||
instance:
|
||||
|
||||
.. figure:: images/OpenStack-10b-no-new-vol-select-allocated.png
|
||||
:align: center
|
||||
:width: 900px
|
||||
:name: os-10b-launch
|
||||
|
||||
Click **Next**, and select the machine flavor you created earlier
|
||||
(**UbuntuCloud**):
|
||||
|
||||
.. figure:: images/OpenStack-10c-select-flavor.png
|
||||
:align: center
|
||||
:width: 900px
|
||||
:name: os-10c-launch
|
||||
|
||||
Click **>** next to the Allocated **UbuntuCloud** flavor and see
|
||||
details about your choice:
|
||||
|
||||
.. figure:: images/OpenStack-10d-flavor-selected.png
|
||||
:align: center
|
||||
:width: 900px
|
||||
:name: os-10d-launch
|
||||
|
||||
Click the **Networks** tab, and select the internal **shared**
|
||||
network from the "Available" list:
|
||||
|
||||
.. figure:: images/OpenStack-10e-select-network.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-10e-launch
|
||||
|
||||
Click the **Security Groups** tab and select
|
||||
the **acrnSecuGroup** security group you created earlier. Remove the
|
||||
**default** security group if it's in the "Allocated" list:
|
||||
|
||||
.. figure:: images/OpenStack-10d-only-acrn-security-group.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-10d-security
|
||||
|
||||
Click the **Key Pair** tab and verify the **acrnKeyPair** you
|
||||
created earlier is in the "Allocated" list, and click **Launch
|
||||
Instance**:
|
||||
|
||||
.. figure:: images/OpenStack-10g-show-keypair-launch.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-10g-launch
|
||||
|
||||
It will take a few minutes to complete launching the instance.
|
||||
|
||||
#. Click the **Project / Compute / Instances** tab to monitor
|
||||
progress. When the instance status is "Active" and power state is
|
||||
"Running", associate a floating IP to the instance
|
||||
so you can access it:
|
||||
|
||||
.. figure:: images/OpenStack-11-wait-for-running-create-snapshot.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-11-running
|
||||
|
||||
On the **Manage Floating IP Associations** screen, click the **+**
|
||||
to add an association:
|
||||
|
||||
.. figure:: images/OpenStack-11a-manage-floating-ip.png
|
||||
:align: center
|
||||
:width: 700px
|
||||
:name: os-11a-running
|
||||
|
||||
Select **public** pool, and click **Allocate IP**:
|
||||
|
||||
.. figure:: images/OpenStack-11b-allocate-floating-ip.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-11b-running
|
||||
|
||||
Finally, click **Associate** after the IP address is assigned:
|
||||
|
||||
.. figure:: images/OpenStack-11c-allocate-floating-ip-success-associate.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-11c-running
|
||||
|
||||
|
||||
Final Steps
|
||||
***********
|
||||
|
||||
The OpenStack instance is now running and connected to the
|
||||
network. You can confirm by returning to the **Project /
|
||||
Network / Network Topology** view:
|
||||
|
||||
.. figure:: images/OpenStack-12b-running-topology-instance.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-12b-running
|
||||
|
||||
You can also see a hypervisor summary by clicking **Admin / Compute /
|
||||
Hypervisors**:
|
||||
|
||||
.. figure:: images/OpenStack-12d-compute-hypervisor.png
|
||||
:align: center
|
||||
:width: 1200px
|
||||
:name: os-12d-running
|
||||
|
||||
.. note::
|
||||
OpenStack logs to the ``systemd`` journal and ``libvirt`` logs to
|
||||
``/var/log/libvirt/libvirtd.log``.
|
||||
|
||||
Here are some other tasks you can try when the instance is created and
|
||||
running:
|
||||
|
||||
* Use the hypervisor console to verify the instance is running by using
|
||||
the ``vm_list`` command.
|
||||
|
||||
* Ping the instance inside the container using the instance's floating IP
|
||||
address.
|
||||
|
||||
For more advanced CLI usage, refer to this `OpenStack cheat sheet
|
||||
<https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html>`_.
|
186
doc/tutorials/user_vm_guide.rst
Normal file
@ -0,0 +1,186 @@
|
||||
.. _user_vm_guide:
|
||||
|
||||
User VM Guide
|
||||
#############
|
||||
|
||||
The ACRN hypervisor uses a Linux-based Service VM and can run User VMs
|
||||
simultaneously, providing a powerful software platform to build complex
|
||||
computing systems. A User VM can run a variety of OSs including Linux or
|
||||
Windows, or an RTOS such as Zephyr or VxWorks. As shown in the :ref:`gsg`,
|
||||
you use the ACRN Configurator to define options used to build the static
|
||||
configuration of your ACRN system based on your system design, available
|
||||
capabilities of your target hardware, and performance characteristics such as
|
||||
those required for real-time needs. The Configurator also lets you define
|
||||
dynamic settings for launching User VMs and specifying the resources they need
|
||||
access to.
|
||||
|
||||
ACRN provides a framework called the :ref:`ACRN Device Model
|
||||
<dm_architecture_intro>`, for sharing physical system resources and providing
|
||||
virtual device emulation with rich I/O mediators. The Device Model also
|
||||
supports non-emulated device passthrough access to satisfy time-sensitive
|
||||
requirements and low-latency access needs of real-time applications.
|
||||
|
||||
In this and accompanying documents, we provide guidance for configuring and
|
||||
deploying post-launched User VMs, and additional information about OS-specific
|
||||
settings based on your User VM OS choice. We also show how to configure the
|
||||
system for accessing virtual devices emulated by the ACRN Device Model or
|
||||
passthrough resources dedicated to a specific VM.
|
||||
|
||||
ACRN also supports pre-launched User VMs, with dedicated resources that are not
|
||||
shared with other VMs. Pre-launched VMs are configured statically using the
|
||||
Configurator, with their own boot devices, CPU, memory, and other system
|
||||
resources. We'll discuss pre-launched VMs separately.
|
||||
|
||||
|
||||
User VM OS Choices
|
||||
******************
|
||||
|
||||
ACRN has no restrictions on what OS is used by a User VM. The ACRN team has
|
||||
tested Standard User VMs running Ubuntu 20.04 and Windows 10, and real-time User
VMs running Zephyr, VxWorks, Xenomai, and Linux with a
PREEMPT_RT-patched kernel.
|
||||
|
||||
The ACRN Service VM and its Device Model run on Ubuntu 20.04 using a patched
|
||||
kernel built from the `acrn-kernel GitHub repository
|
||||
<https://github.com/projectacrn/acrn-kernel>`_. The Service VM can access
|
||||
hardware resources directly by running native drivers and provides device
|
||||
sharing services to post-launched User VMs. The Service VM is not dependent on
|
||||
any settings in your User VM configuration and is used for both Standard and
|
||||
Real-time post-launched User VMs. The Service VM is not used by pre-launched
|
||||
User VMs.
|
||||
|
||||
Configuration Overview
|
||||
**********************
|
||||
|
||||
In the following sections, we provide general guidance and then link to
|
||||
OS-specific guidance documents based on your choice of User VM OS.
|
||||
|
||||
Separately, and out of scope for this document, you'll need to create a combined
|
||||
application and User VM OS image for each User VM.
|
||||
|
||||
The ACRN Device Model within the Service VM starts your User VM image using a
|
||||
launch script created by the ACRN Configurator, based on the settings you
|
||||
provided. These settings include the location on the target system storage
|
||||
device for that image, User VM memory size, console, vUART settings for
|
||||
communication, virtual I/O devices your application uses, and more. Available
|
||||
launch script options are documented in the :ref:`acrn-dm_parameters` and the
|
||||
:ref:`scenario-config-options` documentation. We'll also provide examples for
|
||||
selected capabilities in our OS-specific guidance.
|
||||
|
||||
This guide assumes you've already followed the Getting Started Guide and have
|
||||
followed steps to prepare the development computer and installed development
|
||||
system prerequisites, prepared the target and generated a board configuration
|
||||
file, and have installed the ACRN Configurator.
|
||||
|
||||
Using the ACRN Configurator
|
||||
===========================
|
||||
|
||||
Independent of your User VM OS choice, run the ACRN Configurator and create a
|
||||
scenario with a Post-launched VM for each User VM you will be running. We use
|
||||
one Ubuntu-based User VM in this overview:
|
||||
|
||||
.. figure:: images/vm-guide-create-new-scenario.png
|
||||
:align: center
|
||||
:width: 600px
|
||||
:name: vm-guide-create-new-scenario
|
||||
:class: drop-shadow
|
||||
|
||||
Creating a new scenario in the Configurator
|
||||
|
||||
Use the Configurator to give the VM a name, and define configuration options
|
||||
specific for this VM, such as memory and VM type (Standard or Real-time):
|
||||
|
||||
.. figure:: images/vm-guide-set-vm-basic-parameters.png
|
||||
:align: center
|
||||
:width: 600px
|
||||
:name: vm-guide-set-vm-basic-parameters
|
||||
:class: drop-shadow
|
||||
|
||||
Set VM basic configuration options
|
||||
|
||||
And define where the User VM image will be on the target system (in this
|
||||
example, an Ubuntu 20.04 desktop ISO image):
|
||||
|
||||
.. figure:: images/vm-guide-image-virtio-block.png
|
||||
:align: center
|
||||
:width: 600px
|
||||
:name: vm-guide-image-virtio-block
|
||||
:class: drop-shadow
|
||||
|
||||
Set VM image using virtio block device
|
||||
|
||||
After the configuration settings are to your liking, save the configuration.
|
||||
When saving, the ACRN Configurator first validates your scenario configuration and
|
||||
reports any issues that need your attention. If successful, it writes out the
|
||||
updated scenario XML file and launch script to your working directory. You'll
|
||||
use this launch script to start the User VM on the target.
|
||||
|
||||
Rebuild the ACRN Hypervisor
|
||||
===========================
|
||||
|
||||
After exiting the ACRN Configurator, build the ACRN hypervisor (based on the static
|
||||
configuration parameters in your scenario) on your development computer, as was
|
||||
done in the :ref:`gsg`::
|
||||
|
||||
cd ~/acrn-work/acrn-hypervisor
|
||||
make clean && make BOARD=~/acrn-work/MyConfiguration/my_board.board.xml SCENARIO=~/acrn-work/MyConfiguration/scenario.xml
|
||||
|
||||
The build typically takes a few minutes. When done, the build generates a Debian
|
||||
package in the ``./build`` directory.
|
||||
|
||||
This Debian package contains the ACRN hypervisor and tools to ease installing ACRN on the target.
|
||||
|
||||
Transfer Files to the Target, Install, and Reboot
|
||||
=================================================
|
||||
|
||||
We'll need to get the Debian package containing the hypervisor files we built to
|
||||
the target system, along with the launch scripts and User VM images. In the
|
||||
:ref:`gsg`, we used a USB stick, but you could also use the network to copy
|
||||
files using ``scp``. Install the Debian package and reboot to run ACRN and the
|
||||
Service VM. Then use the launch script to start each User VM.
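
For example, a network-based transfer and install might look like the
following sketch (the package name, user name, and IP address are
placeholders that depend on your board, configuration, and network):

.. code-block:: none

   # On the development computer
   scp ~/acrn-work/acrn-hypervisor/build/acrn-*.deb \
       ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh \
       acrn@<target-ip>:~/acrn-work/

   # On the target (Service VM)
   cd ~/acrn-work
   sudo apt install ./acrn-*.deb
   sudo reboot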
|
||||
|
||||
User VM Persistence
|
||||
*******************
|
||||
|
||||
In the :ref:`gsg` (and in the previous overview), we used a standard Ubuntu
|
||||
20.04 ISO image as our User VM image. By its nature, an ISO image is read-only.
|
||||
This means that when you reboot the ACRN system, any changes you made to the
|
||||
User VM such as installing a new package, would be lost; the unmodified ISO
|
||||
image is used again for the User VM when the system is rebooted. While this
|
||||
could be the usage model you'd like, an alternative is to set up the User VM
|
||||
image as read-write so it will retain any changes made while it was running and
|
||||
return to that state after a reboot.
|
||||
|
||||
One way to create a persistent VM image is by using KVM to define virtual disk
|
||||
partitions, boot the underlying OS, add additional packages and even an
|
||||
application to that image, and then save and convert that QCOW2 image to a raw
|
||||
format we can use with ACRN.
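
As a sketch (file names are illustrative; the full KVM-based flow is covered
in the Ubuntu User VM guide), the conversion step looks like:

.. code-block:: none

   sudo qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/my_uservm.qcow2 my_uservm.img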
|
||||
|
||||
In separate companion documentation, we provide detailed guides for running
User VMs with different OSs, with considerations for each of those
standard and real-time OS configurations.
|
||||
|
||||
Standard VM OS Considerations
|
||||
*****************************
|
||||
|
||||
Here is a list of Standard User VM OS guides with details and topics to consider
|
||||
when using one of these OSs:
|
||||
|
||||
* :ref:`using_ubuntu_as_user_vm`
|
||||
* :ref:`using_windows_as_user_vm`
|
||||
|
||||
Real-time VM OS Considerations
|
||||
******************************
|
||||
|
||||
Here is a list of real-time User VM OS guides with details and topics to consider
|
||||
when using one of these OSs:
|
||||
|
||||
* :ref:`using_xenomai_as_user_vm`
|
||||
* :ref:`using_vxworks_as_user_vm`
|
||||
* :ref:`using_zephyr_as_user_vm`
|
||||
|
||||
We also recommend reading these RTVM performance guides:
|
||||
|
||||
* :ref:`rtvm_workload_guideline`
|
||||
* :ref:`rt_perf_tips_rtvm`
|
||||
* :ref:`rt_performance_tuning`
|
@ -10,13 +10,13 @@ launched by a Device Model in the Service VM.
|
||||
.. figure:: images/ACRN-Hybrid.png
|
||||
:align: center
|
||||
:width: 600px
|
||||
:name: hybrid_scenario_on_nuc
|
||||
:name: hybrid_scenario_on_Vecow
|
||||
|
||||
The Hybrid Scenario on the Intel NUC
|
||||
The Hybrid Scenario on the Vecow SPC-7100
|
||||
|
||||
The following guidelines
|
||||
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
|
||||
as shown in :numref:`hybrid_scenario_on_nuc`.
|
||||
describe how to set up the ACRN hypervisor hybrid scenario on the Vecow SPC-7100,
|
||||
as shown in :numref:`hybrid_scenario_on_Vecow`.
|
||||
|
||||
.. note::
|
||||
|
||||
@ -32,10 +32,10 @@ as shown in :numref:`hybrid_scenario_on_nuc`.
|
||||
Set-up base installation
|
||||
************************
|
||||
|
||||
- Use the `Intel NUC Kit NUC11TNBi5 <https://ark.intel.com/content/www/us/en/ark/products/205596/intel-nuc-11-pro-board-nuc11tnbi5.html>`_.
|
||||
- Use the `Vecow SPC-7100 <https://marketplace.intel.com/s/offering/a5b3b000000PReMAAW/vecow-spc7100-series-11th-gen-intel-core-i7i5i3-processor-ultracompact-f>`_.
|
||||
- Connect to the serial port as described in :ref:`Connecting to the serial port <connect_serial_port>`.
|
||||
- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your
|
||||
Intel NUC.
|
||||
- Install Ubuntu 20.04 on your SATA device or on the NVMe disk of your
Vecow SPC-7100.
|
||||
|
||||
.. rst-class:: numbered-step
|
||||
|
||||
@ -58,9 +58,9 @@ Set-up ACRN on your device
|
||||
**************************
|
||||
|
||||
- Follow the instructions in :Ref:`gsg` to build ACRN using the
|
||||
``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC11TNBi5 <https://ark.intel.com/content/www/us/en/ark/products/205596/intel-nuc-11-pro-board-nuc11tnbi5.html>`_::
|
||||
``hybrid`` scenario. Here is the build command-line for the `Vecow SPC-7100 <https://marketplace.intel.com/s/offering/a5b3b000000PReMAAW/vecow-spc7100-series-11th-gen-intel-core-i7i5i3-processor-ultracompact-f>`_::
|
||||
|
||||
make clean && make BOARD=nuc11tnbi5 SCENARIO=hybrid
|
||||
make clean && make BOARD=tgl-vecow-spc-7100-Corei7 SCENARIO=hybrid
|
||||
|
||||
- Install the ACRN hypervisor and tools
|
||||
|
||||
@ -112,12 +112,12 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
|
||||
|
||||
.. note:: The module ``/boot/zephyr.elf`` is the VM0 (Zephyr) kernel file.
|
||||
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
|
||||
``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``
|
||||
``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/tgl-vecow-spc-7100-Corei7/hybrid.xml``
|
||||
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
|
||||
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
|
||||
``kern_mod`` of VM1 in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``
|
||||
``kern_mod`` of VM1 in the ``misc/config_tools/data/tgl-vecow-spc-7100-Corei7/hybrid.xml``
|
||||
file. The kernel command-line arguments used to boot the Service VM are
|
||||
``bootargs`` of VM1 in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``.
|
||||
``bootargs`` of VM1 in the ``misc/config_tools/data/tgl-vecow-spc-7100-Corei7/hybrid.xml``.
|
||||
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr).
|
||||
The parameter ``ACPI_VM0`` is VM0's ACPI tag and should not be modified.
|
||||
|
||||
|
158
doc/tutorials/using_ubuntu_as_user_vm.rst
Normal file
@ -0,0 +1,158 @@
|
||||
.. _using_ubuntu_as_user_vm:
|
||||
|
||||
Run Ubuntu as the User VM OS
|
||||
############################
|
||||
|
||||
Prerequisites
|
||||
*************
|
||||
|
||||
.. _Ubuntu 20.04 desktop ISO:
|
||||
http://releases.ubuntu.com/focal/ubuntu-20.04.4-desktop-amd64.iso
|
||||
|
||||
This tutorial assumes you have already set up the ACRN Service VM on your target
|
||||
system following the instructions in the :ref:`gsg`.
|
||||
|
||||
Install these KVM tools on your development system:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils virt-manager ovmf
|
||||
|
||||
Validated Versions
|
||||
==================
|
||||
|
||||
- **Ubuntu version:** 20.04
|
||||
- **ACRN hypervisor tag:** v3.0
|
||||
- **Service VM Kernel version:** release_3.0
|
||||
|
||||
.. _build-the-ubuntu-kvm-image:
|
||||
|
||||
Build the Ubuntu KVM Image
|
||||
**************************
|
||||
|
||||
This tutorial uses the Ubuntu 20.04 desktop ISO as the base image.
|
||||
|
||||
#. Download the `Ubuntu 20.04 desktop ISO`_ on your development machine:
|
||||
|
||||
#. Install Ubuntu via the virt-manager tool:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
sudo virt-manager
|
||||
|
||||
#. Verify that you can see the main menu as shown in :numref:`vmmanager-ubun` below.
|
||||
|
||||
.. figure:: images/ubuntu_uservm_01.png
|
||||
:align: center
|
||||
:name: vmmanager-ubun
|
||||
:class: drop-shadow
|
||||
|
||||
Virtual Machine Manager
|
||||
|
||||
#. Right-click **QEMU/KVM** and select **New**.
|
||||
|
||||
a. Choose **Local install media (ISO image or CD-ROM)** and then click
|
||||
**Forward**.
|
||||
|
||||
.. figure:: images/ubuntu_uservm_02.png
|
||||
:align: center
|
||||
:name: vmmanager-local-install
|
||||
:class: drop-shadow
|
||||
|
||||
Choosing Local install media
|
||||
|
||||
A **Create a new virtual machine** box displays. Click **Browse** and
|
||||
select the Ubuntu ISO file that you downloaded earlier.
|
||||
If not already auto selected, choose the **OS type:** Linux, **Version:**
|
||||
Ubuntu 20.04 LTS and then click **Forward**.
|
||||
|
||||
.. figure:: images/ubuntu_uservm_03.png
|
||||
:align: center
|
||||
:name: newVM-ubun-image
|
||||
:class: drop-shadow
|
||||
|
||||
Select Ubuntu ISO file previously downloaded
|
||||
|
||||
#. Choose **Enable storage** and **Create a disk image for the virtual machine**.
|
||||
Set the storage to 20 GB or more if necessary and click **Forward**.
|
||||
|
||||
.. figure:: images/ubuntu_uservm_storage.png
|
||||
:align: center
|
||||
:name: newVM-ubun-storage
|
||||
:class: drop-shadow
|
||||
|
||||
Select VM disk storage
|
||||
|
||||
#. Rename the image if you desire. Check the
**Customize configuration before install** option before clicking **Finish**.
|
||||
|
||||
.. figure:: images/ubuntu_uservm_customize.png
|
||||
:align: center
|
||||
:name: newVM-ubun-customize
|
||||
:class: drop-shadow
|
||||
|
||||
Ready to customize image
|
||||
|
||||
#. Verify the Firmware and Chipset settings are as shown in this Overview screen:
|
||||
|
||||
.. figure:: images/ubuntu_uservm_begin_install.png
|
||||
:align: center
|
||||
:name: ubun-begin-install
|
||||
:class: drop-shadow
|
||||
|
||||
Ready to begin installation
|
||||
|
||||
#. Click **Apply** and **Begin Installation** (in the top left corner). Complete
|
||||
the normal Ubuntu installation within the QEMU emulator. Verify that you have
|
||||
set up the disk partition as follows:
|
||||
|
||||
- /dev/vda1: EFI System Partition
|
||||
- /dev/vda2: File System Partition
|
||||
|
||||
#. When the installation completes, click **Restart Now** to make sure the Ubuntu
OS boots successfully. Then save the QEMU state and exit.
|
||||
|
||||
#. The KVM image is created in the ``/var/lib/libvirt/images`` folder.
|
||||
Convert the ``qcow2`` format to ``img`` **as the root user**:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
cd ~ && mkdir ubuntu_images && cd ubuntu_images
|
||||
sudo qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/ubuntu20.04.qcow2 ubuntu_uservm.img
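
Optionally, confirm that the conversion produced a raw image of the expected
size:

.. code-block:: none

   qemu-img info ubuntu_uservm.img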
|
||||
|
||||
|
||||
Launch the Ubuntu Image as the User VM
|
||||
**************************************
|
||||
|
||||
In the :ref:`gsg`, we used the ACRN configurator to create a scenario with a
|
||||
Service VM and an Ubuntu **ISO** image for the post-launched User VM. We can use
|
||||
that same scenario with a slight edit for the User VM image name by changing
|
||||
the file name in the Virtio block device for the post-launched User VM.
|
||||
|
||||
1. Change the virtio block device to use the new Ubuntu image we created using
|
||||
KVM above:
|
||||
|
||||
.. figure:: images/ubuntu_uservm_virtioblock.png
|
||||
:align: center
|
||||
:name: ubun-virtio-block
|
||||
:class: drop-shadow
|
||||
|
||||
Update virtio block device with image location
|
||||
|
||||
Then save this new configuration and write out the updated launch script.
|
||||
|
||||
#. Copy the ``ubuntu_uservm.img`` and the updated launch script from the
|
||||
development system to your target system. For example, if the development
|
||||
and target systems are on the same network, you could use ``scp``:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
scp ~/ubuntu_images/ubuntu_uservm.img ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh user_name@ip_address:~/acrn-work/
|
||||
|
||||
#. On the target system, launch the Ubuntu User VM after logging in to the Service VM:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
cd ~/acrn-work
|
||||
sudo launch_user_vm_id1.sh
|
||||
|
@ -1,8 +1,8 @@
|
||||
.. _using_vxworks_as_uos:
|
||||
.. _using_vxworks_as_user_vm:
|
||||
|
||||
Run VxWorks as the User VM
|
||||
##########################
|
||||
Run VxWorks as the User RTVM OS
|
||||
###############################
|
||||
|
||||
`VxWorks`_\* is a real-time proprietary OS designed for use in embedded systems requiring real-time, deterministic
|
||||
performance. This tutorial describes how to run VxWorks as the User VM on the ACRN hypervisor
|
||||
|
@ -1,10 +1,10 @@
|
||||
.. _using_windows_as_uos:
|
||||
.. _using_windows_as_user_vm:
|
||||
|
||||
Launch Windows as the Guest VM on ACRN
|
||||
######################################
|
||||
Run Windows as the User VM OS
|
||||
#############################
|
||||
|
||||
This tutorial describes how to launch Windows as a Guest (WaaG) VM on the
|
||||
This tutorial describes how to launch Windows as a Guest (WaaG) User VM on the
|
||||
ACRN hypervisor.
|
||||
|
||||
|
||||
|
@ -1,8 +1,8 @@
|
||||
.. _using_xenomai_as_uos:
|
||||
.. _using_xenomai_as_user_vm:
|
||||
|
||||
Run Xenomai as the User VM OS (Real-Time VM)
|
||||
############################################
|
||||
Run Xenomai as the User RTVM OS
|
||||
###############################
|
||||
|
||||
`Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments.
|
||||
|
||||
|
@ -1,8 +1,8 @@
|
||||
.. _using_zephyr_as_uos:
|
||||
.. _using_zephyr_as_user_vm:
|
||||
|
||||
Run Zephyr as the User VM
|
||||
#########################
|
||||
Run Zephyr as the User RTVM OS
|
||||
##############################
|
||||
|
||||
This tutorial describes how to run Zephyr as the User VM on the ACRN hypervisor. We are using
|
||||
Kaby Lake-based Intel NUC (model NUC7i5DNHE) in this tutorial.
|
||||
|
@ -1,88 +0,0 @@
|
||||
.. _vcat_configuration:
|
||||
|
||||
Enable vCAT Configuration
|
||||
#########################
|
||||
|
||||
vCAT is built on top of RDT, so to use vCAT we must first enable RDT.
|
||||
For details on enabling RDT configuration on ACRN, see :ref:`rdt_configuration`.
|
||||
For details on ACRN vCAT high-level design, see :ref:`hv_vcat`.
|
||||
|
||||
The vCAT feature is disabled by default in ACRN. You can enable vCAT via the
ACRN Configurator UI; the steps listed below show how those settings are
translated into XML in the scenario file:
|
||||
|
||||
#. Configure system level features:
|
||||
|
||||
- Edit ``hv.FEATURES.RDT.RDT_ENABLED`` to `y` to enable RDT
|
||||
|
||||
- Edit ``hv.FEATURES.RDT.CDP_ENABLED`` to `n` to disable CDP.
|
||||
vCAT requires CDP to be disabled.
|
||||
|
||||
- Edit ``hv.FEATURES.RDT.VCAT_ENABLED`` to `y` to enable vCAT
|
||||
|
||||
.. code-block:: xml
|
||||
:emphasize-lines: 3,4,5
|
||||
|
||||
<FEATURES>
|
||||
<RDT>
|
||||
<RDT_ENABLED>y</RDT_ENABLED>
|
||||
<CDP_ENABLED>n</CDP_ENABLED>
|
||||
<VCAT_ENABLED>y</VCAT_ENABLED>
|
||||
<CLOS_MASK></CLOS_MASK>
|
||||
</RDT>
|
||||
</FEATURES>
|
||||
|
||||
#. In each Guest VM configuration:
|
||||
|
||||
- Edit ``vm.virtual_cat_support`` to 'y' to enable the vCAT feature on the VM.
|
||||
|
||||
- Edit ``vm.clos.vcpu_clos`` to assign COS IDs to the VM.
|
||||
|
||||
If ``GUEST_FLAG_VCAT_ENABLED`` is not specified for a VM (abbreviated as RDT VM):
|
||||
``vcpu_clos`` is per CPU in a VM and it configures each CPU in a VM to a desired COS ID.
|
||||
So the number of vcpu_closes is equal to the number of vCPUs assigned.
|
||||
|
||||
If ``GUEST_FLAG_VCAT_ENABLED`` is specified for a VM (abbreviated as vCAT VM):
|
||||
``vcpu_clos`` is no longer per CPU; instead, it specifies a list of physical COS IDs (minimum 2)
that are assigned to a vCAT VM. The number of vcpu_clos entries is not necessarily equal to
the number of vCPUs assigned; it may be either greater or less than that number.
Each vcpu_clos is mapped to a virtual COS ID: the first vcpu_clos is mapped to
virtual COS ID 0, the second to virtual COS ID 1, and so on.
|
||||
|
||||
.. code-block:: xml
|
||||
:emphasize-lines: 3,10,11,12,13
|
||||
|
||||
<vm id="1">
|
||||
<guest_flags>
|
||||
<guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
|
||||
</guest_flags>
|
||||
<cpu_affinity>
|
||||
<pcpu_id>1</pcpu_id>
|
||||
<pcpu_id>2</pcpu_id>
|
||||
</cpu_affinity>
|
||||
<clos>
|
||||
<vcpu_clos>2</vcpu_clos>
|
||||
<vcpu_clos>4</vcpu_clos>
|
||||
<vcpu_clos>5</vcpu_clos>
|
||||
<vcpu_clos>7</vcpu_clos>
|
||||
</clos>
|
||||
</vm>
|
||||
|
||||
.. note::
|
||||
CLOS_MASK defined in scenario file is a capacity bitmask (CBM) starting
|
||||
at bit position low (the lowest assigned physical cache way) and ending at position
|
||||
high (the highest assigned physical cache way, inclusive). As CBM only allows
|
||||
contiguous '1' combinations, so CLOS_MASK essentially is the maximum CBM that covers
|
||||
all the physical cache ways assigned to a vCAT VM.
|
||||
|
||||
The config tool performs checks to prevent invalid configuration data for vCAT VMs:

* For a vCAT VM, vcpu_clos cannot be set to 0; COS ID 0 is reserved for use only by the hypervisor.

* There should be no COS ID overlap between a vCAT VM and any other VM; that is, the vCAT VM has exclusive use of its assigned COS IDs.

* For a vCAT VM, each vcpu_clos must be less than the L2/L3 COS_MAX.

* For a vCAT VM, the vcpu_clos list cannot contain duplicate values.
|
||||
|
||||
#. Follow instructions in :ref:`gsg` and build with this XML configuration.
|
@ -3,386 +3,165 @@
|
||||
Enable vUART Configurations
|
||||
###########################
|
||||
|
||||
Introduction
|
||||
About vUART
|
||||
============
|
||||
|
||||
The virtual universal asynchronous receiver/transmitter (vUART) supports
|
||||
two functions: one is the console, the other is communication. vUART
|
||||
only works on a single function.
|
||||
A virtual universal asynchronous receiver/transmitter (vUART) can be a console
|
||||
port or a communication port.
|
||||
|
||||
Only two vUART configurations are added to the predefined scenarios,
|
||||
but you can customize the scenarios to enable more using the :ref:`ACRN
|
||||
Configurator <acrn_configurator_tool>`.
|
||||
A vUART can exchange data between the hypervisor and a VM
|
||||
or between two VMs. Typical use cases of a vUART include:
|
||||
|
||||
Console Enable List
|
||||
===================
|
||||
* Access the console of a VM from the hypervisor or another VM. A VM console,
|
||||
when enabled by the OS in that VM, typically provides logs and a shell to
|
||||
log in and execute commands. (vUART console)
|
||||
|
||||
+-----------------+-----------------------+--------------------+----------------+----------------+
|
||||
| Scenarios | vm0 | vm1 | vm2 | vm3 |
|
||||
+=================+=======================+====================+================+================+
|
||||
| Hybrid | Pre-launched (Zephyr) | Service VM | Post-launched | |
|
||||
| | (vUART enable) | (vUART enable) | | |
|
||||
+-----------------+-----------------------+--------------------+----------------+----------------+
|
||||
| Shared | Service VM | Post-launched | Post-launched | Post-launched |
|
||||
| | (vUART enable) | | (vUART enable) | |
|
||||
+-----------------+-----------------------+--------------------+----------------+----------------+
|
||||
| Partitioned | Pre-launched | Pre-launched RTVM | Post-launched | |
|
||||
| | (vUART enable) | (vUART enable) | RTVM | |
|
||||
+-----------------+-----------------------+--------------------+----------------+----------------+
|
||||
* Exchange user-specific, low-speed data between two VMs (vUART communication).

To the VMs, the vUARTs are presented in an 8250-compatible manner.

To exchange high-speed (for example, megabytes or gigabytes per second) data
between two VMs, use the inter-VM shared memory feature (IVSHMEM) instead.

Dependencies and Constraints
============================

Consider the following dependencies and constraints:

* The OSes of the VMs need an 8250-compatible serial driver.

* To access the hypervisor shell, you must have a physical UART.

* Although a vUART is available to all kinds of VMs, you should not
  enable a vUART to access the console of, or exchange data with, a real-time
  VM. Exchanging data via a vUART imposes a performance penalty that could
  delay the response to asynchronous events in real-time VMs.

* A VM can have one console vUART and multiple communication vUARTs.

* A single vUART connection cannot support both console and communication.

Configuration Overview
======================

The :ref:`acrn_configurator_tool` lets you configure vUART connections. The
following documentation is a general overview of the configuration process.

To configure access to the console of a VM from the hypervisor, go to **VM
Basic Parameters > Console virtual UART type**, and select a COM port.

.. _how-to-configure-a-console-port:

How to Configure a Console Port
===============================

To enable the console port for a VM, change only the ``port_base`` and
``irq``. If the IRQ number is already in use in your system (check with
``cat /proc/interrupts``), choose another IRQ number. If you set ``.irq = 0``,
the vUART works in polling mode.

- ``COM1_BASE (0x3F8) + COM1_IRQ(4)``
- ``COM2_BASE (0x2F8) + COM2_IRQ(3)``
- ``COM3_BASE (0x3E8) + COM3_IRQ(6)``
- ``COM4_BASE (0x2E8) + COM4_IRQ(7)``

Example:

.. code-block:: none

   .vuart[0] = {
       .type = VUART_LEGACY_PIO,
       .addr.port_base = COM1_BASE,
       .irq = COM1_IRQ,
   },

.. _how-to-configure-a-communication-port:

How to Configure a Communication Port
=====================================

To enable the communication port, configure ``vuart[1]`` in the two VMs that
want to communicate.

The ``port_base`` and IRQ should differ from those of ``vuart[0]`` in the
same VM.

``t_vuart.vm_id`` is the target VM's ``vm_id``, starting from 0 (0 means VM0).

``t_vuart.vuart_id`` is the target vUART index in the target VM, starting
from ``1`` (``1`` means ``vuart[1]``).

Example:

.. code-block:: none

   /* VM0 */
   ...
   /* VM1 */
   .vuart[1] = {
       .type = VUART_LEGACY_PIO,
       .addr.port_base = COM2_BASE,
       .irq = COM2_IRQ,
       .t_vuart.vm_id = 2U,
       .t_vuart.vuart_id = 1U,
   },
   ...
   /* VM2 */
   .vuart[1] = {
       .type = VUART_LEGACY_PIO,
       .addr.port_base = COM2_BASE,
       .irq = COM2_IRQ,
       .t_vuart.vm_id = 1U,
       .t_vuart.vuart_id = 1U,
   },

Communication vUART Enable List
===============================

+-----------------+-----------------------+--------------------+---------------------+----------------+
| Scenarios       | vm0                   | vm1                | vm2                 | vm3            |
+=================+=======================+====================+=====================+================+
| Hybrid          | Pre-launched (Zephyr) | Service VM         | Post-launched       |                |
|                 | (vUART enable COM2)   | (vUART enable COM2)|                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Shared          | Service VM            | Post-launched      | Post-launched RTVM  | Post-launched  |
|                 | (vUART enable COM2)   |                    | (vUART enable COM2) |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Partitioned     | Pre-launched          | Pre-launched RTVM  |                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+

Launch Script
=============

- ``-s 1:0,lpc -l com1,stdio``

  This option is only needed for WaaG and VxWorks (and also when using
  OVMF). They depend on the ACPI table, and only ``acrn-dm`` can provide
  the ACPI table for the UART.

- ``-B " ....,console=ttyS0, ..."``

  Add this option for kernel-based systems.

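For context, the fragment below sketches how these options might appear in an
``acrn-dm`` invocation inside a launch script. The memory size, disk image
path, boot arguments, and VM name are illustrative placeholders, not values
taken from a generated script.

.. code-block:: bash

   # Hypothetical example: a post-launched VM whose console is routed to the
   # legacy COM1 port and exposed on the Service VM's stdio.
   acrn-dm -m 1024M \
     -s 0:0,hostbridge \
     -s 1:0,lpc -l com1,stdio \
     -s 3,virtio-blk,/home/acrn/uos.img \
     -B "root=/dev/vda1 rw rootwait console=tty0 console=ttyS0" \
     POST_STD_VM1
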
Test the Communication Port
===========================

After you have configured the communication port in the hypervisor, you can
access the corresponding port. For example, in a Linux guest:

1. With ``echo`` and ``cat`` (see the sample session after this list)

   On VM1: ``# cat /dev/ttyS1``

   On VM2: ``# echo "test test" > /dev/ttyS1``

   The message appears on VM1's ``/dev/ttyS1``.

   If you are not sure which one is the communication port, you can run
   ``dmesg | grep ttyS`` under the Linux shell to check the base address.
   If it matches what you have set in the ``vm_configuration.c`` file, it
   is the correct port.

#. With Minicom

   Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test``
   in VM1's Minicom. The message should appear in VM2's Minicom. Disable
   flow control in Minicom.

#. Limitations

   - A message cannot be longer than 256 bytes.
   - This mechanism cannot be used to transfer files because flow control is
     not supported, so data may be lost.

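The following transcripts are a minimal sketch of the ``echo``/``cat`` test
described in item 1 above; the shell prompts and hostnames are illustrative
only. On VM1, wait for incoming data on the communication port:

.. code-block:: console

   root@vm1:~# cat /dev/ttyS1
   test test

On VM2, send a string over the same connection:

.. code-block:: console

   root@vm2:~# echo "test test" > /dev/ttyS1
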
vUART Design
============

**Console vUART**

.. figure:: images/vuart-config-1.png
   :align: center
   :name: console-vuart
   :class: drop-shadow

.. image:: images/configurator-vuartconn02.png
   :align: center
   :class: drop-shadow

**Communication vUART (between VM0 and VM1)**

To configure communication between two VMs, go to **Hypervisor Global
Settings > Basic Parameters > InterVM Virtual UART Connection**. Click **+**
to add the first vUART connection.

.. figure:: images/vuart-config-2.png
   :align: center
   :name: communication-vuart
   :class: drop-shadow

.. image:: images/configurator-vuartconn03.png
   :align: center
   :class: drop-shadow

For the connection:

#. Select the two VMs to connect.

#. Select the vUART type, either Legacy or PCI.

#. If you select Legacy, the tool displays a virtual I/O address field for each
   VM. If you select PCI, the tool displays a virtual Board:Device.Function
   (BDF) address field for each VM. In both cases, you can enter an address or
   leave it blank. If the field is blank, the tool provides an address when the
   configuration is saved.

   .. note::

      The release v3.0 ACRN Configurator has an issue where you need to save
      the configuration twice to see the generated I/O or BDF address in the
      vUART setting. (:acrn-issue:`7831`)

To add another connection, click **+** on the right side of an existing
connection. Or click **-** to delete a connection.

.. image:: images/configurator-vuartconn01.png
   :align: center
   :class: drop-shadow

COM Port Configurations for Post-Launched VMs
=============================================

For a post-launched VM, the ``acrn-dm`` cmdline also provides a COM port
configuration:

``-s 1:0,lpc -l com1,stdio``

This adds the ``com1 (0x3f8)`` and ``com2 (0x2f8)`` ports to the post-launched
VM, including the ACPI info for these two ports.

**Data Flows**

Three different data flows exist based on how the post-launched VM is
started, as shown in the diagram below:

* Figure 1 data flow: The post-launched VM is started with the vUART
  enabled in the hypervisor configuration file only.
* Figure 2 data flow: The post-launched VM is started with the
  ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio`` only.
* Figure 3 data flow: The post-launched VM is started with both the vUART
  enabled and the ``acrn-dm`` cmdline of ``-s 1:0,lpc -l com1,stdio``.

.. figure:: images/vuart-config-post-launch.png
   :align: center
   :name: Post-Launched VMs
   :class: drop-shadow

.. note::
   For operating systems such as VxWorks and Windows that depend on the
   ACPI table to probe the UART driver, adding the vUART configuration in
   the hypervisor is not sufficient. We recommend that you use
   the configuration shown in the figure 3 data flow. This may be refined
   in the future.

Example Configuration
=====================

The following steps show how to configure and verify a vUART connection
between two VMs. The example extends the information provided in the
:ref:`gsg`.

#. In the ACRN Configurator, create a shared scenario with a Service VM and one
   post-launched User VM.

#. Go to **Hypervisor Global Settings > Basic Parameters > InterVM Virtual UART
   Connection**.

   a. Click **+** to add a vUART connection.

   #. Select the Service VM (ACRN_Service_VM) and the post-launched User VM
      (POST_STD_VM1).

   #. For the vUART type, this example uses ``Legacy``.

   #. For the virtual I/O address, this example uses ``0x2f8``.

   .. image:: images/configurator-vuartconn01.png
      :align: center
      :class: drop-shadow

#. Save the scenario and launch script.

#. Build ACRN, copy all the necessary files from the development computer to
   the target system, and launch the Service VM and post-launched User VM.

#. To verify the connection:

   a. In the Service VM, check the communication port via the ``dmesg | grep
      tty`` command. In this example, we know the port is ``ttyS1`` because the
      I/O address matches the address in the ACRN Configurator.

      .. code-block:: console
         :emphasize-lines: 7

         root@10239146120sos-dom0:~# dmesg |grep tty
         [    0.000000] Command line: root=/dev/nvme0n1p2 idle=halt rw rootwait console=ttyS0 console=tty0 earlyprintk=serial,ttyS0,115200 cons_timer_check consoleblank=0 no_timer_check quiet loglevel=3 i915.nuclear_pageflip=1 nokaslr i915.force_probe=* i915.enable_guc=0x7 maxcpus=16 hugepagesz=1G hugepages=26 hugepagesz=2M hugepages=388 root=PARTUUID=25302f3f-5c45-4ba4-a811-3de2b64ae6f6
         [    0.038630] Kernel command line: root=/dev/nvme0n1p2 idle=halt rw rootwait console=ttyS0 console=tty0 earlyprintk=serial,ttyS0,115200 cons_timer_check consoleblank=0 no_timer_check quiet loglevel=3 i915.nuclear_pageflip=1 nokaslr i915.force_probe=* i915.enable_guc=0x7 maxcpus=16 hugepagesz=1G hugepages=26 hugepagesz=2M hugepages=388 root=PARTUUID=25302f3f-5c45-4ba4-a811-3de2b64ae6f6
         [    0.105303] printk: console [tty0] enabled
         [    0.105319] printk: console [ttyS0] enabled
         [    1.391979] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
         [    1.649819] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
         [    3.394543] systemd[1]: Created slice system-serial\x2dgetty.slice.

   #. Test vUART communication:

      In the Service VM, run the following command to write ``acrn`` to the
      communication port:

      .. code-block:: console

         root@10239146120sos-dom0:~/kino# echo "acrn" > /dev/ttyS1

      In the User VM, read the communication port to confirm that ``acrn`` was
      received:

      .. code-block:: console

         root@intel-corei7-64:~# cat /dev/ttyS1
         acrn

Learn More
==========

ACRN supports multiple inter-VM communication methods. For a comparison, see
:ref:`inter-vm_communication`.

For details on ACRN vUART high-level design, see:

* :ref:`hv-console-shell-uart`
* :ref:`vuart_virtualization`
* :ref:`uart_virtualization`

Use PCI-vUART
=============

PCI Interface of ACRN vUART
---------------------------

When you set :ref:`vuart[0] and vuart[1] <vuart_config>`, the ACRN
hypervisor emulates virtual legacy serial devices (I/O port and IRQ) for
VMs. So ``vuart[0]`` and ``vuart[1]`` are legacy vUARTs. The ACRN
hypervisor can also emulate virtual PCI serial devices (BDF, MMIO
registers, and MSIX capability). These virtual PCI serial devices are
called PCI-vUARTs, and they have an advantage in device enumeration for the
guest OS. It is easy to add new PCI-vUART ports to a VM.

.. _index-of-vuart:

Index of vUART
--------------

The ACRN hypervisor supports PCI-vUARTs and legacy vUARTs as ACRN vUARTs.
Each vUART port has its own ``vuart_idx``. The ACRN hypervisor supports up
to 8 vUARTs for each VM, from ``vuart_idx=0`` to ``vuart_idx=7``.
Suppose we use vUART0 for a port with ``vuart_idx=0``, vUART1 for
``vuart_idx=1``, and so on.

Pay attention to these points:

* vUART0 is the console port; vUART1-vUART7 are inter-VM communication ports.
* Each communication port must set the connection to another communication
  vUART port of another VM.
* When legacy ``vuart[0]`` is available, it is vUART0. A PCI-vUART can't
  be vUART0 unless ``vuart[0]`` is not set.
* When legacy ``vuart[1]`` is available, it is vUART1. A PCI-vUART can't
  be vUART1 unless ``vuart[1]`` is not set.

Setup ACRN vUART Using Configuration Tools
------------------------------------------

When you set up ACRN VM configurations with PCI-vUART, it is better to
use the ACRN configuration tools because of all the PCI resources required:
the BDF number, the address and size of the MMIO registers, and the address
and size of the MSIX entry tables. These settings can't conflict with another
PCI device. Furthermore, whether a PCI-vUART can use ``vuart_idx=0`` and
``vuart_idx=1`` depends on the legacy vUART settings. The configuration tools
will override your settings in
:ref:`How to Configure a Console Port <how-to-configure-a-console-port>`
and :ref:`How to Configure a Communication Port
<how-to-configure-a-communication-port>`.

You can configure both legacy vUART and PCI-vUART in :ref:`scenario
configurations <acrn_config_types>`. For example, if VM0 has a legacy vUART0
and a PCI-vUART1, VM1 has no legacy vUART but has a PCI-vUART0 and a
PCI-vUART1, and VM0's PCI-vUART1 and VM1's PCI-vUART1 are connected to each
other, you should configure them like this:

.. code-block:: none

   <vm id="0">
     <legacy_vuart id="0">
       <type>VUART_LEGACY_PIO</type>   /* vuart[0] is console port */
       <base>COM1_BASE</base>          /* vuart[0] is used */
       <irq>COM1_IRQ</irq>
     </legacy_vuart>
     <legacy_vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[1] is not used */
     </legacy_vuart>
     <console_vuart id="0">
       <base>INVALID_PCI_BASE</base>   /* PCI-vUART0 can't be used, because vuart[0] */
     </console_vuart>
     <communication_vuart id="1">
       <base>PCI_VUART</base>          /* PCI-vUART1 is communication port, connect to vUART1 of VM1 */
       <target_vm_id>1</target_vm_id>
       <target_uart_id>1</target_uart_id>
     </communication_vuart>
   </vm>

   <vm id="1">
     <legacy_vuart id="0">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[0] is not used */
     </legacy_vuart>
     <legacy_vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>INVALID_COM_BASE</base>   /* vuart[1] is not used */
     </legacy_vuart>
     <console_vuart id="0">
       <base>PCI_VUART</base>          /* PCI-vUART0 is console port */
     </console_vuart>
     <communication_vuart id="1">
       <base>PCI_VUART</base>          /* PCI-vUART1 is communication port, connect to vUART1 of VM0 */
       <target_vm_id>0</target_vm_id>
       <target_uart_id>1</target_uart_id>
     </communication_vuart>
   </vm>

The ACRN vUART related XML fields:

- ``id`` in ``<legacy_vuart>``: the value of ``vuart_idx``; ``id=0`` is the
  legacy ``vuart[0]`` configuration, ``id=1`` is for ``vuart[1]``.
- ``type`` in ``<legacy_vuart>``: the type is always ``VUART_LEGACY_PIO``
  for a legacy vUART.
- ``base`` in ``<legacy_vuart>``: if using the legacy vUART port, set
  ``COM1_BASE`` for ``vuart[0]``, set ``COM2_BASE`` for ``vuart[1]``.
  ``INVALID_COM_BASE`` means do not use the legacy vUART port.
- ``irq`` in ``<legacy_vuart>``: if you use the legacy vUART port, set
  ``COM1_IRQ`` for ``vuart[0]``, set ``COM2_IRQ`` for ``vuart[1]``.
- ``id`` in ``<console_vuart>`` and ``<communication_vuart>``: the
  ``vuart_idx`` for the PCI-vUART.
- ``base`` in ``<console_vuart>`` and ``<communication_vuart>``:
  ``PCI_VUART`` means use this PCI-vUART, ``INVALID_PCI_BASE`` means do
  not use this PCI-vUART.
- ``target_vm_id`` and ``target_uart_id``: connection settings for this
  vUART port.

Run the command to build ACRN with this XML configuration file::

   make BOARD=<board> SCENARIO=<scenario>

The configuration tools will test your settings and check them against the
:ref:`vUART rules <index-of-vuart>`; conflicts are reported as compilation
issues. After compiling, you can find the generated sources under
``build/hypervisor/configs/scenarios/<scenario>/pci_dev.c``,
based on the XML settings, something like:

.. code-block:: none

   struct acrn_vm_pci_dev_config vm0_pci_devs[] = {
       {
           .emu_type = PCI_DEV_TYPE_HVEMUL,
           .vbdf.bits = {.b = 0x00U, .d = 0x05U, .f = 0x00U},
           .vdev_ops = &vmcs9900_ops,
           .vbar_base[0] = 0x80003000,
           .vbar_base[1] = 0x80004000,
           .vuart_idx = 1,           /* PCI-vUART1 of VM0 */
           .t_vuart.vm_id = 1U,      /* connected to VM1's vUART1 */
           .t_vuart.vuart_id = 1U,
       },
   }

This struct shows a PCI-vUART with ``vuart_idx=1`` and BDF ``00:05.0``; it is
PCI-vUART1 of VM0, and it is connected to VM1's vUART1 port. When VM0 wants to
communicate with VM1, it can use ``/dev/ttyS*``, the character device file of
VM0's PCI-vUART1. Usually, legacy ``vuart[0]`` is ``ttyS0`` in a VM, and
``vuart[1]`` is ``ttyS1``. So we would expect PCI-vUART0 to be ``ttyS0``,
PCI-vUART1 to be ``ttyS1``, and so on through PCI-vUART7 being ``ttyS7``, but
that is not guaranteed. Instead, we can use the BDF to identify a PCI-vUART
in the VM.

If you run ``dmesg | grep tty`` at a VM shell, you may see:

.. code-block:: none

   [    1.276891] 0000:00:05.0: ttyS4 at MMIO 0xa1414000 (irq = 124, base_baud = 115200) is a 16550A

We know that, for the VM0 guest OS, ``ttyS4`` has BDF 00:05.0 and is PCI-vUART1.
VM0 can communicate with VM1 by reading from or writing to ``/dev/ttyS4``.

If VM0 and VM1 are pre-launched VMs or the Service VM, the ACRN hypervisor
creates the PCI-vUART virtual devices automatically. For post-launched VMs,
created by ``acrn-dm``, an additional ``acrn-dm`` option is needed
to create a PCI-vUART virtual device:

.. code-block:: none

   -s <slot>,uart,vuart_idx:<val>

Kernel Config for Legacy vUART
------------------------------

When the ACRN hypervisor passes through a local APIC to a VM, there is an IRQ
injection issue for the legacy vUART. The kernel driver must work in
polling mode to avoid the problem. The VM kernel should have these config
symbols set:

.. code-block:: none

   CONFIG_SERIAL_8250_EXTENDED=y
   CONFIG_SERIAL_8250_DETECT_IRQ=y

Kernel Cmdline for PCI-vUART Console
------------------------------------

When an ACRN VM does not have a legacy ``vuart[0]`` but has a
PCI-vUART0, you can use PCI-vUART0 for VM serial input/output. Check
which TTY has the BDF of PCI-vUART0; usually it is not ``/dev/ttyS0``.
For example, if ``/dev/ttyS4`` is PCI-vUART0, you must set
``console=ttyS4`` in the kernel cmdline.

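As a rough illustration only (the root device, baud rate, and other parameters
depend on your image and board), the User VM kernel command line in that case
might look like:

.. code-block:: none

   root=/dev/vda1 rw rootwait console=tty0 console=ttyS4,115200
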
@ -1,12 +1,28 @@
.. _acrn-dm_parameters-and-launch-script:

Device Model Parameters and Launch Script
#########################################

The Hypervisor Device Model (DM) is a QEMU-like application in the Service
VM responsible for creating a User VM and then performing device
emulation based on command line configurations, as introduced in
the :ref:`hld-devicemodel`. The ACRN Configurator generates launch scripts for
Post-launched VMs that include a call to the ``acrn-dm`` command with
parameter values that were set in the configurator. Generally, you should not
edit these launch scripts and change the parameters manually. Any edits you
make would be overwritten if you run the configurator again and save the
configuration and launch scripts.

The rest of this document provides details about the ``acrn-dm`` parameters as
a reference, and should help you understand what the generated launch scripts
are doing. We also include information useful to ACRN contributors about how
settings in the scenario file, created by the configurator, are transformed
into the launch script.

.. _acrn-dm_parameters:

Device Model Parameters
***********************

Here are descriptions for each of these ``acrn-dm`` command line parameters:

@ -37,7 +53,7 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
``--enable_trusty``
   Enable trusty for the guest. For an Android guest OS, ACRN provides a VM
   environment with two worlds: normal world and trusty world. The Android
   OS runs in the normal world. The trusty OS and security-sensitive
   applications run in the trusty world. The trusty world can see the memory
   of the normal world, but not vice versa. See :ref:`trusty_tee` for more
   information.

@ -180,6 +196,8 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters:
----

.. _cpu_affinity:

``--cpu_affinity <list of lapic_ids>``
   Comma-separated list of vCPUs assigned to this VM. Each CPU has a Local
   Advanced Programmable Interrupt Controller (LAPIC). The unique ID of the
   LAPIC (lapic_id) is used to identify the vCPU.

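For instance, a hypothetical fragment (the lapic_id values here are purely
illustrative and depend on your board) could pin a VM to two CPUs:

.. code-block:: none

   --cpu_affinity 1,3
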
@ -361,9 +379,6 @@ arguments used for configuration. Here is a table describing these emulated dev
       Packet Interface device (for CD-ROM emulation). ``ahci-cd`` supports
       the same parameters as ``ahci``.

   * - ``amd_hostbridge``
     - Virtualized PCI AMD hostbridge

   * - ``hostbridge``
     - Virtualized PCI hostbridge, a hardware bridge between the CPU's
       high-speed system local bus and the Peripheral Component Interconnect
@ -396,17 +411,9 @@ arguments used for configuration. Here is a table describing these emulated dev
       n,virtio-input,/dev/input/eventX[,serial]``. ``serial`` is an optional
       string used as the unique identification code of the guest virtio
       input device.

   * - ``virtio-ipu``
     - Virtio image processing unit (IPU); it is used to connect the camera
       device to the system and convert the raw Bayer image into the YUV
       domain.

   * - ``virtio-console``
     - Virtio console type device for data input and output.

   * - ``virtio-hyper_dmabuf``
     - Virtio device that allows sharing data buffers between VMs using a
       dmabuf-like interface.

   * - ``virtio-heci``
     - Virtio Host Embedded Controller Interface; parameters should be
       appended with the format ``<bus>:<device>:<function>,d<0~8>``. You can
       find the BDF
@ -449,9 +456,6 @@ arguments used for configuration. Here is a table describing these emulated dev
       ``physical_rpmb`` to specify RPMB in physical mode;
       otherwise RPMB is in simulated mode.

   * - ``virtio-audio``
     - Virtio audio type device

   * - ``virtio-net``
     - Virtio network type device; the parameter should be appended with the
       format:
       ``virtio-net,<device_type>=<name>[,vhost][,mac=<XX:XX:XX:XX:XX:XX> | mac_seed=<seed_string>]``.
@ -516,3 +520,129 @@ arguments used for configuration. Here is a table describing these emulated dev
   * - ``wdt-i6300esb``
     - Emulated i6300ESB PCI Watch Dog Timer (WDT) that Intel processors use
       to monitor User VMs.

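To give a sense of how these emulated devices are combined in practice, here
is a hypothetical set of ``-s`` options (the slot numbers, memory size, image
path, TAP name, and VM name are examples only, not prescribed values):

.. code-block:: bash

   # Each -s option adds one emulated PCI device to the User VM.
   acrn-dm -m 2048M \
     -s 0:0,hostbridge \
     -s 3,virtio-blk,/home/acrn/uos.img \
     -s 4,virtio-net,tap=tap0 \
     -s 8,wdt-i6300esb \
     POST_STD_VM1
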
Launch Script
*************

A launch script is used to start a User VM from the Service VM command line.
It is generated by the ACRN Configurator according to several settings for a
User VM. Normally, you should not manually edit these generated launch scripts
or change the ``acrn-dm`` command line parameters. If you do so, your changes
could be overwritten the next time you run the configurator.

In this section we describe how settings in the scenario file,
created by the configurator, are transformed into the launch script.
This information would be useful to ACRN contributors or developers
interested in knowing how the launch scripts are created.

Most configurator settings for User VMs are used at launch time.
When you exit the configurator, these settings are saved in the
``scenario.xml`` file and then processed by
``misc/config_tools/launch_config/launch_cfg_gen.py``
to add shell commands to create the launch script, according to the template
``misc/config_tools/launch_config/launch_script_template.sh``.
The template uses the following helper functions to do system settings or to
generate an ``acrn-dm`` command line parameter. For details about all
``acrn-dm`` parameters, refer to the previous section.

``probe_modules``
   Install necessary modules before launching a Post-launched VM. For
   example, ``pci_stub`` is used to provide a stub PCI driver that does
   nothing on attached PCI devices. Passthrough PCIe devices will be unbound
   from their original driver and bound to the stub, so that they can be
   safely controlled by the User VM.

``offline_cpus <cpu_apicid>...``
   This is called if we are launching an RTVM or a VM whose scheduler is
   ``SCHED_NOOP``. In both situations, CPU sharing between multiple VMs is
   prevented.
   This function will trigger taking a CPU offline (done by the Service VM
   kernel), and then inform the hypervisor through the Hypervisor Service
   Module (HSM). The hypervisor will offline the vCPU and freeze the vCPU
   thread.

``unbind_device <bdf>``
   Unbind a PCIe device with the specified BDF (bus, device, and function)
   number from its original driver and re-bind it to the pci-stub driver.
   After that, the Service VM kernel will not operate on that device any more
   and it can be passed through to the User VM safely.

``create_tap <tap>``
   Create or reuse the TAP interface that is attached to the ``acrn-br0``
   bridge. ``acrn-br0`` is registered to ``systemd-networkd.service`` after
   installing the ACRN Debian package (``.deb``). You also need to enable and
   start the service to create the bridge from the Service VM using::

      sudo systemctl enable --now systemd-networkd

   The bridge is used to add a ``virtio-net``
   interface to a User VM. ``virtio-net`` interfaces for all User VMs are
   virtually connected to a subnet behind the ACRN bridge.

``mount_partition <partition>``
   Mount the specified partition to a temporary directory created by
   ``mktemp -d``, and return the temporary directory for a later unmount.
   Typically this function is called to mount an image file in order to use an
   inner rootfs file as the ``virtio-blk`` backend. For example, a user could
   set ``<imgfile>:/boot/initrd.img*`` in the ``virtio-blk`` input box in the
   ACRN Configurator. After the ``acrn-dm`` instance exits,
   ``unmount_partition`` will be called to unmount the image file.

``unmount_partition <dir>``
   Unmount the partition from the specified directory.

``add_cpus <cpu_apicid>...``
   Return an ``acrn-dm`` command line parameter fragment to set
   ``cpu_affinity``. Refer to `cpu_affinity`_ for details.
   ``offline_cpus`` is called if the User VM is an RTVM or its scheduler is
   ``SCHED_NOOP``.

``add_interrupt_storm_monitor <threshold_per_sec> <probe_period_in_sec> <inject_delay_in_ms> <delay_duration_in_ms>``
   This is added if PCIe devices, other than an integrated GPU, are passed
   through to the User VM, to monitor whether an interrupt storm occurred on
   those devices.
   This function and its parameters are not visible in the ACRN Configurator
   and are handled by the config scripts.
   It returns an ``acrn-dm`` command line segment to set ``intr_monitor``.

``add_logger_settings console=<n> kmsg=<n> disk=<n>``
   Set the log level of each ``acrn-dm`` logging channel: console, kmsg, disk.
   These settings are not exposed to the user in the ACRN Configurator.

``add_virtual_device <slot> <kind> <options>``
   Add the specified kind of virtual device to the specified PCIe device slot.
   Some devices need options to configure further behaviors. ``<slot>``
   numbers for virtual devices and passthrough devices are automatically
   allocated by ``launch_cfg_gen.py``.

   Typical use cases:

   - ``hostbridge``
     PCIe host bridge. ``<slot>`` must be 0.

   - ``uart vuart_idx:<int>``
     Add a PCIe vUART with the specified index.

   - ``xhci <bus>-<port>[:<bus>-<port>]...``
     Configure the USB mediator. A list of USB ports, each specified by
     ``<bus>-<port>``, will be connected to the User VM.

   - ``virtio-net tap=<tapname>[,vhost],mac_seed=<str>``
     The TAP should already be created by ``create_tap``.

   - ``virtio-blk <imgfile>[,writethru|writeback|ro]``
     Add a virtio block device to the User VM. The backend is a raw image
     file. Options can be specified to control access rights.

   For all types of virtual devices and options, refer to
   :ref:`emul_config`.

``add_passthrough_device <slot> <bus>/<device>/<function> <options>``
   Passthrough a PCIe device to the User VM in the specified ``<slot>``.
   Some kinds of devices may need extra ``<options>`` to control internal
   behavior. Refer to the ``passthru`` section in :ref:`emul_config`.

These functions in the template are copied to the target launch script. Then
``launch_cfg_gen.py`` generates the following dynamic part. It first defines
necessary variables such as ``vm_type`` and ``scheduler``, and uses the
functions described above to construct the ``dm_params`` parameters per the
user settings in ``scenario.xml``.
Finally, ``acrn-dm`` is executed to launch a User VM with these parameters.

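To make that flow concrete, here is a rough sketch of what the dynamically
generated tail of a launch script could look like. It is not the literal
output of ``launch_cfg_gen.py``; the helper arguments (APIC IDs, TAP name,
device BDF, image path, slot numbers, memory size, and VM name) are
placeholders for illustration only.

.. code-block:: bash

   # Prepare the Service VM side: load helper modules, create the network
   # backend, and release the passthrough device from its native driver.
   probe_modules
   create_tap tap_VM1
   unbind_device 0000:00:14.0

   # Assemble the acrn-dm parameters using the helper functions above.
   dm_params="
      $(add_cpus 8 9)
      -m 1024M
      $(add_virtual_device 0 hostbridge)
      $(add_virtual_device 3 virtio-blk /home/acrn/uos.img)
      $(add_virtual_device 4 virtio-net tap=tap_VM1,mac_seed=VM1)
      $(add_logger_settings console=4 kmsg=3 disk=5)
      VM1
   "

   # Launch the User VM with the assembled parameters.
   acrn-dm $dm_params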