doc: update draft 3.1 published docs to include changes on master
Signed-off-by: Amy Reyes <amy.reyes@intel.com>
@@ -131,6 +131,6 @@ Disable Split-Locked Access Detection
If the CPU supports Split-locked Access detection, the ACRN hypervisor
uses it to prevent any VM running with potential system performance
impacting split-locked instructions. This detection can be disabled
-(by changing the :option:`hv.FEATURES.ENFORCE_TURNOFF_AC` setting in
+(by deselecting the :term:`Enable split lock detection` option in
the ACRN Configurator tool) for customers not
caring about system performance.

@@ -9,7 +9,30 @@ is from the hypervisor to the guest VM. The hypervisor supports
hypercall APIs for VM management, I/O request distribution, interrupt injection,
PCI assignment, guest memory mapping, power management, and secure world switch.

-There are some restrictions for hypercall and upcall:
+The application binary interface (ABI) of ACRN hypercalls is defined as follows.
+
+- A guest VM executes the ``vmcall`` instruction to trigger a hypercall.
+
+- Input parameters of a hypercall include:
+
+  - A hypercall ID in register ``R8``, which specifies the kind of service
+    requested by the guest VM.
+
+  - The first parameter in register ``RDI`` and the second in register
+    ``RSI``. The semantics of those two parameters vary among different kinds of
+    hypercalls and are defined in the :ref:`hv-hypercall-ref`. For hypercalls
+    requesting operations on a specific VM, the first parameter is typically the
+    ID of that VM.
+
+- The register ``RAX`` contains the return value of the hypercall after a guest
+  VM executes the ``vmcall`` instruction, unless the ``vmcall`` instruction
+  triggers an exception. Other general-purpose registers are not modified by a
+  hypercall.
+
+- If a hypercall parameter is defined as a pointer to a data structure,
+  fields in that structure can be either input, output, or inout.
+
+There are some restrictions for hypercalls and upcalls:

#. Only specific VMs (the Service VM and the VM with Trusty enabled)
   can invoke hypercalls. A VM that cannot invoke hypercalls gets ``#UD``
@@ -35,6 +58,8 @@ Service VM registers the IRQ handler for vector (0xF3) and notifies the I/O
emulation module in the Service VM once the IRQ is triggered. View the detailed
upcall process at :ref:`ipi-management`.

+.. _hv-hypercall-ref:
+
Hypercall APIs Reference
************************

@@ -53,21 +53,22 @@ Before you begin, make sure your machines have the following prerequisites:
- USB keyboard and mouse
- Monitor
- Ethernet cable and Internet access
-- A second USB disk with minimum 1GB capacity to copy files between the
-  development computer and target system (this guide offers steps for
-  copying via USB disk, but you can use another method, such as using ``scp``
-  to copy files over the local network, if you prefer)
+- A second USB disk with minimum 16GB capacity. Format your USB disk with a
+  file system that supports files greater than 4GB: exFAT or NTFS, but not
+  FAT32 (a formatting sketch follows this list). We'll use this USB disk to
+  copy files between the development computer and target system. Instead of a
+  USB drive, you can copy files between systems over the network using the
+  ``scp`` command.
- Local storage device (NVMe or SATA drive, for example). We recommend having
  40GB or more of free space.
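
If your USB disk needs reformatting, here's one way to do it on Ubuntu; a
minimal sketch, assuming the disk appears as ``/dev/sdX`` (confirm the device
name with ``lsblk`` first, since formatting erases everything on the disk):

.. code-block:: bash

   lsblk                                       # identify the USB disk, e.g., /dev/sdX
   sudo apt install -y exfat-utils exfat-fuse  # exFAT tools for Ubuntu 20.04
   sudo mkfs.exfat /dev/sdX1                   # format the first partition as exFAT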

.. note::
   If you're working behind a corporate firewall, you'll likely need to
-  configure a proxy for accessing the internet, if you haven't done so already.
+  configure a proxy for accessing the Internet, if you haven't done so already.
   While some tools use the environment variables ``http_proxy`` and ``https_proxy`` to
   get their proxy settings, some use their own configuration files, most
   notably ``apt`` and ``git``. If a proxy is needed and it's not configured,
-  commands that access the internet may time out and you may see errors such
-  as, "unable to access ..." or "couldn't resolve host ...".
+  commands that access the Internet may time out and you may see errors such
+  as "unable to access ..." or "couldn't resolve host ...".
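
   For reference, a minimal sketch of these settings, assuming a hypothetical
   proxy at ``http://proxy.example.com:1234`` (substitute your site's actual
   proxy address):

   .. code-block:: bash

      # environment variables many tools honor (add to ~/.bashrc to persist)
      export http_proxy=http://proxy.example.com:1234
      export https_proxy=http://proxy.example.com:1234

      # apt reads its own configuration file
      echo 'Acquire::http::Proxy "http://proxy.example.com:1234";' | \
          sudo tee /etc/apt/apt.conf.d/95proxy

      # git keeps its own proxy setting as well
      git config --global http.proxy http://proxy.example.com:1234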

.. _gsg-dev-computer:

@@ -110,7 +111,7 @@ To set up the ACRN build environment on the development computer:

   .. code-block:: bash

-     mkdir ~/acrn-work
+     mkdir -p ~/acrn-work

#. Install the necessary ACRN build tools:

@@ -131,18 +132,18 @@ To set up the ACRN build environment on the development computer:
      cd ~/acrn-work
      git clone https://github.com/projectacrn/acrn-hypervisor.git
      cd acrn-hypervisor
-     git checkout v3.0
+     git checkout v3.1

      cd ..
      git clone https://github.com/projectacrn/acrn-kernel.git
      cd acrn-kernel
-     git checkout acrn-v3.0
+     git checkout acrn-v3.1

#. Install Python package dependencies:

   .. code-block:: bash

-     sudo pip3 install "elementpath==2.5.0" lxml "xmlschema==1.9.2" defusedxml tqdm
+     sudo pip3 install "elementpath==2.5.0" lxml "xmlschema==1.9.2" defusedxml tqdm requests

#. Build and install the iASL compiler/disassembler used for advanced power management,
   device discovery, and configuration (ACPI) within the host OS:
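
   The iASL build commands themselves are unchanged context not shown here; for
   reference, a minimal sketch of that flow, assuming the ``acpica-unix-20210105``
   release referenced later in this guide:

   .. code-block:: bash

      cd ~/acrn-work
      wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz
      tar zxvf acpica-unix-20210105.tar.gz
      cd acpica-unix-20210105
      make clean && make iasl
      sudo cp ./generate/unix/bin/iasl /usr/sbin
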
@@ -202,7 +203,7 @@ Install OS on the Target

The target system needs Ubuntu Desktop 20.04 LTS to run the Board Inspector
tool. You can read the full instructions to download, create a bootable USB
-stick, and `Install Ubuntu desktop
+drive, and `Install Ubuntu desktop
<https://ubuntu.com/tutorials/install-ubuntu-desktop#1-overview>`_ on the Ubuntu
site. We'll provide a summary here:

@@ -244,6 +245,22 @@ To install Ubuntu 20.04:
      sudo apt update
      sudo apt upgrade -y

+#. It's convenient to use the network to transfer files between the development
+   computer and target system, so we recommend installing the openssh-server
+   package on the target system::
+
+      sudo apt install -y openssh-server
+
+   This command will install and start the ssh-server service on the target
+   system. We'll need to know the target system's IP address to make a
+   connection from the development computer, so find it now with this command::
+
+      hostname -I | cut -d ' ' -f 1
+
+#. Make a working directory on the target system that we'll use later::
+
+      mkdir -p ~/acrn-work

Configure Target BIOS Settings
===============================

@@ -262,8 +279,8 @@ Configure Target BIOS Settings
   provides additional support for managing I/O virtualization).
* Disable **Secure Boot**. This setting simplifies the steps for this example.

-The names and locations of the BIOS settings differ depending on the target
-hardware and BIOS version.
+The names and locations of the BIOS settings depend on the target
+hardware and BIOS vendor and version.

Generate a Board Configuration File
=========================================
@@ -288,47 +305,56 @@ Generate a Board Configuration File
   directory.

#. Copy the Board Inspector Debian package from the development computer to the
-   target system via USB disk as follows:
-
-   a. On the development computer, insert the USB disk that you intend to use to
-      copy files.
-
-   #. Ensure that there is only one USB disk inserted by running the following
-      command:
-
-      .. code-block:: bash
-
-         ls /media/$USER
-
-      Confirm that only one disk name appears. You'll use that disk name in the following steps.
-
-   #. Copy the Board Inspector Debian package to the USB disk:
-
-      .. code-block:: bash
-
-         cd ~/acrn-work/
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp -r acrn-hypervisor/build/acrn-board-inspector*.deb "$disk"/
-         sync && sudo umount "$disk"
-
-   #. Remove the USB stick from the development computer and insert it into the target system.
-
-   #. Copy the Board Inspector Debian package from the USB disk to the target:
-
-      .. code-block:: bash
-
-         mkdir -p ~/acrn-work
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp -r "$disk"/acrn-board-inspector*.deb ~/acrn-work
-
-#. Install the Board Inspector Debian package on the target system:
+   target system.
+
+   Option 1: Use ``scp``
+      Use the ``scp`` command to copy the Debian package from your development
+      computer to the ``~/acrn-work`` working directory we created on the target
+      system. Replace ``10.0.0.200`` with the target system's IP address you found earlier::
+
+         scp ~/acrn-work/acrn-hypervisor/build/acrn-board-inspector*.deb acrn@10.0.0.200:~/acrn-work
+
+   Option 2: Use a USB disk
+      a. On the development computer, insert the USB disk that you intend to use to
+         copy files.
+
+      #. Ensure that there is only one USB disk inserted by running the following
+         command:
+
+         .. code-block:: bash
+
+            ls /media/$USER
+
+         Confirm that only one disk name appears. You'll use that disk name in the following steps.
+
+      #. Copy the Board Inspector Debian package to the USB disk:
+
+         .. code-block:: bash
+
+            cd ~/acrn-work/
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp -r acrn-hypervisor/build/acrn-board-inspector*.deb "$disk"/
+            sync && sudo umount "$disk"
+
+      #. Remove the USB disk from the development computer and insert it into the target system.
+
+      #. Copy the Board Inspector Debian package from the USB disk to the target:
+
+         .. code-block:: bash
+
+            mkdir -p ~/acrn-work
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp -r "$disk"/acrn-board-inspector*.deb ~/acrn-work
+
+#. Now that we've got the Board Inspector Debian package on the target system, install it there:

   .. code-block:: bash

      cd ~/acrn-work
      sudo pip3 install tqdm
      sudo apt install -y ./acrn-board-inspector*.deb

-#. Reboot the system:
+#. Reboot the target system:

   .. code-block:: bash

@@ -356,28 +382,37 @@ Generate a Board Configuration File

      ls ./my_board.xml

-#. Copy ``my_board.xml`` from the target to the development computer via USB
-   disk as follows:
-
-   a. Make sure the USB disk is connected to the target.
-
-   #. Copy ``my_board.xml`` to the USB disk:
-
-      .. code-block:: bash
-
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp ~/acrn-work/my_board.xml "$disk"/
-         sync && sudo umount "$disk"
-
-   #. Insert the USB disk into the development computer.
-
-   #. Copy ``my_board.xml`` from the USB disk to the development computer:
-
-      .. code-block:: bash
-
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp "$disk"/my_board.xml ~/acrn-work
-         sync && sudo umount "$disk"
+#. Copy ``my_board.xml`` from the target to the development computer. Again we
+   have two options:
+
+   Option 1: Use ``scp``
+      From your development computer, use the ``scp`` command to copy the board
+      configuration file from your target system back to the
+      ``~/acrn-work`` directory on your development computer. Replace
+      ``10.0.0.200`` with the target system's IP address you found earlier::
+
+         scp acrn@10.0.0.200:~/acrn-work/my_board.xml ~/acrn-work/
+
+   Option 2: Use a USB disk
+      a. Make sure the USB disk is connected to the target.
+
+      #. Copy ``my_board.xml`` to the USB disk:
+
+         .. code-block:: bash
+
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp ~/acrn-work/my_board.xml "$disk"/
+            sync && sudo umount "$disk"
+
+      #. Insert the USB disk into the development computer.
+
+      #. Copy ``my_board.xml`` from the USB disk to the development computer:
+
+         .. code-block:: bash
+
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp "$disk"/my_board.xml ~/acrn-work
+            sync && sudo umount "$disk"

.. _gsg-dev-setup:

@@ -387,7 +422,7 @@ Generate a Scenario Configuration File and Launch Script
********************************************************

In this step, you will download, install, and use the `ACRN Configurator
-<https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.0/acrn-configurator-3.0.deb>`__
+<https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.1/acrn-configurator-3.1.deb>`__
to generate a scenario configuration file and launch script.

A **scenario configuration file** is an XML file that holds the parameters of
@@ -403,7 +438,7 @@ post-launched User VM. Each User VM has its own launch script.
   .. code-block:: bash

      cd ~/acrn-work
-     wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.0/acrn-configurator-3.0.deb
+     wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.1/acrn-configurator-3.1.deb

   If you already have a previous version of the acrn-configurator installed,
   you should first remove it:
@@ -416,7 +451,7 @@ post-launched User VM. Each User VM has its own launch script.

   .. code-block:: bash

-     sudo apt install -y ./acrn-configurator-3.0.deb
+     sudo apt install -y ./acrn-configurator-3.1.deb

#. Launch the ACRN Configurator:

@@ -474,13 +509,13 @@ post-launched User VM. Each User VM has its own launch script.
   settings to meet your application's particular needs. But for now, you
   will update only a few settings for functional and educational purposes.

-  You may see some error messages from the configurator, such as shown here:
+  You may see some error messages from the Configurator, such as shown here:

   .. image:: images/gsg-config-errors.png
      :align: center
      :class: drop-shadow

-  The configurator does consistency and validation checks when you load or save
+  The Configurator does consistency and validation checks when you load or save
   a scenario. Notice the Hypervisor and VM1 tabs both have an error icon,
   meaning there are issues with configuration options in two areas. Since the
   Hypervisor tab is currently highlighted, we're seeing an issue we can resolve
@@ -527,7 +562,7 @@ post-launched User VM. Each User VM has its own launch script.
   log in to the User VM later in this guide.

#. For **Virtio block device**, click **+** and enter
-  ``/home/acrn/acrn-work/ubuntu-20.04.4-desktop-amd64.iso``. This parameter
+  ``/home/acrn/acrn-work/ubuntu-20.04.5-desktop-amd64.iso``. This parameter
   specifies the VM's OS image and its location on the target system. Later
   in this guide, you will save the ISO file to that directory. (If you used
   a different username when installing Ubuntu on the target system, here's
@@ -574,7 +609,7 @@ Build ACRN

      cd ./build
      ls *.deb
-     acrn-my_board-MyConfiguration-3.0.deb
+     acrn-my_board-MyConfiguration-3.1.deb

   The Debian package contains the ACRN hypervisor and tools to ease installing
   ACRN on the target. The Debian file name contains the board name (``my_board``)
@@ -609,37 +644,57 @@ Build ACRN

      cd ..
      ls *.deb
-     linux-headers-5.10.115-acrn-service-vm_5.10.115-acrn-service-vm-1_amd64.deb
-     linux-image-5.10.115-acrn-service-vm_5.10.115-acrn-service-vm-1_amd64.deb
-     linux-image-5.10.115-acrn-service-vm-dbg_5.10.115-acrn-service-vm-1_amd64.deb
-     linux-libc-dev_5.10.115-acrn-service-vm-1_amd64.deb
+     linux-headers-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
+     linux-image-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
+     linux-image-5.15.44-acrn-service-vm-dbg_5.15.44-acrn-service-vm-1_amd64.deb
+     linux-libc-dev_5.15.44-acrn-service-vm-1_amd64.deb

#. Copy all the necessary files generated on the development computer to the
-   target system by USB disk as follows:
-
-   a. Insert the USB disk into the development computer and run these commands:
-
-      .. code-block:: bash
-
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp ~/acrn-work/acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb "$disk"/
-         cp ~/acrn-work/*acrn-service-vm*.deb "$disk"/
-         cp ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh "$disk"/
-         cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl "$disk"/
-         sync && sudo umount "$disk"
-
-   #. Insert the USB disk you just used into the target system and run these
-      commands to copy the files locally:
-
-      .. code-block:: bash
-
-         disk="/media/$USER/"$(ls /media/$USER)
-         cp "$disk"/acrn-my_board-MyConfiguration*.deb ~/acrn-work
-         cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
-         cp "$disk"/launch_user_vm_id1.sh ~/acrn-work
-         sudo cp "$disk"/iasl /usr/sbin/
-         sudo chmod a+x /usr/sbin/iasl
-         sync && sudo umount "$disk"
+   target system, using one of these two options:
+
+   Option 1: Use ``scp``
+      Use the ``scp`` command to copy files from your development computer to
+      the target system.
+      Replace ``10.0.0.200`` with the target system's IP address you found earlier::
+
+         scp ~/acrn-work/acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb \
+             ~/acrn-work/*acrn-service-vm*.deb \
+             ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh \
+             ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl \
+             acrn@10.0.0.200:~/acrn-work
+
+      Then, go to the target system and put the ``iasl`` tool in its proper
+      place::
+
+         cd ~/acrn-work
+         sudo cp iasl /usr/sbin/
+         sudo chmod a+x /usr/sbin/iasl
+
+   Option 2: Use a USB disk
+      a. Insert the USB disk into the development computer and run these commands:
+
+         .. code-block:: bash
+
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp ~/acrn-work/acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb "$disk"/
+            cp ~/acrn-work/*acrn-service-vm*.deb "$disk"/
+            cp ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh "$disk"/
+            cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl "$disk"/
+            sync && sudo umount "$disk"
+
+      #. Insert the USB disk you just used into the target system and run these
+         commands to copy the files locally:
+
+         .. code-block:: bash
+
+            disk="/media/$USER/"$(ls /media/$USER)
+            cp "$disk"/acrn-my_board-MyConfiguration*.deb ~/acrn-work
+            cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
+            cp "$disk"/launch_user_vm_id1.sh ~/acrn-work
+            sudo cp "$disk"/iasl /usr/sbin/
+            sudo chmod a+x /usr/sbin/iasl
+            sync && sudo umount "$disk"

.. _gsg-install-acrn:

@@ -648,7 +703,7 @@ Build ACRN
Install ACRN
************

-#. Install the ACRN Debian package and ACRN kernel Debian packages using these
+#. On the target system, install the ACRN Debian package and ACRN kernel Debian packages using these
   commands:

   .. code-block:: bash
@@ -664,7 +719,7 @@ Install ACRN
      reboot

#. Confirm that you see the GRUB menu with the "ACRN multiboot2" entry. Select
-  it and proceed to booting ACRN. (It may be autoselected, in which case it
+  it and proceed to booting ACRN. (It may be auto-selected, in which case it
   will boot with this option automatically in 5 seconds.)

   .. code-block:: console
@@ -720,10 +775,10 @@ Launch the User VM

#. On the target system, use the web browser to go to the `official Ubuntu website <https://releases.ubuntu.com/focal/>`__ to
   get the Ubuntu Desktop 20.04 LTS ISO image
-  ``ubuntu-20.04.4-desktop-amd64.iso`` for the User VM. (The same image you
-  specified earlier in the ACRN Configurator UI. (Alternatively, instead of
+  ``ubuntu-20.04.5-desktop-amd64.iso`` for the User VM. (The same image you
+  specified earlier in the ACRN Configurator UI.) Alternatively, instead of
   downloading it again, you can use a USB drive or ``scp`` to copy the ISO
-  image file to the ``~/acrn-work`` directory on the target system.)
+  image file to the ``~/acrn-work`` directory on the target system.

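   If you'd rather fetch the image from the command line on the target system,
   a minimal sketch (the file name must match the image you specified in the
   Configurator earlier)::

      wget -P ~/acrn-work https://releases.ubuntu.com/focal/ubuntu-20.04.5-desktop-amd64.iso
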
#. If you downloaded the ISO file on the target system, copy it from the
   Downloads directory to the ``~/acrn-work/`` directory (the location we said
@@ -732,7 +787,7 @@ Launch the User VM

   .. code-block:: bash

-     cp ~/Downloads/ubuntu-20.04.4-desktop-amd64.iso ~/acrn-work
+     cp ~/Downloads/ubuntu-20.04.5-desktop-amd64.iso ~/acrn-work

#. Launch the User VM:

@@ -747,7 +802,7 @@ Launch the User VM

   .. code-block:: console

-     Ubuntu 20.04.4 LTS ubuntu hvc0
+     Ubuntu 20.04.5 LTS ubuntu hvc0

      ubuntu login:

@@ -758,7 +813,7 @@ Launch the User VM

   .. code-block:: console

-     Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-30-generic x86_64)
+     Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.13.0-30-generic x86_64)

      * Documentation: https://help.ubuntu.com
      * Management: https://landscape.canonical.com
@@ -795,7 +850,7 @@ Launch the User VM
   .. code-block:: console

      acrn@vecow:~$ uname -r
-     5.10.115-acrn-service-vm
+     5.15.44-acrn-service-vm

   The User VM has launched successfully. You have completed this ACRN setup.

@@ -811,6 +866,9 @@ Launch the User VM
Next Steps
**************

-:ref:`overview_dev` describes the ACRN configuration process, with links to
-additional details.
+* :ref:`overview_dev` describes the ACRN configuration process, with links to
+  additional details.
+
+* A follow-on :ref:`GSG_sample_app` tutorial shows how to
+  configure, build, and run a more real-world sample application with a Real-time
+  VM communicating with an HMI VM via inter-VM shared memory (IVSHMEM).
(7 existing image files updated with new sizes)
New image files added under doc/getting-started/images/:
samp-image001.png through samp-image018.png, plus samp-image004a.png and samp-image015a.png
doc/getting-started/sample-app.rst (new file, 668 lines)
@@ -0,0 +1,668 @@
.. _GSG_sample_app:

Sample Application User Guide
#############################

This sample application shows how to create two VMs that are launched on
your target system running ACRN. These VMs communicate with each other
via inter-VM shared memory (IVSHMEM). One VM is a real-time VM running
`cyclictest <https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/cyclictest/start>`__,
an open source application commonly used to measure latencies in
real-time systems. This real-time VM (RT_VM) uses inter-VM shared memory
(IVSHMEM) to send data to a second Human-Machine Interface VM (HMI_VM)
that formats and presents the collected data as a histogram on a web
page shown by a browser. This guide shows how to configure, create, and
launch the two VM images that make up this application.

.. figure:: images/samp-image001.png
   :class: drop-shadow
   :align: center
   :width: 900px

   Sample Application Overview

We build these two VM images on your development computer using scripts
in the ACRN source code. Once we have the two VM images, we follow
similar steps shown in the *Getting Started Guide* to define a new ACRN
scenario with two post-launched user VMs with their IVSHMEM connection.
We build a Service VM image and the Hypervisor image based on the
scenario configuration (as we did in the Getting Started Guide).
Finally, we put this all together on the target system, launch the
sample application VMs on ACRN from the Service VM, run the application
parts in each VM, and view the cyclictest histogram results in a browser
running on our HMI VM (or development computer).

While this sample application uses cyclictest to generate data about
performance latency in the RT_VM, we aren't doing any configuration
optimization in this sample to get the best RT performance.

Prerequisites Environment and Images
************************************

Before beginning, use the ``df`` command on your development computer and
verify there's at least 30GB free disk space for building the ACRN
sample application. You may see a different Filesystem name and sizes:

.. code-block:: console

   $ df -h /

   Filesystem      Size  Used Avail Use% Mounted on
   /dev/sda5       109G   42G   63G  41% /


.. rst-class:: numbered-step

Prepare the ACRN Development and Target Environment
***************************************************

.. important::
   Before building the sample application, it's important that you complete
   the :ref:`gsg` instructions and leave the development and target systems with
   the files created in those instructions.

The :ref:`gsg` instructions get all the tools and packages installed on your
development and target systems that we'll also use to build and run this sample
application.

After following the Getting Started Guide, you'll have a directory
``~/acrn-work`` on your development computer containing directories with the
``acrn-hypervisor`` and ``acrn-kernel`` source code and build output. You'll
also have the board XML file that's needed by the ACRN Configurator to
configure the ACRN hypervisor and set up the VM launch scripts for this sample
application.

Preparing the Target System
===========================

On the target system, reboot and choose the regular Ubuntu image (not the
Multiboot2 choice created when following the Getting Started Guide).

1. Log in as the **acrn** user. We'll be making ssh connections to the target system
   later in these steps, so install the ssh server on the target system using::

      sudo apt install -y openssh-server

#. We'll need to know the IP address of the target system later. Use the
   ``hostname -I`` command and look at the first IP address mentioned. You'll
   likely see a different IP address than shown in this example:

   .. code-block:: console

      hostname -I | cut -d ' ' -f 1
      10.0.0.200

.. rst-class:: numbered-step

Make the Sample Application
***************************

On your development computer, build the applications used by the sample. The
``rtApp`` app in the RT VM reads the output from the cyclictest program and
sends it via inter-VM shared memory (IVSHMEM) to another regular HMI VM where
the ``userApp`` app receives the data and formats it for presentation using the
``histapp.py`` Python app.

As a normal (e.g., **acrn**) user, follow these steps:

1. Install some additional packages on your development computer that are used
   for building the sample application::

      sudo apt install -y cloud-guest-utils schroot kpartx qemu-kvm

#. Check out the ``acrn-hypervisor`` source code branch (already cloned from the
   ``acrn-hypervisor`` repo when you followed the :ref:`gsg`). We've tagged a
   specific version of the hypervisor you should use for the sample app's HMI
   VM::

      cd ~/acrn-work/acrn-hypervisor
      git fetch --all
      git checkout v3.1

#. Build the ACRN sample application source code::

      cd misc/sample_application/
      make all

   This builds the ``histapp.py``, ``userApp``, and ``rtApp`` used for the
   sample application.

.. rst-class:: numbered-step

Make the HMI_VM Image
*********************

1. Make the HMI VM image. This script runs for about 10 minutes total and will
   prompt you to input the passwords for the **acrn** and **root** user in the
   HMI_VM image::

      cd ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder
      ./create_image.sh hmi-vm

   After the script is finished, the ``hmi_vm.img`` image file is created in the
   ``build`` directory. You should see a final message from the script that
   looks like this:

   .. code-block:: console

      2022-08-18T09:53:06+08:00 [ Info ] VM image created at /home/acrn/acrn-work/acrn-hypervisor/misc/sample_application/image_builder/build/hmi_vm.img.

   If you don't see such a message, look back through the output to see what
   errors are indicated. For example, there could have been a network error
   while retrieving packages from the Internet. In such a case, simply trying
   the ``create_image.sh`` command again might work.

   The HMI VM image is a configured Ubuntu desktop image
   ready to launch as an ACRN user VM with the HMI parts of the sample app
   installed.

.. rst-class:: numbered-step

Make the RT_VM Image
********************

1. Check out the ``acrn-kernel`` source code branch (already cloned from the
   ``acrn-kernel`` repo when you followed the :ref:`gsg`). We've tagged a
   specific version of the ``acrn-kernel`` you should use for the sample app's
   RT VM::

      cd ~/acrn-work/acrn-kernel
      git fetch --all
      git checkout -b sample_rt acrn-tag-sample-application-rt

#. Build the preempt-rt patched kernel used by the RT VM::

      make mrproper
      cp kernel_config .config
      make olddefconfig
      make -j $(nproc) deb-pkg

   The kernel build can take 15 minutes on a fast computer but could
   take 2-3 hours depending on the performance of your development
   computer. When done, the build generates four Debian packages in the
   directory above the build root directory, as shown by this command::

      ls ../*rtvm*.deb

   You will see rtvm Debian packages for linux-headers, linux-image
   (normal and debug), and linux-libc-dev (your filenames might look a
   bit different):

   .. code-block:: console

      linux-headers-5.10.120-rt70-acrn-kernel-rtvm+_5.10.120-rt70-acrn-kernel-rtvm+-1_amd64.deb
      linux-image-5.10.120-rt70-acrn-kernel-rtvm+_5.10.120-rt70-acrn-kernel-rtvm+-1_amd64.deb
      linux-image-5.10.120-rt70-acrn-kernel-rtvm+-dbg_5.10.120-rt70-acrn-kernel-rtvm+-1_amd64.deb
      linux-libc-dev_5.10.120-rt70-acrn-kernel-rtvm+-1_amd64.deb

#. Make the RT VM image::

      cd ~/acrn-work/acrn-hypervisor/misc/sample_application/image_builder
      ./create_image.sh rt-vm

   After the script is finished, the ``rt_vm.img`` image file is created in the ``build``
   directory. The RT VM image is a configured Ubuntu image with a
   preempt-rt patched kernel used for real-time VMs.

.. rst-class:: numbered-step

Create and Configure the ACRN Scenario
**************************************

Now we turn to building the hypervisor based on the board and scenario
configuration for our sample application. We'll use the board XML file
and ACRN Configurator already on your development computer from when you
followed the :ref:`gsg`.

Use the ACRN Configurator to define a new scenario for our two VMs
and generate new launch scripts for this sample application.

1. On your development computer, launch the ACRN Configurator::

      cd ~/acrn-work
      acrn-configurator

#. Under **Start a new configuration**, confirm that the working folder is
   ``/home/acrn/acrn-work/MyConfiguration``. Click **Use This Folder**. (If
   prompted, confirm it's **OK** to overwrite an existing configuration.)

   .. image:: images/samp-image002.png
      :class: drop-shadow
      :align: center

#. Import your board configuration file as follows:

   a. In the **1. Import a board configuration file** panel, click **Browse
      for file**.

   #. Browse to ``/home/acrn/acrn-work/my_board.xml`` and click **Open**.
      Then click **Import Board File**.

   .. image:: images/samp-image003.png
      :class: drop-shadow
      :align: center

#. **Create a new scenario**: select a shared scenario type with a Service VM and
   two post-launched VMs. Click **OK**.

   .. image:: images/samp-image004.png
      :class: drop-shadow
      :align: center

   The ACRN Configurator will report some problems with the initial scenario
   configuration that we'll resolve as we make updates. (Notice the error
   indicators on the settings tabs and above the parameters tabs.) The
   ACRN Configurator does a verification of the scenario when you open a saved
   scenario and when you click on the **Save Scenario And Launch Scripts**
   button.

   .. image:: images/samp-image004a.png
      :class: drop-shadow
      :align: center

#. Select the VM0 (Service VM) tab and set the **Console virtual UART type** to
   ``COM Port 1``. Edit the **Basic Parameters > Kernel
   command-line parameters** by appending the existing parameters with ``i915.modeset=1 3``
   (to disable the GPU driver loading for the Intel GPU device).

   .. image:: images/samp-image005.png
      :class: drop-shadow
      :align: center

#. Select the VM1 tab and change the VM name to HMI_VM. Configure the **Console
   virtual UART type** to ``COM Port 1``, set the **Memory size** to ``2048``,
   and add the **physical CPU affinity** to pCPU ``0`` and ``1`` (click the
   **+** button to create the additional affinity setting), as shown below:

   .. image:: images/samp-image006.png
      :class: drop-shadow
      :align: center

#. Enable GVT-d configuration by clicking the **+** within the **PCI device
   setting** options and selecting the VGA compatible controller. Click the
   **+** button again to add the USB controller to passthrough to the HMI_VM.

   .. image:: images/samp-image007.png
      :class: drop-shadow
      :align: center

#. Configure the HMI_VM's **virtio console devices** and **virtio network
   devices** by clicking the **+** button in the section and setting the values
   as shown here (note the **Network interface name** must be ``tap0``):

   .. image:: images/samp-image008.png
      :class: drop-shadow
      :align: center

#. Configure the HMI_VM **virtio block device**. Add the absolute path of your
   ``hmi_vm.img`` on the target system (we'll copy the generated ``hmi_vm.img``
   to this directory in a later step):

   .. image:: images/samp-image009.png
      :class: drop-shadow
      :align: center

   That completes the HMI_VM settings.

#. Next, select the VM2 tab and change the **VM name** to RT_VM, change the
   **VM type** to ``Real-time``, set the **Console virtual UART type** to ``COM port 1``,
   set the **memory size** to ``1024``, set **pCPU affinity** to IDs ``2`` and ``3``, and
   check the **Real-time vCPU box** for pCPU ID 2, as shown below:

   .. image:: images/samp-image010.png
      :class: drop-shadow
      :align: center

#. Configure the **virtio console device** for the RT_VM (unlike the HMI_VM, we
   don't use a **virtio network device** for this RT_VM):

   .. image:: images/samp-image011.png
      :align: center
      :class: drop-shadow

#. Add the absolute path of your ``rt_vm.img`` on the target system (we'll copy
   the ``rt_vm.img`` file we generated earlier to this directory in a later
   step):

   .. image:: images/samp-image012.png
      :class: drop-shadow
      :align: center

#. Select the Hypervisor tab: Verify the **build type** is ``Debug``, define the
   **InterVM shared memory region** settings as shown below, adding the
   HMI_VM and RT_VM as the VMs doing the sharing of this region. (The
   missing **Virtual BDF** values will be supplied by the ACRN Configurator
   when you save the configuration.)

   .. image:: images/samp-image013.png
      :class: drop-shadow
      :align: center

   In the **Debug options**, set the **Serial console port** to
   ``/dev/ttyS0``, as shown below (this will resolve the message about the
   missing serial port configuration):

   .. image:: images/samp-image014.png
      :class: drop-shadow
      :align: center

#. Click the **Save Scenario and Launch Scripts** button to validate and save this
   configuration and launch scripts. You should see a dialog box saying the
   scenario is saved and validated, launch scripts are generated, and all files
   successfully saved. Click **OK**.

   .. image:: images/samp-image015.png
      :class: drop-shadow
      :align: center
      :width: 400px

#. We're done configuring the sample application scenario. When you saved the
   scenario, the ACRN Configurator did a re-verification of all the option
   settings and found no issues, so all the error indicators are now cleared.

   Exit the ACRN Configurator by clicking the **X** in the top right corner.

   .. image:: images/samp-image015a.png
      :class: drop-shadow
      :align: center

You can see the saved scenario and launch scripts in the working
directory:

.. code-block:: console

   $ ls MyConfiguration

   launch_user_vm_id1.sh  launch_user_vm_id2.sh  scenario.xml  myboard.board.xml

You'll see the two VM launch scripts (id1 for the HMI_VM, and id2 for
the RT_VM) and the scenario XML file for your sample application (as
well as your board XML file).

.. rst-class:: numbered-step

Build the ACRN Hypervisor and Service VM Images
***********************************************

1. On the development computer, build the ACRN hypervisor using the
   board XML and the scenario XML file we just generated::

      cd ~/acrn-work/acrn-hypervisor

      make clean
      make BOARD=~/acrn-work/MyConfiguration/my_board.board.xml SCENARIO=~/acrn-work/MyConfiguration/scenario.xml

   The build typically takes about a minute. When done, the build
   generates a Debian package in the build directory with your board and
   working folder name.

   This Debian package contains the ACRN hypervisor and tools for
   installing ACRN on the target.

#. Build the ACRN kernel for the Service VM (the sample application
   requires a newer version of the Service VM than generated in the
   Getting Started Guide, so we'll need to generate it again) using a tagged
   version of the ``acrn-kernel``::

      cd ~/acrn-work/acrn-kernel
      git fetch --all

      git checkout acrn-v3.1

      make distclean
      cp kernel_config_service_vm .config
      make olddefconfig
      make -j $(nproc) deb-pkg

   The kernel build can take 15 minutes or less on a fast computer, but
   could take 1-2 hours depending on the performance of your development
   computer. When done, the build generates four Debian packages in the
   directory above the build root directory:

   .. code-block:: console

      $ ls ../*acrn-service*.deb

      linux-headers-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
      linux-image-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
      linux-image-5.15.44-acrn-service-vm-dbg_5.15.44-acrn-service-vm-1_amd64.deb
      linux-libc-dev_5.15.44-acrn-service-vm-1_amd64.deb

.. rst-class:: numbered-step

Copy Files from the Development Computer to Your Target System
**************************************************************

1. Copy all the files generated on the development computer to the
   target system. This includes the sample application executable files,
   HMI_VM and RT_VM images, Debian packages for the Service VM and
   Hypervisor, launch scripts, and the iasl tool built following the
   Getting Started Guide. You can use ``scp`` to copy across the local network,
   or use a USB stick:

   Option 1: use ``scp`` to copy files over the local network
      Use ``scp`` to copy files from your development computer to the
      ``~/acrn-work`` directory on the target (replace the IP address used in
      this example with the target system's IP address you found earlier)::

         cd ~/acrn-work

         scp acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img \
             acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb \
             *acrn-service-vm*.deb MyConfiguration/launch_user_vm_id*.sh \
             acpica-unix-20210105/generate/unix/bin/iasl \
             acrn@10.0.0.200:~/acrn-work

      Then on the target system run these commands::

         sudo cp ~/acrn-work/iasl /usr/sbin
         sudo ln -s /usr/sbin/iasl /usr/bin/iasl

   Option 2: use a USB stick to copy files
      Because the VM image files are large, format your USB stick with a file
      system that supports files greater than 4GB: exFAT or NTFS, but not FAT32.

      Insert a USB stick into the development computer and run these commands::

         disk="/media/$USER/"$(ls /media/$USER)

         cd ~/acrn-work
         cp acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img "$disk"
         cp acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb "$disk"
         cp *acrn-service-vm*.deb "$disk"
         cp MyConfiguration/launch_user_vm_id*.sh "$disk"
         cp acpica-unix-20210105/generate/unix/bin/iasl "$disk"
         sync && sudo umount "$disk"

      Move the USB stick you just used to the target system and run
      these commands to copy the files locally::

         disk="/media/$USER/"$(ls /media/$USER)

         cp "$disk"/*_vm.img ~/acrn-work
         cp "$disk"/acrn-my_board-MyConfiguration*.deb ~/acrn-work
         cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
         cp "$disk"/launch_user_vm_id*.sh ~/acrn-work
         sudo cp "$disk"/iasl /usr/sbin/
         sudo ln -s /usr/sbin/iasl /usr/bin/iasl
         sync && sudo umount "$disk"

.. rst-class:: numbered-step

Install and Run ACRN on the Target System
*****************************************

1. On your target system, install the ACRN Debian package and ACRN
   kernel Debian packages using these commands::

      cd ~/acrn-work
      sudo apt purge acrn-hypervisor
      sudo apt install ./acrn-my_board-MyConfiguration*.deb
      sudo apt install ./*acrn-service-vm*.deb

#. Enable networking services for sharing with the HMI User VM::

      sudo systemctl enable --now systemd-networkd

#. Reboot the system::

      reboot

#. Confirm that you see the GRUB menu with the "ACRN multiboot2" entry. Select
   it and press :kbd:`Enter` to proceed to booting ACRN. (It may be
   auto-selected, in which case it will boot with this option automatically in 5
   seconds.)

   .. image:: images/samp-image016.png
      :class: drop-shadow
      :align: center

   This will boot the ACRN hypervisor and launch the Service VM.

#. Log in to the Service VM (using the target's keyboard and HDMI monitor) using
   the ``acrn`` username.

#. Find the Service VM's IP address (the first IP address shown by this command):

   .. code-block:: console

      $ hostname -I | cut -d ' ' -f 1
      10.0.0.200

#. From your development computer, ssh to your target system's Service VM
   using that IP address::

      ssh acrn@10.0.0.200

#. In that ssh session, launch the HMI_VM by using the ``launch_user_vm_id1.sh`` launch
   script::

      sudo chmod +x ~/acrn-work/launch_user_vm_id1.sh
      sudo ~/acrn-work/launch_user_vm_id1.sh

#. The launch script will start up the HMI_VM and show an Ubuntu login
   prompt in your ssh session (and a graphical login on your target's HDMI
   monitor).

   Log in to the HMI_VM as **root** user (not **acrn**) using your development
   computer's ssh session:

   .. code-block:: console
      :emphasize-lines: 1

      ubuntu login: root
      Password:
      Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.15.0-46-generic x86_64)

      . . .

      (acrn-guest)root@ubuntu:~#

#. Find the HMI_VM's IP address:

   .. code-block:: console

      (acrn-guest)root@ubuntu:~# hostname -I | cut -d ' ' -f 1
      10.0.0.100

   If no IP address is reported, run this command to request an IP address and check again::

      dhclient

#. Run the HMI VM sample application ``userApp`` (in the background)::

      sudo /root/userApp &

   and then the ``histapp.py`` application::

      sudo python3 /root/histapp.py

   At this point, the HMI_VM is running and we've started the HMI parts of
   the sample application. Next, we will launch the RT_VM and its parts of
   the sample application.

#. On your development computer, open a new terminal window and start a
   new ssh connection to your target system's Service VM::

      ssh acrn@10.0.0.200

#. In this ssh session, launch the RT_VM by using the vm_id2 launch
   script::

      sudo chmod +x ~/acrn-work/launch_user_vm_id2.sh
      sudo ~/acrn-work/launch_user_vm_id2.sh

#. The launch script will start up the RT_VM. Lots of system messages will go
   by and end with an Ubuntu login prompt.

   Log in to the RT_VM as **root** user (not **acrn**) in this ssh session:

   .. code-block:: console
      :emphasize-lines: 1

      ubuntu login: root
      Password:
      Welcome to Ubuntu 20.04.4 LTS (GNU/Linux 5.10.120-rt70-acrn-kernel-rtvm x86_64)

      . . .

      (acrn-guest)root@ubuntu:~#

#. Run cyclictest in this RT_VM (in the background)::

      cyclictest -p 80 --fifo="./data_pipe" -q &

   and then the rtApp in this RT_VM::

      sudo /root/rtApp

Now the two parts of the sample application are running:

* The RT_VM is running cyclictest, which generates latency data, and the rtApp
  sends this data via IVSHMEM to the HMI_VM.
* In the HMI_VM, the userApp receives the cyclictest data and provides it to the
  histapp.py Python application that is running a web server.

We can view this data displayed as a histogram:

Option 1: Use a browser on your development computer
   Open a web browser on your development computer to the
   HMI_VM IP address we found in an earlier step (e.g., http://10.0.0.100).

Option 2: Use a browser on the HMI VM using the target system console
   Log in to the HMI_VM on the target system's console. (If you want to
   log in as root, click on the "Not listed?" link under the username choices you
   do see and enter the root username and password.) Open the web browser to
   http://localhost.
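
You can also quickly check that the HMI_VM web server is responding; a minimal
sketch run from the development computer, assuming the ``10.0.0.100`` address
found earlier::

   curl -s http://10.0.0.100 | head -n 5

If ``curl`` prints the start of an HTML page, the sample application's web
server is up.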

Refresh the browser. You'll see a histogram graph showing the
percentage of latency time intervals reported by cyclictest. The histogram will
update every time you refresh the browser. (Notice the count of samples
increases as reported on the vertical axis label.)

.. figure:: images/samp-image018.png
   :class: drop-shadow
   :align: center

   Example Histogram Output from Cyclictest as Reported by the Sample App

The horizontal axis represents the latency values in microseconds, and the
vertical axis represents the percentage of occurrences of those values.

Congratulations
***************

That completes the building and running of this sample application. You
can view the application's code in the
``~/acrn-work/acrn-hypervisor/misc/sample_application`` directory on your
development computer (cloned from the ``acrn-hypervisor`` repo).

.. note:: As mentioned at the beginning, while this sample application uses
   cyclictest to generate data about performance latency in the RT_VM, we
   haven't done any configuration optimization in this sample to get the
   best real-time performance.
@@ -60,42 +60,53 @@ level includes the activities described in the lower levels.
.. _UP2 Shop:
   https://up-shop.org/home/270-up-squared.html

+------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
| | | .. rst-class:: |
| | | centered |
| | | |
| | | ACRN Version |
| | +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Intel Processor Family | Tested Products | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | centered | centered | centered | centered | centered | centered | centered |
| | | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 | v3.0 |
+========================+======================+===================+===================+===================+===================+===================+===================+===================+
| Tiger Lake | `Vecow SPC-7100`_ | | .. rst-class:: |
| | | | centered |
| | | | |
| | | | Maintenance |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | | centered | centered | centered |
| | | | | | | | |
| | | | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | centered | centered | centered |
| | | | | | | |
| | | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+---------------------------------------+-----------------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | `UP2-N3350`_, | centered | centered | centered |
| | | `UP2-N4200`_, | | | |
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
+------------------------+----------------------+-------------------+-------------------+---------------------------------------------------------------------------------------------------+

.. _ASRock iEPF-9010S-EY4:
   https://www.asrockind.com/en-gb/iEPF-9010S-EY4

.. _ASRock iEP-9010E:
   https://www.asrockind.com/en-gb/iEP-9010E

+------------------------+----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+-------------------+
| | | .. rst-class:: |
| | | centered |
| | | |
| | | ACRN Version |
| | +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Intel Processor Family | Tested Products | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
|
||||
| | | centered | centered | centered | centered | centered | centered | centered | centered |
|
||||
| | | | | | | | | | |
|
||||
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 | v3.0 | v3.1 |
|
||||
+========================+============================+===================+===================+===================+===================+===================+===================+===================+===================+
|
||||
| Alder Lake | | `ASRock iEPF-9010S-EY4`_,| | .. rst-class:: | .. rst-class:: |
|
||||
| | | `ASRock iEP-9010E`_ | | centered | centered |
|
||||
| | | | | |
|
||||
| | | | Release | Community |
|
||||
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
|
||||
| Tiger Lake | `Vecow SPC-7100`_ | | .. rst-class:: |
|
||||
| | | | centered |
|
||||
| | | | |
|
||||
| | | | Maintenance |
|
||||
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
|
||||
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
|
||||
| | | | | | centered | centered | centered |
|
||||
| | | | | | | | |
|
||||
| | | | | | Release | Maintenance | Community |
|
||||
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
|
||||
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
|
||||
| | | | | centered | centered | centered |
|
||||
| | | | | | | |
|
||||
| | | | | Release | Maintenance | Community |
|
||||
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-----------------------------------------------------------+
|
||||
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
|
||||
| | | | centered | centered | centered |
|
||||
| | | | | | |
|
||||
| | | | Release | Maintenance | Community |
|
||||
+------------------------+----------------------------+-------------------+-------------------+---------------------------------------+-------------------------------------------------------------------------------+
|
||||
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
|
||||
| | | `UP2-N3350`_, | centered | centered | centered |
|
||||
| | | `UP2-N4200`_, | | | |
|
||||
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
|
||||
+------------------------+----------------------------+-------------------+-------------------+-----------------------------------------------------------------------------------------------------------------------+

* **Release**: New ACRN features are complete and tested for the listed product.
  This product is recommended for this ACRN version. Support for older products

doc/release_notes/release_notes_3.1.rst (new file)
@ -0,0 +1,191 @@

.. _release_notes_3.1:

ACRN v3.1 (Sep 2022) Draft
##########################

We are pleased to announce the release of the Project ACRN hypervisor
version 3.1.

ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open-source platform. See the
:ref:`introduction` for more information.

All project ACRN source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can download this source code either as a zip or
tar.gz file (see the `ACRN v3.1 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v3.1>`_) or
use Git ``clone`` and ``checkout`` commands::

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v3.1

The project's online technical documentation is also tagged to
correspond with a specific release: generated v3.1 documents can be
found at https://projectacrn.github.io/3.1/. Documentation for the
latest development branch is found at https://projectacrn.github.io/latest/.

ACRN v3.1 requires Ubuntu 20.04. Follow the instructions in the
:ref:`gsg` to get started with ACRN.


What's New in v3.1
******************

More ACRN Configuration Improvements
   Release v3.0 featured a new ACRN Configurator UI tool with a more intuitive
   design and workflow that simplifies getting the setup for the ACRN hypervisor
   right. With this v3.1 release, we've continued making improvements to the
   Configurator, including more comprehensive error checking with more
   developer-friendly messages. You'll also see additional advanced
   configuration settings for tuning real-time performance, including Cache
   Allocation Technology (CAT) and vCPU affinity. Read more in the
   :ref:`acrn_configurator_tool` and :ref:`scenario-config-options` documents.

   If you have feedback on this, or other aspects of ACRN, please share it on
   the `ACRN users mailing list <https://lists.projectacrn.org/g/acrn-users>`_.

   As with the v3.0 release, we've simplified installation of the Configurator
   by providing a Debian package that you can download from the `ACRN v3.1 tag
   assets
   <https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.1/acrn-configurator-3.1.deb>`_
   and install. See the :ref:`gsg` for more information.

Improved Board Inspector Collection and Reporting
   You run the ACRN Board Inspector tool to collect information about your
   target system's processors, memory, devices, and more. The generated board
   XML file is used by the ACRN Configurator to determine which ACRN
   configuration options are possible, as well as possible values for target
   system resources. The v3.1 Board Inspector has improved scanning and provides
   more messages about potential issues or limitations of your target system
   that could impact ACRN configuration options.

   The Board Inspector is updated to probe beyond CPUID
   information for Cache Allocation Technology (CAT) support and also detects
   availability of L3 CAT by accessing the CAT MSRs directly. Read more in
   :ref:`board_inspector_tool`.
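
   As a point of reference, the CPUID side of that detection can be seen with a
   few lines of C. This is a hand-written sketch for illustration only (it is
   not the Board Inspector's code, which additionally probes the CAT MSRs
   directly):

   .. code-block:: c

      /* Enumerate CAT support via CPUID leaf 0x10 (Intel RDT allocation). */
      #include <stdio.h>
      #include <cpuid.h>

      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          /* Subleaf 0: EBX reports which allocation resource IDs exist. */
          if (!__get_cpuid_count(0x10, 0, &eax, &ebx, &ecx, &edx))
              return 1;
          printf("L3 CAT: %s, L2 CAT: %s\n",
                 (ebx & (1u << 1)) ? "yes" : "no",
                 (ebx & (1u << 2)) ? "yes" : "no");

          if (ebx & (1u << 1)) {
              /* Subleaf 1 describes L3 CAT: mask length and CLOS count. */
              __get_cpuid_count(0x10, 1, &eax, &ebx, &ecx, &edx);
              printf("L3 mask bits: %u, CLOS count: %u\n",
                     (eax & 0x1fu) + 1u, (edx & 0xffffu) + 1u);
          }
          return 0;
      }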

Sample Application with Two Post-Launched VMs
   With this v3.1 release, we provide a follow-on :ref:`GSG_sample_app` to the
   :ref:`gsg`. This sample application shows how to create two VMs that are
   launched on your target system running ACRN. One VM is a real-time VM running
   `cyclictest
   <https://wiki.linuxfoundation.org/realtime/documentation/howto/tools/cyclictest/start>`__,
   an open-source application commonly used to measure latencies in real-time
   systems. This real-time VM (RT_VM) uses inter-VM shared memory (IVSHMEM) to
   send data to a second Human-Machine Interface VM (HMI_VM) that formats and
   presents the collected data as a histogram on a web page shown by a browser.
   This guide shows how to configure, create, and launch the two VM images that
   make up this application. Full code for the sample application is provided in
   the acrn-hypervisor GitHub repo :acrn_file:`misc/sample_application`.
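
   For a sense of the data path, the following fragment sketches how a guest
   application might touch an ivshmem region through a UIO driver. It is
   illustrative only and is not the sample application's code; the device node
   (``/dev/uio0`` here) and region size are placeholders that depend on your
   ivshmem configuration:

   .. code-block:: c

      #include <fcntl.h>
      #include <stdio.h>
      #include <sys/mman.h>
      #include <unistd.h>

      int main(void)
      {
          int fd = open("/dev/uio0", O_RDWR);  /* hypothetical device node */
          if (fd < 0)
              return 1;

          /* Map the shared-memory region; 4096 is a placeholder size. */
          void *shm = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                           MAP_SHARED, fd, 0);
          if (shm == MAP_FAILED)
              return 1;

          /* The RT VM writes latency samples here; the HMI VM reads them. */
          printf("first word: %u\n", *(volatile unsigned int *)shm);

          munmap(shm, 4096);
          close(fd);
          return 0;
      }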

Multiple-Display Support for VMs
   The virtio-gpu mechanism is enhanced to support VMs with multiple displays.
   TODO: add reference to tutorial

Improved TSC Frequency Reporting
   The hypervisor now reports the TSC frequency in kHz so that VMs can get that
   number without calibrating against a high-precision timer.
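
   For example, a guest could read the reported value through the hypervisor
   CPUID leaves rather than calibrating. A minimal sketch, assuming leaf
   ``0x40000010`` returns the TSC frequency in kHz in ``EAX`` (check the
   hypervisor CPUID interface of your ACRN version before relying on this):

   .. code-block:: c

      #include <stdio.h>
      #include <cpuid.h>

      int main(void)
      {
          unsigned int eax, ebx, ecx, edx;

          /* __cpuid is used directly because __get_cpuid rejects
           * hypervisor leaves above the reported standard maximum. */
          __cpuid(0x40000010, eax, ebx, ecx, edx);
          printf("TSC frequency: %u kHz\n", eax);
          return 0;
      }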

Upgrading to v3.1 from Previous Releases
****************************************

As with the v3.0 release, the Configurator UI tool eliminates the need to
manually edit XML files. While working on this improved Configurator, we've
also made many adjustments to available options in the underlying XML files,
including merging the previous scenario and launch XML files into a combined
scenario XML file. The board XML file generated by the v3.1 Board Inspector
tool includes more information about the target system that is needed by the
v3.1 Configurator.

We recommend you generate a new board XML for your target system with the v3.1
Board Inspector and use the v3.1 Configurator to generate a new
scenario XML file and launch scripts. Board XML and scenario XML files
created by previous ACRN versions will not work with the v3.1 ACRN hypervisor
build process and could produce unexpected errors during the build.

Given the scope of changes for the v3.1 release, we have recommendations for how
to upgrade from prior ACRN versions:

1. Start fresh from our :ref:`gsg`. This is the best way to ensure you have a
   v3.1-ready board XML file from your target system and generate a new scenario
   XML and launch scripts from the new ACRN Configurator that are consistent and
   will work for the v3.1 build system.
#. Use the :ref:`upgrade tool <upgrading_configuration>` to attempt upgrading
   configuration files that worked with a release before v3.1. See
   :ref:`upgrading_configuration` for details.
#. Manually edit your prior scenario XML and launch XML files to make them
   compatible with v3.1. This is not our recommended approach.

Here are some additional details about upgrading to the v3.1 release.

Generate New Board XML
======================

Board XML files, generated by the ACRN Board Inspector, contain board
information that is essential for building the ACRN hypervisor and setting up
User VMs. Compared to previous versions, ACRN v3.1 adds the following
information to the board XML file for supporting new features and fixes:

- <TODO: topic and PR reference>

See the :ref:`board_inspector_tool` documentation for a complete list of steps
to install and run the tool.

Update Configuration Options
============================

<TO DO>

As part of the developer experience improvements to ACRN configuration, the
following XML elements were refined in the scenario XML file:

- <TO DO>

The following elements were added to scenario XML files:

- <TO DO>

The following elements were removed:

- <TO DO>

See the :ref:`scenario-config-options` documentation for details about all the
available configuration options in the new Configurator.


Document Updates
****************

Sample Application User Guide
   The new :ref:`GSG_sample_app` documentation shows how to configure, build,
   and run a practical application with a Real-Time VM and Human-Machine
   Interface VM that communicate using inter-VM shared memory.

We've also made edits throughout the documentation to improve clarity,
formatting, and presentation. We started updating feature-enabling tutorials
based on the new Configurator, and will continue updating them after the v3.1
release (in the `latest documentation <https://docs.projectacrn.org>`_). Here
are some of the more significant updates:

.. rst-class:: rst-columns2

* :ref:`gsg`
* :ref:`GSG_sample_app`
* :ref:`rdt_configuration`
* :ref:`acrn-dm_parameters-and-launch-script`
* :ref:`scenario-config-options`


Fixed Issues Details
********************

.. comment example item
   - :acrn-issue:`5626` - Host Call Trace once detected


Known Issues
************

doc/scripts/changed-docs.awk (new file)
@ -0,0 +1,40 @@
# Parse the git diff --stat output and create a reST list of
# (significantly) changed files, for example:
#
#  doc/develop.rst                                    |   2 +
#  doc/developer-guides/contribute_guidelines.rst     | 116 +++-
#  doc/developer-guides/hld/hld-devicemodel.rst       |   8 +-
#  doc/developer-guides/hld/hld-hypervisor.rst        |   1 +
#  doc/developer-guides/hld/hv-rdt.rst                | 126 ++--
#  doc/developer-guides/hld/ivshmem-hld.rst           |  70 ++
#  doc/developer-guides/hld/mmio-dev-passthrough.rst  |  40 ++
#  doc/developer-guides/hld/virtio-net.rst            |  42 +-
#  doc/developer-guides/hld/vuart-virt-hld.rst        |   2 +-
#  doc/getting-started/building-from-source.rst       |  39 +-


function getLabel(filename)
{
   label="Label not found in " filename
   while ((getline line < filename) > 0) {
      # looking for the first occurrence of ".. _label name here:"
      if (match(line, /^\.\. _([^:]+):/, a) != 0) {
         label=a[1]
         break
      }
   }
   close(filename)
   return label
}

BEGIN {
   if (changes < 1) {changes=10}
   print "Showing docs in master branch with " changes " or more changes."
}

# print the label for files with at least the specified number of changed lines
$3 >= changes {
   label=getLabel($1)
   if (label !~ /^Label not/ ) { print "* :ref:`" label "`" }
   else { print "* " substr($1,5) " was deleted." }
}

doc/scripts/changed-docs.sh (new file)
@ -0,0 +1,27 @@
#!/bin/bash
# Create a reST :ref: list of changed documents for the release notes,
# comparing the specified tag with the master branch.

if [ -z "$1" ]; then
   echo
   echo Create a reST :ref: list of changed documents for the release notes
   echo comparing the specified tag with the master branch
   echo
   echo Usage:
   echo \ \ changed-docs.sh upstream/release_3.0 [changed amount]
   echo
   echo \ \ where the optional [changed amount] \(default 10\) is the number
   echo \ \ of lines added/modified/deleted before showing up in this report.
   echo
elif [ "$(basename $(pwd))" != "acrn-hypervisor" ]; then
   echo
   echo Script must be run in the acrn-hypervisor directory and not $(basename $(pwd))
else
   dir=$(dirname "$0")

   git diff --stat $(git rev-parse "$1") $(git rev-parse master) | \
      grep '\.rst' | \
      awk -v changes="$2" -f "$dir"/changed-docs.awk
fi

doc/static/acrn-custom.css
@ -303,6 +303,7 @@ div.numbered-step h2::before {
    font-weight: bold;
    line-height: 1.6em;
    margin-right: 5px;
    margin-left: -1.8em;
    text-align: center;
    width: 1.6em;}

@ -15,6 +15,7 @@ hypervisor, the Service VM, and a User VM on a supported Intel target platform.

   reference/hardware
   getting-started/overview_dev
   getting-started/getting-started
   getting-started/sample-app

After getting familiar with ACRN development, check out these
:ref:`develop_acrn` for information about more-advanced scenarios and enabling

@ -1,196 +1,148 @@
.. _cpu_sharing:

Enable CPU Sharing in ACRN
##########################
Enable CPU Sharing
##################

Introduction
************
About CPU Sharing
*****************

The goal of CPU Sharing is to fully utilize the physical CPU resource to
support more virtual machines. ACRN only supports 1 to 1
mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs).
Because of the lack of CPU sharing ability, the number of VMs is
limited. To support CPU Sharing, we have introduced a scheduling
framework and implemented two simple small scheduling algorithms to
satisfy embedded device requirements. Note that CPU Sharing is not
available for VMs with local APIC passthrough (``--lapic_pt`` option).
CPU sharing allows the virtual CPUs (vCPUs) of different VMs to run on the same
physical CPU, just like how multiple processes run concurrently on a single CPU.
Internally the hypervisor adopts time-slicing scheduling and periodically
switches among those vCPUs.

Scheduling Framework
********************
This feature can help improve overall CPU utilization when the VMs are not fully
loaded. However, sharing a physical CPU among multiple vCPUs increases their
worst-case response latency, and thus is not suitable for vCPUs running
latency-sensitive workloads.

To satisfy the modularization design concept, the scheduling framework
layer isolates the vCPU layer and scheduler algorithm. It does not have
a vCPU concept so it is only aware of the thread object instance. The
thread object state machine is maintained in the framework. The
framework abstracts the scheduler algorithm object, so this architecture
can easily extend to new scheduler algorithms.
Dependencies and Constraints
****************************

.. figure:: images/cpu_sharing_framework.png
Consider the following dependencies and constraints:

* CPU sharing is a hypervisor feature that is hardware and OS neutral.

* CPU sharing is not available for real-time VMs or for VMs with local APIC
  passthrough (via the LAPIC passthrough option in the ACRN Configurator or via
  the Device Model ``--lapic_pt`` option).

* You can choose the scheduler the hypervisor uses. A scheduler is an algorithm
  for determining the priority of VMs running on a shared virtual CPU. ACRN
  supports the following schedulers:

  - Borrowed Virtual Time (BVT), which fairly allocates time slices to multiple
    vCPUs pinned to the same physical CPU. The BVT scheduler is the default and
    is sufficient for most use cases.

  - No-Operation (NOOP), which runs at most one vCPU on each physical CPU.

  - Priority based, which supports vCPU scheduling based on their static
    priorities defined in the scenario configuration. A vCPU can be running only
    if there is no higher-priority vCPU running on the same physical CPU.

Configuration Overview
**********************

You use the :ref:`acrn_configurator_tool` to enable CPU sharing by assigning the
same set of physical CPUs to multiple VMs and selecting a scheduler. The
following documentation is a general overview of the configuration process.

To assign the same set of physical CPUs to multiple VMs, set the following
parameters in each VM's **Basic Parameters**:

* VM type: Standard (real-time VMs don't support CPU sharing)
* Physical CPU affinity > pCPU ID: Select a physical CPU by its core ID.
* To add another physical CPU, click **+** on the right side of an existing CPU.
  Or click **-** to delete a CPU.
* Repeat the process to assign the same physical CPUs to another VM.

.. image:: images/configurator-cpusharing-affinity.png
   :align: center
   :class: drop-shadow

The below diagram shows that the vCPU layer invokes APIs provided by the
scheduling framework for vCPU scheduling. The scheduling framework also
provides some APIs for schedulers. The scheduler mainly implements some
callbacks in an ``acrn_scheduler`` instance for the scheduling framework.
Scheduling initialization is invoked in the hardware management layer.
To select a scheduler, go to **Hypervisor Global Settings > Advanced Parameters
> Virtual CPU scheduler** and select a scheduler from the list.

.. figure:: images/cpu_sharing_api.png
.. image:: images/configurator-cpusharing-scheduler.png
   :align: center
   :class: drop-shadow

CPU Affinity
************
Example Configuration
*********************

We do not support vCPU migration; the assignment of vCPU mapping to
pCPU is fixed at the time the VM is launched. The statically configured
cpu_affinity in the VM configuration defines a superset of pCPUs that
the VM is allowed to run on. One bit in this bitmap indicates that one pCPU
could be assigned to this VM, and the bit number is the pCPU ID. A pre-launched
VM is launched on exactly the number of pCPUs assigned in
this bitmap. The vCPU to pCPU mapping is implicitly indicated: vCPU0 maps
to the pCPU with the lowest pCPU ID, vCPU1 maps to the second-lowest pCPU ID, and
so on.
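
A minimal sketch of that mapping rule (illustrative only, not the hypervisor's
code): vCPU ``i`` is assigned the ``i``-th lowest set bit of the configured
``cpu_affinity`` bitmap.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Return the pCPU ID for a vCPU ID, or -1 if the bitmap has too few
    * bits set. */
   static int vcpu_to_pcpu(uint64_t cpu_affinity, int vcpu_id)
   {
       for (int pcpu = 0; pcpu < 64; pcpu++) {
           if (cpu_affinity & (1ULL << pcpu)) {
               if (vcpu_id-- == 0)
                   return pcpu;
           }
       }
       return -1;
   }

   int main(void)
   {
       uint64_t affinity = 0x3;  /* pCPU0 and pCPU1 */
       printf("vCPU0 -> pCPU%d\n", vcpu_to_pcpu(affinity, 0));
       printf("vCPU1 -> pCPU%d\n", vcpu_to_pcpu(affinity, 1));
       return 0;
   }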

The following steps show how to enable and verify CPU sharing between two User
VMs. The example extends the information provided in the :ref:`gsg`.

For post-launched VMs, acrn-dm could choose to launch a subset of pCPUs that
are defined in cpu_affinity by specifying the assigned Service VM vCPU's lapic_id
(``--cpu_affinity`` option). But it can't assign any pCPUs that are not
included in the VM's cpu_affinity.
#. In the ACRN Configurator, create a shared scenario with a Service VM and two
   post-launched User VMs.

Here is an example for affinity:
#. For the first User VM, set the following parameters in the VM's **Basic
   Parameters**:

- VM0: 2 vCPUs, pinned to pCPU0 and pCPU1
- VM1: 2 vCPUs, pinned to pCPU0 and pCPU1
- VM2: 2 vCPUs, pinned to pCPU2 and pCPU3
   * VM name: This example uses ``POST_STD_VM1``.
   * VM type: ``Standard``
   * Physical CPU affinity: Select pCPU ID ``1``, then click **+** and select
     pCPU ID ``2`` to assign the VM to CPU cores 1 and 2.

.. figure:: images/cpu_sharing_affinity.png
   :align: center
   .. image:: images/configurator-cpusharing-vm1.png
      :align: center
      :class: drop-shadow

Thread Object State
*******************
   .. image:: images/configurator-cpusharing-affinity.png
      :align: center
      :class: drop-shadow

The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.
#. For the second User VM, set the following parameters in the VM's **Basic
   Parameters**:

.. figure:: images/cpu_sharing_state.png
   :align: center
   * VM name: This example uses ``POST_STD_VM2``.
   * VM type: ``Standard``
   * Physical CPU affinity: Select pCPU ID ``1`` and ``2``. The pCPU IDs must be
     the same as those of ``POST_STD_VM1`` to use the CPU sharing function.

After a new vCPU is created, the corresponding thread object is
initiated. The vCPU layer invokes a wakeup operation. After wakeup, the
state for the new thread object is set to RUNNABLE, and then follows its
algorithm to determine whether or not to preempt the current running
thread object. If yes, it turns to the RUNNING state. In RUNNING state,
the thread object may turn back to the RUNNABLE state when it runs out
of its timeslice, or it might yield the pCPU by itself, or be preempted.
The thread object under RUNNING state may trigger sleep to transfer to the
BLOCKED state.
#. In **Hypervisor Global Settings > Advanced Parameters > Virtual CPU
   scheduler**, confirm that the default scheduler, Borrowed Virtual Time, is
   selected.

Scheduler
*********
#. Save the scenario and launch script.

The below block diagram shows the basic concept for the scheduler. There
are four kinds of schedulers in the diagram: the NOOP (No-Operation) scheduler,
the IO sensitive Round Robin scheduler, the priority based scheduler, and
the BVT (Borrowed Virtual Time) scheduler. By default, BVT is used.
#. Build ACRN, copy all the necessary files from the development computer to
   the target system, and launch the Service VM and post-launched User VMs.

#. In the :ref:`ACRN hypervisor shell <acrnshell>`, check the CPU sharing via
   the ``vcpu_list`` command. For example:

- **No-Operation scheduler**:
   .. code-block:: none

The NOOP (No-operation) scheduler has the same policy as the original
1-1 mapping previously used; every pCPU can run only two thread objects:
one is the idle thread, and another is the thread of the assigned vCPU.
With this scheduler, vCPU works in Work-Conserving mode, which always
tries to keep resources busy, and will run once it is ready. The idle thread
can run when the vCPU thread is blocked.
      ACRN:\>vcpu_list

- **Priority based scheduler**:
      VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
      =====    =======    =======    =========    ==========    ============
        0         0          0       PRIMARY      Running       RUNNABLE
        0         1          1       SECONDARY    Running       BLOCKED
        0         2          2       SECONDARY    Running       BLOCKED
        0         3          3       SECONDARY    Running       BLOCKED
        1         1          0       PRIMARY      Running       RUNNING
        1         2          1       SECONDARY    Running       BLOCKED
        2         1          0       PRIMARY      Running       BLOCKED
        2         2          1       SECONDARY    Running       RUNNING

The priority based scheduler can support vCPU scheduling based on their
pre-configured priorities. A vCPU can be running only if there is no
higher-priority vCPU running on the same pCPU. For example, in some cases,
we have two VMs, one VM can be configured to use **PRIO_LOW** and the
other one to use **PRIO_HIGH**. The vCPU of the **PRIO_LOW** VM can
only be running when the vCPU of the **PRIO_HIGH** VM voluntarily relinquishes
usage of the pCPU.
The VM ID, PCPU ID, VCPU ID, and THREAD STATE columns provide information to
help you check CPU sharing. In the VM ID column, VM 0 is the Service VM, VM 1
is POST_STD_VM1, and VM 2 is POST_STD_VM2. The output shows that ACRN
assigned all physical CPUs (pCPUs) to VM 0 as expected. It also confirms that
you assigned pCPUs 1 and 2 to VMs 1 and 2 (via the ACRN Configurator). vCPU 1
of VM 0 and vCPU 0 of VM 1 and VM 2 are running on the same physical CPU;
they are sharing the physical CPU execution time. The thread state column
shows the current states of the vCPUs. The BLOCKED state can occur for
different reasons; most likely the vCPU is waiting for an I/O operation to be
completed. Once it is done, the state will change to RUNNABLE. When this vCPU
gets its pCPU execution time, its state will change to RUNNING, then the vCPU
is actually running on the pCPU.

- **Borrowed Virtual Time scheduler**:

  BVT (Borrowed Virtual Time) is a virtual-time-based scheduling
  algorithm; it dispatches the runnable thread with the earliest
  effective virtual time. (A minimal sketch of this rule follows the
  list below.)

  - **Virtual time**: The thread with the earliest effective virtual
    time (EVT) is dispatched first.
  - **Warp**: a latency-sensitive thread is allowed to warp back in
    virtual time to make it appear earlier. It borrows virtual time from
    its future CPU allocation and thus does not disrupt long-term CPU
    sharing.
  - **MCU**: minimum charging unit; the scheduler accounts for running time
    in units of MCU.
  - **Weighted fair sharing**: each runnable thread receives a share of
    the processor in proportion to its weight over a scheduling
    window of some number of MCUs.
  - **C**: context switch allowance. Real time by which the current
    thread is allowed to advance beyond another runnable thread with
    equal claim on the CPU. C is similar to the quantum in conventional
    timesharing.
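
  Putting the definitions together, the core pick-next rule can be sketched as
  follows. This is a hedged illustration after the published BVT algorithm,
  not ACRN's implementation, and it omits the context switch allowance ``C``:

  .. code-block:: c

     #include <stdbool.h>
     #include <stddef.h>
     #include <stdint.h>

     struct bvt_thread {
         int64_t  avt;      /* actual virtual time, in MCUs */
         int64_t  warp;     /* warp credit for latency-sensitive threads */
         bool     warped;   /* is the warp currently applied? */
         uint32_t weight;   /* CPU share is proportional to this */
         bool     runnable;
     };

     /* Effective virtual time: warped threads appear earlier. */
     static int64_t evt(const struct bvt_thread *t)
     {
         return t->avt - (t->warped ? t->warp : 0);
     }

     /* Dispatch the runnable thread with the earliest EVT. */
     static struct bvt_thread *pick_next(struct bvt_thread *ts, int n)
     {
         struct bvt_thread *best = NULL;
         for (int i = 0; i < n; i++) {
             if (ts[i].runnable && (best == NULL || evt(&ts[i]) < evt(best)))
                 best = &ts[i];
         }
         return best;
     }

     /* Charge a thread that ran for `mcu` minimum charging units:
      * its AVT advances inversely to the thread's weight. */
     static void charge(struct bvt_thread *t, uint32_t mcu)
     {
         t->avt += (int64_t)mcu / (t->weight ? t->weight : 1);
     }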

Scheduler configuration

* The scheduler used at runtime is defined in the scenario XML file
  via the :option:`hv.FEATURES.SCHEDULER` option. The default scheduler
  is **SCHED_BVT**. Use the :ref:`ACRN Configurator tool <acrn_configurator_tool>`
  if you want to change this scenario option value.

  The default scheduler is **SCHED_BVT**.
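
  For reference, the saved choice appears in the scenario XML roughly as
  follows (a hand-written illustration; change the value through the
  Configurator rather than editing the file directly):

  .. code-block:: xml

     <FEATURES>
         <SCHEDULER>SCHED_BVT</SCHEDULER>
     </FEATURES>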

* The cpu_affinity could be configured by one of these approaches:

  - Without the ``cpu_affinity`` option in acrn-dm. This launches the User VM
    on all the pCPUs that are included in the statically configured cpu_affinity.

  - With the ``cpu_affinity`` option in acrn-dm. This launches the User VM on
    a subset of the configured cpu_affinity pCPUs.

  For example, assign physical CPUs 0 and 1 to this VM::

     --cpu_affinity 0,1


Example
*******

Use the following settings to support this configuration in the shared scenario:

+---------+---------+---------+---------+
| pCPU0   | pCPU1   | pCPU2   | pCPU3   |
+=========+=========+=========+=========+
| Service VM + WaaG | RT Linux          |
+-------------------+-------------------+

- Offline pCPU2-3 in the Service VM.

- Launch the guests:

  - Launch WaaG with ``--cpu_affinity 0,1``
  - Launch RT with ``--cpu_affinity 2,3``
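
For example, assuming the Service VM runs Linux, the pCPUs can be taken offline
through sysfs (a sketch; your setup may require additional ACRN-specific
steps)::

   echo 0 > /sys/devices/system/cpu/cpu2/online
   echo 0 > /sys/devices/system/cpu/cpu3/online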

After you start all VMs, check the CPU affinities from the hypervisor
console with the ``vcpu_list`` command:

.. code-block:: none

   ACRN:\>vcpu_list

   VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE    THREAD STATE
   =====    =======    =======    =========    ==========    ============
     0         0          0       PRIMARY      Running       RUNNING
     0         1          1       SECONDARY    Running       RUNNING
     1         0          0       PRIMARY      Running       RUNNABLE
     1         1          1       SECONDARY    Running       BLOCKED
     2         2          0       PRIMARY      Running       RUNNING
     2         3          1       SECONDARY    Running       RUNNING

Note: the THREAD STATE values are instantaneous; they can change at any time.

Learn More
**********

For details on the ACRN CPU virtualization high-level design, see
:ref:`hv-cpu-virt`.

@ -72,12 +72,6 @@ For the shared memory region:
   blank. If the field is blank, the tool provides an address when the
   configuration is saved.

.. note::

   The release v3.0 ACRN Configurator has an issue where you need to save the
   configuration twice to see the generated BDF address in the shared memory
   setting. (:acrn-issue:`7831`)

#. Add more VMs to the shared memory region by clicking **+** on the right
   side of an existing VM. Or click **-** to delete a VM.

@ -26,6 +26,9 @@ Consider the following dependencies and constraints:

* When a device is assigned to a VM via GVT-d, no other VMs can use it.

* For ASRock systems, disable the BIOS setting "Above 4G Decoding" (under
  Advanced Menu > SA Configuration) to enable the GVT-d feature.

.. note:: After GVT-d is enabled, have either a serial port
   or SSH session open in the Service VM to interact with it.

doc/tutorials/images/configurator-cpusharing-affinity.png (new image)
doc/tutorials/images/configurator-cpusharing-scheduler.png (new image)
doc/tutorials/images/configurator-cpusharing-vm1.png (new image)

@ -164,12 +164,12 @@ The table title shows important information:

The above example shows an L2 cache table. VMs assigned to any CPU cores 2-6 can
have cache allocated to them.

The table's columns show the names of all VMs that are assigned to the CPU cores
noted in the table title, as well as their vCPU IDs. The table categorizes the
vCPUs as either standard or real-time. The real-time vCPUs are those that are
set as real-time in the VM's parameters. All other vCPUs are considered
standard. The above example shows one real-time vCPU (VM1 vCPU 2) and two
standard vCPUs (VM0 vCPU 2 and 6).
The table's left-most column shows the names of all VMs that are assigned to the
CPU cores noted in the table title, as well as their vCPU IDs. The table
categorizes the vCPUs as either standard or real-time. The real-time vCPUs are
those that are set as real-time in the VM's parameters. All other vCPUs are
considered standard. The above example shows one real-time vCPU (VM1 vCPU 2) and
two standard vCPUs (VM0 vCPU 2 and 6).

.. note::

@ -189,8 +189,9 @@ Tip: Disable the software workaround for Machine Check Error on Page Size Change

   By default, the software workaround for Machine Check Error on Page Size
   Change is conditionally applied to the models that may be affected by the
   issue. However, the software workaround has a negative impact on
   performance. If all guest OS kernels are trusted, the
   :option:`hv.FEATURES.MCE_ON_PSC_DISABLED` option could be set for performance.
   performance. If all guest OS kernels are trusted, you can disable the
   software workaround (by deselecting the :term:`Enable MCE workaround` option
   in the ACRN Configurator tool) for performance.

.. note::
   The tips for preempt-RT Linux are mostly applicable to Linux-based RTOSes as well, such as Xenomai.

@ -18,8 +18,8 @@ The ACRN hypervisor can boot from the `multiboot protocol
with the multiboot protocol, the multiboot2 protocol adds UEFI support.

The multiboot protocol is supported by the ACRN hypervisor natively. The
multiboot2 protocol is supported when :option:`hv.FEATURES.MULTIBOOT2` is
enabled in the scenario configuration. The :option:`hv.FEATURES.MULTIBOOT2` is
multiboot2 protocol is supported when the :term:`Multiboot2` option is
enabled in the scenario configuration. The :term:`Multiboot2` option is
enabled by default. To load the hypervisor with the multiboot protocol, run the
GRUB ``multiboot`` command. To load the hypervisor with the multiboot2 protocol,
run the ``multiboot2`` command. To load a VM kernel or ramdisk, run the
@ -29,13 +29,14 @@ for the multiboot2 protocol.

The ACRN hypervisor binary is built with two formats: ``acrn.32.out`` in
ELF format and ``acrn.bin`` in RAW format. The GRUB ``multiboot``
command supports ELF format only and does not support binary relocation,
even if :option:`hv.FEATURES.RELOC` is set. The GRUB ``multiboot2``
command supports
ELF format when :option:`hv.FEATURES.RELOC` is not set, or RAW format when
:option:`hv.FEATURES.RELOC` is set.
even if the :term:`Hypervisor relocation` option is set in the scenario
configuration. The GRUB ``multiboot2`` command supports
ELF format when the :term:`Hypervisor relocation` option is not set, or RAW
format when the :term:`Hypervisor relocation` option is set.
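
For reference, a GRUB menu entry using the multiboot2 protocol looks roughly
like this (a sketch only; the partition search, kernel path, and module tag
depend on your installation)::

   menuentry 'ACRN hypervisor' {
      search --no-floppy --fs-uuid --set <UUID of the boot partition>
      echo 'Loading ACRN hypervisor ...'
      multiboot2 /boot/acrn.bin
      module2 /boot/bzImage Linux_bzImage
   }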

.. note::
   * :option:`hv.FEATURES.RELOC` is set by default, so use ``acrn.32.out`` in
   * The :term:`Hypervisor relocation` option is set by default, so use
     ``acrn.32.out`` in
     the multiboot protocol and ``acrn.bin`` in the multiboot2 protocol.

   * Per ACPI specification, the RSDP pointer is described in the EFI System

@ -75,20 +75,14 @@ For the connection:
   leave it blank. If the field is blank, the tool provides an address when the
   configuration is saved.

.. note::

   The release v3.0 ACRN Configurator has an issue where you need to save the
   configuration twice to see the generated I/O or BDF address in the vUART
   setting. (:acrn-issue:`7831`)

To add another connection, click **+** on the right side of an existing
connection. Or click **-** to delete a connection.

.. note::

   The release v3.0 ACRN Configurator assigns COM2 (I/O address ``0x2F8``) to
   The release v3.0+ ACRN Configurator assigns COM2 (I/O address ``0x2F8``) to
   the S5 feature. A conflict will occur if you assign ``0x2F8`` to another
   connection. In our example, we'll use COM3 (I/O address ``0x3F8``).
   connection. In our example, we'll use COM3 (I/O address ``0x3E8``).

.. image:: images/configurator-vuartconn01.png
   :align: center

@ -206,12 +206,6 @@ relevant for configuring or debugging ACRN-based systems.

A ``memmap`` parameter is also required to reserve the specified memory
from the guest VM.

If hypervisor relocation is disabled, verify that
:option:`hv.MEMORY.HV_RAM_START` and the hypervisor RAM size computed by
the linker do not overlap with the hypervisor's reserved buffer space allocated
in the Service VM. Service VM GPA and HPA are a 1:1 mapping.

If hypervisor relocation is enabled, reserve the memory below 256MB,
since the hypervisor could be relocated anywhere between 256MB and 4GB.

@ -11,8 +11,6 @@ The ``acrnctl`` tool helps users create, delete, launch, and stop a User
VM (aka UOS). The tool runs under the Service VM, and User VMs should be based
on ``acrn-dm``. The daemon for acrn-manager is `acrnd`_.

Usage
=====

@ -32,7 +30,7 @@ You can see the available ``acrnctl`` commands by running:
   Use acrnctl [cmd] help for details

.. note::
   You must run ``acrnctl`` with root privileges, and make sure ``acrnd``
   You must run ``acrnctl`` with root privileges, and make sure the ``acrnd``
   service has been started before running ``acrnctl``.

Here are some usage examples:

@ -54,7 +52,7 @@ container::

   # acrnctl add launch_uos.sh -C

.. note:: You can download an :acrn_raw:`example launch_uos.sh script
   <devicemodel/samples/nuc/launch_uos.sh>`
   <misc/config_tools/data/sample_launch_scripts/nuc/launch_uos.sh>`
   that supports the ``-C`` (``run_container`` function) option.

Note that the launch script must only launch one User VM instance.

@ -87,8 +85,9 @@ Use the ``list`` command to display VMs and their state:

Start VM
========

If a VM is in a ``stopped`` state, you can start it with the ``start``
command:
If a VM is in a stopped state, you can start it with the ``start`` command. The
``acrnd`` service automatically loads the launch script under
``/usr/share/acrn/conf/add/`` to boot the VM.

.. code-block:: none

@ -97,7 +96,7 @@ command:

Stop VM
=======

Use the ``stop`` command to stop one or more running VM:
Use the ``stop`` command to stop one or more running VMs:

.. code-block:: none