doc: Update vUART tutorial

- Update overview, dependencies and constraints
- Update to match Configurator UI instead of manually editing XML files
- Remove architectural details and instead point to high-level design documentation

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
This commit is contained in:
Reyes, Amy 2022-06-22 14:24:59 -07:00 committed by David Kinder
parent 452128a1ba
commit ad08ad87ea
4 changed files with 99 additions and 329 deletions

(Three images updated — binary files not shown. Sizes after the change: 60 KiB, 6.7 KiB, and 18 KiB.)


@@ -3,386 +3,156 @@
Enable vUART Configurations
###########################
About vUART
===========

A virtual universal asynchronous receiver/transmitter (vUART) can be a console
port or a communication port.

A vUART can exchange data between the hypervisor and a VM
or between two VMs. Typical use cases of a vUART include:
* Access the console of a VM from the hypervisor or another VM. A VM console,
  when enabled by the OS in that VM, typically provides logs and a shell to
  log in and execute commands. (vUART console)

* Exchange user-specific, low-speed data between two VMs. (vUART communication)

To the VMs, the vUARTs are presented in an 8250-compatible manner.

To exchange high-speed (for example, megabytes or gigabytes per second) data
between two VMs, you can use the inter-VM shared memory feature
(IVSHMEM) instead.
Dependencies and Constraints
============================

Consider the following dependencies and constraints:

* The OSes of the VMs need an 8250-compatible serial driver.

* To access the hypervisor shell, you must have a physical UART.

* Although a vUART is available to all kinds of VMs, you should not enable a
  vUART to access the console of, or exchange data with, a real-time VM.
  Exchanging data via a vUART imposes a performance penalty that could delay
  the response of asynchronous events in real-time VMs.

* A VM can have one console vUART and multiple communication vUARTs.

* A single vUART connection cannot support both console and communication.
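As a quick check on the first constraint, you can confirm that a guest kernel was built with the 8250 serial driver. The helper below is a minimal sketch, not part of ACRN; the config fragment is illustrative (on a running guest you might read ``/boot/config-$(uname -r)`` instead):

.. code-block:: python

   import re

   def has_8250_driver(config_text):
       """Return True if a kernel config enables the 8250 serial driver,
       either built in (=y) or as a module (=m)."""
       return re.search(r"^CONFIG_SERIAL_8250=[ym]$", config_text, re.M) is not None

   # Inline kernel config fragment keeps the sketch self-contained.
   sample = "CONFIG_SERIAL_8250=y\nCONFIG_SERIAL_8250_NR_UARTS=4\n"
   print(has_8250_driver(sample))  # True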
Configuration Overview
======================

The :ref:`acrn_configurator_tool` lets you configure vUART connections. The
following documentation is a general overview of the configuration process.

To configure access to the console of a VM from the hypervisor, go to the **VM
Basic Parameters > Console virtual UART type**, and select a COM port.
.. image:: images/configurator-vuartconn02.png
   :align: center
   :name: console-vuart
   :class: drop-shadow
To configure communication between two VMs, go to the **Hypervisor Global
Settings > Basic Parameters > InterVM Virtual UART Connection**. Click **+**
to add the first vUART connection.

.. image:: images/configurator-vuartconn03.png
   :align: center
   :name: communication-vuart
   :class: drop-shadow
For the connection:

#. Select the two VMs to connect.

#. Select the vUART type, either Legacy or PCI.

#. If you select Legacy, the tool displays a virtual I/O address field for each
   VM. If you select PCI, the tool displays a virtual Board:Device.Function
   (BDF) address field for each VM. In both cases, you can enter an address or
   leave it blank. If the field is blank, the tool provides an address when the
   configuration is saved.

To add another connection, click **+** on the right side of an existing
connection. Or click **-** to delete a connection.
.. image:: images/configurator-vuartconn01.png
   :align: center
   :class: drop-shadow
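If you select the Legacy type and want to enter a virtual I/O address rather than leave the field blank, the addresses conventionally follow the standard PC COM-port layout. A small lookup of these conventional values (this table is illustrative, not an ACRN API):

.. code-block:: python

   # Conventional COM-port base addresses (standard PC layout).
   COM_BASES = {"COM1": 0x3F8, "COM2": 0x2F8, "COM3": 0x3E8, "COM4": 0x2E8}

   # The example below enters 0x2f8, i.e. the COM2 base address.
   print(hex(COM_BASES["COM2"]))  # 0x2f8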
Example Configuration
=====================

The following steps show how to configure and verify a vUART
connection between two VMs. The example extends the information provided in the
:ref:`gsg`.

#. In the ACRN Configurator, create a shared scenario with a Service VM and one
   post-launched User VM.
#. Go to **Hypervisor Global Settings > Basic Parameters > InterVM Virtual UART
   Connection**.

   a. Click **+** to add a vUART connection.

   #. Select the Service VM (ACRN_Service_VM) and the post-launched User VM
      (POST_STD_VM1).

   #. For the vUART type, this example uses ``Legacy``.

   #. For the virtual I/O address, this example uses ``0x2f8``.

   .. image:: images/configurator-vuartconn01.png
      :align: center
      :class: drop-shadow
#. Save the scenario and launch script.

#. Build ACRN, copy all the necessary files from the development computer to the
   target system, and launch the Service VM and post-launched User VM.

#. To verify the connection:

   a. In the Service VM, check the communication port via the ``dmesg | grep
      tty`` command. In this example, we know the port is ``ttyS1`` because the
      I/O address matches the address in the ACRN Configurator.
      .. code-block:: console
         :emphasize-lines: 7

         root@10239146120sos-dom0:~# dmesg |grep tty
         [    0.000000] Command line: root=/dev/nvme0n1p2 idle=halt rw rootwait console=ttyS0 console=tty0 earlyprintk=serial,ttyS0,115200 cons_timer_check consoleblank=0 no_timer_check quiet loglevel=3 i915.nuclear_pageflip=1 nokaslr i915.force_probe=* i915.enable_guc=0x7 maxcpus=16 hugepagesz=1G hugepages=26 hugepagesz=2M hugepages=388 root=PARTUUID=25302f3f-5c45-4ba4-a811-3de2b64ae6f6
         [    0.038630] Kernel command line: root=/dev/nvme0n1p2 idle=halt rw rootwait console=ttyS0 console=tty0 earlyprintk=serial,ttyS0,115200 cons_timer_check consoleblank=0 no_timer_check quiet loglevel=3 i915.nuclear_pageflip=1 nokaslr i915.force_probe=* i915.enable_guc=0x7 maxcpus=16 hugepagesz=1G hugepages=26 hugepagesz=2M hugepages=388 root=PARTUUID=25302f3f-5c45-4ba4-a811-3de2b64ae6f6
         [    0.105303] printk: console [tty0] enabled
         [    0.105319] printk: console [ttyS0] enabled
         [    1.391979] 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
         [    1.649819] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
         [    3.394543] systemd[1]: Created slice system-serial\x2dgetty.slice.
#. Test vUART communication:

   In the Service VM, run the following command to write ``acrn`` to the
   communication port:

   .. code-block:: console

      root@10239146120sos-dom0:~/kino# echo "acrn" > /dev/ttyS1
   In the User VM, read the communication port to confirm that ``acrn`` was
   received:

   .. code-block:: console

      root@intel-corei7-64:~# cat /dev/ttyS1
      acrn
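The ``dmesg`` check in the verification step can also be scripted. The following is a minimal sketch (a hypothetical helper, assuming the 8250 driver's usual log format) that finds which ``ttyS*`` port is registered at the configured I/O address:

.. code-block:: python

   import re

   def find_tty_by_io_addr(dmesg_text, io_addr):
       """Return the ttyS* name registered at io_addr, or None."""
       for name, addr in re.findall(r"(ttyS\d+) at I/O 0x([0-9a-fA-F]+)", dmesg_text):
           if int(addr, 16) == io_addr:
               return name
       return None

   # Log line copied from the dmesg output shown above.
   sample = "[    1.649819] serial8250: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A"
   print(find_tty_by_io_addr(sample, 0x2F8))  # ttyS1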
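The ``echo``/``cat`` exchange above is plain file I/O, so the same pattern works from a program. This is a runnable sketch that uses an OS pipe as a stand-in for ``/dev/ttyS1`` (the device path and 256-byte read size are assumptions; the real device only exists inside the configured VMs):

.. code-block:: python

   import os

   def send(fd, msg):
       """Write one newline-terminated message to a file descriptor."""
       os.write(fd, msg + b"\n")

   def receive(fd):
       # vUART exchanges here are small; 256 bytes covers this message.
       return os.read(fd, 256).rstrip(b"\n")

   # In the VMs you would open /dev/ttyS1 on each side; a pipe keeps the
   # sketch runnable on any machine.
   r, w = os.pipe()
   send(w, b"acrn")
   print(receive(r).decode())  # acrn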
Learn More
==========

For details on ACRN vUART high-level design, see:

* :ref:`hv-console-shell-uart`
* :ref:`vuart_virtualization`
* :ref:`uart_virtualization`