doc: update release_2.1 with new docs

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder
2020-08-07 17:23:43 -07:00
committed by David Kinder
parent c3800aea66
commit d8ee2f3303
28 changed files with 537 additions and 406 deletions

View File

@@ -56,6 +56,7 @@ options:
[-l lpc] [-m mem] [-p vcpu:hostcpu] [-r ramdisk_image_path]
[-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
[--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting]
[--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
[--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
[--ptdev_no_reset] [--debugexit]
[--lapic_pt] <vm>
@@ -86,6 +87,8 @@ options:
--intr_monitor: enable interrupt storm monitor
its params: threshold/s, probe-period(s), delay_time(ms), delay_duration(ms)
--virtio_poll: enable virtio poll mode with poll interval in ns
--acpidev_pt: ACPI device passthrough, args: HID in the ACPI table
--mmiodev_pt: MMIO device passthrough, args: physical MMIO regions
--vtpm2: Virtual TPM2 args: sock_path=$PATH_OF_SWTPM_SOCKET
--lapic_pt: enable local apic passthrough
--rtvm: indicate that the guest is an RTVM
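For instance, the new options might appear in an ``acrn-dm`` invocation as
follows. This is a hedged sketch: the flags are listed above, but the
``MMIO_regions`` argument format shown as ``base,size`` and the socket path
are assumptions; check ``acrn-dm --help`` on your build for the exact syntax.

.. code-block:: bash

acrn-dm ... --mmiodev_pt 0xFED40000,0x5000 ...   # raw MMIO region; base,size format is an assumption
acrn-dm ... --vtpm2 /run/swtpm/swtpm.sock ...    # virtual TPM2 backed by a swtpm socket path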
@@ -104,6 +107,7 @@ Here's an example showing how to run a VM with:
- GPU device on PCI 00:02.0
- Virtio-block device on PCI 00:03.0
- Virtio-net device on PCI 00:04.0
- TPM2 MSFT0101
.. code-block:: bash
@@ -113,6 +117,7 @@ Here's an example showing how to run a VM with:
-s 5,virtio-console,@pty:pty_port \
-s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
-s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
--acpidev_pt MSFT0101 \
--intr_monitor 10000,10,1,100 \
-B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
@@ -1193,4 +1198,5 @@ Passthrough in Device Model
****************************
You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model.
in device model and :ref:`mmio-device-passthrough` for MMIO passthrough realization
in device model and ACRN Hypervisor.

View File

@@ -18,6 +18,7 @@ Hypervisor high-level design
Virtual Interrupt <hv-virt-interrupt>
VT-d <hv-vt-d>
Device Passthrough <hv-dev-passthrough>
mmio-dev-passthrough
hv-partitionmode
Power Management <hv-pm>
Console, Shell, and vUART <hv-console>

View File

@@ -70,7 +70,7 @@ Specifically:
the hypervisor shell. Inputs to the physical UART will be
redirected to the vUART starting from the next timer event.
- The vUART is deactivated after a :kbd:`Ctrl + Space` hotkey is received
- The vUART is deactivated after a :kbd:`Ctrl` + :kbd:`Space` hotkey is received
from the physical UART. Inputs to the physical UART will be
handled by the hypervisor shell starting from the next timer
event.

View File

@@ -38,58 +38,36 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
resource allocator.) The user can check the cache capabilities such as cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. ACRN uses
VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to
enforce the settings.
CLOS ID, to select a cache mask to take effect. These configurations can be
done in the scenario XML file under the ``FEATURES`` section, as shown in the
example below.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
to enforce the settings.
.. code-block:: none
:emphasize-lines: 3,7,11,15
:emphasize-lines: 2,4
struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 0,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 1,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 2,
},
{
.clos_mask = 0xff,
.msr_index = MSR_IA32_L3_MASK_BASE + 3,
},
};
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
Once the cache mask is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section. To use
the CDP feature, set ``CDP_ENABLED`` to ``y``.
.. code-block:: none
:emphasize-lines: 6
:emphasize-lines: 2
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
supported by L3 is 16 and L2 is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to
8. ACRN recommends consistent capabilities across all RDT
resources by using the common subset CLOS. This is done in order to
minimize misconfiguration errors.
resources as the maximum supported CLOS ID. For example, if the max CLOS
supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS. This is done in order to minimize
misconfiguration errors.
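Before programming the masks, you can verify what the platform enumerates.
A hedged sketch using the ``cpuid`` utility (CPUID leaf 0x10 enumerates RDT
allocation; the utility itself is an assumed prerequisite, e.g. the
``cpuid`` package):

.. code-block:: bash

cpuid -1 -l 0x10 -s 0   # resource types that support allocation (L3/L2/MBA)
cpuid -1 -l 0x10 -s 1   # L3 CAT details: capacity mask length and max CLOS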
Objective of MBA
@@ -128,53 +106,31 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
users can check the MBA capabilities such as mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root
modes to enforce the settings.
These configurations can be done in the scenario XML file under the ``FEATURES``
section, as shown in the example below. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
for non-root and root modes to enforce the settings.
.. code-block:: none
:emphasize-lines: 3,7,11,15
:emphasize-lines: 2,5
struct platform_clos_info platform_mba_clos_array[MAX_PLATFORM_CLOS_NUM] = {
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 0,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 1,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 2,
},
{
.mba_delay = 0,
.msr_index = MSR_IA32_MBA_MASK_BASE + 3,
},
};
<RDT desc="Intel RDT (Resource Director Technology).">
<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
Once the cache mask is set for each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under the ``VM`` section.
.. code-block:: none
:emphasize-lines: 6
:emphasize-lines: 2
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
{
.type = SOS_VM,
.name = SOS_VM_CONFIG_NAME,
.guest_flags = 0UL,
.clos = 0,
.memory = {
.start_hpa = 0x0UL,
.size = CONFIG_SOS_RAM_SIZE,
},
.os_config = {
.name = SOS_VM_CONFIG_OS_NAME,
},
},
};
<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
<vcpu_clos>0</vcpu_clos>
.. note::
ACRN takes the lowest common CLOS max value between the supported
resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
resources as the maximum supported CLOS ID. For example, if the max CLOS
supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
to 8. ACRN recommends having consistent capabilities across all RDT
resources by using a common subset CLOS. This is done in order to minimize
misconfiguration errors.

View File

@@ -186,7 +186,7 @@ Inter-VM Communication Security hardening (BKMs)
************************************************
As previously highlighted, ACRN 2.0 provides the capability to create shared
memory regions between Post-Launch user VMs known as Inter-VM Communication”.
memory regions between Post-Launch user VMs known as "Inter-VM Communication".
This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the
shared regions (in Service VM's memory for this release). This feature adopts a
community-approved design for shared memory between VMs, following the same
@@ -194,7 +194,7 @@ specification for KVM/QEMU (`Link <https://git.qemu.org/?p=qemu.git;a=blob_plain
Following the ACRN threat model, the policy definition for allocation and
assignment of these regions is controlled by the Service VM, which is part of
ACRNs Trusted Computing Base (TCB). However, to secure inter-VM communication
ACRN's Trusted Computing Base (TCB). However, to secure inter-VM communication
between any userspace applications that harness this channel, applications will
face more requirements for the confidentiality, integrity, and authenticity of
shared or transferred data. It is the application development team's
@@ -218,17 +218,17 @@ architecture and threat model for your application.
- Add restrictions based on behavior or subject and object rules around information flow and accesses.
- In the Service VM, consider the ``/dev/shm`` device node as a critical interface with special access requirements. Those requirements can be fulfilled using any of the existing open-source MAC technologies or even ACLs, depending on OS compatibility (Ubuntu, Windows, etc.) and integration complexity.
- In the User VM, the shared memory region can be accessed using ``mmap()`` of UIO device node. Other complementary info can be found under:
- ``/sys/class/uio/uioX/device/resource2`` --> shared memory base address
- ``/sys/class/uio/uioX/device/config`` --> shared memory size
- For Linux-based User VMs, we recommend using the standard ``UIO`` and ``UIO_PCI_GENERIC`` drivers through the device node (for example, ``/dev/uioX``); a short binding sketch follows below.
- Reference: `AppArmor <https://wiki.ubuntuusers.de/AppArmor/>`_, `SELinux <https://selinuxproject.org/page/Main_Page>`_, `UIO driver-API <https://www.kernel.org/doc/html/v4.12/driver-api/uio-howto.html>`_
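As a minimal sketch of the recommended UIO binding in a Linux User VM,
assuming ACRN's ivshmem device exposes the conventional ivshmem PCI IDs
(``1af4:1110`` is an assumption; verify with ``lspci -nn``):

.. code-block:: bash

sudo modprobe uio_pci_generic
echo "1af4 1110" | sudo tee /sys/bus/pci/drivers/uio_pci_generic/new_id   # IDs are an assumption
# the application can then mmap() the shared region through /dev/uio0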
3. **Crypto Support and Secure Applied Crypto**
- According to the applications threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms.Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for security engine (for example CSME services.
- According to the application's threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms. Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for a security engine (for example, CSME services).
- Don't implement your own crypto functions. Use available compliant crypto libraries as applicable, such as `Intel IPP <https://github.com/intel/ipp-crypto>`_ or `TinyCrypt <https://01.org/tinycrypt>`_.
- Utilize the platform/kernel infrastructure and services (e.g., :ref:`hld-security`, `Kernel Crypto backend/APIs <https://www.kernel.org/doc/html/v5.4/crypto/index.html>`_, `keyring subsystem <https://www.man7.org/linux/man-pages/man7/keyrings.7.html>`_, etc.).
- Implement necessary flows for key lifecycle management, including wrapping, revocation, and migration, depending on the crypto key type used and whether there are requirements for key persistence across system and power management events.
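As one hedged illustration of the keyring subsystem referenced above (the
key name and payload are placeholders; the ``keyutils`` tools are an assumed
prerequisite):

.. code-block:: bash

keyctl add user ivshmem_demo_key "$(openssl rand -hex 32)" @s   # store a random session key
keyctl list @s                                                  # confirm it is in the session keyring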

View File

@@ -0,0 +1,40 @@
.. _mmio-device-passthrough:
MMIO Device Passthrough
########################
The ACRN Hypervisor supports both PCI and MMIO device passthrough.
However, there are some constraints and hypervisor assumptions for
MMIO devices: there can be no DMA access to the MMIO device, and the
device may not use IRQs.
Here is how ACRN supports MMIO device passthrough:
* For a pre-launched VM, the VM configuration tells the ACRN hypervisor
the addresses of the physical MMIO device's regions and where they are
mapped to in the pre-launched VM. The hypervisor then removes these
MMIO regions from the Service VM and fills the vACPI table for this MMIO
device based on the device's physical ACPI table.
* For a post-launched VM, the same actions are done as for a
pre-launched VM; in addition, the command line specifies which MMIO
device to pass through to the post-launched VM.
If the MMIO device has ACPI tables, use ``--acpidev_pt HID``;
if not, use ``--mmiodev_pt MMIO_regions`` (see the sketch below).
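A hedged sketch of the two launch variants (the flags are documented above;
the region string shown as ``base,size`` and the TPM2 CRB address are
assumptions, so check ``acrn-dm --help`` for the exact format):

.. code-block:: bash

acrn-dm ... --acpidev_pt MSFT0101 ...            # device described by an ACPI HID (TPM2 shown)
acrn-dm ... --mmiodev_pt 0xFED40000,0x5000 ...   # no ACPI entry: pass the raw MMIO region(s)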
.. note::
Currently, the vTPM and PT TPM in the ACRN-DM have the same HID so we
can't support them both at the same time. The VM will fail to boot if
both are used.
The following items remain to be implemented:
* Save the MMIO regions in a field of the VM structure in order to
release the resources when the post-launched VM shuts down abnormally.
* Allocate the guest MMIO regions for the MMIO device in a guest-reserved
MMIO region instead of being hard-coded. With this, we could add more
passthrough MMIO devices.
* De-assign the MMIO device from the Service VM before passing it
through to the post-launched VM, rather than only removing the MMIO
regions from the Service VM.

View File

@@ -70,8 +70,8 @@ ACRN Device Model and virtio-net Backend Driver:
the virtio-net backend driver to process the request. The backend driver
receives the data in a shared virtqueue and sends it to the TAP device.
Bridge and Tap Device:
Bridge and Tap are standard virtual network infrastructures. They play
Bridge and TAP Device:
Bridge and TAP are standard virtual network infrastructures. They play
an important role in communication among the Service VM, the User VM, and the
outside world.
@@ -108,7 +108,7 @@ Initialization in Device Model
- Present frontend for a virtual PCI based NIC
- Set up control plane callbacks
- Set up data plane callbacks, including TX, RX
- Setup tap backend
- Set up TAP backend
Initialization in virtio-net Frontend Driver
============================================
@@ -365,7 +365,7 @@ cases.)
.. code-block:: c
vring_interrupt --> // virtio-net frontend driver interrupt handler
skb_recv_done --> //registed by virtnet_probe-->init_vqs-->virtnet_find_vqs
skb_recv_done --> // registered by virtnet_probe-->init_vqs-->virtnet_find_vqs
virtqueue_napi_schedule -->
__napi_schedule -->
virtnet_poll -->
@@ -406,13 +406,13 @@ cases.)
sk->sk_data_ready --> // application will get notified
How to Use
==========
How to Use TAP Interface
========================
The network infrastructure shown in :numref:`net-virt-infra` needs to be
prepared in the Service VM before we start. We need to create a bridge and at
least one tap device (two tap devices are needed to create a dual
virtual NIC) and attach a physical NIC and tap device to the bridge.
least one TAP device (two TAP devices are needed to create a dual
virtual NIC) and attach a physical NIC and TAP device to the bridge.
.. figure:: images/network-virt-sos-infrastruct.png
:align: center
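For example, a minimal iproute2 sketch of this setup (the bridge, TAP, and
physical NIC names are placeholders; adjust them to your system):

.. code-block:: bash

sudo ip link add name acrn-br0 type bridge      # create the bridge
sudo ip tuntap add dev tap0 mode tap            # create one TAP device
sudo ip link set tap0 master acrn-br0           # attach the TAP device to the bridge
sudo ip link set eth0 master acrn-br0           # attach the physical NIC (name is a placeholder)
sudo ip link set acrn-br0 up
sudo ip link set tap0 up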
@@ -509,6 +509,32 @@ is the virtual NIC created by acrn-dm:
collisions:0 txqueuelen:1000
RX bytes:0 (0.0 b) TX bytes:0 (0.0 b)
How to Use MacVTap Interface
============================
In addition to the TAP interface, ACRN also supports the MacVTap interface.
MacVTap replaces the combination of the TAP and bridge drivers with
a single module based on the MacVLAN driver. With MacVTap, each
virtual network interface is assigned its own MAC and IP address
and is directly attached to the physical interface of the host machine,
improving throughput and latency.
Create a MacVTap interface in the Service VM as shown here:
.. code-block:: none
sudo ip link add link eth0 name macvtap0 type macvtap
where ``eth0`` is the name of the physical network interface, and
``macvtap0`` is the name of the MacVTap interface being created. (Make
sure the MacVTap interface name includes the keyword ``tap``.)
Once the MacVTap interface is created, the User VM can be launched by adding
a PCI slot to the device model acrn-dm as shown below.
.. code-block:: none
-s 4,virtio-net,<macvtap_name>,[mac=<XX:XX:XX:XX:XX:XX>]
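If you plan to pass an explicit ``mac=`` value, one hedged way to bring the
interface up and read the MAC that was auto-assigned to it (interface name
matches the example above):

.. code-block:: bash

sudo ip link set macvtap0 up
cat /sys/class/net/macvtap0/address   # auto-assigned MAC of the MacVTap interface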
Performance Estimation
======================

View File

@@ -96,7 +96,7 @@ Usage
- For console vUART
To enable the console port for a VM, change the
port_base and IRQ in ``acrn-hypervisor/hypervisor/scenarios/<scenario
port_base and IRQ in ``misc/vm_configs/scenarios/<scenario
name>/vm_configurations.c``. If the IRQ number is already used in your
system (check ``cat /proc/interrupts``), you can choose another IRQ
number. If you set ``.irq = 0``, the vUART will work in polling mode.
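For example, before editing the configuration you might check which IRQs
are already taken and what the current vUART settings are (a hedged sketch;
the ``industry`` scenario name is a placeholder):

.. code-block:: bash

cat /proc/interrupts   # IRQ numbers already in use on the system
grep -n -e "port_base" -e "\.irq" misc/vm_configs/scenarios/industry/vm_configurations.c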