Doc: Edits to MBA and CAT documentation.

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Deb Taylor 2020-03-29 18:37:33 -04:00 committed by wenlingz
parent 539f939595
commit 93f1b7e8c6
7 changed files with 214 additions and 132 deletions


@@ -64,7 +64,7 @@ for its RT VM:

- LAPIC pass-thru
- Polling mode driver
- ART (always running timer)
- other TCC features like split lock detection, Pseudo locking for cache

Hardware Requirements
@@ -112,7 +112,7 @@ provides I/O mediation to VMs. Some of the PCIe devices function as a
pass-through mode to User VMs according to VM configuration. In addition,
the Service VM could run the IC applications and HV helper applications such
as the Device Model, VM manager, etc. where the VM manager is responsible
for VM start/stop/pause, virtual CPU pause/resume, etc.

.. figure:: images/over-image34.png
   :align: center
@@ -130,7 +130,7 @@ and Real-Time (RT) VM.
compared to ACRN 1.0 is that:

- a pre-launched VM is supported in ACRN 2.0, with isolated resources, including
  CPU, memory, and HW devices, etc.
- ACRN 2.0 adds a few necessary device emulations in hypervisor like vPCI and vUART to avoid
  interference between different VMs
@@ -236,7 +236,7 @@ Hypervisor
ACRN takes advantage of Intel Virtualization Technology (Intel VT).
The ACRN HV runs in Virtual Machine Extension (VMX) root operation,
host mode, or VMM mode, while the Service and User VM guests run
in VMX non-root operation, or guest mode. (We'll use "root mode"
and "non-root mode" for simplicity).
@@ -266,7 +266,7 @@ used by commercial OS).
managing physical resources at runtime. Examples include handling
physical interrupts and low power state changes.

- A layer sitting on top of hardware management enables virtual
  CPUs (or vCPUs), leveraging Intel VT. A vCPU loop runs a vCPU in
  non-root mode and handles VM exit events triggered by the vCPU.
  This layer handles CPU and memory-related VM
@@ -365,7 +365,7 @@ User VM
Currently, ACRN can boot Linux and Android guest OSes. For the Android guest OS, ACRN
provides a VM environment with two worlds: normal world and trusty
world. The Android OS runs in the normal world. The trusty OS and
security-sensitive applications run in the trusty world. The trusty
world can see the memory of the normal world, but the normal world cannot see
the trusty world.
@@ -436,10 +436,10 @@ to boot Linux or Android guest OS.
The vSBL image is released as a part of the Service OS root
filesystem (rootfs). The vSBL is copied to the User VM memory by the VM manager
in the Service VM while creating the virtual BSP of the User VM. The Service VM passes the
start of vSBL and related information to HV. HV sets the guest RIP of the User VM's
virtual BSP as the start of vSBL and related guest registers, and
launches the User VM virtual BSP. The vSBL starts running in the virtual
real mode within the User VM. Conceptually, vSBL is part of the User VM runtime.
In the current design, the vSBL supports booting Android guest OS or In the current design, the vSBL supports booting Android guest OS or
@@ -458,8 +458,8 @@ the EFI boot of the User VM on the ACRN hypervisor platform.
The OVMF is copied to the User VM memory by the VM manager in the Service VM while creating
the virtual BSP of the User VM. The Service VM passes the start of OVMF and related
information to HV. HV sets the guest RIP of the User VM virtual BSP as the start of OVMF
and related guest registers, and launches the User VM virtual BSP. The OVMF starts
running in the virtual real mode within the User VM. Conceptually, OVMF is part of the User VM runtime.
Freedom From Interference
*************************
@@ -495,7 +495,7 @@ the following mechanisms:
2. The User VM cannot access the memory of the Service VM and the hypervisor.
3. The hypervisor does not unintentionally access the memory of the Service or User VM.

- Destinations of external interrupts are set to the physical core
  where the VM that handles them is running.


@@ -94,7 +94,7 @@ CPU management in the Service VM under flexing CPU sharing
==========================================================
As all Service VM CPUs can be shared with different UOSs, ACRN can still pass-thru
the MADT to the Service VM, and the Service VM is still able to see all physical CPUs.
But under CPU sharing, the Service VM does not need to offline/release the physical
CPUs intended for UOS use.
@@ -102,8 +102,8 @@ CPUs intended for UOS use.

CPU management in UOS
=====================
From the UOS point of view, CPU management is very simple - when the DM does
a hypercall to create a VM, the hypervisor will create all of its virtual CPUs
based on the configuration in this UOS VM's ``vm config``.

As mentioned in the previous description, ``vcpu_affinity`` in ``vm config``
@@ -150,7 +150,7 @@ the major states are:
- **VCPU_ZOMBIE**: vCPU is being offlined, and its vCPU thread is not
  running on its associated CPU
- **VCPU_OFFLINE**: vCPU is offline
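
A minimal sketch of how this state machine might look in C; the enum and
state names below are illustrative, modeled on the list above rather than
copied from ACRN's actual sources:

.. code-block:: c

   /* Illustrative vCPU lifecycle states, following the list above. */
   enum vcpu_state {
       VCPU_INIT,     /* created, not yet running */
       VCPU_RUNNING,  /* vCPU thread scheduled on its associated pCPU */
       VCPU_ZOMBIE,   /* being offlined; thread no longer runs on its pCPU */
       VCPU_OFFLINE,  /* fully offline */
   };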
.. figure:: images/hld-image17.png
   :align: center


@@ -3,8 +3,8 @@

Partition mode
##############

ACRN is a type-1 hypervisor that supports running multiple guest operating
systems (OS). Typically, the platform BIOS/bootloader boots ACRN, and
ACRN loads single or multiple guest OSes. Refer to :ref:`hv-startup` for
details on the start-up flow of the ACRN hypervisor.
@@ -21,12 +21,12 @@ Introduction
In partition mode, ACRN provides guests with exclusive access to cores,
memory, cache, and peripheral devices. Partition mode enables developers
to dedicate resources exclusively among the guests. However, there is no
support today in x86 hardware or in ACRN to partition resources such as
peripheral buses (e.g. PCI). On x86 platforms that support Cache
Allocation Technology (CAT) and Memory Bandwidth Allocation (MBA), resources
such as cache and memory bandwidth can be used by developers to partition
L2, Last Level Cache (LLC), and memory bandwidth among the guests. Refer to
:ref:`hv_rdt` for more details on the ACRN RDT high-level design and
:ref:`rdt_configuration` for RDT configuration.
@@ -34,7 +34,7 @@ L2, Last Level Cache (LLC) and memory bandwidth among the guests. Refer to
ACRN expects static partitioning of resources either by code
modification for guest configuration or through compile-time config
options. All the devices exposed to the guests are either physical
resources or are emulated in the hypervisor. So, there is no need for a
device-model and Service OS. :numref:`pmode2vms` shows a partition mode
example of two VMs with exclusive access to physical resources.
@@ -47,7 +47,7 @@ example of two VMs with exclusive access to physical resources.

Guest info
**********

ACRN uses multi-boot info passed from the platform bootloader to know
the location of each guest kernel in memory. ACRN creates a copy of each
guest kernel into each of the guests' memory. The current implementation of
ACRN requires developers to specify kernel parameters for the guests as
@@ -64,7 +64,7 @@ Cores
=====

ACRN requires the developer to specify the number of guests and the
cores dedicated for each guest. Also, the developer needs to specify
the physical core used as the Boot Strap Processor (BSP) for each guest. As
the processors are brought to life in the hypervisor, it checks if they are
configured as BSP for any of the guests. If a processor is BSP of any of
@@ -90,7 +90,7 @@ for assigning host memory to the guests:
1) Sum of guest PCI hole and guest "System RAM" is less than 4GB.

2) Pick the starting address in the host physical address and the
   size so that it does not overlap with any reserved regions in
   host E820.
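
A minimal sketch of the check behind rule 2; the ``struct e820_entry``
layout and the RAM type value follow the standard E820 convention, but the
function itself is illustrative, not ACRN code:

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   struct e820_entry {
       uint64_t base;   /* region start, host physical address */
       uint64_t length; /* region size in bytes */
       uint32_t type;   /* 1 = usable RAM; other values are reserved */
   };

   /* Return true if [start, start + size) overlaps any non-RAM entry. */
   static bool overlaps_reserved(const struct e820_entry *map, int n,
                                 uint64_t start, uint64_t size)
   {
       for (int i = 0; i < n; i++) {
           if (map[i].type == 1U)
               continue; /* usable RAM is acceptable */
           uint64_t rs = map[i].base;
           uint64_t re = map[i].base + map[i].length;
           if (start < re && (start + size) > rs)
               return true; /* candidate range hits a reserved region */
       }
       return false;
   }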
ACRN creates EPT mapping for the guest between GPA (0, memory size) and ACRN creates EPT mapping for the guest between GPA (0, memory size) and
@@ -150,11 +150,11 @@ the virtual host bridge. ACRN does not support either passing thru
bridges or emulating virtual bridges. Pass-thru devices should be
statically allocated to each guest using the guest configuration. ACRN
expects the developer to provide the virtual BDF to physical BDF
mapping for all the pass-thru devices as part of each guest
configuration.
Runtime ACRN support for guests
*******************************

ACRN, in partition mode, supports an option to pass-thru LAPIC of the
physical CPUs to the guest. ACRN expects developers to specify if the
@@ -185,20 +185,20 @@ Guests w/o LAPIC pass-thru
--------------------------

For guests without LAPIC pass-thru, IPIs between guest CPUs are handled in
the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld`
for more details.
Guests w/ LAPIC pass-thru
-------------------------

ACRN supports pass-thru if and only if the guest is using x2APIC mode
for the vLAPIC. In LAPIC pass-thru mode, writes to the Interrupt Command
Register (ICR) x2APIC MSR are intercepted. The guest writes the IPI info,
including the vector and destination APIC IDs, to the ICR. Upon an IPI request
from the guest, ACRN does a sanity check on the destination processors
programmed into the ICR. If the destination is a valid target for the guest,
ACRN sends an IPI with the same vector from the ICR to the physical CPUs
corresponding to the destination processor info in the ICR.
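
A minimal sketch of this interception flow, assuming hypothetical helper
names (``is_valid_guest_dest``, ``guest_apic_to_pcpu``, and
``send_physical_ipi`` are illustrative stubs, not ACRN's actual functions):

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>
   #include <stdio.h>

   /* Illustrative stubs standing in for the hypervisor's real helpers. */
   static bool is_valid_guest_dest(uint32_t dest) { return dest < 4U; }
   static uint32_t guest_apic_to_pcpu(uint32_t dest) { return dest + 2U; }
   static void send_physical_ipi(uint32_t pcpu, uint8_t vec)
   {
       printf("IPI vector 0x%x -> pCPU %u\n", vec, pcpu);
   }

   /* Called on VM exit for a guest WRMSR to the x2APIC ICR (MSR 0x830):
    * the vector is in bits 7:0, the destination APIC ID in bits 63:32. */
   static int handle_icr_write(uint64_t val)
   {
       uint8_t  vector = (uint8_t)(val & 0xffU);
       uint32_t dest   = (uint32_t)(val >> 32);

       if (!is_valid_guest_dest(dest))
           return -1; /* destination not owned by this guest: drop the IPI */

       /* Forward the IPI, with the same vector, to the matching pCPU. */
       send_physical_ipi(guest_apic_to_pcpu(dest), vector);
       return 0;
   }

   int main(void)
   {
       return handle_icr_write(((uint64_t)1U << 32) | 0xf3U);
   }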
.. figure:: images/partition-image14.png
   :align: center
@@ -217,7 +217,7 @@ Address registers (BAR), offsets starting from 0x10H to 0x24H, provide
the information about the resources (I/O and MMIO) used by the PCI
device. ACRN virtualizes the BAR registers and, for the rest of the
config space, forwards reads and writes to the physical config space of
pass-thru devices. Refer to the `I/O`_ section below for more details.

.. figure:: images/partition-image1.png
   :align: center
@@ -237,14 +237,14 @@ I/O
ACRN supports I/O for pass-thru devices with two restrictions.

1) Supports only MMIO. Thus, this requires developers to expose I/O BARs as
   not present in the guest configuration.

2) Supports only 32-bit MMIO BAR type.

As the guest PCI sub-system scans the PCI bus and assigns a Guest Physical
Address (GPA) to the MMIO BAR, ACRN maps the GPA to the address in the
physical BAR of the pass-thru device using EPT. The following timeline chart
explains how PCI devices are assigned to the guest and BARs are mapped upon
guest initialization.
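
A minimal sketch of that remapping step; ``ept_map_mmio`` is a hypothetical
helper (stubbed here to log the request), not ACRN's actual EPT API:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   struct vm; /* opaque guest handle, for illustration only */

   /* Hypothetical helper: create a guest-physical -> host-physical EPT
    * mapping for an MMIO range (stubbed to just log the request). */
   static int ept_map_mmio(struct vm *vm, uint64_t gpa, uint64_t hpa,
                           uint64_t size)
   {
       (void)vm;
       printf("EPT map GPA 0x%llx -> HPA 0x%llx, size 0x%llx\n",
              (unsigned long long)gpa, (unsigned long long)hpa,
              (unsigned long long)size);
       return 0;
   }

   /* On a guest write to a 32-bit memory BAR, the low 4 bits are flags;
    * the BAR base is the written value with those bits masked off. Map
    * that GPA onto the device's physical BAR so accesses reach hardware. */
   static int handle_bar_write(struct vm *vm, uint32_t bar_val,
                               uint64_t phys_bar_hpa, uint64_t bar_size)
   {
       uint64_t gpa = bar_val & ~0xfULL;
       return ept_map_mmio(vm, gpa, phys_bar_hpa, bar_size);
   }

   int main(void)
   {
       return handle_bar_write((struct vm *)0, 0xdf000000U,
                               0xa0000000U, 0x1000U);
   }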
@@ -265,7 +265,7 @@ ACRN expects developers to identify the interrupt line info (0x3CH) from
the physical BAR of the pass-thru device and build an interrupt entry in
the mptable for the corresponding guest. As the guest configures the vIOAPIC
for the interrupt RTE, ACRN writes the info from the guest RTE into the
physical IOAPIC RTE. Upon the guest kernel request to mask the interrupt,
ACRN writes to the physical RTE to mask the interrupt at the physical
IOAPIC. When the guest masks the RTE in the vIOAPIC, ACRN masks the interrupt
RTE in the physical IOAPIC. Level-triggered interrupts are not
@@ -275,9 +275,9 @@ MSI support
~~~~~~~~~~~

The guest reads/writes the PCI configuration space to configure MSI
interrupts using an address. Data and control registers are pass-thru to
the physical BAR of the pass-thru device. Refer to `Configuration
space access`_ for details on how the PCI configuration space is emulated.
Virtual device support
======================
@@ -328,7 +328,7 @@ Hypervisor IPIs work the same way as in sharing mode.

Guests w/ LAPIC pass-thru
-------------------------

Since external interrupts are pass-thru to the guest IDT, IPIs do not
trigger a vmexit. ACRN uses NMI delivery mode, and NMI exiting is
chosen for vCPUs. At the time of an NMI interrupt on the target processor,
if the processor is in non-root mode, a vmexit happens on the processor
@@ -341,8 +341,8 @@ For details on how hypervisor console works, refer to
:ref:`hv-console`.

For a guest console in partition mode, ACRN provides an option to pass
``vmid`` as an argument to ``vm_console``. The vmid is the same as the one
developers use in the guest configuration.
Guests w/o LAPIC pass-thru
--------------------------
@@ -352,18 +352,18 @@ Works the same way as sharing mode.

Hypervisor Console
==================

ACRN uses the TSC deadline timer to provide a timer service. The hypervisor
console uses a timer on CPU0 to poll characters on the serial device. To
support LAPIC pass-thru, the TSC deadline MSR is pass-thru and the local
timer interrupt is also delivered to the guest IDT. Instead of the TSC
deadline timer, ACRN uses the VMX preemption timer to poll the serial device.
Guest Console
=============

ACRN exposes a vUART to partition mode guests. The vUART uses the vPIC to inject
interrupts to the guest BSP. In cases of the guest having more than one core,
during runtime, the vUART might need to inject an interrupt to the guest BSP from
another core (other than the BSP). As mentioned in the `Hypervisor IPI
service`_ section, ACRN uses NMI delivery mode for notifying the CPU running the BSP
of the guest.


@@ -3,23 +3,44 @@

RDT Allocation Feature Supported by Hypervisor
##############################################

The ACRN hypervisor uses RDT (Resource Director Technology) allocation features
such as CAT (Cache Allocation Technology) and MBA (Memory Bandwidth
Allocation) to control VMs which may be over-utilizing cache resources or
memory bandwidth relative to their priorities. By setting limits to critical
resources, ACRN can optimize RTVM performance over regular VMs. In ACRN, the
CAT and MBA are configured via the "VM-Configuration". The resources
allocated for VMs are determined in the VM configuration (:ref:`rdt_vm_configuration`).

For further details on the Intel RDT, refer to the `Intel 64 and IA-32 Architectures Software Developer's Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_.
Objective of CAT
****************

The CAT feature in the hypervisor can isolate the cache for a VM from other
VMs. It can also isolate cache usage between VMX root and non-root
modes. Generally, certain cache resources are allocated for the
RT VMs in order to reduce performance interference through the shared
cache access from the neighbor VMs.

The figure below shows that with CAT, the cache ways can be isolated vs.
the default where high priority VMs can be impacted by a noisy neighbor.
.. figure:: images/cat-objective.png
   :align: center
CAT Support in ACRN
===================

On x86 platforms that support CAT, the ACRN hypervisor automatically enables
support and by default shares the cache ways equally between all VMs.
This is done by setting the max cache mask in the MSR_IA32_type_MASK_n (where
type: L2 or L3) MSR that corresponds to each CLOS and then setting the
IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
resource allocator.) The user can check the cache capabilities such as the cache
mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
CLOS ID, to select a cache mask to take effect. ACRN uses
VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to
enforce the settings.

.. code-block:: none
   :emphasize-lines: 3,7,11,15
@@ -63,14 +84,26 @@ On x86 platforms that support CAT, ACRN hypervisor automatically enables the sup
   };
.. note::
   ACRN takes the lowest common CLOS max value between the supported
   resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if the max CLOS
   supported by L3 is 16 and L2 is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to
   8. ACRN recommends consistent capabilities across all RDT
   resources by using the common subset CLOS. This is done in order to
   minimize misconfiguration errors.
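
As a concrete illustration of the MSR programming described above, the
sketch below writes a way mask for one CLOS and then selects that CLOS on
the current logical processor. The MSR numbers are the architectural
IA32_L3_QOS_MASK_0 (0xC90) and IA32_PQR_ASSOC (0xC8F); the code itself is
an assumption-laden ring-0 sketch, not ACRN's actual source:

.. code-block:: c

   #include <stdint.h>

   #define MSR_IA32_L3_MASK_BASE 0xC90U /* IA32_L3_QOS_MASK_0 */
   #define MSR_IA32_PQR_ASSOC    0xC8FU

   /* Plain WRMSR wrapper; privileged, ring-0 only. */
   static inline void wrmsr(uint32_t msr, uint64_t val)
   {
       __asm__ volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
                        "d"((uint32_t)(val >> 32)));
   }

   /* Give CLOS `clos` the L3 ways selected by `way_mask`, then make the
    * current logical processor run with that CLOS. IA32_PQR_ASSOC
    * carries the CLOS ID in bits 63:32. */
   static void set_l3_clos(uint32_t clos, uint32_t way_mask)
   {
       wrmsr(MSR_IA32_L3_MASK_BASE + clos, way_mask);
       wrmsr(MSR_IA32_PQR_ASSOC, (uint64_t)clos << 32);
   }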
Objective of MBA
****************

The Memory Bandwidth Allocation (MBA) feature provides indirect and
approximate control over the memory bandwidth that's available per core. It
provides a method to control VMs which may be over-utilizing bandwidth
relative to their priorities and thus improves the performance of high
priority VMs. MBA introduces a programmable request rate controller (PRRC)
between cores and the high-speed interconnect. Throttling values can be
programmed via MSRs to the PRRC to limit bandwidth availability.

The following figure shows the memory bandwidth impact without MBA, which causes
bottlenecks for high priority VMs, vs. with MBA support:
.. figure:: images/no_mba_objective.png
   :align: center
@@ -87,7 +120,16 @@ The following figure shows memory bandwidth impact without MBA which cause bottl

MBA Support in ACRN
===================

On x86 platforms that support MBA, the ACRN hypervisor automatically enables
support and by default sets no limits to the memory bandwidth access by VMs.
This is done by setting a 0 mba delay value in the MSR_IA32_MBA_MASK_n MSR
that corresponds to each CLOS and then setting the IA32_PQR_ASSOC MSR to CLOS
0. To select a delay to take effect for restricting memory bandwidth,
users can check the MBA capabilities such as the mba delay values and
max supported CLOS as described in :ref:`rdt_detection_capabilities` and
then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root
modes to enforce the settings.

.. code-block:: none
   :emphasize-lines: 3,7,11,15
@@ -131,7 +173,12 @@ On x86 platforms that support MBA, ACRN hypervisor automatically enables the sup
   };
.. note::
   ACRN takes the lowest common CLOS max value between the supported
   resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if the max CLOS
   supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
   to 8. ACRN recommends consistent capabilities across all RDT
   resources by using a common subset CLOS. This is done in order to minimize
   misconfiguration errors.
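
A parallel sketch for MBA: program a throttling delay for a CLOS via the
IA32_L2_QoS_Ext_BW_Thrtl_n MSRs (architectural base 0xD50). As above, this
is an illustrative ring-0 fragment, not ACRN's actual code:

.. code-block:: c

   #include <stdint.h>

   #define MSR_IA32_MBA_MASK_BASE 0xD50U /* IA32_L2_QoS_Ext_BW_Thrtl_0 */

   /* Plain WRMSR wrapper; privileged, ring-0 only. */
   static inline void wrmsr(uint32_t msr, uint64_t val)
   {
       __asm__ volatile("wrmsr" : : "c"(msr), "a"((uint32_t)val),
                        "d"((uint32_t)(val >> 32)));
   }

   /* Program the memory-bandwidth throttling delay for a CLOS.
    * A delay of 0 leaves bandwidth unrestricted, matching ACRN's default. */
   static void set_mba_delay(uint32_t clos, uint32_t delay)
   {
       wrmsr(MSR_IA32_MBA_MASK_BASE + clos, delay);
   }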
CAT and MBA high-level design in ACRN

@@ -139,7 +186,7 @@ CAT and MBA high-level design in ACRN

Data structures
===============
The figure below shows the RDT data structure used to store the enumerated resources.
.. figure:: images/mba_data_structures.png
   :align: center

@@ -147,14 +194,28 @@ The below figure shows the RDT data structure to store the enumerated resources.

Enabling CAT, MBA software flow
===============================
The hypervisor enumerates RDT capabilities and sets up mask arrays; it also
sets up CLOS for VMs and the hypervisor itself per the "vm configuration" (:ref:`rdt_vm_configuration`).
- The RDT capabilities are enumerated on the bootstrap processor (BSP) during
  the pCPU pre-initialize stage. The global data structure ``res_cap_info``
  stores the capabilities of the supported resources.

- If CAT and/or MBA is supported, then the mask arrays are set up on all APs
  at the pCPU post-initialize stage. The mask values are written to
  IA32_type_MASK_n. Refer to :ref:`rdt_detection_capabilities` for details
  on identifying values to program the mask/delay MSRs and the max CLOS.

- If CAT and/or MBA is supported, the CLOS of a **VM** will be stored in the
  guest part of its vCPU ``msr_store_area`` data structure. It will be loaded
  into the IA32_PQR_ASSOC MSR at each VM entry.

- If CAT and/or MBA is supported, the CLOS of the **hypervisor** is stored for
  all VMs, in the host part of their vCPU ``msr_store_area`` data structure. It
  will be loaded into the IA32_PQR_ASSOC MSR at each VM exit (a sketch of both
  load areas follows below).
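
A minimal sketch of those guest/host MSR-load entries; the 128-bit entry
layout (index, reserved, value) is the VMX MSR-load list format, while
``struct msr_store_area`` and the function are illustrative, not ACRN's
actual definitions:

.. code-block:: c

   #include <stdint.h>

   #define MSR_IA32_PQR_ASSOC 0xC8FU

   /* VMX MSR-load list entry: the CPU loads `value` into `msr_index`
    * automatically on VM entry (guest list) or VM exit (host list). */
   struct msr_store_entry {
       uint32_t msr_index;
       uint32_t reserved;
       uint64_t value;
   };

   /* Illustrative guest/host load areas hanging off a vCPU. */
   struct msr_store_area {
       struct msr_store_entry guest[4]; /* applied at VM entry */
       struct msr_store_entry host[4];  /* applied at VM exit  */
   };

   static void setup_clos_loads(struct msr_store_area *area,
                                uint32_t vm_clos, uint32_t hv_clos)
   {
       /* The CLOS ID lives in IA32_PQR_ASSOC bits 63:32. */
       area->guest[0] = (struct msr_store_entry){
           MSR_IA32_PQR_ASSOC, 0U, (uint64_t)vm_clos << 32 };
       area->host[0] = (struct msr_store_entry){
           MSR_IA32_PQR_ASSOC, 0U, (uint64_t)hv_clos << 32 };
   }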
The figure below shows the high-level overview of the RDT resource flow in the
ACRN hypervisor.
.. figure:: images/cat_mba_software_flow.png
   :align: center


@@ -342,7 +342,11 @@ Recommended BIOS settings

Configure RDT
-------------

In addition to setting the CAT configuration via HV commands, we allow
developers to add CAT configurations to the VM config and have them configured
automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration`
for details on RDT configuration and :ref:`hv_rdt` for details on the RDT
high-level design.
Set up the core allocation for the RTVM
---------------------------------------


@@ -168,16 +168,16 @@ Glossary of Terms
RDT
   Intel Resource Director Technology (Intel RDT) provides a set of
   monitoring and allocation capabilities to control resources such as
   cache and memory. ACRN supports Cache Allocation Technology (CAT) and
   Memory Bandwidth Allocation (MBA).
RTVM
   Real-time VM. A specially-designed VM that can run hard real-time or
   soft real-time workloads (or applications) much more efficiently
   than the typical User VM through the use of a passthrough interrupt
   controller, polling-mode Virtio, Intel RDT allocation features (CAT,
   MBA), and I/O prioritization. An RTVM is typically a :term:`pre-launched VM`.
   A non-:term:`safety VM` with real-time requirements is a
   :term:`post-launched VM`.

Safety VM


@@ -3,39 +3,55 @@

RDT Configuration
#################

On x86 platforms that support Intel Resource Director Technology (RDT)
allocation features such as Cache Allocation Technology (CAT) and Memory
Bandwidth Allocation (MBA), the ACRN hypervisor can be used to limit regular
VMs which may be over-utilizing common resources such as cache and memory
bandwidth relative to their priorities so that the performance of other
higher priority VMs (such as RTVMs) is not impacted.

Using RDT includes three steps:
1. Detect and enumerate RDT allocation capabilities on supported resources such as cache and memory bandwidth.
#. Set up the resource mask array MSRs (Model-Specific Registers) for each CLOS (Class of Service, which is a resource allocation class), basically to limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply the resource mask on the CPU.
Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:

* Using an HV debug shell (see `Tuning RDT resources in HV debug shell`_)
* Using a VM configuration (see `Configure RDT for VM using VM Configuration`_)
The following sections discuss how to detect, enumerate capabilities, and
configure RDT resources for VMs in the ACRN hypervisor.

For further details, refer to the ACRN RDT high-level design :ref:`hv_rdt` and the `Intel 64 and IA-32 Architectures Software Developer's Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_.
.. _rdt_detection_capabilities:

RDT detection and resource capabilities
****************************************
From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
resource capabilities. Use the platform's serial port for the HV shell
(refer to :ref:`getting-started-up2` for setup instructions).
Check if the platform supports RDT with ``cpuid``. First, run ``cpuid 0x7 0x0``; the return value EBX [bit 15] is set to 1 if the platform supports
RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX [3-1] bits. EBX [bit 1]
indicates that L3 CAT is supported. EBX [bit 2] indicates that L2 CAT is
supported. EBX [bit 3] indicates that MBA is supported. To query the
capabilities of the supported resources, use the bit position as a subleaf
index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.
.. code-block:: none

   ACRN:\>cpuid 0x7 0x0
   cpuid leaf: 0x7, subleaf: 0x0, 0x0:0xd39ffffb:0x00000818:0xbc000400
L3/L2 bit encoding:

* EAX [bit 4:0] reports the length of the cache mask minus one. For example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the corresponding unit of the cache allocation that can be used by other entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0xf means the max CLOS supported is 0x10.
.. code-block:: none

@@ -45,11 +61,12 @@ Check if the platform supports RDT with ``cpuid``. First run ``cpuid 0x7 0x0``,
   ACRN:\>cpuid 0x10 **0x1**
   cpuid leaf: 0x10, subleaf: 0x1, 0xa:0x600:0x4:0xf
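
To decode such output programmatically, a small sketch using GCC's
``cpuid.h`` helper (``__get_cpuid_count``) can read the same leaf; this is
an illustrative user-space check, not part of ACRN:

.. code-block:: c

   #include <cpuid.h>
   #include <stdio.h>

   int main(void)
   {
       unsigned int eax, ebx, ecx, edx;

       /* Leaf 0x10, subleaf 0x1: L3 CAT capabilities. */
       if (!__get_cpuid_count(0x10, 0x1, &eax, &ebx, &ecx, &edx))
           return 1;

       unsigned int mask_len = (eax & 0x1fU) + 1U;   /* EAX[4:0] + 1  */
       unsigned int max_clos = (edx & 0xffffU) + 1U; /* EDX[15:0] + 1 */

       /* For the sample output above (0xa:...:0xf) this prints
        * "L3 cache mask: 0x7ff, max CLOS: 16". */
       printf("L3 cache mask: 0x%x, max CLOS: %u\n",
              (1U << mask_len) - 1U, max_clos);
       return 0;
   }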
MBA bit encoding:

* EAX [bit 11:0] reports the maximum MBA throttling value minus one. For example, a value 0x59 means the max delay value is 0x60.
* EBX [bit 31:0] reserved.
* ECX [bit 2] reports whether the response of the delay values is linear.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0x7 means the max CLOS supported is 0x8.
.. code-block:: none
@@ -62,10 +79,10 @@ Check if the platform supports RDT with ``cpuid``. First run ``cpuid 0x7 0x0``,

Tuning RDT resources in HV debug shell
**************************************

This section explains how to configure the RDT resources from the HV debug
shell.
#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
   running on PCPU0, and VM1 is running on PCPU1:
   .. code-block:: none

@@ -76,14 +93,14 @@ This section explains how to configure the RDT resources from HV debug shell.
      0 0 0 PRIMARY Running
      1 1 0 PRIMARY Running
#. Set the resource mask array MSRs for each CLOS with ``wrmsr <reg_num> <value>``.
   For example, if you want to restrict VM1 to the lower 4 ways of the LLC
   and allocate the upper 7 ways of the LLC to VM0, first assign a CLOS to
   each VM (e.g. VM0 is assigned CLOS0 and VM1 CLOS1). Next, program the
   resource mask MSR that corresponds to CLOS0; in this example,
   IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, program the
   resource mask MSR that corresponds to CLOS1; here IA32_L3_MASK_BASE + 1
   is set to 0xf (a sketch for deriving these masks follows the commands):

   .. code-block:: none

      ACRN:\>wrmsr -p1 0xc90 0x7f0
      ACRN:\>wrmsr -p1 0xc91 0xf
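
   The two example masks can be derived with a small helper that also makes
   the contiguous-bits requirement explicit (illustrative code, not part of
   ACRN):

   .. code-block:: c

      #include <stdint.h>

      /* Build a cache-way mask of `num_ways` contiguous ways starting at
       * `first_way`; CAT masks must be contiguous or WRMSR raises #GP. */
      static uint32_t way_mask(unsigned int first_way, unsigned int num_ways)
      {
          return ((1U << num_ways) - 1U) << first_way;
      }

      /* way_mask(0, 4) == 0xf   -> lower 4 ways (CLOS1 / VM1)
       * way_mask(4, 7) == 0x7f0 -> upper 7 ways (CLOS0 / VM0) */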
#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32] (0xc8f) to 0x100000000 to use CLOS1, and assign CLOS0 to PCPU0 by programming the MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that IA32_PQR_ASSOC is a per-LP MSR and the CLOS must be programmed on each LP.
   .. code-block:: none
@@ -95,7 +112,7 @@ This section explains how to configure the RDT resources from HV debug shell.

Configure RDT for VM using VM Configuration
*******************************************
#. RDT on ACRN is enabled by default on supported platforms. ACRN detects support through an offline tool that generates a platform-specific xml file from which ACRN identifies whether RDT is supported. The feature can also be toggled using the CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first step is to clone the ACRN source code (if you haven't already done so):
   .. code-block:: none

@@ -105,7 +122,7 @@ Configure RDT for VM using VM Configuration

   .. figure:: images/menuconfig-rdt.png
      :align: center
#. The predefined cache masks can be found at ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for the respective board. For example, for apl-up2 they can be found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
   .. code-block:: none
      :emphasize-lines: 3,7,11,15
@@ -129,10 +146,10 @@ Configure RDT for VM using VM Configuration
         },
      };

   .. note::
      Users can change the mask values, but the cache mask must have **continuous bits** or a #GP fault can be triggered. Similarly, when programming an MBA delay value, be sure to set it to less than or equal to the MAX delay value.
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_ to identify the MAX CLOS that can be used. ACRN uses the **lowest common MAX CLOS** value among all the RDT resources to avoid resource misconfigurations. For example, configuration data for the Service VM sharing mode can be found at ``hypervisor/arch/x86/configs/vm_config.c``:
   .. code-block:: none
      :emphasize-lines: 6
@@ -153,10 +170,10 @@ Configure RDT for VM using VM Configuration
         },
      };

   .. note::
      In ACRN, a lower CLOS always means a higher priority (clos 0 > clos 1 > clos 2 > ... clos n). So, carefully program each VM's CLOS accordingly.
#. Careful consideration should be made when assigning vCPU affinity. In a cache isolation configuration, in addition to isolating CAT-capable caches, you must also isolate lower-level caches. In the following example, logical processors #0 and #2 share the L1 and L2 caches. In this case, do not assign LP #0 and LP #2 to different VMs that need to do cache isolation. Assign LP #1 and LP #3 with similar consideration:
   .. code-block:: none
      :emphasize-lines: 3
@@ -177,9 +194,9 @@ Configure RDT for VM using VM Configuration
      PU L#2 (P#1)
      PU L#3 (P#3)
#. Bandwidth control is per-core (not per LP), so the max delay values of the per-LP CLOS are applied to the core. If HT is turned on, don't place high priority threads on sibling LPs running lower priority threads.

#. Based on the scenario, build the ACRN hypervisor and copy the artifact ``acrn.efi`` to the
   ``/boot/EFI/acrn`` directory. If needed, update the device model ``acrn-dm`` as well in the ``/usr/bin`` directory. See :ref:`getting-started-building` for building instructions.
   .. code-block:: none

@@ -191,4 +208,4 @@ Configure RDT for VM using VM Configuration
      $ mount /dev/mmcblk0p0 /boot
      $ scp <acrn.efi-at-your-compile-PC> /boot/EFI/acrn
#. Restart the platform.