doc: consistent spelling of passthrough

Attempt to replace all the variations of "pass-thru", "pass thru", "pass
through", and "pass-through" with "passthrough" (except for doc labels
and in code or API uses)

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder authored on 2020-06-18 11:38:09 -07:00, committed by David Kinder
parent 380cd7f33e
commit 922129cad4
15 changed files with 98 additions and 98 deletions


@ -83,8 +83,8 @@ responses to user space modules, notified by vIRQ injections.
.. _MPT_interface:
AcrnGT mediated pass-through (MPT) interface
**************************************************
AcrnGT mediated passthrough (MPT) interface
*******************************************
AcrnGT receives request from GVT module through MPT interface. Refer to the
:ref:`Graphic_mediation` page.


@ -17,7 +17,7 @@ SoCs.
This document describes:
- The different GPU virtualization techniques
- GVT-g mediated pass-through
- GVT-g mediated passthrough
- High level design
- Key components
- GVT-g new architecture differentiation
@ -47,7 +47,7 @@ Background
Intel GVT-g is an enabling technology in emerging graphics
virtualization scenarios. It adopts a full GPU virtualization approach
based on mediated pass-through technology, to achieve good performance,
based on mediated passthrough technology, to achieve good performance,
scalability and secure isolation among Virtual Machines (VMs). A virtual
GPU (vGPU), with full GPU features, is presented to each VM so that a
native graphics driver can run directly inside a VM.
@ -161,10 +161,10 @@ also suffers from the following intrinsic limitations:
exhibit quite different performance, which gives rise to a need for a
fine-grained graphics tuning effort.
Direct Pass-Through
Direct Passthrough
-------------------
"Direct pass-through" dedicates the GPU to a single VM, providing full
"Direct passthrough" dedicates the GPU to a single VM, providing full
features and good performance, but at the cost of device sharing
capability among VMs. Only one VM at a time can use the hardware
acceleration capability of the GPU, which is a major limitation of this
@ -177,7 +177,7 @@ solution. Intel GVT-d uses this mechanism.
:align: center
:name: gvt-pass-through
Pass-Through
Passthrough
SR-IOV
------
@ -188,16 +188,16 @@ with each VF directly assignable to a VM.
.. _Graphic_mediation:
Mediated Pass-Through
Mediated Passthrough
*********************
Intel GVT-g achieves full GPU virtualization using a "mediated
pass-through" technique.
passthrough" technique.
Concept
=======
Mediated pass-through allows a VM to access performance-critical I/O
Mediated passthrough allows a VM to access performance-critical I/O
resources (usually partitioned) directly, without intervention from the
hypervisor in most cases. Privileged operations from this VM are
trapped-and-emulated to provide secure isolation among VMs.
@ -207,7 +207,7 @@ trapped-and-emulated to provide secure isolation among VMs.
:align: center
:name: mediated-pass-through
Mediated Pass-Through
Mediated Passthrough
The Hypervisor must ensure that no vulnerability is exposed when
assigning performance-critical resource to each VM. When a
@ -229,7 +229,7 @@ Examples of performance-critical I/O resources include the following:
Performance-Critical I/O Resources
The key to implementing mediated pass-through for a specific device is
The key to implementing mediated passthrough for a specific device is
to define the right policy for various I/O resources.
Virtualization Policies for GPU Resources
@ -317,7 +317,7 @@ High Level Architecture
:numref:`gvt-arch` shows the overall architecture of GVT-g, based on the
ACRN hypervisor, with Service VM as the privileged VM, and multiple user
guests. A GVT-g device model working with the ACRN hypervisor,
implements the policies of trap and pass-through. Each guest runs the
implements the policies of trap and passthrough. Each guest runs the
native graphics driver and can directly access performance-critical
resources: the Frame Buffer and Command Buffer, with resource
partitioning (as presented later). To protect privileged resources, that
@ -331,14 +331,14 @@ concurrently with the CPU scheduler in ACRN to share the physical GPU
timeslot among the VMs. GVT-g uses the physical GPU to directly execute
all the commands submitted from a VM, so it avoids the complexity of
emulating the Render Engine, which is the most complex part of the GPU.
In the meantime, the resource pass-through of both the Frame Buffer and
In the meantime, the resource passthrough of both the Frame Buffer and
Command Buffer minimizes the hypervisor's intervention of CPU accesses,
while the GPU scheduler guarantees every VM a quantum time-slice for
direct GPU execution. With that, GVT-g can achieve near-native
performance for a VM workload.
In :numref:`gvt-arch`, the yellow GVT device model works as a client on
top of an i915 driver in the Service VM. It has a generic Mediated Pass-Through
top of an i915 driver in the Service VM. It has a generic Mediated Passthrough
(MPT) interface, compatible with all types of hypervisors. For ACRN,
some extra development work is needed for such MPT interfaces. For
example, we need some changes in ACRN-DM to make ACRN compatible with
@ -795,7 +795,7 @@ the shadow PTE entries.
Per-VM Shadow PPGTT
-------------------
To support local graphics memory access pass-through, GVT-g implements
To support local graphics memory access passthrough, GVT-g implements
per-VM shadow local page tables. The local graphics memory is only
accessible from the Render Engine. The local page tables have two-level
paging structures, as shown in :numref:`per-vm-shadow`.


@ -1189,8 +1189,8 @@ and waits for a resume signal. When the User VM should exit from S3, DM will
get a wakeup signal and reset the User VM to emulate the User VM exit from
S3.
Pass-through in Device Model
Passthrough in Device Model
****************************
You may refer to :ref:`hv-device-passthrough` for pass-through realization
You may refer to :ref:`hv-device-passthrough` for passthrough realization
in device model.


@ -61,7 +61,7 @@ for its RT VM:
- CAT (Cache Allocation Technology)
- MBA (Memory Bandwidth Allocation)
- LAPIC pass-thru
- LAPIC passthrough
- Polling mode driver
- ART (always running timer)
- other TCC features like split lock detection, Pseudo locking for cache
@ -109,7 +109,7 @@ separate instrument cluster VM is started after the User VM is booted.
:numref:`overview-arch1.0` shows the architecture of ACRN 1.0 together with
the IC VM and Service VM. As shown, the Service VM owns most of platform devices and
provides I/O mediation to VMs. Some of the PCIe devices function as a
pass-through mode to User VMs according to VM configuration. In addition,
passthrough mode to User VMs according to VM configuration. In addition,
the Service VM could run the IC applications and HV helper applications such
as the Device Model, VM manager, etc. where the VM manager is responsible
for VM start/stop/pause, virtual CPU pause/resume, etc.
@ -136,7 +136,7 @@ compared to ACRN 1.0 is that:
interference between different VMs
- ACRN 2.0 supports RT VM for a post-launched User VM, with assistant features like LAPIC
pass-thru and PMD virtio driver
passthrough and PMD virtio driver
ACRN 2.0 is still WIP, and some of its features are already merged in the master.
@ -162,12 +162,12 @@ ACRN adopts various approaches for emulating devices for the User VM:
- para-virtualized, requiring front-end drivers in
the User VM to function.
- **Pass-through device**: A device passed through to the User VM is fully
- **Passthrough device**: A device passed through to the User VM is fully
accessible to the User VM without interception. However, interrupts
are first handled by the hypervisor before
being injected to the User VM.
- **Mediated pass-through device**: A mediated pass-through device is a
- **Mediated passthrough device**: A mediated passthrough device is a
hybrid of the previous two approaches. Performance-critical
resources (mostly data-plane related) are passed-through to the User VMs and
others (mostly control-plane related) are emulated.
@ -275,7 +275,7 @@ used by commercial OS).
- On top of vCPUs are three components for device emulation: one for
emulation inside the hypervisor, another for communicating with
the Service VM for mediation, and the third for managing pass-through
the Service VM for mediation, and the third for managing passthrough
devices.
- The highest layer is a VM management module providing
@ -311,7 +311,7 @@ based on command line configurations.
Based on a VHM kernel module, DM interacts with VM manager to create the User
VM. It then emulates devices through full virtualization on the DM user
level, or para-virtualized based on kernel mediator (such as virtio,
GVT), or pass-through based on kernel VHM APIs.
GVT), or passthrough based on kernel VHM APIs.
Refer to :ref:`hld-devicemodel` for more details.
@ -592,6 +592,6 @@ Some details about the ACPI table for the User and Service VMs:
knows which register the User VM writes to trigger power state
transitions. Device Model must register an I/O handler for it.
- The ACPI table in the Service VM is passthru. There is no ACPI parser
- The ACPI table in the Service VM is passthrough. There is no ACPI parser
in ACRN HV. The power management related ACPI table is
generated offline and hardcoded in ACRN HV.


@ -19,7 +19,7 @@ for the Device Model to build a virtual ACPI table.
The Px/Cx data includes four
ACPI objects: _PCT, _PPC, and _PSS for P-state management, and _CST for
C-state management. All these ACPI data must be consistent with the
native data because the control method is a kind of pass through.
native data because the control method is a kind of passthrough.
These ACPI objects data are parsed by an offline tool and hard-coded in a
Hypervisor module named CPU state table:


@ -11,10 +11,10 @@ to manage interrupts and exceptions, as shown in
:numref:`interrupt-modules-overview`. In its native layer, it configures
the physical PIC, IOAPIC, and LAPIC to support different interrupt
sources from the local timer/IPI to the external INTx/MSI. In its virtual guest
layer, it emulates virtual PIC, virtual IOAPIC, and virtual LAPIC/pass-thru
layer, it emulates virtual PIC, virtual IOAPIC, and virtual LAPIC/passthrough
LAPIC. It provides full APIs, allowing virtual interrupt injection from
emulated or pass-thru devices. The contents in this section do not include
the pass-thru LAPIC case. For the pass-thru LAPIC, refer to
emulated or passthrough devices. The contents in this section do not include
the passthrough LAPIC case. For the passthrough LAPIC, refer to
:ref:`lapic_passthru`
.. figure:: images/interrupt-image3.png
@ -29,10 +29,10 @@ the ACRN hypervisor sets up the physical interrupt in its basic
interrupt modules (e.g., IOAPIC/LAPIC/IDT). It dispatches the interrupt
in the hypervisor interrupt flow control layer to the corresponding
handlers; this could be pre-defined IPI notification, timer, or runtime
registered pass-thru devices. The ACRN hypervisor then uses its VM
registered passthrough devices. The ACRN hypervisor then uses its VM
interfaces based on vPIC, vIOAPIC, and vMSI modules, to inject the
necessary virtual interrupt into the specific VM, or directly deliver
interrupt to the specific RT VM with pass-thru LAPIC.
interrupt to the specific RT VM with passthrough LAPIC.
.. figure:: images/interrupt-image2.png
:align: center
@ -100,7 +100,7 @@ Physical Interrupt Initialization
After ACRN hypervisor gets control from the bootloader, it
initializes all physical interrupt-related modules for all the CPUs. ACRN
hypervisor creates a framework to manage the physical interrupt for
hypervisor local devices, pass-thru devices, and IPI between CPUs, as
hypervisor local devices, passthrough devices, and IPI between CPUs, as
shown in :numref:`hv-interrupt-init`:
.. figure:: images/interrupt-image66.png
@ -323,7 +323,7 @@ there are three different handling flows according to flags:
- ``IRQF_LEVEL && IRQF_PT``
For pass-thru devices, to avoid continuous interrupt triggers, it masks
For passthrough devices, to avoid continuous interrupt triggers, it masks
the IOAPIC pin and leaves it unmasked until corresponding vIOAPIC
pin gets an explicit EOI ACK from guest.


@ -141,26 +141,26 @@ Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART
host-bridge at BDF (Bus Device Function) 0.0:0 to each guest. Access to
256 bytes of config space for virtual host bridge is emulated.
I/O - Pass-thru devices
=======================
I/O - Passthrough devices
=========================
ACRN, in partition mode, supports passing thru PCI devices on the
platform. All the pass-thru devices are exposed as child devices under
platform. All the passthrough devices are exposed as child devices under
the virtual host bridge. ACRN does not support either passing thru
bridges or emulating virtual bridges. Pass-thru devices should be
bridges or emulating virtual bridges. Passthrough devices should be
statically allocated to each guest using the guest configuration. ACRN
expects the developer to provide the virtual BDF to BDF of the
physical device mapping for all the pass-thru devices as part of each guest
physical device mapping for all the passthrough devices as part of each guest
configuration.
Runtime ACRN support for guests
*******************************
ACRN, in partition mode, supports an option to pass-thru LAPIC of the
ACRN, in partition mode, supports an option to passthrough LAPIC of the
physical CPUs to the guest. ACRN expects developers to specify if the
guest needs LAPIC pass-thru using guest configuration. When guest
guest needs LAPIC passthrough using guest configuration. When guest
configures vLAPIC as x2APIC, and if the guest configuration has LAPIC
pass-thru enabled, ACRN passes the LAPIC to the guest. Guest can access
passthrough enabled, ACRN passes the LAPIC to the guest. Guest can access
the LAPIC hardware directly without hypervisor interception. During
runtime of the guest, this option differentiates how ACRN supports
inter-processor interrupt handling and device interrupt handling. This
@ -181,18 +181,18 @@ the Service VM startup in sharing mode.
Inter-processor Interrupt (IPI) Handling
========================================
Guests w/o LAPIC pass-thru
--------------------------
Guests w/o LAPIC passthrough
----------------------------
For guests without LAPIC pass-thru, IPIs between guest CPUs are handled in
For guests without LAPIC passthrough, IPIs between guest CPUs are handled in
the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld`
for more details.
Guests w/ LAPIC pass-thru
-------------------------
Guests w/ LAPIC passthrough
---------------------------
ACRN supports pass-thru if and only if the guest is using x2APIC mode
for the vLAPIC. In LAPIC pass-thru mode, writes to the Interrupt Command
ACRN supports passthrough if and only if the guest is using x2APIC mode
for the vLAPIC. In LAPIC passthrough mode, writes to the Interrupt Command
Register (ICR) x2APIC MSR is intercepted. Guest writes the IPI info,
including vector, and destination APIC IDs to the ICR. Upon an IPI request
from the guest, ACRN does a sanity check on the destination processors
@ -204,8 +204,8 @@ corresponding to the destination processor info in the ICR.
:align: center
Pass-thru device support
========================
Passthrough device support
==========================
Configuration space access
--------------------------
@ -217,7 +217,7 @@ Address registers (BAR), offsets starting from 0x10H to 0x24H, provide
the information about the resources (I/O and MMIO) used by the PCI
device. ACRN virtualizes the BAR registers and for the rest of the
config space, forwards reads and writes to the physical config space of
pass-thru devices. Refer to the `I/O`_ section below for more details.
passthrough devices. Refer to the `I/O`_ section below for more details.
.. figure:: images/partition-image1.png
:align: center
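
The configuration-space handling described above relies on the standard
legacy PCI mechanism (ports 0xCF8/0xCFC mentioned earlier in this file,
with BARs at offsets 0x10 to 0x24). The sketch below only illustrates that
mechanism; it is not ACRN code, and ``outl()``/``inl()`` are assumed
port-I/O primitives declared solely for the example::

   #include <stdint.h>

   /* Assumed port-I/O primitives; these declarations exist only for this
    * illustrative sketch. */
   extern void outl(uint16_t port, uint32_t value);
   extern uint32_t inl(uint16_t port);

   #define PCI_CFG_ADDR 0xCF8U
   #define PCI_CFG_DATA 0xCFCU

   /* Build the CONFIG_ADDRESS value for bus:dev.func at a 4-byte aligned
    * config-space offset. */
   static uint32_t pci_cfg_addr(uint8_t bus, uint8_t dev, uint8_t func,
                                uint8_t offset)
   {
       return 0x80000000U | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
              ((uint32_t)func << 8) | ((uint32_t)offset & 0xFCU);
   }

   /* Read BAR0, i.e. config-space offset 0x10, of the given device. */
   static uint32_t read_bar0(uint8_t bus, uint8_t dev, uint8_t func)
   {
       outl(PCI_CFG_ADDR, pci_cfg_addr(bus, dev, func, 0x10U));
       return inl(PCI_CFG_DATA);
   }
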
@ -226,16 +226,16 @@ pass-thru devices. Refer to the `I/O`_ section below for more details.
DMA
---
ACRN developers need to statically define the pass-thru devices for each
ACRN developers need to statically define the passthrough devices for each
guest using the guest configuration. For devices to DMA to/from guest
memory directly, ACRN parses the list of pass-thru devices for each
memory directly, ACRN parses the list of passthrough devices for each
guest and creates context entries in the VT-d remapping hardware. EPT
page tables created for the guest are used for VT-d page tables.
I/O
---
ACRN supports I/O for pass-thru devices with two restrictions.
ACRN supports I/O for passthrough devices with two restrictions.
1) Supports only MMIO. Thus, this requires developers to expose I/O BARs as
not present in the guest configuration.
@ -244,7 +244,7 @@ ACRN supports I/O for pass-thru devices with two restrictions.
As the guest PCI sub-system scans the PCI bus and assigns a Guest Physical
Address (GPA) to the MMIO BAR, ACRN maps the GPA to the address in the
physical BAR of the pass-thru device using EPT. The following timeline chart
physical BAR of the passthrough device using EPT. The following timeline chart
explains how PCI devices are assigned to guest and BARs are mapped upon
guest initialization.
@ -255,14 +255,14 @@ guest initialization.
Interrupt Configuration
-----------------------
ACRN supports both legacy (INTx) and MSI interrupts for pass-thru
ACRN supports both legacy (INTx) and MSI interrupts for passthrough
devices.
INTx support
~~~~~~~~~~~~
ACRN expects developers to identify the interrupt line info (0x3CH) from
the physical BAR of the pass-thru device and build an interrupt entry in
the physical BAR of the passthrough device and build an interrupt entry in
the mptable for the corresponding guest. As guest configures the vIOAPIC
for the interrupt RTE, ACRN writes the info from the guest RTE into the
physical IOAPIC RTE. Upon the guest kernel request to mask the interrupt,
@ -275,8 +275,8 @@ MSI support
~~~~~~~~~~~
Guest reads/writes to PCI configuration space for configuring MSI
interrupts using an address. Data and control registers are pass-thru to
the physical BAR of the pass-thru device. Refer to `Configuration
interrupts using an address. Data and control registers are passthrough to
the physical BAR of the passthrough device. Refer to `Configuration
space access`_ for details on how the PCI configuration space is emulated.
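
For reference, "configuring MSI interrupts using an address" follows the
x86 MSI encoding: the message address selects the destination LAPIC and the
message data carries the vector. The sketch below is illustrative only; the
helper names are hypothetical, not ACRN's::

   #include <stdint.h>

   /* Illustrative x86 MSI address/data encoding (per the Intel SDM) that a
    * guest driver programs through the PCI MSI capability registers. */
   #define MSI_ADDR_BASE       0xFEE00000U
   #define MSI_ADDR_DEST(id)   (((uint32_t)(id) & 0xFFU) << 12)

   static uint32_t msi_address(uint8_t dest_apic_id)
   {
       return MSI_ADDR_BASE | MSI_ADDR_DEST(dest_apic_id);
   }

   static uint16_t msi_data(uint8_t vector)
   {
       return vector;   /* fixed delivery mode, edge triggered */
   }
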
Virtual device support
@ -291,8 +291,8 @@ writes are discarded.
Interrupt delivery
==================
Guests w/o LAPIC pass-thru
--------------------------
Guests w/o LAPIC passthrough
----------------------------
In partition mode of ACRN, interrupts stay disabled after a vmexit. The
processor does not take interrupts when it is executing in VMX root
@ -307,10 +307,10 @@ for device interrupts.
:align: center
Guests w/ LAPIC pass-thru
-------------------------
Guests w/ LAPIC passthrough
---------------------------
For guests with LAPIC pass-thru, ACRN does not configure vmexit upon
For guests with LAPIC passthrough, ACRN does not configure vmexit upon
external interrupts. There is no vmexit upon device interrupts and they are
handled by the guest IDT.
@ -320,15 +320,15 @@ Hypervisor IPI service
ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual
device interrupts (e.g. vUART to vCPUs), and others.
Guests w/o LAPIC pass-thru
--------------------------
Guests w/o LAPIC passthrough
----------------------------
Hypervisor IPIs work the same way as in sharing mode.
Guests w/ LAPIC pass-thru
-------------------------
Guests w/ LAPIC passthrough
---------------------------
Since external interrupts are pass-thru to the guest IDT, IPIs do not
Since external interrupts are passthrough to the guest IDT, IPIs do not
trigger vmexit. ACRN uses NMI delivery mode and the NMI exiting is
chosen for vCPUs. At the time of NMI interrupt on the target processor,
if the processor is in non-root mode, vmexit happens on the processor
@ -344,8 +344,8 @@ For a guest console in partition mode, ACRN provides an option to pass
``vmid`` as an argument to ``vm_console``. vmid is the same as the one
developers use in the guest configuration.
Guests w/o LAPIC pass-thru
--------------------------
Guests w/o LAPIC passthrough
----------------------------
Works the same way as sharing mode.
@ -354,7 +354,7 @@ Hypervisor Console
ACRN uses the TSC deadline timer to provide a timer service. The hypervisor
console uses a timer on CPU0 to poll characters on the serial device. To
support LAPIC pass-thru, the TSC deadline MSR is pass-thru and the local
support LAPIC passthrough, the TSC deadline MSR is passthrough and the local
timer interrupt is also delivered to the guest IDT. Instead of the TSC
deadline timer, ACRN uses the VMX preemption timer to poll the serial device.
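
Two hardware details recur throughout this file: LAPIC passthrough is
offered only when the guest vLAPIC is in x2APIC mode, and writes to the
x2APIC ICR MSR remain intercepted. A minimal sketch of those checks is
shown below, using MSR numbers and bit positions from the Intel SDM; the
function and macro names are hypothetical, not ACRN's::

   #include <stdbool.h>
   #include <stdint.h>

   #define MSR_IA32_APIC_BASE  0x1BU
   #define APIC_BASE_ENABLED   (1ULL << 11)   /* xAPIC global enable */
   #define APIC_BASE_X2APIC    (1ULL << 10)   /* x2APIC mode (EXTD)  */
   #define MSR_X2APIC_ICR      0x830U         /* 64-bit ICR in x2APIC mode */

   /* LAPIC passthrough applies only when the guest vLAPIC runs in x2APIC
    * mode, i.e. both enable bits are set in its IA32_APIC_BASE value. */
   static bool guest_vlapic_in_x2apic(uint64_t guest_apic_base_msr)
   {
       uint64_t mask = APIC_BASE_ENABLED | APIC_BASE_X2APIC;

       return (guest_apic_base_msr & mask) == mask;
   }

   /* On an intercepted write to the x2APIC ICR MSR, the IPI vector sits in
    * bits 7:0 and the destination APIC ID in bits 63:32 of the value. */
   static void decode_icr_write(uint64_t icr_value,
                                uint8_t *vector, uint32_t *dest_apic_id)
   {
       *vector = (uint8_t)(icr_value & 0xFFU);
       *dest_apic_id = (uint32_t)(icr_value >> 32);
   }
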


@ -8,16 +8,16 @@ management, which includes:
- VCPU request for virtual interrupt kick off,
- vPIC/vIOAPIC/vLAPIC for virtual interrupt injection interfaces,
- physical-to-virtual interrupt mapping for a pass-thru device, and
- physical-to-virtual interrupt mapping for a passthrough device, and
- the process of VMX interrupt/exception injection.
A standard VM never owns any physical interrupts; all interrupts received by the
Guest OS come from a virtual interrupt injected by vLAPIC, vIOAPIC, or
vPIC. Such virtual interrupts are triggered either from a pass-through
vPIC. Such virtual interrupts are triggered either from a passthrough
device or from I/O mediators in the Service VM via hypercalls. The
:ref:`interrupt-remapping` section discusses how the hypervisor manages
the mapping between physical and virtual interrupts for pass-through
devices. However, a hard RT VM with LAPIC pass-through does own the physical
the mapping between physical and virtual interrupts for passthrough
devices. However, a hard RT VM with LAPIC passthrough does own the physical
maskable external interrupts. On its physical CPUs, interrupts are disabled
in VMX root mode, while in VMX non-root mode, physical interrupts will be
delivered to RT VM directly.
@ -64,7 +64,7 @@ will send an IPI to kick it out, which leads to an external-interrupt
VM-Exit. In some cases, there is no need to send IPI when making a request,
because the CPU making the request itself is the target VCPU. For
example, the #GP exception request always happens on the current CPU when it
finds an invalid emulation has happened. An external interrupt for a pass-thru
finds an invalid emulation has happened. An external interrupt for a passthrough
device always happens on the VCPUs of the VM which this device is belonged to,
so after it triggers an external-interrupt VM-Exit, the current CPU is the very
target VCPU.
@ -93,7 +93,7 @@ APIs are invoked when an interrupt source from vLAPIC needs to inject
an interrupt, for example:
- from LVT like LAPIC timer
- from vIOAPIC for a pass-thru device interrupt
- from vIOAPIC for a passthrough device interrupt
- from an emulated device for a MSI
These APIs will finish by making a vCPU request.
@ -134,7 +134,7 @@ LAPIC passthrough based on vLAPIC
LAPIC passthrough is supported based on vLAPIC, the guest OS first boots with
vLAPIC in xAPIC mode and then switches to x2APIC mode to enable the LAPIC
pass-through.
passthrough.
In case of LAPIC passthrough based on vLAPIC, the system will have the
following characteristics.


@ -92,7 +92,7 @@ The components are listed as follows.
* **Device Emulation** This component implements devices that are emulated in
the hypervisor itself, such as the virtual programmable interrupt controllers
including vPIC, vLAPIC and vIOAPIC.
* **Passthru Management** This component manages devices that are passed-through
* **Passthrough Management** This component manages devices that are passed-through
to specific VMs.
* **Extended Device Emulation** This component implements an I/O request
mechanism that allow the hypervisor to forward I/O accesses from a User


@ -581,7 +581,7 @@ The following table shows some use cases of module level configuration design:
* - Configuration data provided by BSP
- This module is used to virtualize LAPIC, and the configuration data is
provided by BSP.
For example, some VMs use LAPIC pass-through and the other VMs use
For example, some VMs use LAPIC passthrough and the other VMs use
vLAPIC.
- If a function pointer is used, the prerequisite is
"hv_operation_mode == OPERATIONAL".


@ -122,9 +122,9 @@ Glossary of Terms
OSPM
Operating System Power Management
Pass-Through Device
Passthrough Device
Physical devices (typically PCI) exclusively assigned to a guest. In
the Project ACRN architecture, pass-through devices are owned by the
the Project ACRN architecture, passthrough devices are owned by the
foreground OS.
Partition Mode


@ -585,7 +585,7 @@ hypervisor, or in user space within an independent VM, overhead exists.
This overhead is worthwhile as long as the devices need to be shared by
multiple guest operating systems. If sharing is not necessary, then
there are more efficient methods for accessing devices, for example
"pass-through".
"passthrough".
ACRN device model is a placeholder of the User VM. It allocates memory for
the User OS, configures and initializes the devices used by the User VM,
@ -635,10 +635,10 @@ ACRN Device model incorporates these three aspects:
.. _pass-through:
Device pass through
*******************
Device passthrough
******************
At the highest level, device pass-through is about providing isolation
At the highest level, device passthrough is about providing isolation
of a device to a given guest operating system so that the device can be
used exclusively by that guest.
@ -662,8 +662,8 @@ Finally, there may be specialized PCI devices that only one guest domain
uses, so they should be passed through to the guest. Individual USB
ports could be isolated to a given domain too, or a serial port (which
is itself not shareable) could be isolated to a particular guest. In
ACRN hypervisor, we support USB controller Pass through only and we
don't support pass through for a legacy serial port, (for example
ACRN hypervisor, we support USB controller passthrough only and we
don't support passthrough for a legacy serial port, (for example
0x3f8).
@ -671,7 +671,7 @@ Hardware support for device passthrough
=======================================
Intel's current processor architectures provides support for device
pass-through with VT-d. VT-d maps guest physical address to machine
passthrough with VT-d. VT-d maps guest physical address to machine
physical address, so device can use guest physical address directly.
When this mapping occurs, the hardware takes care of access (and
protection), and the guest operating system can use the device as if it
@ -694,9 +694,9 @@ Hypervisor support for device passthrough
By using the latest virtualization-enhanced processor architectures,
hypervisors and virtualization solutions can support device
pass-through (using VT-d), including Xen, KVM, and ACRN hypervisor.
passthrough (using VT-d), including Xen, KVM, and ACRN hypervisor.
In most cases, the guest operating system (User
OS) must be compiled to support pass-through, by using
OS) must be compiled to support passthrough, by using
kernel build-time options. Hiding the devices from the host VM may also
be required (as is done with Xen using pciback). Some restrictions apply
in PCI, for example, PCI devices behind a PCIe-to-PCI bridge must be


@ -18,7 +18,7 @@ the ``help`` command within the ACRN shell.
An example
**********
As an example, we'll show how to obtain the interrupts of a pass-through USB device.
As an example, we'll show how to obtain the interrupts of a passthrough USB device.
First, we can get the USB controller BDF number (0:15.0) through the
following command in the Service VM console::


@ -16,10 +16,10 @@ Run RTVM with dedicated resources/devices
For best practice, ACRN allocates dedicated CPU, memory resources, and cache resources (using Intel
Resource Directory allocation Technology such as CAT, MBA) to RTVMs. For best real time performance
of I/O devices, we recommend using dedicated (pass-thru) PCIe devices to avoid VM-Exit at run time.
of I/O devices, we recommend using dedicated (passthrough) PCIe devices to avoid VM-Exit at run time.
.. note::
The configuration space for pass-thru PCI devices is still emulated and accessing it will
The configuration space for passthrough PCI devices is still emulated and accessing it will
trigger a VM-Exit.
RTVM with virtio PMD (Polling Mode Driver) for I/O sharing


@ -33,7 +33,7 @@ The ACRN hypervisor shell supports the following commands:
* - int
- List interrupt information per CPU
* - pt
- Show pass-through device information
- Show passthrough device information
* - vioapic <vm_id>
- Show virtual IOAPIC (vIOAPIC) information for a specific VM
* - dump_ioapic
@ -184,7 +184,7 @@ IRQ vector number, etc.
pt
==
``pt`` provides pass-through detailed information, such as the virtual
``pt`` provides passthrough detailed information, such as the virtual
machine number, interrupt type, interrupt request, interrupt vector,
trigger mode, etc.