doc: Style cleanup in dm, gpio, interrupt hld

Minor style changes per Acrolinx recommendations and for consistency

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Reyes, Amy 2022-02-25 16:48:16 -08:00 committed by David Kinder
parent eb78f1bb7c
commit 85fe6d7d1a
3 changed files with 139 additions and 139 deletions


@ -6,50 +6,51 @@ Virtual Interrupt
This section introduces the ACRN guest virtual interrupt
management, which includes:
- vCPU request for virtual interrupt kickoff
- vPIC, vIOAPIC, and vLAPIC for virtual interrupt injection interfaces
- physical-to-virtual interrupt mapping for a passthrough device
- the process of VMX interrupt and exception injection
A standard VM never owns any physical interrupts. All interrupts received by the
guest OS come from a virtual interrupt injected by vLAPIC, vIOAPIC, or
vPIC. Such virtual interrupts are triggered either from a passthrough
device or from I/O mediators in the Service VM via hypercalls. The
:ref:`interrupt-remapping` section describes how the hypervisor manages
the mapping between physical and virtual interrupts for passthrough
devices. However, a hard RTVM with LAPIC passthrough does own the physical
maskable external interrupts. On its physical CPUs, interrupts are disabled
in VMX root mode. While in VMX non-root mode, physical interrupts are
delivered to the RTVM directly.
Devices are emulated inside the Service VM Device Model, i.e.,
``acrn-dm``. However, for performance reasons, vLAPIC, vIOAPIC, and vPIC
are emulated inside the hypervisor directly.

From the guest OS point of view, vPIC works in Virtual Wire Mode via vIOAPIC. The
symmetric I/O Mode is shown in :numref:`pending-virt-interrupt` later in
this section.
The following command-line options to a Linux guest affect whether it uses PIC
or IOAPIC:
- **Kernel boot parameter with vPIC**: Add ``maxcpu=0``. The guest OS will use
PIC.
- **Kernel boot parameter with vIOAPIC**: Add ``maxcpu=1`` (as long as it is
  not ``0``). The guest OS will use IOAPIC and keep IOAPIC pin 2 as the source
  of PIC.

.. _vcpu-request-interrupt-injection:
vCPU Request for Interrupt Injection
************************************
The vCPU request mechanism (described in :ref:`pending-request-handlers`) is
used to inject interrupts to a certain vCPU. As mentioned in
:ref:`ipi-management`, physical vector 0xF0 is used to kick the vCPU out of its
VMX non-root mode. Physical vector 0xF0 is also used to make a request for
virtual interrupt injection or other requests such as flush EPT.
.. note:: The IPI-based vCPU request mechanism doesn't work for the hard RTVM.

The event IDs supported for virtual interrupt injection include:
@ -60,21 +61,21 @@ The eventid supported for virtual interrupt injection includes:
A call to *vcpu_make_request* is necessary for a virtual interrupt
injection. If the target vCPU is running under VMX non-root mode, it
sends an IPI to kick it out, which leads to an external-interrupt
VM-Exit. In some cases, there is no need to send an IPI when making a request,
because the CPU making the request itself is the target vCPU. For
example, the #GP exception request always happens on the current CPU when it
finds that an invalid emulation has happened. An external interrupt for a passthrough
device always happens on the vCPUs of the VM that the device belongs to.
After it triggers an external-interrupt VM-Exit, the current CPU is the
target vCPU.
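
As a rough illustration of this flow, consider the sketch below. The helper
names (``bitmap_set``, ``send_single_ipi``, ``VECTOR_NOTIFY_VCPU``) and the
field layout are assumptions for illustration, not the exact ACRN source:

.. code-block:: c

   /* Sketch of the request-and-kick flow; identifiers are illustrative. */
   void vcpu_make_request(struct acrn_vcpu *vcpu, uint16_t eventid)
   {
       /* Mark the request pending for the target vCPU. */
       bitmap_set(eventid, &vcpu->arch.pending_req);

       /*
        * Kick the target out of VMX non-root mode with IPI vector 0xF0
        * only when it runs on another physical CPU; a request made on
        * the current CPU needs no IPI.
        */
       if (get_pcpu_id() != pcpuid_from_vcpu(vcpu)) {
           send_single_ipi(pcpuid_from_vcpu(vcpu), VECTOR_NOTIFY_VCPU);
       }
   }
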
Virtual LAPIC
*************
LAPIC is virtualized for all guest types: Service VM and User VMs. Given support
by the physical processor, APICv virtual interrupt delivery (VID) is enabled
and supports the Posted-Interrupt feature. Otherwise, it falls back to
the legacy virtual interrupt injection mode.
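
Conceptually, the mode selection can be pictured as in the sketch below; the
function names are placeholders for illustration, not the actual ACRN APIs:

.. code-block:: c

   /* Sketch only: choose the injection path based on hardware support. */
   if (is_apicv_vid_supported()) {
       /* APICv VID: set the vector in the virtual IRR; with the
        * Posted-Interrupt feature, a running vCPU needs no VM Exit. */
       apicv_set_virr(vlapic, vector);
   } else {
       /* Legacy mode: queue the vector for injection through the
        * VM-entry interruption-information field on the next VM Entry. */
       legacy_queue_intr(vcpu, vector);
   }
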
vLAPIC provides the same features as the native LAPIC:
@ -94,9 +95,9 @@ an interrupt, for example:
- from LVT like LAPIC timer
- from vIOAPIC for a passthrough device interrupt
- from an emulated device for an MSI
These APIs finish by making a vCPU request.
.. doxygenfunction:: vlapic_inject_intr
:project: Project ACRN
@ -116,15 +117,14 @@ These APIs will finish by making a vCPU request.
EOI Processing
==============
If APICv virtual interrupt delivery is supported, EOI virtualization is enabled.
Except for level-triggered interrupts, the VM will not exit in case of EOI.
If APICv virtual interrupt delivery is not supported, vLAPIC requires
EOI from the guest OS whenever a vector is acknowledged and processed by the
guest. vLAPIC behavior is the same as hardware LAPIC. Once an EOI is received,
it clears the highest priority vector in ISR, and updates PPR
status. vLAPIC sends an EOI message to vIOAPIC if the TMR bit is set to
indicate that it is a level-triggered interrupt.
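
The EOI flow described above can be sketched as follows; all helper names
here are illustrative assumptions, not the exact ACRN source:

.. code-block:: c

   /* Sketch of vLAPIC EOI handling; helpers are illustrative only. */
   void vlapic_process_eoi(struct acrn_vlapic *vlapic)
   {
       uint32_t vector = highest_set_bit(vlapic->isr);

       clear_isr(vlapic, vector);   /* retire the in-service vector */
       update_ppr(vlapic);          /* recompute the processor priority */

       /* For a level-triggered interrupt (TMR bit set), notify the
        * vIOAPIC so that it can re-evaluate the interrupt line. */
       if (tmr_is_set(vlapic, vector)) {
           vioapic_broadcast_eoi(vlapic->vm, vector);
       }
   }
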
.. _lapic_passthru:
@ -132,30 +132,29 @@ indicate that is a level triggered interrupt.
LAPIC Passthrough Based on vLAPIC
=================================
LAPIC passthrough is supported based on vLAPIC. The guest OS first boots with
vLAPIC in xAPIC mode and then switches to x2APIC mode to enable the LAPIC
passthrough.
If LAPIC passthrough is based on vLAPIC, the system has the
following characteristics:
* IRQs received by the LAPIC can be handled by the guest VM without ``vmexit``.
* Guest VM always sees virtual LAPIC IDs for security consideration.
* Most MSRs are directly accessible from the guest VM except for ``XAPICID``,
``LDR``, and ``ICR``. Write operations to ``ICR`` are trapped to avoid
malicious IPIs. Read operations to ``XAPIC`` and ``LDR`` are trapped,
so that the guest VM always sees the virtual LAPIC IDs instead of the
physical ones.
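
A minimal sketch of this trap policy follows; the MSR constants mirror the
x2APIC register names, and the function names are assumptions:

.. code-block:: c

   /* Sketch of the LAPIC passthrough trap policy; illustrative only. */
   static bool is_trapped_x2apic_write(uint32_t msr)
   {
       /* ICR writes are screened to block malicious IPIs. */
       return (msr == MSR_IA32_EXT_APIC_ICR);
   }

   static bool is_trapped_x2apic_read(uint32_t msr)
   {
       /* XAPICID and LDR reads are trapped so that the guest only ever
        * sees the virtual LAPIC IDs, never the physical ones. */
       return ((msr == MSR_IA32_EXT_XAPICID) ||
               (msr == MSR_IA32_EXT_APIC_LDR));
   }
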
Virtual IOAPIC
**************
The hypervisor emulates vIOAPIC when the guest accesses the MMIO GPA range:
0xFEC00000-0xFEC01000. vIOAPIC for the Service VM should match the native
hardware IOAPIC pin numbers. vIOAPIC for a guest VM provides 48 pins. As the
vIOAPIC is always associated with vLAPIC, the virtual interrupt injection from
vIOAPIC triggers a request for a vLAPIC event by calling vLAPIC APIs.
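
For illustration, a simple GPA range check like the sketch below decides
whether an MMIO access belongs to the vIOAPIC; the constants mirror the
range quoted above:

.. code-block:: c

   /* Sketch only: does this guest-physical address hit the vIOAPIC? */
   #define VIOAPIC_BASE   0xFEC00000UL
   #define VIOAPIC_SIZE   0x1000UL   /* covers 0xFEC00000-0xFEC01000 */

   static bool is_vioapic_access(uint64_t gpa)
   {
       return ((gpa >= VIOAPIC_BASE) && (gpa < VIOAPIC_BASE + VIOAPIC_SIZE));
   }
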
**Supported APIs:**
@ -168,23 +167,23 @@ vLAPIC APIs.
Virtual PIC
***********
vPIC is required for TSC calculation. Normally the guest OS boots with
vIOAPIC and vPIC as the source of external interrupts. On every
VM Exit, the hypervisor checks for pending external PIC interrupts.
Usage of vPIC APIs is similar to that of vIOAPIC.
ACRN hypervisor emulates a vPIC for each VM based on I/O range 0x20~0x21,
0xa0~0xa1, and 0x4d0~0x4d1.
If an interrupt source from vPIC needs to inject an interrupt, the
following APIs need to be called, which finally make a request for
``ACRN_REQUEST_EXTINT`` or ``ACRN_REQUEST_EVENT``:
.. doxygenfunction:: vpic_set_irqline
:project: Project ACRN

The following APIs are used to query the vector that needs to be injected and
to ACK the service, which moves the interrupt from the interrupt request
register (IRR) to the in-service register (ISR):

.. doxygenfunction:: vpic_pending_intr
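
Taken together, a caller might drive these APIs roughly as in the sketch
below; the exact signatures differ (see the API reference above), and
``inject_to_guest`` is an assumed placeholder:

.. code-block:: c

   /* Illustrative vPIC assert/query/ACK flow; not the real signatures. */
   uint32_t vector;

   /* An interrupt source asserts its IRQ line; internally this ends in
    * an ACRN_REQUEST_EXTINT or ACRN_REQUEST_EVENT vCPU request. */
   vpic_set_irqline(vpic, irqline, level);

   /* At injection time, query the highest-priority pending vector,
    * inject it, and ACK so the vPIC moves it from the IRR to the ISR. */
   if (vpic_pending_intr(vpic, &vector)) {
       inject_to_guest(vcpu, vector);     /* assumed helper */
       vpic_intr_accepted(vpic, vector);  /* the ACK step */
   }
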
@ -196,13 +195,13 @@ service - ISR):
Virtual Exception
*****************
When doing emulation, an exception may need to be triggered in the
hypervisor for these reasons:
- The guest accesses an invalid vMSR register.
- The hypervisor needs to inject a #GP.
- The hypervisor needs to inject a #PF when an instruction accesses a
non-existent page from ``rip_gva`` during instruction emulation.
ACRN hypervisor implements virtual exception injection using these APIs:
@ -221,11 +220,11 @@ ACRN hypervisor implements virtual exception injection using these APIs:
.. doxygenfunction:: vcpu_inject_ss
:project: Project ACRN
ACRN hypervisor uses the ``vcpu_inject_gp`` and ``vcpu_inject_pf`` functions to
queue an exception request. The hypervisor follows `Intel® 64 and IA-32 Architectures Software Developer's Manual <https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html>`__, Volume 3, Section 6.15, Table 6-5, to
generate a double fault if the condition is met.
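
The escalation rule can be sketched as follows, after SDM Volume 3, Table
6-5; the data structure and helper names are assumptions:

.. code-block:: c

   /* Sketch of exception queuing with #DF escalation; illustrative. */
   void queue_exception(struct acrn_vcpu *vcpu, uint32_t vector)
   {
       uint32_t pending = vcpu->arch.pending_exception;

       /* Per Table 6-5: a contributory exception (#DE, #TS, #NP, #SS,
        * #GP) following a contributory one, or a contributory exception
        * or #PF following a #PF, escalates to a double fault (#DF). */
       if ((is_contributory(pending) && is_contributory(vector)) ||
           ((pending == IDT_PF) &&
            (is_contributory(vector) || (vector == IDT_PF)))) {
           vector = IDT_DF;
       }
       vcpu->arch.pending_exception = vector;
   }
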
ACRN hypervisor can inject ``extint`` and ``nmi`` using similar vCPU APIs:
.. doxygenfunction:: vcpu_inject_extint
:project: Project ACRN
@ -239,26 +238,27 @@ ACRN hypervisor could inject *extint/nmi* using the similar vcpu APIs:
Virtual Interrupt Injection
***************************
Virtual interrupts come from the DM or assigned
devices.
- **For Service VM assigned devices**: Whenever a physical interrupt arrives
  from an assigned device, the corresponding virtual interrupt is injected to
  the Service
VM via vLAPIC/vIOAPIC. See :ref:`device-assignment`.
- **For User VM assigned devices**: Only PCI devices can be assigned to
  User VMs. For the standard VM and soft RTVM, the virtual interrupt
  injection process is the same as for the Service VM. A virtual interrupt
  injection
operation is triggered when a device's physical interrupt occurs. For the
hard RTVM, the physical interrupts are delivered to the VM directly without
causing VM-exit.
- **For User VM emulated devices**: DM manages the interrupt lifecycle of
emulated devices. DM knows when
an emulated device needs to assert a virtual IOAPIC/PIC pin or
needs to send a virtual MSI vector to the guest. The logic is
entirely handled by DM. Hard RTVMs should not have
emulated devices.
.. figure:: images/virtint-image64.png
@ -268,22 +268,22 @@ devices.
Handle pending virtual interrupt
Before APICv virtual interrupt delivery, a virtual interrupt can be
injected only if the guest interrupt is allowed. In many cases,
the guest ``RFLAGS.IF`` is cleared and the guest does not accept any further
interrupts. The hypervisor checks for the available guest IRQ windows before
injection.
NMI is an unmaskable interrupt and its injection is always allowed
regardless of the guest IRQ window status. If the current IRQ
window is not present, the hypervisor enables
``MSR_IA32_VMX_PROCBASED_CTLS_IRQ_WIN (PROCBASED_CTRL.bit[2])`` and
VM Enter directly. The injection will be done on the next VM Exit once the guest
issues ``STI (GuestRFLAG.IF=1)``.
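
A sketch of the window check follows; apart from the control bit quoted
above, the helper names are illustrative:

.. code-block:: c

   /* Sketch only: arm interrupt-window exiting when the guest cannot
    * accept an interrupt right now (for example, RFLAGS.IF is clear). */
   if (!guest_irq_window_open(vcpu)) {
       uint32_t ctls = exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS);

       /* PROCBASED_CTRL bit 2: VM Exit as soon as the guest can take
        * an interrupt, e.g., after STI sets RFLAGS.IF; the pending
        * virtual interrupt is then injected on that VM Exit. */
       ctls |= MSR_IA32_VMX_PROCBASED_CTLS_IRQ_WIN;
       exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS, ctls);
   }
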
Data Structures and Interfaces
******************************
No data structure is exported to the other components in the
hypervisor for virtual interrupts. The APIs listed in the previous
sections are meant to be called whenever a virtual interrupt should be
injected or acknowledged.


@ -7,8 +7,8 @@ We usually emulate devices in the Device Model. However, in some cases, we
need to emulate devices in the ACRN Hypervisor. For example, the
post-launched RTVM needs to emulate passthrough PCI(e) devices in the ACRN
Hypervisor so that it can continue to run even if the Device Model is
no longer working. Nevertheless, the Device Model still owns the overall
resource management such as memory/MMIO space and interrupt pins.

The ACRN Hypervisor provides a communication method that aligns the resource
information from the Device Model with the ACRN Hypervisor emulated device.


@ -3,14 +3,14 @@
Virtio-GPIO
###########
Virtio-gpio provides a virtual general-purpose input/output (GPIO) controller
that can map native GPIOs to a User VM. The User VM can perform GPIO operations
through it, including set value, get value, set direction, get direction, and
set configuration. Only Open Source and Open Drain types are currently
supported. GPIOs are often used as IRQs, typically for wakeup events.
Virtio-gpio supports level and edge interrupt trigger modes.
The virtio-gpio architecture is shown below:
.. figure:: images/virtio-gpio-1.png
:align: center
@ -18,20 +18,20 @@ The virtio-gpio architecture is shown below
Virtio-gpio Architecture
Virtio-gpio is implemented as a virtio legacy device in the ACRN Device
Model (DM), and is registered as a PCI virtio device to the guest OS. No
changes are required in the frontend Linux virtio-gpio except that the
guest (User VM) kernel should be built with ``CONFIG_VIRTIO_GPIO=y``.
Three virtqueues are used between FE and BE, one for GPIO
operations, one for IRQ requests, and one for IRQ event notification.
The virtio-gpio FE driver registers a gpiochip and irqchip when it is
probed. The BE generates the base and number of GPIOs. Each gpiochip or
irqchip operation (for example, ``get_direction`` of gpiochip or
``irq_set_type`` of irqchip) triggers a ``virtqueue_kick`` on its own
virtqueue. If a GPIO has been set to interrupt mode, the interrupt
events are handled within the IRQ virtqueue callback.
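
As a rough picture of the traffic on the operation virtqueue, each FE request
could carry a command code plus its arguments, as in the sketch below. The
layout and field names are assumptions, not the actual virtio-gpio protocol
definition:

.. code-block:: c

   #include <stdint.h>

   /* Sketch of one GPIO operation request/response pair as carried on
    * the operation virtqueue; illustrative layout only. */
   struct gpio_request {
       uint32_t cmd;      /* set/get value, set/get direction, config */
       uint32_t offset;   /* GPIO offset within the virtual gpiochip */
       uint64_t data;     /* value or direction argument, if any */
   };

   struct gpio_response {
       int8_t  err;       /* 0 on success, negative on failure */
       uint8_t data;      /* returned value for get operations */
   };
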
GPIO Mapping
************
@ -42,12 +42,12 @@ GPIO Mapping
GPIO mapping
- Each User VM has only one GPIO chip instance. The number of GPIOs is
based on the acrn-dm command line. The GPIO base always starts from 0.
- Each GPIO is exclusive. A User VM can't map the same native GPIO.
- For each acrn-dm, the maximum number of GPIOs is 64.
Usage
*****
@ -57,33 +57,33 @@ Add the following parameters into the command line::
-s <slot>,virtio-gpio,<@controller_name{offset|name[=mapping_name]:offset|name[=mapping_name]:...}@controller_name{...}...]>
- **controller_name**: Input ``ls /sys/bus/gpio/devices`` to check native
GPIO controller information. Usually, the devices represent the
controller_name, and you can use it as controller_name directly. You can
also input ``cat /sys/bus/gpio/device/XXX/dev`` to get a device ID that can
be used to match ``/dev/XXX``, then use XXX as the controller_name. On MRB
and Intel NUC platforms, the controller_name values are gpiochip0, gpiochip1,
gpiochip2, and gpiochip3.
- **offset|name**: You can use the GPIO offset or its name to locate one
native GPIO within the GPIO controller.
- **mapping_name**: This parameter is optional. If you want to use a customized
name for a FE GPIO, you can set a new name for a FE virtual GPIO.
Example
*******
- Map three native GPIOs to the User VM. They are native gpiochip0's GPIOs
  with offsets 1 and 6, plus the one named ``reset``. In the User VM, the
  three GPIOs have no names, and the base starts from 0::

-s 10,virtio-gpio,@gpiochip0{1:6:reset}
- Map four native GPIOs to the User VM. The native gpiochip0's GPIO with offset
1 and offset 6 map to FE virtual GPIO with offset 0 and offset 1 without
names. The native gpiochip0's GPIO with name ``reset`` maps to FE virtual
GPIO with offset 2 and its name is ``shutdown``. The native gpiochip1's GPIO
with offset 0 maps to FE virtual GPIO with offset 3 and its name is
``reset``::
-s 10,virtio-gpio,@gpiochip0{1:6:reset=shutdown}@gpiochip1{0=reset}