Update pages with missing links

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Deb Taylor 2019-08-24 19:47:33 -04:00 committed by David Kinder
parent 2d57c5feb7
commit 6ca4095d9c
10 changed files with 51 additions and 37 deletions

View File

@ -36,11 +36,11 @@ framework. There are 3 major subsystems in SOS:
- VHM driver notifies HV on the completion through hypercall
- DM injects VIRQ to UOS frontend device through hypercall
-- VHM: Virtio and HV service Module is a kernel module in SOS as a
-middle layer to support DM. Refer to chapter 5.4 for details
+- VHM: Virtio and Hypervisor Service Module is a kernel module in SOS as a
+middle layer to support DM. Refer to :ref:`virtio-APIs` for details
-This chapter introduces how the acrn-dm application is configured and
-walks through the DM overall flow. We'll then elaborate on device,
+This section introduces how the acrn-dm application is configured and
+walks through the DM overall flow. We'll then elaborate on device,
ISA, and PCI emulation.
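To make the DM-to-VHM path above concrete, here is a minimal user-space sketch of the shape such a call takes: the DM opens the VHM character device and issues an ioctl, and the kernel module turns it into a hypercall that injects the virtual interrupt. The device path, ioctl number, and argument layout are illustrative placeholders, not the actual VHM interface.

.. code-block:: c

   /* Illustrative only: device path, ioctl number and struct are placeholders. */
   #include <fcntl.h>
   #include <stdint.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   #define VHM_DEV          "/dev/acrn_vhm"                      /* assumed node */
   #define VHM_INJECT_VIRQ  _IOW('A', 0x10, struct virq_args)    /* placeholder  */

   struct virq_args {
       uint16_t vmid;     /* target UOS                                */
       uint32_t vector;   /* virtual interrupt for the frontend driver */
   };

   static int inject_virq(uint16_t vmid, uint32_t vector)
   {
       struct virq_args args = { .vmid = vmid, .vector = vector };
       int fd = open(VHM_DEV, O_RDWR);
       int ret;

       if (fd < 0)
           return -1;
       /* VHM converts this request into the corresponding hypercall to HV. */
       ret = ioctl(fd, VHM_INJECT_VIRQ, &args);
       close(fd);
       return ret;
   }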
Configuration
@ -140,8 +140,8 @@ DM Initialization
allocated by DM for a specific VM in user space. This buffer is
shared between DM, VHM and HV. **Set I/O Request buffer** calls
an ioctl executing a hypercall to share this unique page buffer
-with VHM and HV. Please refer to chapter 3.4 and 4.4 for more
-details.
+with VHM and HV. Refer to :ref:`hld-io-emulation` and
+:ref:`IO-emulation-in-sos` for more details.
- **Memory Setup**: UOS memory is allocated from SOS
memory. This section of memory will use SOS hugetlbfs to allocate
@ -158,11 +158,11 @@ DM Initialization
API and PIO handler by *register_inout()* API or INOUT_PORT()
macro.
-- **PCI Init**: PCI initialization scans PCI bus/slot/function to
+- **PCI Init**: PCI initialization scans the PCI bus/slot/function to
identify each configured PCI device on the acrn-dm command line
and initializes their configuration space by calling their
-dedicated vdev_init() function. For more detail of DM PCI
-emulation please refer to section 4.6.
+dedicated vdev_init() function. For more details on the DM PCI
+emulation, refer to `PCI Emulation`_.
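As a rough illustration of the PIO and PCI setup steps above, the fragment below registers a handler for one I/O port using the bhyve-derived ``INOUT_PORT()`` style mentioned earlier; the handler signature and flag name are approximations of what the real ``inout.h`` declares, so treat this as a sketch only.

.. code-block:: c

   /* Fragment in the acrn-dm context: struct vmctx, INOUT_PORT and
    * IOPORT_F_INOUT are assumed to come from the DM's inout.h. */
   #include <stdint.h>

   static int
   dummy_port_handler(struct vmctx *ctx, int vcpu, int in, int port,
                      int bytes, uint32_t *eax, void *arg)
   {
       if (in)
           *eax = 0xffffffff;   /* reads return all 1's          */
       return 0;                /* writes are silently discarded */
   }

   /* Register the handler for port 0x511, for both directions. */
   INOUT_PORT(dummy, 0x511, IOPORT_F_INOUT, dummy_port_handler);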
- **ACPI Build**: If there is "-A" option in acrn-dm command line, DM
will build ACPI table into its VM's F-Segment (0xf2400). This
@ -296,6 +296,8 @@ VHM ioctl interfaces
IRQ and Interrupts, Device Model management, Guest Memory management,
PCI assignment, and Power management
+.. _IO-emulation-in-sos:
I/O Emulation in SOS
********************

View File

@ -33,7 +33,7 @@ synchronize access by the producer and consumer.
sbuf APIs
=========
-.. note:: reference APIs defined in hypervisor/include/debug/sbuf.h
+The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``
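For background on those APIs, a shared buffer of this kind is essentially a single-producer, single-consumer ring indexed by head and tail counters. The sketch below illustrates the idea only; it does not mirror the actual ``shared_buf`` layout in that header.

.. code-block:: c

   /* Minimal SPSC ring sketch; not the actual shared_buf layout. */
   #include <stdint.h>
   #include <string.h>

   struct demo_sbuf {
       uint32_t head;       /* next slot the consumer will read   */
       uint32_t tail;       /* next slot the producer will write  */
       uint32_t ele_num;    /* number of fixed-size elements      */
       uint32_t ele_size;   /* size of one element in bytes       */
       uint8_t  data[];     /* ele_num * ele_size bytes           */
   };

   /* Producer side (hypervisor): drop the element if the ring is full. */
   static int demo_sbuf_put(struct demo_sbuf *s, const void *ele)
   {
       uint32_t next = (s->tail + 1U) % s->ele_num;

       if (next == s->head)
           return -1;                          /* full: event is lost */
       memcpy(&s->data[s->tail * s->ele_size], ele, s->ele_size);
       s->tail = next;
       return 0;
   }

   /* Consumer side (SOS user space): returns 0 when the ring is empty. */
   static int demo_sbuf_get(struct demo_sbuf *s, void *ele)
   {
       if (s->head == s->tail)
           return 0;
       memcpy(ele, &s->data[s->head * s->ele_size], s->ele_size);
       s->head = (s->head + 1U) % s->ele_num;
       return (int)s->ele_size;
   }

Because the hypervisor only writes and the SOS-side reader only reads, each side updates just one index, which is what lets the two sides synchronize access without a lock.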
ACRN Trace
@ -67,8 +67,8 @@ up:
Trace APIs
==========
-.. note:: reference APIs defined in hypervisor/include/debug/trace.h
-for trace_entry struct and functions.
+See ``hypervisor/include/debug/trace.h``
+for trace_entry struct and function APIs.
SOS Trace Module
@ -92,8 +92,10 @@ ACRNTrace application includes a binary to retrieve trace data from
Sbuf, and Python scripts to convert trace data from raw format into
readable text, and do analysis.
-Figure 2.2 shows the sequence of trace initialization and trace data
-collection. With a debug build, trace components are initialized at boot
+.. note:: There was no Figure showing the sequence of trace
+initialization and trace data collection.
+With a debug build, trace components are initialized at boot
time. After initialization, HV writes trace event data into sbuf
until sbuf is full, which can happen easily if the ACRNTrace app is not
consuming trace data from Sbuf on SOS user space.
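A hedged sketch of the reader described here: it drains fixed-size raw trace entries from a per-CPU trace node and appends them to a file for the Python scripts to post-process. The device path and record size are assumptions, not the real ACRNTrace implementation.

.. code-block:: c

   /* Illustrative reader loop; device path and record size are placeholders. */
   #include <fcntl.h>
   #include <stdint.h>
   #include <unistd.h>

   #define TRACE_DEV   "/dev/acrn_trace_0"   /* assumed per-CPU trace node */
   #define ENTRY_SIZE  32                    /* assumed raw record size    */

   static void drain_trace(const char *out_path)
   {
       uint8_t rec[ENTRY_SIZE];
       int in = open(TRACE_DEV, O_RDONLY);
       int out = open(out_path, O_WRONLY | O_CREAT | O_APPEND, 0644);

       if (in < 0 || out < 0)
           return;
       for (;;) {
           ssize_t n = read(in, rec, sizeof(rec));
           if (n <= 0) {          /* sbuf drained: sleep, then poll again */
               usleep(10000);
               continue;
           }
           write(out, rec, (size_t)n);
       }
   }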
@ -103,7 +105,6 @@ created to periodically read RAW trace data from sbuf and write to a
file.
-.. note:: figure is missing
Figure 2.2 Sequence of trace init and trace data collection
These are the Python scripts provided:

View File

@ -398,6 +398,8 @@ The workflow can be summarized as:
6. irqfd related logic injects an interrupt through the vhm interrupt API.
7. The interrupt is delivered to the UOS FE driver through the hypervisor.
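To illustrate steps 6 and 7, the sketch below shows the usual irqfd pattern: the backend binds an eventfd to a guest MSI through the VHM device, and later signals that eventfd to request injection. The ioctl number and argument struct are placeholders for whatever the real registration interface looks like.

.. code-block:: c

   /* irqfd pattern sketch; ioctl number and struct are placeholders. */
   #include <stdint.h>
   #include <sys/eventfd.h>
   #include <sys/ioctl.h>
   #include <unistd.h>

   #define VHM_REGISTER_IRQFD  _IOW('A', 0x20, struct irqfd_args)  /* placeholder */

   struct irqfd_args {
       int      fd;        /* eventfd to watch             */
       uint64_t msi_addr;  /* guest MSI address to inject  */
       uint32_t msi_data;  /* guest MSI data               */
   };

   /* Backend setup: bind an eventfd to a guest MSI via the VHM device fd. */
   static int setup_irqfd(int vhm_fd, uint64_t msi_addr, uint32_t msi_data)
   {
       struct irqfd_args args = {
           .fd = eventfd(0, EFD_NONBLOCK),
           .msi_addr = msi_addr,
           .msi_data = msi_data,
       };

       if (args.fd < 0)
           return -1;
       if (ioctl(vhm_fd, VHM_REGISTER_IRQFD, &args) < 0) {
           close(args.fd);
           return -1;
       }
       return args.fd;   /* signaling this fd asks VHM to inject the MSI */
   }

After setup, the backend only needs ``eventfd_write(fd, 1)`` on each completion; the kernel-side irqfd logic performs step 6 and the hypervisor delivers the interrupt to the UOS frontend driver (step 7).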
+.. _virtio-APIs:
Virtio APIs
***********

View File

@ -484,9 +484,9 @@ A bitmap in the vCPU structure lists the different requests:
ACRN provides the function *vcpu_make_request* to make different
-requests, set the bitmap of corresponding request, and notify the target vCPU
-through IPI if necessary (when the target vCPU is not currently running). See
-section 3.5.5 for details.
+requests, set the bitmap of the corresponding request, and notify the target
+vCPU through the IPI if necessary (when the target vCPU is not currently
+running). See :ref:`vcpu-request-interrupt-injection` for details.
.. code-block:: c
@ -579,8 +579,8 @@ entry control and exit control, as shown in the table below.
The table briefly shows how each field got configured.
The guest state field is critical for a guest CPU start to run
based on different CPU modes. One structure *boot_ctx* is used to pass
-the necessary initialized guest state to VMX,
-used only for the BSP of a guest.
+the necessary initialized guest state to VMX, used only for the BSP of a
+guest.
For a guest vCPU's state initialization:
@ -879,9 +879,9 @@ exit reason for reading or writing these MSRs is respectively
*VMX_EXIT_REASON_RDMSR* or *VMX_EXIT_REASON_WRMSR* and the vm exit
handler is *rdmsr_vmexit_handler* or *wrmsr_vmexit_handler*.
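As a reminder of how the trap-versus-pass-through decision is encoded, the helper below sets the intercept bits for one MSR in a VMX MSR bitmap page (low-MSR range only, for brevity). It illustrates the hardware mechanism rather than ACRN's actual bitmap code.

.. code-block:: c

   /* Illustrative MSR-bitmap helper; not ACRN's actual implementation. */
   #include <stdint.h>

   /* VMX MSR bitmap: a 1 KB read bitmap, then (at offset 2 KB) a 1 KB write
    * bitmap, both for MSRs 0x0 - 0x1FFF; the high-MSR ranges are omitted. */
   static void trap_low_msr(uint8_t *bitmap_page, uint32_t msr,
                            int trap_read, int trap_write)
   {
       uint32_t byte = msr >> 3;
       uint8_t  mask = (uint8_t)(1U << (msr & 7U));

       if (trap_read)
           bitmap_page[byte] |= mask;          /* RDMSR now causes a VM exit */
       if (trap_write)
           bitmap_page[2048 + byte] |= mask;   /* WRMSR now causes a VM exit */
   }

MSRs whose bits stay clear never reach *rdmsr_vmexit_handler* or *wrmsr_vmexit_handler*; the guest accesses them directly.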
-This table shows the predefined MSRs ACRN will trap
-for all the guests. For the MSRs whose bitmap are not set in the
-MSR bitmap, guest access will be pass-through directly:
+This table shows the predefined MSRs ACRN will trap for all the guests. For
+the MSRs whose bits are not set in the MSR bitmap, guest access will be
+passed through directly:
.. list-table::
:widths: 33 33 33

View File

@ -131,6 +131,8 @@ model.
this, device model is linked with lib pci access to access physical PCI
device.
+.. _interrupt-remapping:
Interrupt Remapping
*******************

View File

@ -64,9 +64,9 @@ default assigned to SOS. Any interrupts received by Guest VM (SOS or
UOS) device drivers are virtual interrupts injected by HV (via vLAPIC).
HV manages a Host-to-Guest mapping. When a native IRQ/interrupt occurs,
HV decides whether this IRQ/interrupt should be forwarded to a VM and
-which VM to forward to (if any). Refer to section 3.7.6 for virtual
-interrupt injection and section 3.9.6 for the management of interrupt
-remapping.
+which VM to forward to (if any). Refer to
+:ref:`virt-interrupt-injection` and :ref:`interrupt-remapping` for
+more information.
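A simplified sketch of that forwarding decision, with placeholder types and helpers rather than ACRN's actual remapping structures:

.. code-block:: c

   /* Illustrative forwarding path; types and helpers are placeholders. */
   #include <stddef.h>

   struct demo_vm;                        /* placeholder VM handle             */

   struct irq_route {
       struct demo_vm *vm;                /* owning VM, or NULL if HV keeps it */
       unsigned int    virq;              /* virtual vector seen by the guest  */
   };

   extern struct irq_route irq_routes[256];                   /* host-to-guest map */
   void demo_vlapic_inject(struct demo_vm *vm, unsigned int virq);  /* placeholder */

   static void on_physical_irq(unsigned int phys_vector)
   {
       struct irq_route *r = &irq_routes[phys_vector];

       if (r->vm != NULL)
           /* Mapping hit: forward as a virtual interrupt via the vLAPIC. */
           demo_vlapic_inject(r->vm, r->virq);
       /* else: the interrupt is handled inside the hypervisor itself. */
   }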
HV does not own any exceptions. Guest VMCS are configured so no VM Exit
happens, with some exceptions such as #INT3 and #MC. This is to
@ -88,6 +88,8 @@ sources:
- Inter CPU IPI
- LAPIC timer
+.. _physical-interrupt-initialization:
Physical Interrupt Initialization
*********************************
@ -356,8 +358,7 @@ IPI Management
The only purpose of IPI use in HV is to kick a vCPU out of non-root mode
and enter HV mode. This requires I/O request and virtual interrupt
injection be distributed to different IPI vectors. The I/O request uses
-IPI vector 0xF4 upcall (refer to Chapter 5.4). The virtual interrupt
-injection uses IPI vector 0xF0.
+IPI vector 0xF4 upcall. The virtual interrupt injection uses IPI vector 0xF0.
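For reference, the two fixed vectors can be pictured as constants handed to an IPI-send primitive; the function name below is a placeholder for the hypervisor's real IPI API.

.. code-block:: c

   /* Vector assignments from the text; send_ipi_to() is a placeholder name. */
   #define VECTOR_UPCALL      0xF4U   /* I/O request notification to SOS     */
   #define VECTOR_VIRQ_KICK   0xF0U   /* kick a vCPU for interrupt injection */

   void send_ipi_to(unsigned int pcpu_id, unsigned int vector);   /* placeholder */

   static void kick_vcpu_for_virq(unsigned int pcpu_id)
   {
       /* Forces the target vCPU to VM-exit so pending requests are handled. */
       send_ipi_to(pcpu_id, VECTOR_VIRQ_KICK);
   }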
0xF4 upcall
A guest vCPU VM exit occurs due to an EPT violation or I/O instruction trap.

View File

@ -116,7 +116,7 @@ buffer on VM creation, otherwise I/O accesses from UOS cannot be
emulated by SOS, and all I/O accesses not handled by the I/O handlers in
the hypervisor will be dropped (reads get all 1's).
-Refer to Section 4.4.1 for the details of I/O requests and the
+Refer to the following sections for details on I/O requests and the
initialization of the I/O request buffer.
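The sketch below shows the general shape of that initialization from the device model side: one page is allocated to hold an I/O request slot per vCPU and is handed over through an ioctl. The slot layout and ioctl number are placeholders, not the real interface.

.. code-block:: c

   /* Sketch only: the slot type and ioctl below are placeholders. */
   #include <stdint.h>
   #include <stdlib.h>
   #include <sys/ioctl.h>

   #define SET_IOREQ_BUFFER  _IOW('A', 0x30, uint64_t)   /* placeholder */

   struct demo_ioreq {                /* one in-flight I/O request per vCPU */
       uint32_t type;                 /* PIO, MMIO, PCI config, ...         */
       uint32_t state;                /* free / pending / processing / done */
       uint64_t addr, size, value;
   };

   static struct demo_ioreq *setup_ioreq_buffer(int vm_fd)
   {
       /* One 4 KB page shared by DM, VHM and HV; slots are indexed by vCPU id. */
       struct demo_ioreq *buf = aligned_alloc(4096, 4096);
       uint64_t buf_addr = (uint64_t)(uintptr_t)buf;

       if (buf == NULL || ioctl(vm_fd, SET_IOREQ_BUFFER, &buf_addr) < 0)
           return NULL;
       return buf;   /* without this buffer, UOS I/O cannot be emulated by SOS */
   }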
Types of I/O Requests
@ -267,8 +267,7 @@ External Interfaces
The following structures represent an I/O request. *struct vhm_request*
is the main structure and the others are detailed representations of I/O
-requests of different kinds. Refer to Section 4.4.4 for the usage of
-*struct pci_request*.
+requests of different kinds.
.. doxygenstruct:: mmio_request
:project: Project ACRN
@ -285,7 +284,7 @@ requests of different kinds. Refer to Section 4.4.4 for the usage of
.. doxygenstruct:: vhm_request
:project: Project ACRN
-For hypercalls related to I/O emulation, refer to Section 3.11.4.
+For hypercalls related to I/O emulation, refer to `I/O Emulation in the Hypervisor`_.
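To show how a SOS-side handler typically consumes these structures, here is a hedged dispatch over one request slot; the field and constant names only approximate the documented layout, so treat them as illustrative.

.. code-block:: c

   /* Approximate names; consult the real vhm_request definition in the headers. */
   #include <stdint.h>

   enum { DEMO_REQ_PIO, DEMO_REQ_MMIO, DEMO_REQ_PCICFG };
   enum { DEMO_REQ_PENDING, DEMO_REQ_COMPLETE };

   struct demo_request {
       uint32_t type;
       uint32_t state;
       union {
           struct { uint64_t address, size, value; int direction; } mmio;
           struct { uint16_t port; uint64_t size, value; int direction; } pio;
       } reqs;
   };

   void emulate_pio(void *req);    /* placeholder emulation hooks */
   void emulate_mmio(void *req);

   static void handle_slot(struct demo_request *req)
   {
       if (req->state != DEMO_REQ_PENDING)
           return;

       switch (req->type) {
       case DEMO_REQ_PIO:
           emulate_pio(&req->reqs.pio);
           break;
       case DEMO_REQ_MMIO:
       case DEMO_REQ_PCICFG:   /* PCI config folded into MMIO-style handling here */
           emulate_mmio(&req->reqs.mmio);
           break;
       }
       /* Marking the slot complete lets the hypervisor resume the waiting vCPU. */
       req->state = DEMO_REQ_COMPLETE;
   }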
.. _io-handler-init:

View File

@ -95,7 +95,7 @@ Memory
interrupt stack table (IST) which are different across physical
processors. LDT is disabled.
-Refer to section 3.5.2 for a detailed description of interrupt-related
+Refer to :ref:`physical-interrupt-initialization` for a detailed description of interrupt-related
initial states, including IDT and physical PICs.
After BSP detects that all APs are up, BSP will start creating the first

View File

@ -14,9 +14,10 @@ management, which includes:
A guest VM never owns any physical interrupts. All interrupts received by
Guest OS come from a virtual interrupt injected by vLAPIC, vIOAPIC or
vPIC. Such virtual interrupts are triggered either from a pass-through
-device or from I/O mediators in SOS via hypercalls. Section 3.8.6
-introduces how the hypervisor manages the mapping between physical and
-virtual interrupts for pass-through devices.
+device or from I/O mediators in SOS via hypercalls. The
+:ref:`interrupt-remapping` section discusses how the hypervisor manages
+the mapping between physical and virtual interrupts for pass-through
+devices.
Emulation for devices is inside SOS user space device model, i.e.,
acrn-dm. However for performance consideration: vLAPIC, vIOAPIC, and vPIC
@ -33,6 +34,8 @@ options to guest Linux affects whether it uses PIC or IOAPIC:
- **Kernel boot param with vIOAPIC**: add "maxcpu=1" (as long as not "0")
Guest OS will use the IOAPIC and keep IOAPIC pin2 as the PIC interrupt source.
+.. _vcpu-request-interrupt-injection:
vCPU Request for Interrupt Injection
************************************
@ -213,6 +216,8 @@ ACRN hypervisor uses the *vcpu_inject_gp/vcpu_inject_pf* functions
to queue exception request, and follows SDM vol3 - 6.15, Table 6-5 to
generate double fault if the condition is met.
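As a usage illustration, an MSR-write path might queue a #GP for a malformed value as sketched below; the vCPU type and the parameter list are approximations of the real prototypes, and the MSR address and mask are made up for the example.

.. code-block:: c

   /* Approximate prototypes; see the hypervisor sources for the real ones. */
   #include <stdint.h>

   #define DEMO_MSR_ADDR   0x123U            /* made-up MSR for the example   */
   #define DEMO_RSVD_BITS  0xFFFF0000ULL     /* made-up reserved-bit mask     */

   struct acrn_vcpu;
   void vcpu_inject_gp(struct acrn_vcpu *vcpu, uint32_t err_code);

   static int demo_wrmsr_handler(struct acrn_vcpu *vcpu,
                                 uint32_t msr, uint64_t value)
   {
       if (msr == DEMO_MSR_ADDR && (value & DEMO_RSVD_BITS) != 0ULL) {
           /* Queue a #GP(0); it is delivered on the next VM entry. */
           vcpu_inject_gp(vcpu, 0U);
           return -1;
       }
       return 0;   /* otherwise emulate the write normally */
   }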
+.. _virt-interrupt-injection:
Virtual Interrupt Injection
***************************
@ -223,7 +228,7 @@ devices.
directly. Whenever there is a device's physical interrupt, the
corresponding virtual interrupts are injected to SOS via
vLAPIC/vIOAPIC. SOS does not use vPIC and does not have emulated
-devices. See section 3.8.5 Device assignment.
+devices. See :ref:`device-assignment`.
- **For UOS assigned devices**: only PCI devices could be assigned to
UOS. Virtual interrupt injection follows the same way as SOS. A

View File

@ -257,6 +257,8 @@ as address translation table when creating SOS_VM as Service OS. And all
PCI devices on the platform are added to SOS_VM domain. Then enable DMAR
translation for DMAR unit(s) if they are not marked as ignored.
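Conceptually, that boot-time setup looks like the sketch below: build the SOS_VM domain around the Service OS page tables, park every PCI device in it, and then switch on translation for the usable DMAR units. All names here are placeholders rather than the real VT-d code.

.. code-block:: c

   /* Conceptual sketch; all types and helpers here are placeholders. */
   struct demo_domain;                 /* wraps an address translation table */

   struct demo_domain *demo_create_domain(void *sos_pgtable);
   void demo_add_device(struct demo_domain *d, unsigned int bus, unsigned int devfun);
   void demo_enable_translation(unsigned int dmar_unit);
   int  demo_dmar_is_ignored(unsigned int dmar_unit);

   static void init_sos_vm_dmar(void *sos_pgtable, unsigned int nr_dmar_units)
   {
       struct demo_domain *sos_dom = demo_create_domain(sos_pgtable);

       /* Every PCI device on the platform starts out in the SOS_VM domain. */
       for (unsigned int bus = 0U; bus < 256U; bus++)
           for (unsigned int devfun = 0U; devfun < 256U; devfun++)
               demo_add_device(sos_dom, bus, devfun);

       /* Enable translation only for DMAR units not marked as ignored. */
       for (unsigned int u = 0U; u < nr_dmar_units; u++)
           if (!demo_dmar_is_ignored(u))
               demo_enable_translation(u);
   }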
+.. _device-assignment:
Device assignment
*****************