doc: update another few mis-handled titles

After the grand update of all titles to use title case, we found some more
that needed a manual tweak.

Signed-off-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Author: Geoffroy Van Cutsem, 2021-02-23 13:06:29 +01:00; committed by fitchbe
parent e14387bebf
commit 359f4ee6ea
17 changed files with 56 additions and 56 deletions

@@ -145,7 +145,7 @@ This section describes the wrap functions:
 .. _intel_gvt_ops_interface:
-GVT-g Intel_gvt_ops Interface
+GVT-g intel_gvt_ops Interface
 *****************************
 This section contains APIs for GVT-g intel_gvt_ops interface. Sources are found
@@ -186,14 +186,14 @@ in the `ACRN kernel GitHub repo`_
 .. _sysfs_interface:
-AcrnGT Sysfs Interface
-***********************
+AcrnGT sysfs Interface
+**********************
 This section contains APIs for the AcrnGT sysfs interface. Sources are found
 in the `ACRN kernel GitHub repo`_
-Sysfs Nodes
+sysfs Nodes
 ===========
 In the following examples, all accesses to these interfaces are via bash command

@@ -302,7 +302,7 @@ hypercall to the hypervisor. There are two exceptions:
 Architecture of ACRN VHM
-VHM Ioctl Interfaces
+VHM ioctl Interfaces
 ====================
 .. note:: Reference API documents for General interface, VM Management,

@@ -30,7 +30,7 @@ is allowed to put data into that sbuf in HV, and a single consumer is
 allowed to get data from sbuf in Service VM. Therefore, no lock is required to
 synchronize access by the producer and consumer.
-Sbuf APIs
+sbuf APIs
 =========
 The sbuf APIs are defined in ``hypervisor/include/debug/sbuf.h``.

@@ -608,7 +608,7 @@ APIs Provided by VBS-K Modules in Service OS
 virtio_vqs_index_get
 virtio_dev_reset
-VHOST APIS
+VHOST APIs
 ==========
 APIs Provided by DM

@@ -123,7 +123,7 @@ e820 info for all the guests.
 |        RESERVED        |
 +------------------------+
-Platform Info - Mptable
+Platform Info - mptable
 =======================
 ACRN, in partition mode, uses mptable to convey platform info to each
@@ -181,15 +181,15 @@ the Service VM startup in sharing mode.
 Inter-Processor Interrupt (IPI) Handling
 ========================================
-Guests W/O LAPIC Passthrough
-----------------------------
+Guests Without LAPIC Passthrough
+--------------------------------
 For guests without LAPIC passthrough, IPIs between guest CPUs are handled in
 the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld`
 for more details.
-Guests W/ LAPIC Passthrough
----------------------------
+Guests With LAPIC Passthrough
+-----------------------------
 ACRN supports passthrough if and only if the guest is using x2APIC mode
 for the vLAPIC. In LAPIC passthrough mode, writes to the Interrupt Command
@@ -291,8 +291,8 @@ writes are discarded.
 Interrupt Delivery
 ==================
-Guests W/O LAPIC Passthrough
-----------------------------
+Guests Without LAPIC Passthrough
+--------------------------------
 In partition mode of ACRN, interrupts stay disabled after a vmexit. The
 processor does not take interrupts when it is executing in VMX root
@@ -307,8 +307,8 @@ for device interrupts.
 :align: center
-Guests W/ LAPIC Passthrough
----------------------------
+Guests With LAPIC Passthrough
+-----------------------------
 For guests with LAPIC passthrough, ACRN does not configure vmexit upon
 external interrupts. There is no vmexit upon device interrupts and they are
@@ -320,13 +320,13 @@ Hypervisor IPI Service
 ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual
 device interrupts (e.g. vUART to vCPUs), and others.
-Guests W/O LAPIC Passthrough
-----------------------------
+Guests Without LAPIC Passthrough
+--------------------------------
 Hypervisor IPIs work the same way as in sharing mode.
-Guests W/ LAPIC Passthrough
----------------------------
+Guests With LAPIC Passthrough
+-----------------------------
 Since external interrupts are passthrough to the guest IDT, IPIs do not
 trigger vmexit. ACRN uses NMI delivery mode and the NMI exiting is
@@ -344,8 +344,8 @@ For a guest console in partition mode, ACRN provides an option to pass
 ``vmid`` as an argument to ``vm_console``. vmid is the same as the one
 developers use in the guest configuration.
-Guests W/O LAPIC Passthrough
-----------------------------
+Guests Without LAPIC Passthrough
+--------------------------------
 Works the same way as sharing mode.

@@ -1,6 +1,6 @@
 .. _ivshmem-hld:
-ACRN Shared Memory Based Inter-Vm Communication
+ACRN Shared Memory Based Inter-VM Communication
 ###############################################
 ACRN supports inter-virtual machine communication based on a shared
@@ -8,7 +8,7 @@ memory mechanism. The ACRN device model or hypervisor emulates a virtual
 PCI device (called an ``ivshmem`` device) to expose the base address and
 size of this shared memory.
-Inter-Vm Communication Overview
+Inter-VM Communication Overview
 *******************************
 .. figure:: images/ivshmem-architecture.png
@@ -129,11 +129,11 @@ Usage
 For usage information, see :ref:`enable_ivshmem`
-Inter-Vm Communication Security Hardening (BKMs)
+Inter-VM Communication Security Hardening (BKMs)
 ************************************************
 As previously highlighted, ACRN 2.0 provides the capability to create shared
-memory regions between Post-Launch user VMs known as "Inter-VM Communication".
+memory regions between Post-Launched User VMs known as "Inter-VM Communication".
 This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the
 shared regions (in Service VM's memory for this release). This feature adopts a
 community-approved design for shared memory between VMs, following same

@@ -35,7 +35,7 @@ The feature bits supported by the BE device are shown as follows:
 Device can toggle its cache between writeback and writethrough modes.
-Virtio-BLK-Be Design
+Virtio-BLK BE Design
 ********************
 .. figure:: images/virtio-blk-image02.png

@@ -1,6 +1,6 @@
 .. _virtio-gpio:
-Virtio-Gpio
+Virtio-GPIO
 ###########
 virtio-gpio provides a virtual GPIO controller, which will map part of

@@ -1,6 +1,6 @@
 .. _virtio-i2c:
-Virtio-I2c
+Virtio-I2C
 ##########
 Virtio-i2c provides a virtual I2C adapter that supports mapping multiple

@@ -243,7 +243,7 @@ Creating UEFI Secure Boot Key
 The keys to be enrolled in UEFI firmware: :file:`PK.der`, :file:`KEK.der`, :file:`db.der`.
 The keys to sign bootloader image: :file:`grubx64.efi`, :file:`db.key` , :file:`db.crt`.
-Sign GRUB Image With Db Key
+Sign GRUB Image With db Key
 ===========================
 sbsign --key db.key --cert db.crt path/to/grubx64.efi

@@ -1,6 +1,6 @@
 .. _enable_ivshmem:
-Enable Inter-Vm Communication Based on Ivshmem
+Enable Inter-VM Communication Based on Ivshmem
 ##############################################
 You can use inter-VM communication based on the ``ivshmem`` dm-land
@@ -9,7 +9,7 @@ solution or hv-land solution, according to the usage scenario needs.
 While both solutions can be used at the same time, VMs using different
 solutions cannot communicate with each other.
-Ivshmem Dm-Land Usage
+Ivshmem DM-Land Usage
 *********************
 Add this line as an ``acrn-dm`` boot parameter::
@@ -35,7 +35,7 @@ where
 .. _ivshmem-hv:
-Ivshmem Hv-Land Usage
+Ivshmem HV-Land Usage
 *********************
 The ``ivshmem`` hv-land solution is disabled by default in ACRN. You
@@ -94,10 +94,10 @@ to applications.
 .. note:: Notification is supported only for HV-land ivshmem devices. (Future
 support may include notification for DM-land ivshmem devices.)
-Inter-Vm Communication Examples
+Inter-VM Communication Examples
 *******************************
-Dm-Land Example
+DM-Land Example
 ===============
 This example uses dm-land inter-VM communication between two
@@ -167,7 +167,7 @@ Linux-based post-launched VMs (VM1 and VM2).
 - For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
 - For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
-Hv-Land Example
+HV-Land Example
 ===============
 This example uses hv-land inter-VM communication between two

@@ -9,7 +9,7 @@ real-time performance analysis. Two parts are included:
 - Method to trace ``vmexit`` occurrences for analysis.
 - Method to collect Performance Monitoring Counters information for tuning based on Performance Monitoring Unit, or PMU.
-Vmexit Analysis for ACRN RT Performance
+vmexit Analysis for ACRN RT Performance
 ***************************************
 ``vmexit`` are triggered in response to certain instructions and events and are
@@ -149,7 +149,7 @@ Note that Precise Event Based Sampling (PEBS) is not yet enabled in the VM.
 value64 = hva2hpa(vcpu->arch.msr_bitmap);
 exec_vmwrite64(VMX_MSR_BITMAP_FULL, value64);
-Perf/Pmu Tools in Performance Analysis
+Perf/PMU Tools in Performance Analysis
 ======================================
 After exposing PMU-related CPUID/MSRs to the VM, performance analysis tools

@@ -256,7 +256,7 @@ section, we'll focus on two major components:
 See :ref:`trusty_tee` for additional details of Trusty implementation in
 ACRN.
-One-Vm, Two-Worlds
+One-VM, Two-Worlds
 ==================
 As previously mentioned, Trusty Secure Monitor could be any

@@ -257,7 +257,7 @@ ACRN Windows Verified Feature List
 , "Microsoft Store", "OK"
 , "3D Viewer", "OK"
-Explanation for Acrn-Dm Popular Command Lines
+Explanation for acrn-dm Popular Command Lines
 *********************************************
 .. note:: Use these acrn-dm command line entries according to your

@@ -16,7 +16,7 @@ components, and software components. Layers are repositories containing
 related sets of instructions that tell the Yocto Project build system
 what to do.
-The Meta-Acrn Layer
+The meta-acrn Layer
 *******************
 The meta-acrn layer integrates the ACRN hypervisor with OpenEmbedded,

@@ -59,7 +59,7 @@ Command Examples
 The following sections provide further details and examples for some of these commands.
-Vm_list
+vm_list
 =======
 ``vm_list`` provides the name of each virtual machine and its corresponding ID and
@@ -70,7 +70,7 @@ state.
 vm_list information
-Vcpu_list
+vcpu_list
 =========
 ``vcpu_list`` provides information about virtual CPUs (vCPU), including
@@ -82,7 +82,7 @@ STATE (init, paused, running, zombie or unknown).
 vcpu_list information
-Vcpu_dumpreg
+vcpu_dumpreg
 ============
 ``vcpu_dumpreg vmid cpuid`` provides vCPU related information such as
@@ -107,7 +107,7 @@ function ``acpi_idle_do_entry``.
 system map information
-Dump_host_mem
+dump_host_mem
 =============
 ``dump_host_mem hva length`` provides the specified memory target data such as
@@ -132,7 +132,7 @@ pCPU number is 0x0000000000000004.
 acrn map information
-Dump_guest_mem
+dump_guest_mem
 ==============
 The ``dump_guest_mem`` command can dump guest memory according to the given
@@ -152,13 +152,13 @@ in guest console or through the ``system.map`` (Note that the path for
 guest memory information
-Vm_console
+vm_console
 ===========
 The ``vm_console`` command switches the ACRN's console to become the VM's console.
 Press :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN shell console.
-Vioapic
+vioapic
 =======
 ``vioapic <vm_id>`` shows the virtual IOAPIC information for a specific
@@ -170,7 +170,7 @@ VM1:
 vioapic information
-Dump_ioapic
+dump_ioapic
 ===========
 ``dump_ioapic`` provides IOAPIC information and we can get IRQ number,
@@ -181,7 +181,7 @@ IRQ vector number, etc.
 dump_ioapic information
-Pt
+pt
 ==
 ``pt`` provides passthrough detailed information, such as the virtual
@@ -193,7 +193,7 @@ trigger mode, etc.
 pt information
-Int
+int
 ===
 ``int`` provides interrupt information on all CPUs and their corresponding
@@ -204,7 +204,7 @@ interrupt vector.
 int information
-Cpuid
+cpuid
 =====
 ``cpuid <leaf> [subleaf]`` provides the CPUID leaf [subleaf] in
@@ -215,7 +215,7 @@ hexadecimal.
 cpuid information
-RDMSR
+rdmsr
 =====
 We can read model specific register (MSR) to get register
@@ -238,7 +238,7 @@ and see that 1B (Hexadecimal) is the IA32_APIC_BASE MSR address.
 rdmsr information
-WRMSR
+wrmsr
 =====
 We can write model specific register (MSR) to set register

@@ -385,7 +385,7 @@ GVT-g (AcrnGT) Kernel Options Details
 This section provides additional information and details on the kernel command
 line options that are related to AcrnGT.
-I915.enable_gvt
+i915.enable_gvt
 ---------------
 This option enables support for Intel GVT-g graphics virtualization
@@ -393,7 +393,7 @@ support in the host. By default, it's not enabled, so we need to add
 ``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service
 OS only parameter, and cannot be enabled in the User VM.
-I915.enable_hangcheck
+i915.enable_hangcheck
 =====================
 This parameter enable detection of a GPU hang. When enabled, the i915