Doc: Grammatical edits to CPU Virtualization doc.

Signed-off-by: Deb Taylor <deb.taylor@intel.com>
Deb Taylor 2020-05-18 18:36:07 -04:00 committed by deb-intel
parent 87adc93c96
commit 9d3f2066c1


@@ -9,81 +9,83 @@ CPU Virtualization
ACRN Hypervisor CPU Virtualization Components

The following sections discuss the major modules (indicated above in blue)
in the CPU virtualization overview shown in :numref:`hv-cpu-virt-components`.

Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU
(vCPU) with the following methods:

- **core partition**: one vCPU is dedicated and associated with one
  physical CPU (pCPU), making much of the hardware register emulation simply
  passthrough. This provides good isolation for physical interrupts
  and guest execution. (See `Static CPU Partitioning`_ for more
  information.)

- **core sharing** (to be added): two or more vCPUs share one
  physical CPU (pCPU). A more complicated context switch is needed
  when switching between vCPUs. This provides flexible sharing of computing
  resources for vCPU tasks with low performance demands.
  (See `Flexible CPU Sharing`_ for more information.)

- **simple schedule**: a well-designed scheduler framework that allows ACRN
  to adopt different scheduling policies, such as the **noop** and
  **round-robin** schedulers (a simplified sketch of the noop policy follows
  this list):

  - **noop scheduler**: only two thread loops are maintained for a CPU: a
    vCPU thread and a default idle thread. A CPU runs most of the time in
    the vCPU thread for emulating a guest CPU, switching between VMX root
    mode and non-root mode. A CPU schedules out to default idle when an
    operation needs it to stay in VMX root mode, such as when waiting for
    an I/O request from the DM or when ready to destroy.

  - **round-robin scheduler** (to be added): allows more vCPU thread loops
    to run on a CPU. A CPU switches among the different vCPU threads and the
    default idle thread as their timeslices run out or when they must be
    scheduled out, such as when waiting for an I/O request. A vCPU can also
    yield itself, for example when it executes the ``PAUSE`` instruction.

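The noop policy above can be pictured with a minimal sketch. All type, field,
and helper names (such as ``thread_is_runnable()``) are assumptions made for
illustration; they are not the actual ACRN scheduler API:

.. code-block:: c

   #include <stdbool.h>
   #include <stddef.h>

   /* Hypothetical thread object; the real ACRN structures differ. */
   struct thread_obj;
   extern bool thread_is_runnable(const struct thread_obj *t);

   /* Per-pCPU control block for a noop-style policy: exactly one vCPU
    * thread plus a default idle thread. */
   struct noop_ctl {
       struct thread_obj *vcpu_thread;  /* emulates the guest CPU */
       struct thread_obj *idle_thread;  /* runs while staying in VMX root mode */
   };

   static struct thread_obj *noop_pick_next(const struct noop_ctl *ctl)
   {
       /* Run the vCPU thread whenever it is runnable; otherwise fall back
        * to idle, e.g. while waiting for an I/O request from the DM or
        * while the vCPU is being destroyed. */
       if ((ctl->vcpu_thread != NULL) && thread_is_runnable(ctl->vcpu_thread)) {
           return ctl->vcpu_thread;
       }
       return ctl->idle_thread;
   }
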
Static CPU Partitioning
***********************

CPU partitioning is a policy for mapping a virtual
CPU (vCPU) to a physical CPU. To enable this, the ACRN hypervisor can
configure the noop scheduler as the scheduling policy for that physical CPU.
ACRN then forces a fixed 1:1 mapping between a vCPU and this physical CPU
when creating a vCPU for the guest operating system. This makes the vCPU
management code much simpler.

``cpu_affinity`` in ``vm config`` helps decide which physical CPU a
vCPU in a VM is affined to, and so finalizes the fixed mapping. When launching a
User VM, choose pCPUs from the VM's ``cpu_affinity`` that are not
used by any other VM.

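For illustration only, a VM configuration fragment might carry the affinity
as a pCPU bitmap; the structure, field, and macro names below are assumptions,
not the exact ACRN ``vm config`` definitions:

.. code-block:: c

   #include <stdint.h>

   /* Hypothetical helper: bit n selects pCPU n. */
   #define AFFINITY_CPU(n)   (1UL << (n))

   struct vm_config_sketch {
       uint64_t cpu_affinity;   /* bitmap of pCPUs this VM's vCPUs may use */
   };

   /* Pin this VM's vCPUs to pCPU1 and pCPU2 only; a fixed 1:1 vCPU-to-pCPU
    * mapping is then chosen from this set when each vCPU is created. */
   static const struct vm_config_sketch demo_vm_cfg = {
       .cpu_affinity = AFFINITY_CPU(1U) | AFFINITY_CPU(2U),
   };
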
Flexible CPU Sharing
********************

To enable CPU sharing, the ACRN hypervisor can configure the IORR
(I/O sensitive Round-Robin) or the BVT (Borrowed Virtual Time) scheduler
policy.

``cpu_affinity`` in ``vm config`` indicates all the physical CPUs on which
this VM is allowed to run. A pCPU can be shared among the Service VM and any
User VM as long as local APIC passthrough is not enabled in that User VM.

See :ref:`cpu_sharing` for more information.

CPU management in the Service VM under static CPU partitioning
==============================================================

With ACRN, all ACPI table entries are passed through to the Service VM, including
the Multiple APIC Description Table (MADT). The Service VM sees all
physical CPUs by parsing the MADT when the Service VM kernel boots. All
physical CPUs are initially assigned to the Service VM by creating the same
number of virtual CPUs.

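As a rough illustration of that assignment, the sketch below walks MADT
processor entries and creates one vCPU per enabled pCPU. The subtable layout
follows the ACPI specification, but the walk helpers and ``create_vcpu()``
are assumptions, not ACRN functions:

.. code-block:: c

   #include <stdint.h>

   #define ACPI_MADT_TYPE_LOCAL_APIC  0U

   struct acpi_subtable_header {
       uint8_t type;
       uint8_t length;
   };

   struct acpi_madt_local_apic {
       struct acpi_subtable_header header;
       uint8_t  acpi_processor_id;
       uint8_t  apic_id;
       uint32_t flags;               /* bit 0: processor enabled */
   };

   /* Hypothetical helpers assumed to be provided elsewhere. */
   extern const struct acpi_subtable_header *madt_first_entry(void);
   extern const struct acpi_subtable_header *madt_next_entry(const struct acpi_subtable_header *e);
   extern void create_vcpu(uint8_t apic_id);

   static uint16_t assign_all_pcpus_to_service_vm(void)
   {
       uint16_t nr_vcpus = 0U;
       const struct acpi_subtable_header *e;

       for (e = madt_first_entry(); e != NULL; e = madt_next_entry(e)) {
           if (e->type == ACPI_MADT_TYPE_LOCAL_APIC) {
               const struct acpi_madt_local_apic *lapic =
                   (const struct acpi_madt_local_apic *)e;
               if ((lapic->flags & 0x1U) != 0U) {   /* enabled processor */
                   create_vcpu(lapic->apic_id);     /* one vCPU per pCPU */
                   nr_vcpus++;
               }
           }
       }
       return nr_vcpus;
   }
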
When the Service VM boot is finished, it releases the physical CPUs intended
for User VM use.

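A minimal sketch of one possible release path follows, assuming the Service VM
uses the standard Linux CPU hotplug sysfs interface to offline a pCPU; the
exact ACRN user-space flow may differ:

.. code-block:: c

   #include <stdio.h>

   /* Offline a pCPU in the Service VM so it can later be assigned to a
    * User VM; the sysfs path is the standard Linux hotplug interface. */
   static int offline_pcpu(unsigned int cpu_id)
   {
       char path[64];
       FILE *fp;

       snprintf(path, sizeof(path),
                "/sys/devices/system/cpu/cpu%u/online", cpu_id);
       fp = fopen(path, "w");
       if (fp == NULL) {
           return -1;
       }

       fputs("0", fp);   /* writing "0" offlines the CPU */
       fclose(fp);
       return 0;
   }
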
Here is an example flow of CPU allocation on a multi-core platform.

@@ -97,16 +99,16 @@ Here is an example flow of CPU allocation on a multi-core platform.
CPU management in the Service VM under flexible CPU sharing
===========================================================

Because all Service VM CPUs can be shared with different User VMs, ACRN can still
pass through the MADT to the Service VM, and the Service VM is still able to see
all physical CPUs. Under CPU sharing, however, the Service VM does not need to
offline/release the physical CPUs intended for User VM use.

CPU management in the User VM
=============================

``cpu_affinity`` in ``vm config`` defines the set of pCPUs that a User VM
is allowed to run on. ``acrn-dm`` can choose to launch on only a subset of the pCPUs
or on all pCPUs listed in ``cpu_affinity``, but it cannot assign
any pCPU that is not included in it.

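That last constraint amounts to a simple bitmap subset check. The sketch below
is illustrative only; the function name and the plain ``uint64_t`` bitmap are
assumptions, not the ``acrn-dm`` implementation:

.. code-block:: c

   #include <stdbool.h>
   #include <stdint.h>

   /* Return true only if every requested pCPU is also present in the
    * VM's cpu_affinity bitmap (bit n represents pCPU n). */
   static bool affinity_allows(uint64_t cpu_affinity, uint64_t requested_pcpus)
   {
       return (requested_pcpus & ~cpu_affinity) == 0UL;
   }
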
@@ -564,7 +566,7 @@ For a guest vCPU's state initialization:
SW load based on different boot mode
- User VM BSP: DM context initialization through hypercall

- If it is an AP, it always starts from real mode, and the start
  vector always comes from vlapic INIT-SIPI emulation.

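For reference, the architectural mapping from a SIPI vector to the AP's
initial real-mode context can be sketched as follows; the structure and
function names are illustrative, not the ACRN vlapic code:

.. code-block:: c

   #include <stdint.h>

   /* Real-mode start context derived from a SIPI vector (architectural
    * behavior: CS selector = vector << 8, CS base = vector << 12, IP = 0). */
   struct real_mode_start {
       uint16_t cs_selector;
       uint64_t cs_base;
       uint16_t ip;
   };

   static struct real_mode_start sipi_to_start_ctx(uint8_t vector)
   {
       struct real_mode_start ctx;

       ctx.cs_selector = (uint16_t)((uint16_t)vector << 8);
       ctx.cs_base     = (uint64_t)vector << 12;   /* code page at vector * 4 KiB */
       ctx.ip          = 0U;                       /* execution starts at offset 0 */
       return ctx;
   }
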
@@ -855,7 +857,7 @@ handler is *rdmsr_vmexit_handler* or *wrmsr_vmexit_handler*.
This table shows the predefined MSRs ACRN will trap for all the guests. For
the MSRs whose bits are not set in the MSR bitmap, guest access is
passed through directly:

.. list-table::
   :widths: 33 33 33
@@ -1091,12 +1093,12 @@ Emulation" section for more details.
For an emulated device implemented in the hypervisor, ACRN provides some basic
APIs to register its IO/MMIO range:

- For the Service VM, the default I/O bitmap bits are all set to 0, which means
  the Service VM will pass through all I/O port accesses by default. Adding an I/O handler
  for a hypervisor-emulated device needs to first set its corresponding
  I/O bitmap bits to 1.

- For the User VM, the default I/O bitmap bits are all set to 1, which means the User VM will trap
  all I/O port accesses by default. Adding an I/O handler for a
  hypervisor-emulated device does not need to change its I/O bitmap.
  If the trapped I/O port access does not fall into a hypervisor
@@ -1104,12 +1106,12 @@ APIs to register its IO/MMIO range:
  DM.

- For the Service VM, the EPT maps all ranges of memory to the Service VM except for the ACRN hypervisor
  area. This means the Service VM will pass through all MMIO accesses by
  default. Adding an MMIO handler for a hypervisor-emulated
  device needs to first remove its MMIO range from the EPT mapping.

- For the User VM, the EPT only maps its system RAM to the User VM, which means the User VM will
  trap all MMIO accesses by default. Adding an MMIO handler for a
  hypervisor-emulated device does not need to change its EPT mapping.
  If the trapped MMIO access does not fall into a hypervisor-emulated
  device, it will create an I/O request and pass it to the Service VM
@@ -1270,7 +1272,7 @@ ACRN emulates XSAVE features through the following rules:
2. If yes for step 1, enable XSAVE in the hypervisor by CR4.OSXSAVE
3. Emulates XSAVE-related CPUID.01H & CPUID.0DH to the guest
4. Emulates XCR0 access through *xsetbv_vmexit_handler*
5. ACRN passes through access to the IA32_XSS MSR to the guest
6. The ACRN hypervisor does NOT use any feature of XSAVE
7. As ACRN emulates the vCPU in partition mode, based on rules 5
   and 6 above, a guest vCPU will fully control the XSAVE feature in