Mirror of https://github.com/projectacrn/acrn-hypervisor.git (synced 2025-09-08 20:29:40 +00:00)
doc: Spelling and grammar tweaks
Did a partial run of ACRN documents through Acrolinx to catch additional spelling and grammar fixes missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
committed by David Kinder
parent d54347a054
commit dd0fe54141
@@ -22,4 +22,4 @@ documented in this section.
    Hostbridge emulation <hostbridge-virt-hld>
    AT keyboard controller emulation <atkbdc-virt-hld>
    Split Device Model <split-dm>
-   Shared memory based inter-vm communication <ivshmem-hld>
+   Shared memory based inter-VM communication <ivshmem-hld>
@@ -5,7 +5,7 @@ ACRN high-level design overview

 ACRN is an open source reference hypervisor (HV) that runs on top of Intel
 platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
-Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
+Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
 I/O mediation solution with a permissive license and provides auto makers and
 industry users a reference software stack for corresponding use.
@@ -124,7 +124,7 @@ ACRN 2.0
 ========

 ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
-and Real-Time (RT) VM.
+and real-time (RT) VM.

 :numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
 compared to ACRN 1.0 is that:
@@ -1016,7 +1016,7 @@ access is like this:
 #. If the verification is successful in eMMC RPMB controller, then the
    data will be written into storage device.

-This work flow of authenticated data read is very similar to this flow
+This workflow of authenticated data read is very similar to this flow
 above, but in reverse order.

 Note that there are some security considerations in this design:
@@ -358,7 +358,7 @@ general workflow of ioeventfd.
   :align: center
   :name: ioeventfd-workflow

-  ioeventfd general work flow
+  ioeventfd general workflow

 The workflow can be summarized as:
@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
 peripherals.

 .. note::
-   NUC and UP2 platforms do not support IOC hardware, and as such, IOC
+   Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
    virtualization is not supported on these platforms.

 The main purpose of IOC virtualization is to transfer data between
@@ -131,7 +131,7 @@ There are five parts in this high-level design:
 * State transfer introduces IOC mediator work states
 * CBC protocol illustrates the CBC data packing/unpacking
 * Power management involves boot/resume/suspend/shutdown flows
-* Emulated CBC commands introduces some commands work flow
+* Emulated CBC commands introduces some commands workflow

 IOC mediator has three threads to transfer data between User VM and Service VM. The
 core thread is responsible for data reception, and Tx and Rx threads are
@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
 .. figure:: images/partition-image18.png
    :align: center

-ACRN set-up for guests
-**********************
+ACRN setup for guests
+*********************

 Cores
 =====
@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
 CLOS ID, to select a cache mask to take effect. These configurations can be
-done in scenario xml file under ``FEATURES`` section as shown in the below example.
+done in scenario XML file under ``FEATURES`` section as shown in the below example.
 ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
 to enforce the settings.
@@ -52,7 +52,7 @@ to enforce the settings.
       <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section. If user desires
+needs to be set in the scenario XML file under ``VM`` section. If user desires
 to use CDP feature, CDP_ENABLED should be set to ``y``.

 .. code-block:: none
@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and
 max supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-These configurations can be done in scenario xml file under ``FEATURES`` section
+These configurations can be done in scenario XML file under ``FEATURES`` section
 as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
 for non-root and root modes to enforce the settings.
@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
       <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section.
+needs to be set in the scenario XML file under ``VM`` section.

 .. code-block:: none
    :emphasize-lines: 2
@@ -15,7 +15,7 @@ Inter-VM Communication Overview
   :align: center
   :name: ivshmem-architecture-overview

-  ACRN shared memory based inter-vm communication architecture
+  ACRN shared memory based inter-VM communication architecture

 There are two ways ACRN can emulate the ``ivshmem`` device:
@@ -86,7 +86,7 @@ I/O ports definition::
 RTC emulation
 =============

-ACRN supports RTC (Real-Time Clock) that can only be accessed through
+ACRN supports RTC (real-time clock) that can only be accessed through
 I/O ports (0x70 and 0x71).

 0x70 is used to access CMOS address register and 0x71 is used to access
@@ -61,7 +61,7 @@ Add the following parameters into the command line::
   controller_name, you can use it as controller_name directly. You can
   also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
   be used to match /dev/XXX, then use XXX as the controller_name. On MRB
-  and NUC platforms, the controller_name are gpiochip0, gpiochip1,
+  and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
   gpiochip2.gpiochip3.

 - **offset|name**: you can use gpio offset or its name to locate one
@@ -34,8 +34,8 @@ It receives read/write commands from the watchdog driver, does the
 actions, and returns. In ACRN, the commands are from User VM
 watchdog driver.

-User VM watchdog work flow
-**************************
+User VM watchdog workflow
+*************************

 When the User VM does a read or write operation on the watchdog device's
 registers or memory space (Port IO or Memory map I/O), it will trap into
@@ -77,7 +77,7 @@ PTEs (with present bit cleared, or reserved bit set) pointing to valid
 host PFNs, a malicious guest may use those EPT PTEs to construct an attack.

 A special aspect of L1TF in the context of virtualization is symmetric
-multi threading (SMT), e.g. Intel |reg| Hyper-Threading Technology.
+multi threading (SMT), e.g. Intel |reg| Hyper-threading Technology.
 Logical processors on the affected physical cores share the L1 Data Cache
 (L1D). This fact could make more variants of L1TF-based attack, e.g.
 a malicious guest running on one logical processor can attack the data which
@@ -88,11 +88,11 @@ Guest -> guest Attack
 =====================

 The possibility of guest -> guest attack varies on specific configuration,
-e.g. whether CPU partitioning is used, whether Hyper-Threading is on, etc.
+e.g. whether CPU partitioning is used, whether Hyper-threading is on, etc.

 If CPU partitioning is enabled (default policy in ACRN), there is
 1:1 mapping between vCPUs and pCPUs i.e. no sharing of pCPU. There
-may be an attack possibility when Hyper-Threading is on, where
+may be an attack possibility when Hyper-threading is on, where
 logical processors of same physical core may be allocated to two
 different guests. Then one guest may be able to attack the other guest
 on sibling thread due to shared L1D.
@@ -221,7 +221,7 @@ This mitigation is always enabled.
 Core-based scheduling
 =====================

-If Hyper-Threading is enabled, it's important to avoid running
+If Hyper-threading is enabled, it's important to avoid running
 sensitive context (if containing security data which a given VM
 has no permission to access) on the same physical core that runs
 said VM. It requires scheduler enhancement to enable core-based
@@ -265,9 +265,9 @@ requirements:
 - Doing 5) is not feasible, or
 - CPU sharing is enabled (in the future)

-If Hyper-Threading is enabled, there is no available mitigation
+If Hyper-threading is enabled, there is no available mitigation
 option before core scheduling is planned. User should understand
-the security implication and only turn on Hyper-Threading
+the security implication and only turn on Hyper-threading
 when the potential risk is acceptable to their usage.

 Mitigation Status
@@ -566,7 +566,7 @@ The following table shows some use cases of module level configuration design:
 - This module is used to virtualize part of LAPIC functionalities.
   It can be done via APICv or software emulation depending on CPU
   capabilities.
-  For example, KBL NUC doesn't support virtual-interrupt delivery, while
+  For example, KBL Intel NUC doesn't support virtual-interrupt delivery, while
   other platforms support it.
 - If a function pointer is used, the prerequisite is
   "hv_operation_mode == OPERATIONAL".