mirror of https://github.com/projectacrn/acrn-hypervisor.git
synced 2025-09-23 09:47:44 +00:00

doc: Spelling and grammar tweaks

Did a partial run of ACRN documents through Acrolinx to catch additional
spelling and grammar fixes missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

committed by David Kinder
parent d54347a054
commit dd0fe54141
@@ -22,4 +22,4 @@ documented in this section.
    Hostbridge emulation <hostbridge-virt-hld>
    AT keyboard controller emulation <atkbdc-virt-hld>
-   Shared memory based inter-vm communication <ivshmem-hld>
+   Shared memory based inter-VM communication <ivshmem-hld>
@@ -5,7 +5,7 @@ ACRN high-level design overview
 ACRN is an open source reference hypervisor (HV) that runs on top of Intel
 platforms (APL, KBL, etc) for heterogeneous scenarios such as the Software Defined
-Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & Real-Time OS for industry. ACRN provides embedded hypervisor vendors with a reference
+Cockpit (SDC), or the In-Vehicle Experience (IVE) for automotive, or HMI & real-time OS for industry. ACRN provides embedded hypervisor vendors with a reference
 I/O mediation solution with a permissive license and provides auto makers and
 industry users a reference software stack for corresponding use.
@@ -124,7 +124,7 @@ ACRN 2.0
 ========

 ACRN 2.0 is extending ACRN to support pre-launched VM (mainly for safety VM)
-and Real-Time (RT) VM.
+and real-time (RT) VM.

 :numref:`overview-arch2.0` shows the architecture of ACRN 2.0; the main difference
 compared to ACRN 1.0 is that:
@@ -1016,7 +1016,7 @@ access is like this:
 #. If the verification is successful in eMMC RPMB controller, then the
    data will be written into storage device.

-This work flow of authenticated data read is very similar to this flow
+This workflow of authenticated data read is very similar to this flow
 above, but in reverse order.

 Note that there are some security considerations in this design:
@@ -358,7 +358,7 @@ general workflow of ioeventfd.
    :align: center
    :name: ioeventfd-workflow

-   ioeventfd general work flow
+   ioeventfd general workflow

 The workflow can be summarized as:
@@ -13,7 +13,7 @@ SoC and back, as well as signals the SoC uses to control onboard
 peripherals.

 .. note::
-   NUC and UP2 platforms do not support IOC hardware, and as such, IOC
+   Intel NUC and UP2 platforms do not support IOC hardware, and as such, IOC
    virtualization is not supported on these platforms.

 The main purpose of IOC virtualization is to transfer data between
@@ -131,7 +131,7 @@ There are five parts in this high-level design:
 * State transfer introduces IOC mediator work states
 * CBC protocol illustrates the CBC data packing/unpacking
 * Power management involves boot/resume/suspend/shutdown flows
-* Emulated CBC commands introduces some commands work flow
+* Emulated CBC commands introduces some commands workflow

 IOC mediator has three threads to transfer data between User VM and Service VM. The
 core thread is responsible for data reception, and Tx and Rx threads are
@@ -57,8 +57,8 @@ configuration and copies them to the corresponding guest memory.
 .. figure:: images/partition-image18.png
    :align: center

-ACRN set-up for guests
-**********************
+ACRN setup for guests
+*********************

 Cores
 =====
@@ -39,7 +39,7 @@ resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
 CLOS ID, to select a cache mask to take effect. These configurations can be
-done in scenario xml file under ``FEATURES`` section as shown in the below example.
+done in scenario XML file under ``FEATURES`` section as shown in the below example.
 ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes
 to enforce the settings.
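As background for the cache-mask programming the hunk above edits: on most processors, an Intel CAT capacity bitmask (the value written to an ``IA32_type_MASK_n`` MSR) must be nonzero, fit within the reported bitmask length, and have only contiguous set bits. A small illustrative validity check, not taken from the ACRN sources (the function name and 11-bit mask length are assumptions for the example):

```python
def is_valid_cat_mask(mask: int, cbm_len: int) -> bool:
    """Check a CAT capacity bitmask: nonzero, within cbm_len bits,
    and with only contiguous set bits (required on most CPUs)."""
    if mask == 0 or mask >> cbm_len:
        return False
    # Drop trailing zeros; what remains must be a run of ones,
    # i.e. remaining + 1 is a power of two.
    trailing_zeros = (mask & -mask).bit_length() - 1
    remaining = mask >> trailing_zeros
    return (remaining & (remaining + 1)) == 0

print(is_valid_cat_mask(0xF, 11))     # contiguous -> True
print(is_valid_cat_mask(0b1010, 11))  # gap in the mask -> False
```

Overlapping masks between CLOS IDs are allowed (they share cache ways); only non-contiguous masks are typically rejected by the hardware.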
@@ -52,7 +52,7 @@ to enforce the settings.
       <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section. If user desires
+needs to be set in the scenario XML file under ``VM`` section. If user desires
 to use CDP feature, CDP_ENABLED should be set to ``y``.

 .. code-block:: none
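For readers without the full file in front of them, the scenario XML being edited in this hunk looks roughly like the sketch below. Only ``CLOS_MASK``, ``CDP_ENABLED``, the ``FEATURES`` section, and the ``VM`` section are confirmed by the diff text; the other element names and values are assumptions for illustration only:

```xml
<!-- Hedged sketch; element names outside FEATURES/CLOS_MASK/CDP_ENABLED
     are illustrative assumptions, not the exact ACRN schema. -->
<FEATURES>
    <RDT>
        <RDT_ENABLED>y</RDT_ENABLED>
        <CDP_ENABLED>y</CDP_ENABLED>
        <CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
        <CLOS_MASK desc="Cache Capacity Bitmask">0x3</CLOS_MASK>
    </RDT>
</FEATURES>

<!-- Per-VM: a CLOS ID selecting which of the masks above applies. -->
<vm id="0">
    <clos>0</clos>
</vm>
```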
@@ -106,7 +106,7 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and
 max supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-These configurations can be done in scenario xml file under ``FEATURES`` section
+These configurations can be done in scenario XML file under ``FEATURES`` section
 as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit
 for non-root and root modes to enforce the settings.
@@ -120,7 +120,7 @@ for non-root and root modes to enforce the settings.
       <MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>

 Once the cache mask is set of each individual CPU, the respective CLOS ID
-needs to be set in the scenario xml file under ``VM`` section.
+needs to be set in the scenario XML file under ``VM`` section.

 .. code-block:: none
    :emphasize-lines: 2
@@ -15,7 +15,7 @@ Inter-VM Communication Overview
    :align: center
    :name: ivshmem-architecture-overview

-   ACRN shared memory based inter-vm communication architecture
+   ACRN shared memory based inter-VM communication architecture

 There are two ways ACRN can emulate the ``ivshmem`` device:
@@ -86,7 +86,7 @@ I/O ports definition::
 RTC emulation
 =============

-ACRN supports RTC (Real-Time Clock) that can only be accessed through
+ACRN supports RTC (real-time clock) that can only be accessed through
 I/O ports (0x70 and 0x71).

 0x70 is used to access CMOS address register and 0x71 is used to access
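The index/data access pattern this hunk describes (write a register index to port 0x70, then read its value from port 0x71) conventionally returns most CMOS time fields BCD-encoded. A tiny decoder, purely illustrative and not part of the ACRN code the diff touches:

```python
def bcd_to_bin(value: int) -> int:
    """Decode one BCD-encoded CMOS/RTC register byte.

    Each nibble holds one decimal digit, so a seconds register
    reading of 0x59 means 59 seconds.
    """
    return (value >> 4) * 10 + (value & 0x0F)

print(bcd_to_bin(0x59))  # 59
```

Guests that select binary mode via the RTC status register B skip this decoding step; BCD is the default the emulated device has to honor.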
@@ -61,7 +61,7 @@ Add the following parameters into the command line::
   controller_name, you can use it as controller_name directly. You can
   also input ``cat /sys/bus/gpio/device/XXX/dev`` to get device id that can
   be used to match /dev/XXX, then use XXX as the controller_name. On MRB
-  and NUC platforms, the controller_name are gpiochip0, gpiochip1,
+  and Intel NUC platforms, the controller_name are gpiochip0, gpiochip1,
   gpiochip2.gpiochip3.

 - **offset|name**: you can use gpio offset or its name to locate one
@@ -34,8 +34,8 @@ It receives read/write commands from the watchdog driver, does the
 actions, and returns. In ACRN, the commands are from User VM
 watchdog driver.

-User VM watchdog work flow
-**************************
+User VM watchdog workflow
+*************************

 When the User VM does a read or write operation on the watchdog device's
 registers or memory space (Port IO or Memory map I/O), it will trap into