doc: Minor style cleanup

- Remove "currently"
- Capitalize titles and Device Model

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Author: Reyes, Amy
Date: 2022-03-18 15:24:35 -07:00
Committed by: David Kinder
Parent: 21aeb4f422
Commit: f5b021b1b5
47 changed files with 226 additions and 218 deletions


@@ -7,7 +7,7 @@ Introduction
************
The goal of CPU Sharing is to fully utilize the physical CPU resource to
-support more virtual machines. Currently, ACRN only supports 1 to 1
+support more virtual machines. ACRN only supports 1 to 1
mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs).
Because of the lack of CPU sharing ability, the number of VMs is
limited. To support CPU Sharing, we have introduced a scheduling
@@ -40,7 +40,7 @@ Scheduling initialization is invoked in the hardware management layer.
CPU Affinity
*************
-Currently, we do not support vCPU migration; the assignment of vCPU mapping to
+We do not support vCPU migration; the assignment of vCPU mapping to
pCPU is fixed at the time the VM is launched. The statically configured
cpu_affinity in the VM configuration defines a superset of pCPUs that
the VM is allowed to run on. One bit in this bitmap indicates that one pCPU

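As a hedged illustration of the ``cpu_affinity`` setting this hunk describes, a scenario-configuration fragment might look like the following (the VM id, element nesting, and pCPU IDs are illustrative assumptions, not part of this commit)::

    <vm id="0">
        <cpu_affinity>
            <pcpu_id>0</pcpu_id>   <!-- bit set: this VM may run on pCPU 0 -->
            <pcpu_id>1</pcpu_id>   <!-- bit set: this VM may run on pCPU 1 -->
        </cpu_affinity>
    </vm>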

@@ -34,7 +34,7 @@ Ubuntu as the ACRN Service VM.
Supported Hardware Platform
***************************
-Currently, ACRN has enabled GVT-d on the following platforms:
+ACRN has enabled GVT-d on the following platforms:
* Kaby Lake
* Whiskey Lake


@@ -20,7 +20,7 @@ and :ref:`vuart_config`).
:align: center
:name: Inter-VM vUART communication
-Inter-VM vUART communication
+Inter-VM vUART Communication
- Pros:
- POSIX APIs; development-friendly (easily used programmatically
@@ -37,7 +37,7 @@ Inter-VM network communication
Inter-VM network communication is based on the network stack. ACRN supports
both pass-through NICs to VMs and Virtio-Net solutions. (Refer to :ref:`virtio-net`
-background introductions of ACRN Virtio-Net Architecture and Design).
+background introductions of ACRN Virtio-Net Architecture and Design).
:numref:`Inter-VM network communication` shows the Inter-VM network communication overview:
@@ -45,7 +45,7 @@ background introductions of ACRN Virtio-Net Architecture and Design).
:align: center
:name: Inter-VM network communication
-Inter-VM network communication
+Inter-VM Network Communication
- Pros:
- Socket-based APIs; development-friendly (easily used programmatically
@@ -61,7 +61,7 @@ Inter-VM shared memory communication (ivshmem)
**********************************************
Inter-VM shared memory communication is based on a shared memory mechanism
-to transfer data between VMs. The ACRN device model or hypervisor emulates
+to transfer data between VMs. The ACRN Device Model or hypervisor emulates
a virtual PCI device (called an ``ivshmem device``) to expose this shared memory's
base address and size. (Refer to :ref:`ivshmem-hld` and :ref:`enable_ivshmem` for the
background introductions).
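As a rough sketch only: in the Device Model case, an ivshmem device is typically added through a launch parameter along these lines (the slot number, region name, and size here are illustrative assumptions; see :ref:`enable_ivshmem` for the authoritative option syntax)::

    acrn-dm ... -s 9,ivshmem,dm:/shm_region_0,2 ...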
@@ -72,7 +72,7 @@ background introductions).
:align: center
:name: Inter-VM shared memory communication
-Inter-VM shared memory communication
+Inter-VM Shared Memory Communication
- Pros:
- Shared memory is exposed to VMs via PCI MMIO Bar and is mapped and accessed directly.
@@ -224,7 +224,7 @@ a data transfer notification mechanism between the VMs.
/* set eventfds of msix to kernel driver by ioctl */
p_ivsh_dev_ctx->irq_data[i].vector = i;
p_ivsh_dev_ctx->irq_data[i].fd = evt_fd;
-ioctl(p_ivsh_dev_ctx->uio_dev_fd, UIO_IRQ_DATA, &p_ivsh_dev_ctx->irq_data[i])
+ioctl(p_ivsh_dev_ctx->uio_dev_fd, UIO_IRQ_DATA, &p_ivsh_dev_ctx->irq_data[i])
/* create epoll */
p_ivsh_dev_ctx->epfds_irq[i] = epoll_create1(0);
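To show where this fragment is heading, a minimal sketch of how the registered eventfd might then be watched and drained with standard Linux APIs (the variable and field names reuse those above; headers and error handling omitted)::

    struct epoll_event ev = { .events = EPOLLIN, .data.fd = evt_fd };
    /* watch the eventfd that backs MSI-X vector i */
    epoll_ctl(p_ivsh_dev_ctx->epfds_irq[i], EPOLL_CTL_ADD, evt_fd, &ev);

    struct epoll_event out;
    if (epoll_wait(p_ivsh_dev_ctx->epfds_irq[i], &out, 1, -1) > 0) {
        uint64_t cnt;
        /* reading the eventfd consumes the pending doorbell notification */
        read(out.data.fd, &cnt, sizeof(cnt));
    }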
@@ -323,7 +323,7 @@ after ivshmem device is initialized.
:align: center
:name: Inter-VM ivshmem data transfer state machine
-Inter-VM ivshmem data transfer state machine
+Inter-VM Ivshmem Data Transfer State Machine
:numref:`Inter-VM ivshmem handshake communication` shows the handshake communication between two machines:
@@ -331,7 +331,7 @@ after ivshmem device is initialized.
:align: center
:name: Inter-VM ivshmem handshake communication
-Inter-VM ivshmem handshake communication
+Inter-VM Ivshmem Handshake Communication
Reference Sender and Receiver Sample Code Based Doorbell Mode


@@ -33,7 +33,7 @@ RTVM With HV Emulated Device
****************************
ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
-logging output or command send/receive. Currently, the vUART only works in polling mode, but
+logging output or command send/receive. The vUART only works in polling mode, but
may be extended to support interrupt mode in a future release. In the meantime, for better RT
behavior, the RT application using the vUART shall reserve a margin of CPU cycles to accommodate
for the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles


@@ -261,7 +261,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
Now is a great time to take a snapshot of the container using ``lxc
snapshot``. If the OpenStack installation fails, manually rolling back
-to the previous state can be difficult. Currently, no step exists to
+to the previous state can be difficult. No step exists to
reliably restart OpenStack after restarting the container.
5. Install OpenStack::


@@ -40,7 +40,7 @@ No Enclave in a Hypervisor
--------------------------
ACRN does not support running an enclave in a hypervisor since the whole
-hypervisor is currently running in VMX root mode, ring 0, and an enclave must
+hypervisor is running in VMX root mode, ring 0, and an enclave must
run in ring 3. ACRN SGX virtualization provides the capability to
non-Service VMs.
@@ -124,7 +124,7 @@ CPUID Leaf 07H
* CPUID_07H.EAX[2] SGX: Supports Intel Software Guard Extensions if 1. If SGX
is supported in Guest, this bit will be set.
-* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1. Currently,
+* CPUID_07H.ECX[30] SGX_LC: Supports SGX Launch Configuration if 1.
ACRN does not support the SGX Launch Configuration. This bit will not be
set. Thus, the Launch Enclave must be signed by the Intel SGX Launch Enclave
Key.
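For illustration only, a guest could read this leaf with GCC's ``<cpuid.h>`` helper and check the SGX_LC bit described above (a hedged sketch, not ACRN code)::

    #include <cpuid.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned int eax, ebx, ecx, edx;

        if (__get_cpuid_count(0x07, 0, &eax, &ebx, &ecx, &edx)) {
            /* CPUID_07H.ECX[30] SGX_LC: expected to read 0 under ACRN */
            printf("SGX_LC: %u\n", (ecx >> 30) & 1);
        }
        return 0;
    }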
@@ -172,7 +172,7 @@ The hypervisor will opt in to SGX for VM if SGX is enabled for VM.
IA32_SGXLEPUBKEYHASH[0-3]
-------------------------
-This is read-only since SGX LC is currently not supported.
+This is read-only since SGX LC is not supported.
SGXOWNEREPOCH[0-1]
------------------
@@ -245,7 +245,8 @@ PAUSE Exiting
Future Development
******************
-Following are some currently unplanned areas of interest for future
+Following are some unplanned areas of interest for future
ACRN development around SGX virtualization.
Launch Configuration Support


@@ -135,7 +135,7 @@ SR-IOV Passthrough VF Architecture in ACRN
:align: center
:name: SR-IOV-vf-passthrough
-SR-IOV VF Passthrough Architecture In ACRN
+SR-IOV VF Passthrough Architecture in ACRN
1. The SR-IOV VF device needs to bind the PCI-stud driver instead of the
vendor-specific VF driver before the device passthrough.
@@ -213,7 +213,7 @@ SR-IOV VF Assignment Policy
1. All SR-IOV PF devices are managed by the Service VM.
-2. Currently, the SR-IOV PF cannot passthrough to the User VM.
+2. The SR-IOV PF cannot passthrough to the User VM.
3. All VFs can passthrough to the User VM, but we do not recommend
a passthrough to high privilege VMs because the PF device may impact
@@ -236,7 +236,7 @@ only support LaaG (Linux as a Guest).
:align: center
:name: 82576-pf
-82576 SR-IOV PF devices
+82576 SR-IOV PF Devices
#. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs``
command in the Service VM to enable n VF devices for the first PF
@@ -249,7 +249,7 @@ only support LaaG (Linux as a Guest).
:align: center
:name: 82576-vf
-82576 SR-IOV VF devices
+82576 SR-IOV VF Devices
.. figure:: images/sriov-image11.png
:align: center


@@ -140,7 +140,7 @@ details in this `Android keymaster functions document
:width: 600px
:name: keymaster-app
-Keystore service and Keymaster HAL
+Keystore Service and Keymaster HAL
As shown in :numref:`keymaster-app` above, the Keymaster HAL is a
dynamically-loadable library used by the Keystore service to provide
@@ -318,7 +318,7 @@ provided by secure world (TEE/Trusty). In the current ACRN
implementation, secure storage is built in the RPMB partition in eMMC
(or UFS storage).
-Currently the eMMC in the APL SoC platform only has a single RPMB
+The eMMC in the APL SoC platform only has a single RPMB
partition for tamper-resistant and anti-replay secure storage. The
secure storage (RPMB) is virtualized to support multiple guest User VM VMs.
Although newer generations of flash storage (e.g. UFS 3.0, and NVMe)


@@ -5,14 +5,14 @@ Getting Started Guide for ACRN Hybrid Mode
ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
-launched by a Device model in the Service VM.
+launched by a Device Model in the Service VM.
.. figure:: images/ACRN-Hybrid.png
:align: center
:width: 600px
:name: hybrid_scenario_on_nuc
-The Hybrid scenario on the Intel NUC
+The Hybrid Scenario on the Intel NUC
The following guidelines
describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
@@ -109,7 +109,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
}
.. note:: The module ``/boot/zephyr.elf`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``
@@ -138,7 +138,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
module2 /boot/zephyr.elf Zephyr_ElfImage
module2 /boot/bzImage Linux_bzImage
module2 /boot/ACPI_VM0.bin ACPI_VM0
}
#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
@@ -177,7 +177,7 @@ Hybrid Scenario Startup Check
#. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1's Service VM can boot and you can log in.
-#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
+#. ssh to VM1 and launch the post-launched VM2 using the ACRN Device Model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
#. Verify that VM2 can boot and you can log in.


@@ -62,7 +62,7 @@ Download Win10 Image and Drivers
- Check **I accept the terms in the license agreement**. Click **Continue**.
- From the list, right check the item labeled **Oracle VirtIO Drivers
Version for Microsoft Windows 1.1.x, yy MB**, and then **Save link as
-...**. Currently, it is named ``V982789-01.zip``.
+...**. It is named ``V982789-01.zip``.
- Click **Download**. When the download is complete, unzip the file. You
will see an ISO named ``winvirtio.iso``.


@@ -16,7 +16,7 @@ into XML in the scenario file:
- Edit :option:`hv.FEATURES.RDT.RDT_ENABLED` to `y` to enable RDT
- Edit :option:`hv.FEATURES.RDT.CDP_ENABLED` to `n` to disable CDP.
-Currently vCAT requires CDP to be disabled.
+vCAT requires CDP to be disabled.
- Edit :option:`hv.FEATURES.RDT.VCAT_ENABLED` to `y` to enable vCAT
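Taken together, and inferring the element nesting from the option paths above, the resulting scenario-XML fragment would look roughly like this (a hedged sketch; any other required RDT children are omitted)::

    <FEATURES>
        <RDT>
            <RDT_ENABLED>y</RDT_ENABLED>     <!-- enable RDT -->
            <CDP_ENABLED>n</CDP_ENABLED>     <!-- vCAT requires CDP disabled -->
            <VCAT_ENABLED>y</VCAT_ENABLED>   <!-- enable vCAT -->
        </RDT>
    </FEATURES>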