doc: fix all headings to use title case

We had hoped to make the headings consistent gradually while doing
other edits, but instead we should just make the squirrels happy and fix them
all at once, or they'll likely never be made consistent.

A Python script was used to find the headings, and a call to
https://pypi.org/project/titlecase transformed each title.  A visual
inspection was then used to tweak a few unexpected results.
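For illustration, here is a minimal sketch of that kind of fixer (the actual
script is not part of this commit, and the heading-detection heuristic below
is an assumption)::

   # pip install titlecase
   import re
   from pathlib import Path
   from titlecase import titlecase

   # An RST heading is a line of text followed by an underline made of a
   # single repeated punctuation character (###, ***, ===, ...).
   UNDERLINE = re.compile(r'^([#*=~^":+._-])\1+\s*$')

   def fix_headings(path: Path) -> None:
       lines = path.read_text().splitlines()
       for i in range(len(lines) - 1):
           title, under = lines[i], lines[i + 1]
           if title.strip() and UNDERLINE.match(under) and len(under) >= len(title.rstrip()):
               lines[i] = titlecase(title)
       path.write_text("\n".join(lines) + "\n")

   for rst in Path("doc").rglob("*.rst"):
       fix_headings(rst)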

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Author: David B. Kinder
Date: 2021-02-12 16:27:24 -08:00 (committed by David Kinder)
Parent: 6e655d098b
Commit: 0bd384d41b
119 changed files with 576 additions and 638 deletions


@@ -1,6 +1,6 @@
.. _how-to-enable-acrn-secure-boot-with-grub:
Enable ACRN Secure Boot with GRUB
Enable ACRN Secure Boot With GRUB
#################################
This document shows how to enable ACRN secure boot with GRUB, including:
@@ -243,14 +243,14 @@ Creating UEFI Secure Boot Key
The keys to be enrolled in UEFI firmware: :file:`PK.der`, :file:`KEK.der`, :file:`db.der`.
The keys to sign the bootloader image :file:`grubx64.efi`: :file:`db.key` and :file:`db.crt`.
Sign GRUB Image With ``db`` Key
================================
Sign GRUB Image With db Key
===========================
sbsign --key db.key --cert db.crt path/to/grubx64.efi
:file:`grubx64.efi.signed` will be created; it will be your bootloader.
Enroll UEFI Keys To UEFI Firmware
Enroll UEFI Keys to UEFI Firmware
=================================
Enroll ``PK`` (:file:`PK.der`), ``KEK`` (:file:`KEK.der`) and ``db``


@@ -15,7 +15,7 @@ Introduction
ACRN includes three types of configurations: Hypervisor, Board, and VM. Each
is discussed in the following sections.
Hypervisor configuration
Hypervisor Configuration
========================
The hypervisor configuration defines a working scenario and target
@@ -29,7 +29,7 @@ A board-specific ``defconfig`` file, for example
``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/$(BOARD).config``
is loaded first; it is the default ``Kconfig`` for the specified board.
Board configuration
Board Configuration
===================
The board configuration stores board-specific settings referenced by the
@@ -40,7 +40,7 @@ and BDF information. The reference board configuration is organized as
``*.c/*.h`` files located in the
``misc/vm_configs/boards/$(BOARD)/`` folder.
VM configuration
VM Configuration
=================
VM configuration includes **scenario-based** VM configuration
@@ -58,7 +58,7 @@ The board-specific configurations on this scenario are stored in the
User VM launch script samples are located in the
``misc/vm_configs/sample_launch_scripts/`` folder.
ACRN configuration XMLs
ACRN Configuration XMLs
***********************
The ACRN configuration includes three kinds of XML files for acrn-config
@@ -75,7 +75,7 @@ configurations by importing customized XMLs or by saving the
configurations by exporting XMLs.
Board XML format
Board XML Format
================
The board XMLs are located in the
@@ -89,7 +89,7 @@ The board XML has an ``acrn-config`` root element and a ``board`` attribute:
As an input for the ``acrn-config`` tool, end users do not need to care
about the format of board XML and should not modify it.
Scenario XML format
Scenario XML Format
===================
The scenario XMLs are located in the
``misc/vm_configs/xmls/config-xmls/`` folder. The
@@ -103,7 +103,7 @@ and ``scenario`` attributes:
See :ref:`scenario-config-options` for a full explanation of available scenario XML elements.
Launch XML format
Launch XML Format
=================
The launch XMLs are located in the
``misc/vm_configs/xmls/config-xmls/`` folder.
@@ -188,10 +188,10 @@ current scenario has:
interface. When ``configurable="0"``, the item does not appear on the
interface.
Configuration tool workflow
Configuration Tool Workflow
***************************
Hypervisor configuration workflow
Hypervisor Configuration Workflow
==================================
The hypervisor configuration is based on the ``Kconfig``
@@ -219,7 +219,7 @@ configuration steps.
.. _vm_config_workflow:
Board and VM configuration workflow
Board and VM Configuration Workflow
===================================
Python offline tools are provided to configure Board and VM configurations.
@@ -300,7 +300,7 @@ Here is the offline configuration tool workflow:
.. _acrn_config_tool_ui:
Use the ACRN configuration app
Use the ACRN Configuration App
******************************
The ACRN configuration app is a web user interface application that performs the following:


@@ -1,6 +1,6 @@
.. _acrn_on_qemu:
Enable ACRN over QEMU/KVM
Enable ACRN Over QEMU/KVM
#########################
The goal of this document is to bring up ACRN as a nested hypervisor on top of QEMU/KVM
@@ -195,7 +195,7 @@ Install ACRN Hypervisor
$ virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name.
Service VM Networking updates for User VM
Service VM Networking Updates for User VM
*****************************************
Follow these steps to enable networking for the User VM (L2 guest):
@@ -232,7 +232,7 @@ Follow these steps to enable networking for the User VM (L2 guest):
4. Restart the ACRNSOS guest (L1 guest) to complete the setup, then start bring-up of the User VM
Bring-up User VM (L2 Guest)
Bring-Up User VM (L2 Guest)
***************************
1. Build the device model using ``make devicemodel`` and copy acrn-dm to the ACRNSOS guest (L1 guest) directory ``/usr/bin/acrn-dm``


@@ -37,7 +37,7 @@ Scheduling initialization is invoked in the hardware management layer.
.. figure:: images/cpu_sharing_api.png
:align: center
CPU affinity
CPU Affinity
*************
Currently, we do not support vCPU migration; the assignment of vCPU mapping to
@@ -64,7 +64,7 @@ Here is an example for affinity:
.. figure:: images/cpu_sharing_affinity.png
:align: center
Thread object state
Thread Object State
*******************
The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.
@@ -128,7 +128,6 @@ and BVT (Borrowed Virtual Time) scheduler.
Scheduler configuration
***********************
* The option in Kconfig determines the single scheduler used at runtime.
``hypervisor/arch/x86/Kconfig``
@@ -159,7 +158,7 @@ The default scheduler is **SCHED_BVT**.
- With ``cpu_affinity`` option in acrn-dm. This launches the user VM on
a subset of the configured cpu_affinity pCPUs.
For example, assign physical CPUs 0 and 1 to this VM::
--cpu_affinity 0,1


@@ -15,7 +15,7 @@ full list of commands, or see a summary of available commands by using
the ``help`` command within the ACRN shell.
An example
An Example
**********
As an example, we'll show how to obtain the interrupts of a passthrough USB device.
@@ -54,7 +54,7 @@ ACRN log provides a console log and a mem log for a user to analyze.
We can use the console log to debug directly, while the mem log is captured by a
userland tool for offline analysis of the ACRN hypervisor log.
Turn on the logging info
Turn on the Logging Info
========================
ACRN enables a console log by default.
@@ -65,7 +65,7 @@ To enable and start the mem log::
$ systemctl start acrnlog
Set and grab log
Set and Grab Log
================
We have six (1-6) log levels for console log and mem log. The following
@@ -129,7 +129,7 @@ ACRN trace is a tool running on the Service VM to capture trace
data. We can use the existing trace information to analyze, and we can
add self-defined tracing to analyze code that we care about.
Using Existing trace event ID to analyze trace
Using Existing Trace Event ID to Analyze Trace
==============================================
As an example, we can use the existing vm_exit trace to analyze the
@@ -159,7 +159,7 @@ reason and times of each vm_exit after we have done some operations.
vmexit summary information
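For illustration, a hypothetical post-processing helper that tallies exit
reasons from a text export of the trace data (the log format here is an
assumption; the actual ``acrntrace`` output format may differ)::

   import re
   from collections import Counter

   counts = Counter()
   with open("acrntrace.log") as f:                       # assumed text export
       for line in f:
           m = re.search(r"vm_exit\s+reason=(\w+)", line)  # assumed entry format
           if m:
               counts[m.group(1)] += 1

   for reason, n in counts.most_common():
       print(f"{reason:24} {n}")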
Using Self-defined trace event ID to analyze trace
Using Self-Defined Trace Event ID to Analyze Trace
==================================================
For an undefined trace event ID, we can define one ourselves as


@@ -1,7 +1,7 @@
.. _enable_ivshmem:
Enable Inter-VM Communication Based on ``ivshmem``
##################################################
Enable Inter-VM Communication Based on Ivshmem
##############################################
You can use inter-VM communication based on the ``ivshmem`` dm-land
solution or hv-land solution, depending on the needs of your usage scenario.
@@ -9,7 +9,7 @@ solution or hv-land solution, according to the usage scenario needs.
While both solutions can be used at the same time, VMs using different
solutions cannot communicate with each other.
ivshmem dm-land usage
Ivshmem DM-Land Usage
*********************
Add this line as an ``acrn-dm`` boot parameter::
@@ -35,7 +35,7 @@ where
.. _ivshmem-hv:
ivshmem hv-land usage
Ivshmem HV-Land Usage
*********************
The ``ivshmem`` hv-land solution is disabled by default in ACRN. You
@@ -68,7 +68,7 @@ enable it using the :ref:`acrn_configuration_tool` with these steps:
- Build the XML configuration, refer to :ref:`getting-started-building`
ivshmem notification mechanism
Ivshmem Notification Mechanism
******************************
Notification (doorbell) of ivshmem device allows VMs with ivshmem
@@ -94,10 +94,10 @@ to applications.
.. note:: Notification is supported only for HV-land ivshmem devices. (Future
support may include notification for DM-land ivshmem devices.)
Inter-VM Communication Examples
*******************************
dm-land example
DM-Land Example
===============
This example uses dm-land inter-VM communication between two
@@ -167,7 +167,7 @@ Linux-based post-launched VMs (VM1 and VM2).
- For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio``
- For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio``
hv-land example
HV-Land Example
===============
This example uses hv-land inter-VM communication between two


@@ -57,7 +57,7 @@ to the User VM through a channel. If the User VM receives the command, it will send an ACK
to the Device Model. It is the Service VM's responsibility to check if the User VMs
shut down successfully or not, and decides when to power off itself.
User VM "lifecycle manager"
User VM "Lifecycle Manager"
===========================
As part of the current S5 reference design, a lifecycle manager daemon (life_mngr) runs in the
@@ -159,7 +159,7 @@ The procedure for enabling S5 is specific to the particular OS:
.. note:: S5 state is not automatically triggered by a Service VM shutdown; this needs
to be run before powering off the Service VM.
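As a concrete illustration of this flow (this is not the actual ``life_mngr``
source; the channel device and the plain-text protocol below are assumptions),
a guest-side listener could look like::

   import subprocess

   CHANNEL = "/dev/ttyS1"   # assumed: the S5 communication vUART in the guest

   with open(CHANNEL, "rb+", buffering=0) as ch:
       while True:
           req = ch.read(64).strip()
           if req == b"shutdown":
               ch.write(b"acked\n")          # acknowledge to the Service VM side
               subprocess.run(["poweroff"])  # begin guest shutdown (needs root)
               break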
How to test
How to Test
***********
As described in :ref:`vuart_config`, two vUARTs are defined in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and


@@ -18,7 +18,7 @@ It allows for direct assignment of an entire GPU's prowess to a single
user, passing the native driver capabilities through to the hypervisor
without any limitations.
Verified version
Verified Version
*****************
- ACRN-hypervisor tag: **acrn-2020w17.4-140000p**
@@ -31,7 +31,7 @@ Prerequisites
Follow :ref:`these instructions <rt_industry_ubuntu_setup>` to set up
Ubuntu as the ACRN Service VM.
Supported hardware platform
Supported Hardware Platform
***************************
Currently, ACRN has enabled GVT-d on the following platforms:
@@ -40,16 +40,16 @@ Currently, ACRN has enabled GVT-d on the following platforms:
* Whiskey Lake
* Elkhart Lake
BIOS settings
BIOS Settings
*************
Kaby Lake platform
Kaby Lake Platform
==================
* Set **IGD Minimum Memory** to **64MB** in **Devices** →
**Video** → **IGD Minimum Memory**.
Whiskey Lake platform
Whiskey Lake Platform
=====================
* Set **PM Support** to **Enabled** in **Chipset** → **System
@@ -59,7 +59,7 @@ Whiskey Lake platform
**System Agent (SA) Configuration**
→ **Graphics Configuration** → **DVMT Pre-Allocated**.
Elkhart Lake platform
Elkhart Lake Platform
=====================
* Set **DVMT Pre-Allocated** to **64MB** in **Intel Advanced Menu**
@@ -93,7 +93,7 @@ Passthrough the GPU to Guest
4. Run ``launch_win.sh``.
Enable the GVT-d GOP driver
Enable the GVT-d GOP Driver
***************************
When enabling GVT-d, the Guest OS cannot light up the physical screen


@@ -1,6 +1,6 @@
.. _pre_launched_rt:
Pre-Launched Preempt-RT Linux Mode in ACRN
##########################################
The Pre-Launched Preempt-RT Linux Mode of ACRN, abbreviated as
@@ -34,7 +34,7 @@ two Ethernet ports. We will passthrough the SATA and Ethernet 03:00.0
devices into the Pre-Launched RT VM, and give the rest of the devices to
the Service VM.
Install SOS with Grub on NVMe
Install SOS With GRUB on NVMe
=============================
As with the Hybrid and Logical Partition scenarios, the Pre-Launched RT
@@ -64,7 +64,7 @@ the SATA to the NVMe drive:
# mount /dev/sda1 /mnt
# cp /mnt/bzImage /boot/EFI/BOOT/bzImage_RT
Build ACRN with Pre-Launched RT Mode
Build ACRN With Pre-Launched RT Mode
====================================
The ACRN VM configuration framework can easily configure resources for


@@ -35,7 +35,7 @@ Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
.. _rdt_detection_capabilities:
RDT detection and resource capabilities
RDT Detection and Resource Capabilities
***************************************
From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
resource capabilities. Use the platform's serial port for the HV shell.
@@ -98,7 +98,7 @@ MBA bit encoding:
resources by using a common subset CLOS. This is done in order to minimize
misconfiguration errors.
Tuning RDT resources in HV debug shell
Tuning RDT Resources in HV Debug Shell
**************************************
This section explains how to configure the RDT resources from the HV debug
shell.
@@ -141,7 +141,7 @@ shell.
.. _rdt_vm_configuration:
Configure RDT for VM using VM Configuration
Configure RDT for VM Using VM Configuration
*******************************************
#. RDT hardware feature is enabled by default on supported platforms. This
@@ -166,11 +166,11 @@ Configure RDT for VM using VM Configuration
</RDT>
#. Once RDT is enabled in the scenario XML file, the next step is to program
the desired cache mask and/or the MBA delay value as needed in the
scenario file. Each cache mask or MBA delay configuration corresponds
to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
cache mask settings need to be in place where each setting corresponds
to a CLOS ID starting from 0. To set the cache masks for 4 CLOS IDs and
use the default delay value for MBA, configure it as shown in the example below.
.. code-block:: none


@@ -1,6 +1,6 @@
.. _rt_performance_tuning:
ACRN Real-time (RT) Performance Analysis
ACRN Real-Time (RT) Performance Analysis
########################################
This document describes the methods to collect trace/data for ACRN real-time VM (RTVM)
@@ -9,8 +9,8 @@ real-time performance analysis. Two parts are included:
- Method to trace ``vmexit`` occurrences for analysis.
- Method to collect Performance Monitoring Counters information for tuning based on Performance Monitoring Unit, or PMU.
``vmexit`` analysis for ACRN RT performance
*******************************************
Vmexit Analysis for ACRN RT Performance
***************************************
``vmexit`` events are triggered in response to certain instructions and events, and they are
a key source of performance degradation in virtual machines. During the runtime
@@ -30,7 +30,7 @@ the duration of time where we do not want to see any ``vmexit`` occur.
Different RT tasks use different critical sections. This document uses
the cyclictest benchmark as an example of how to do ``vmexit`` analysis.
The critical sections
The Critical Sections
=====================
Here is example pseudocode of a cyclictest implementation.
@@ -53,14 +53,14 @@ the cyclictest to be awakened and scheduled. Here we can get the latency by
So, we define the starting point of the critical section as ``next`` and
the ending point as ``now``.
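To make ``next`` and ``now`` concrete, here is a sketch of the measurement
loop (illustrative only; the real cyclictest is a C program that uses
``clock_nanosleep`` with an absolute deadline)::

   import time

   INTERVAL_NS = 1_000_000                  # 1 ms cycle, an arbitrary choice

   next_wake = time.monotonic_ns() + INTERVAL_NS
   while True:
       delay = next_wake - time.monotonic_ns()
       if delay > 0:
           time.sleep(delay / 1e9)          # sleep until the deadline 'next'
       now = time.monotonic_ns()            # first thing done after wake-up
       print(now - next_wake)               # wake-up latency for this cycle
       next_wake += INTERVAL_NS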
Log and trace data collection
Log and Trace Data Collection
=============================
#. Add time stamps (in TSC) at ``next`` and ``now``.
#. Capture the log with the above time stamps in the RTVM.
#. Capture the ``acrntrace`` log in the Service VM at the same time.
Offline analysis
Offline Analysis
================
#. Convert the raw trace data to human readable format.
@@ -71,10 +71,10 @@ Offline analysis
:align: center
:name: vm_exits_log
Collecting Performance Monitoring Counters data
Collecting Performance Monitoring Counters Data
***********************************************
Enable Performance Monitoring Unit (PMU) support in VM
Enable Performance Monitoring Unit (PMU) Support in VM
======================================================
By default, the ACRN hypervisor doesn't expose the PMU-related CPUID and
@@ -149,7 +149,7 @@ Note that Precise Event Based Sampling (PEBS) is not yet enabled in the VM.
value64 = hva2hpa(vcpu->arch.msr_bitmap);
exec_vmwrite64(VMX_MSR_BITMAP_FULL, value64);
Perf/PMU tools in performance analysis
Perf/PMU Tools in Performance Analysis
======================================
After exposing PMU-related CPUID/MSRs to the VM, performance analysis tools
@@ -170,7 +170,7 @@ following links for perf usage:
Refer to https://github.com/andikleen/pmu-tools for PMU usage.
Top-down Microarchitecture Analysis Method (TMAM)
Top-Down Microarchitecture Analysis Method (TMAM)
==================================================
The top-down microarchitecture analysis method (TMAM), based on top-down


@@ -1,6 +1,6 @@
.. _rt_perf_tips_rtvm:
ACRN Real-time VM Performance Tips
ACRN Real-Time VM Performance Tips
##################################
Background
@@ -34,7 +34,7 @@ RTVM performance:
This document summarizes tips from issues encountered and
resolved during real-time development and performance tuning.
Mandatory options for an RTVM
Mandatory Options for an RTVM
*****************************
An RTVM is a post-launched VM with LAPIC passthrough. Pay attention to
@@ -55,7 +55,7 @@ Tip: Use virtio polling mode
and enables polling mode to avoid a VM-exit at the frontend. Enable
virtio polling mode via the option ``--virtio_poll [polling interval]``.
Avoid VM-exit latency
Avoid VM-exit Latency
*********************
VM-exit has a significant negative impact on virtualization performance.
@@ -137,7 +137,7 @@ Tip: Create and initialize the RT tasks at the beginning to avoid runtime access
to CR3 and CR8 does not cause a VM-exit. However, writes to CR0 and CR4 may cause a
VM-exit, which would happen at the spawning and initialization of a new task.
Isolating the impact of neighbor VMs
Isolating the Impact of Neighbor VMs
************************************
ACRN makes use of several technologies and hardware features to avoid


@@ -1,6 +1,6 @@
.. _rtvm_workload_guideline:
Real-time VM Application Design Guidelines
Real-Time VM Application Design Guidelines
##########################################
An RTOS developer must be aware of the differences between running applications on a native
@@ -11,7 +11,7 @@ incremental runtime overhead.
This document provides some application design guidelines when using an RTVM within the ACRN hypervisor.
Run RTVM with dedicated resources/devices
Run RTVM With Dedicated Resources/Devices
*****************************************
As a best practice, ACRN allocates dedicated CPU, memory resources, and cache resources (using Intel
@@ -22,14 +22,14 @@ of I/O devices, we recommend using dedicated (passthrough) PCIe devices to avoid
The configuration space for passthrough PCI devices is still emulated and accessing it will
trigger a VM-Exit.
RTVM with virtio PMD (Polling Mode Driver) for I/O sharing
RTVM With Virtio PMD (Polling Mode Driver) for I/O Sharing
**********************************************************
If the RTVM must use shared devices, we recommend using PMD drivers that can eliminate the
unpredictable latency caused by guest I/O trap-and-emulate access. The RTVM application must be
aware that the packets in the PMD driver may arrive or be sent later than expected.
RTVM with HV Emulated Device
RTVM With HV Emulated Device
****************************
ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
@@ -39,7 +39,7 @@ behavior, the RT application using the vUART shall reserve a margin of CPU cycle
for the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles
per register access).
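To put that figure in wall-clock terms (the 2.1 GHz frequency below is an
assumed example, not from this document)::

   CYCLES_PER_ACCESS = 3000                         # upper bound quoted above
   CORE_HZ = 2.1e9                                  # assumed core frequency
   print(CYCLES_PER_ACCESS / CORE_HZ * 1e6, "us")   # ~1.43 us per vUART register access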
DM emulated device (Except PMD)
DM Emulated Device (Except PMD)
*******************************
We recommend **not** using DM-emulated devices in an RTVM.


@@ -177,7 +177,7 @@ outputs:
Debug = false
UseVSock = false
Run a Kata Container with ACRN
Run a Kata Container With ACRN
******************************
The system is now ready to run a Kata Container on ACRN. Note that a reboot


@@ -146,7 +146,7 @@ Install ACRN on the Debian VM
[ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19
[ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp
Enable the network sharing to give network access to User VM
Enable the Network Sharing to Give Network Access to User VM
************************************************************
.. code-block:: bash


@@ -190,7 +190,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM.
The Ubuntu desktop on the secondary monitor
Enable the Ubuntu Console instead of the User Interface
Enable the Ubuntu Console Instead of the User Interface
*******************************************************
After the Ubuntu VM reboots, follow the steps below to enable the Ubuntu


@@ -1,6 +1,6 @@
.. _setup_openstack_libvirt:
Configure ACRN using OpenStack and libvirt
Configure ACRN Using OpenStack and Libvirt
##########################################
Introduction
@@ -41,7 +41,7 @@ Install ACRN
create it using the instructions in
:ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`.
Set up and launch LXC/LXD
Set Up and Launch LXC/LXD
*************************
1. Set up the LXC/LXD Linux container engine::
@@ -148,7 +148,7 @@ The ``openstack`` container is now properly configured for OpenStack.
Use the ``lxc list`` command to verify that both **eth0** and **eth1**
appear in the container.
Set up ACRN prerequisites inside the container
Set Up ACRN Prerequisites Inside the Container
**********************************************
1. Log in to the ``openstack`` container as the **stack** user::
@@ -177,7 +177,7 @@ Set up ACRN prerequisites inside the container
.. note:: Use the tag that matches the version of the ACRN hypervisor (``acrn.bin``)
that runs on your system.
Set up libvirt
Set Up Libvirt
**************
1. Install the required packages::
@@ -218,7 +218,7 @@ Set up libvirt
$ sudo systemctl daemon-reload
Set up OpenStack
Set Up OpenStack
****************
Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://docs.openstack.org/devstack/>`_.
@@ -303,7 +303,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
$ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104
Configure and create OpenStack Instance
Configure and Create OpenStack Instance
***************************************
We'll be using the Ubuntu 20.04 (Focal) Cloud image as the OS image (qcow2


@@ -30,7 +30,7 @@ The image below shows the high-level design of SGX virtualization in ACRN.
SGX Virtualization in ACRN
Enable SGX support for Guest
Enable SGX Support for Guest
****************************
Presumptions
@@ -232,13 +232,13 @@ ENCLS[ECREATE]
Other VMExit Control
********************
RDRAND exiting
RDRAND Exiting
==============
* ACRN allows the Guest to use the RDRAND/RDSEED instructions but does not set "RDRAND
exiting" to 1.
PAUSE exiting
PAUSE Exiting
=============
* ACRN does not set "PAUSE exiting" to 1.
@@ -248,7 +248,7 @@ Future Development
Following are some currently unplanned areas of interest for future
ACRN development around SGX virtualization.
Launch Configuration support
Launch Configuration Support
============================
When the following two conditions are both satisfied:


@@ -128,7 +128,7 @@ SR-IOV Architecture in ACRN
standard BAR registers. The MSI-X mapping base address is also from the
PF's SR-IOV capabilities, not PCI standard BAR registers.
SR-IOV Passthrough VF Architecture In ACRN
SR-IOV Passthrough VF Architecture in ACRN
------------------------------------------
.. figure:: images/sriov-image4.png
@@ -219,7 +219,7 @@ SR-IOV VF Assignment Policy
a passthrough to high privilege VMs because the PF device may impact
the assigned VFs' functionality and stability.
SR-IOV Usage Guide In ACRN
SR-IOV Usage Guide in ACRN
--------------------------
We use the Intel 82576 NIC as an example in the following instructions. We
@@ -280,7 +280,7 @@ only support LaaG (Linux as a Guest).
c. Boot the User VM
SR-IOV Limitations In ACRN
SR-IOV Limitations in ACRN
--------------------------
1. The SR-IOV migration feature is not supported.


@@ -256,7 +256,7 @@ section, we'll focus on two major components:
See :ref:`trusty_tee` for additional details of Trusty implementation in
ACRN.
One-VM, Two-Worlds
==================
As previously mentioned, Trusty Secure Monitor could be any


@@ -1,6 +1,6 @@
.. _using_grub:
Using GRUB to boot ACRN
Using GRUB to Boot ACRN
#######################
`GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot bootloader
@@ -45,7 +45,7 @@ ELF format when :option:`hv.FEATURES.RELOC` is not set, or RAW format when
.. _pre-installed-grub:
Using pre-installed GRUB
Using Pre-Installed GRUB
************************
Most Linux distributions use GRUB version 2 by default. If its version
@@ -137,7 +137,7 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
start the VMs automatically.
Installing self-built GRUB
Installing Self-Built GRUB
**************************
If the GRUB version on your platform is outdated or has issues booting


@@ -1,6 +1,6 @@
.. _using_hybrid_mode_on_nuc:
Getting Started Guide for ACRN hybrid mode
Getting Started Guide for ACRN Hybrid Mode
##########################################
ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr


@@ -1,6 +1,6 @@
.. _using_partition_mode_on_nuc:
Getting Started Guide for ACRN logical partition mode
Getting Started Guide for ACRN Logical Partition Mode
#####################################################
The ACRN hypervisor supports a logical partition scenario in which the User
@@ -41,7 +41,7 @@ Prerequisites
.. rst-class:: numbered-step
Update kernel image and modules of pre-launched VM
Update Kernel Image and Modules of Pre-Launched VM
**************************************************
#. On your development workstation, clone the ACRN kernel source tree, and
build the Linux kernel image that will be used to boot the pre-launched VMs:
@@ -105,7 +105,7 @@ Update kernel image and modules of pre-launched VM
.. rst-class:: numbered-step
Update ACRN hypervisor image
Update ACRN Hypervisor Image
****************************
#. Before building the ACRN hypervisor, find the I/O address of the serial
@@ -189,7 +189,7 @@ Update ACRN hypervisor image
.. rst-class:: numbered-step
Update Ubuntu GRUB to boot hypervisor and load kernel image
Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image
***********************************************************
#. Append the following configuration to the ``/etc/grub.d/40_custom`` file:
@@ -249,7 +249,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
.. rst-class:: numbered-step
Logical partition scenario startup check
Logical Partition Scenario Startup Check
****************************************
#. Use these steps to verify that the hypervisor is properly running:


@@ -21,7 +21,7 @@ In the following steps, you'll first create a Windows image
in the Service VM, and then launch that image as a Guest VM.
Verified version
Verified Version
================
* Windows 10 Version:
@@ -38,12 +38,12 @@ Verified version
set **DVMT Pre-Allocated** to **64MB** and set **PM Support**
to **Enabled**.
Create a Windows 10 image in the Service VM
Create a Windows 10 Image in the Service VM
===========================================
Create a Windows 10 image to install Windows 10 onto a virtual disk.
Download Win10 image and drivers
Download Win10 Image and Drivers
--------------------------------
#. Download `MediaCreationTool20H2.exe <https://www.microsoft.com/software-download/windows10>`_.
@@ -66,7 +66,7 @@ Download Win10 image and drivers
- Click **Download**. When the download is complete, unzip the file. You
will see an ISO named ``winvirtio.iso``.
Create a raw disk
Create a Raw Disk
-----------------
Run these commands on the Service VM::
@@ -76,7 +76,7 @@ Run these commands on the Service VM::
$ cd /home/acrn/work
$ qemu-img create -f raw win10-ltsc.img 30G
Prepare the script to create an image
Prepare the Script to Create an Image
-------------------------------------
#. Refer to :ref:`gpu-passthrough` to enable the GVT-d GOP feature; then copy the above .iso files and the built OVMF.fd to /home/acrn/work
@@ -212,7 +212,7 @@ When you see the UEFI shell, input **exit**.
Windows and install in safe mode.
The latest version (27.20.100.9030) was verified on WHL. You'd better use the same version as the one in native Windows 10 on your board.
Boot Windows on ACRN with a default configuration
Boot Windows on ACRN With a Default Configuration
=================================================
#. Prepare the WaaG launch script
@@ -235,7 +235,7 @@ Boot Windows on ACRN with a default configuration
The WaaG desktop displays on the monitor.
ACRN Windows verified feature list
ACRN Windows Verified Feature List
**********************************
.. csv-table::
@@ -257,7 +257,7 @@ ACRN Windows verified feature list
, "Microsoft Store", "OK"
, "3D Viewer", "OK"
Explanation for acrn-dm popular command lines
Explanation for acrn-dm Popular Command Lines
*********************************************
.. note:: Use these acrn-dm command line entries according to your
@@ -297,7 +297,7 @@ Explanation for acrn-dm popular command lines
* ``--ovmf /home/acrn/work/OVMF.fd``:
Make sure it points to your OVMF binary path.
Secure boot enabling
Secure Boot Enabling
********************
Refer to the steps in :ref:`How-to-enable-secure-boot-for-windows` for
secure boot enabling.


@@ -1,6 +1,6 @@
.. _using_xenomai_as_uos:
Run Xenomai as the User VM OS (Real-time VM)
Run Xenomai as the User VM OS (Real-Time VM)
############################################
`Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments.
@@ -9,7 +9,7 @@ This tutorial describes how to run Xenomai as the User VM OS (real-time VM) on t
.. _Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home
Build the Xenomai kernel
Build the Xenomai Kernel
************************
Follow these instructions to build the Xenomai kernel:
@@ -92,7 +92,7 @@ Launch the RTVM
clr-c1ff5bba8c3145ac8478e8e1f96e1087 login:
Install the Xenomai libraries and tools
Install the Xenomai Libraries and Tools
***************************************
To build and install Xenomai tools or its libraries in the RTVM, refer to the official


@@ -1,6 +1,6 @@
.. _using_yp:
Using Yocto Project with ACRN
Using Yocto Project With ACRN
#############################
The `Yocto Project <https://yoctoproject.org>`_ (YP) is an open source
@@ -16,7 +16,7 @@ components, and software components. Layers are repositories containing
related sets of instructions that tell the Yocto Project build system
what to do.
The meta-acrn layer
The meta-acrn Layer
*******************
The meta-acrn layer integrates the ACRN hypervisor with OpenEmbedded,


@@ -29,7 +29,7 @@ change the value in it.
``vuart[1]`` is initiated as a **communication** port.
Console enable list
Console Enable List
===================
+-----------------+-----------------------+--------------------+----------------+----------------+
@@ -50,7 +50,7 @@ Console enable list
.. _how-to-configure-a-console-port:
How to configure a console port
How to Configure a Console Port
===============================
To enable the console port for a VM, change only the ``port_base`` and
@@ -75,7 +75,7 @@ Example:
.. _how-to-configure-a-communication-port:
How to configure a communication port
How to Configure a Communication Port
=====================================
To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate.
@@ -111,7 +111,7 @@ Example:
.t_vuart.vuart_id = 1U,
},
Communication vUART enable list
Communication vUART Enable List
===============================
+-----------------+-----------------------+--------------------+---------------------+----------------+
@@ -128,7 +128,7 @@ Communication vUART enable list
| Logic_partition | Pre-launched | Pre-launched RTVM | | |
+-----------------+-----------------------+--------------------+---------------------+----------------+
Launch script
Launch Script
=============
- ``-s 1:0,lpc -l com1,stdio``
@@ -139,7 +139,7 @@ Launch script
- ``-B " ....,console=ttyS0, ..."``
Add this to the kernel-based system.
Test the communication port
Test the Communication Port
===========================
After you have configured the communication port in hypervisor, you can
@@ -172,7 +172,7 @@ access the corresponding port. For example, in Linux OS:
- This cannot be used to transfer files because flow control is
not supported so data may be lost.
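As an illustrative smoke test (assuming the communication vUART shows up as
``/dev/ttyS1`` in both VMs; check with ``dmesg | grep tty``)::

   import os

   DEV = "/dev/ttyS1"                        # assumed device node

   def send(msg: bytes) -> None:
       fd = os.open(DEV, os.O_WRONLY | os.O_NOCTTY)
       try:
           os.write(fd, msg)                 # no flow control: data may be lost
       finally:
           os.close(fd)

   def receive(n: int = 64) -> bytes:
       fd = os.open(DEV, os.O_RDONLY | os.O_NOCTTY)
       try:
           return os.read(fd, n)             # blocks until data arrives
       finally:
           os.close(fd)

   # In VM1: send(b"hello\n")     In VM2: print(receive())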
vUART design
vUART Design
============
**Console vUART**
@@ -187,7 +187,7 @@ vUART design
:align: center
:name: communication-vuart
COM port configurations for Post-Launched VMs
COM Port Configurations for Post-Launched VMs
=============================================
For a post-launched VM, the ``acrn-dm`` cmdline also provides a COM port configuration:
@@ -365,7 +365,7 @@ VM0's PCI-vUART1. Usually, legacy ``vuart[0]`` is ``ttyS0`` in VM, and
``vuart[1]`` is ``ttyS1``. So we would expect PCI-vUART0 to be ``ttyS0``,
PCI-vUART1 to be ``ttyS1``, and so on through
PCI-vUART7 being ``ttyS7``, but that is not always the case. We can use the BDF to identify
a PCI-vUART in the VM.
If you run ``dmesg | grep tty`` at a VM shell, you may see:
@@ -398,7 +398,7 @@ symbols set:
CONFIG_SERIAL_8250_EXTENDED=y
CONFIG_SERIAL_8250_DETECT_IRQ=y
Kernel Cmdline for PCI-vUART console
Kernel Cmdline for PCI-vUART Console
====================================
When an ACRN VM does not have a legacy ``vuart[0]`` but has a


@@ -22,7 +22,7 @@ the OEM can generate their own PK.
Here we show two ways to generate a PK: ``openssl`` and Microsoft tools.
Generate PK Using openssl
Generate PK Using OpenSSL
=========================
- Generate a Self-Signed Certificate as PK from a new key using the
@@ -128,7 +128,7 @@ Generate PK Using openssl
openssl x509 -in PK.crt -outform der -out PK.der
Using Microsoft tools
Using Microsoft Tools
=====================
Microsoft documents explain `how to use Microsoft tools to generate a secure boot key
@@ -414,7 +414,7 @@ which we'll summarize below.
Conventions. CRT and CER file extensions can be interchanged as
the encoding type is identical.
Download KEK and DB from Microsoft
Download KEK and DB From Microsoft
**********************************
KEK (Key Exchange Key):
@@ -431,10 +431,9 @@ DB (Allowed Signature database):
<https://go.microsoft.com/fwlink/p/?LinkID=321194>`_:
Microsoft signer for third party UEFI binaries via DevCenter program.
Compile OVMF with secure boot support
Compile OVMF With Secure Boot Support
*************************************
::
git clone https://github.com/projectacrn/acrn-edk2.git
@@ -475,7 +474,7 @@ Notes:
.. _qemu_inject_boot_keys:
Use QEMU to inject secure boot keys into OVMF
Use QEMU to Inject Secure Boot Keys Into OVMF
*********************************************
We follow the `openSUSE: UEFI Secure boot using qemu-kvm document