doc: fix misspellings and formatting

* General scan for misspellings, "smart quotes", and formatting errors
  missed during regular review. Also removed use of "please".

* Fix old XML examples that had desc="..." comments. These comments were
  moved to the xsd files instead of being in the XML files themselves.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Commit 0d03224070 (parent ac67051ab5)
Author: David B. Kinder, 2021-09-20 16:46:41 -07:00; committed by David Kinder
24 changed files with 108 additions and 108 deletions


@ -3283,10 +3283,10 @@ each function:
1) The comments block shall start with ``/**`` (slash-asterisk-asterisk) in a
single line.
-2) The comments block shall end with :literal:`\ */` (space-asterisk-slash) in
+2) The comments block shall end with :literal:`\ */` (space-asterisk-slash) in
a single line.
3) Other than the first line and the last line, every line inside the comments
-block shall start with :literal:`\ *` (space-asterisk). It also applies to
+block shall start with :literal:`\ *` (space-asterisk). It also applies to
the line which is used to separate different paragraphs. We'll call it a
blank line for simplicity.
4) For each function, following information shall be documented:
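
The sub-items required by rule 4 continue past the end of this hunk. As a
quick illustration of rules 1-3 only, a conforming comment block for a
hypothetical function could look like this (a sketch, not taken from the
ACRN sources):

   /**
    * @brief Brief description of the hypothetical function.
    *
    * Longer description. The separator line above still starts with
    * space-asterisk, as rule 3 requires for blank lines.
    */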


@ -467,13 +467,13 @@ All changes and topics sent to GitHub must be well-formed, as described above.
Commit Message Body
===================
-When editing the commit message, please briefly explain what your change
+When editing the commit message, briefly explain what your change
does and why it's needed. A change summary of ``"Fixes stuff"`` will be
rejected.
.. warning::
An empty change summary body is not permitted. Even for trivial changes,
-please include a summary body in the commit message.
+include a summary body in the commit message.
The description body of the commit message must include:
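
(The specific required items follow in the file, outside this hunk.) As a
rough sketch of the overall shape being described here (a one-line summary,
a body that explains what the change does and why, and a Signed-off-by
line), a well-formed commit message might look like:

   doc: fix typo in GSI sharing section

   Correct "pre-defined" to "predefined" so the term matches the rest of
   the documentation. No functional content changes.

   Signed-off-by: Jane Developer <jane.developer@example.com>

The subject line, body, and author shown here are purely illustrative.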


@ -311,9 +311,9 @@ creates a hyperlink to that file in the current branch. For example, a GitHub
link to the reST file used to create this document can be generated
using ``:acrn_file:`doc/developer-guides/doc_guidelines.rst```, which will
appear as :acrn_file:`doc/developer-guides/doc_guidelines.rst`, a link to
-the “blob” file in the GitHub repo as displayed by GitHub. There’s also an
+the "blob" file in the GitHub repo as displayed by GitHub. There's also an
``:acrn_raw:`doc/developer-guides/doc_guidelines.rst``` role that will link
-to the “raw” uninterpreted file,
+to the "raw" uninterpreted file,
:acrn_raw:`doc/developer-guides/doc_guidelines.rst` file. (Click these links
to see the difference.)


@ -5,13 +5,13 @@ Drawings Using Graphviz
We support using the Sphinx `graphviz extension`_ for creating simple
graphs and line drawings using the dot language. The advantage of using
-graphviz for drawings is that the source for a drawing is a text file that
+Graphviz for drawings is that the source for a drawing is a text file that
can be edited and maintained in the repo along with the documentation.
.. _graphviz extension: http://graphviz.gitlab.io
These source ``.dot`` files are generally kept separate from the document
-itself, and included by using a graphviz directive:
+itself, and included by using a Graphviz directive:
.. code-block:: none
@ -38,7 +38,7 @@ the dot language and drawing options.
Simple Directed Graph
*********************
-For simple drawings with shapes and lines, you can put the graphviz commands
+For simple drawings with shapes and lines, you can put the Graphviz commands
in the content block for the directive. For example, for a simple directed
graph (digraph) with two nodes connected by an arrow, you can write:
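
(The actual example follows in the file, outside this hunk.) As a sketch of
what such a directive can look like, with placeholder node names:

   .. graphviz::

      digraph G {
         rankdir=LR;
         a -> b
      }
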
@ -108,7 +108,7 @@ are centered, left-justified, and right-justified, respectively.
Finite-State Machine
********************
-Here's an example of using graphviz for defining a finite-state machine
+Here's an example of using Graphviz for defining a finite-state machine
for pumping gas:
.. literalinclude:: images/gaspump.dot


@ -118,8 +118,8 @@ API forwarding, or a split driver model, is another widely-used I/O
virtualization technology. It has been used in commercial virtualization
productions such as VMware*, PCoIP*, and Microsoft* RemoteFx*.
It is a natural path when researchers study a new type of
-I/O virtualization usage—for example, when GPGPU computing in a VM was
-initially proposed. Intel GVT-s is based on this approach.
+I/O virtualization usage (for example, when GPGPU computing in a VM was
+initially proposed). Intel GVT-s is based on this approach.
The architecture of API forwarding is shown in :numref:`api-forwarding`:
@ -319,9 +319,9 @@ ACRN hypervisor, with Service VM as the privileged VM, and multiple user
guests. A GVT-g device model working with the ACRN hypervisor
implements the policies of trap and passthrough. Each guest runs the
native graphics driver and can directly access performance-critical
-resources: the Frame Buffer and Command Buffer, with resource
-partitioning (as presented later). To protect privileged resources—that
-is, the I/O registers and PTEs—corresponding accesses from the graphics
+resources, such as the Frame Buffer and Command Buffer, with resource
+partitioning. To protect privileged resources including
+the I/O registers and PTEs, corresponding accesses from the graphics
driver in user VMs are trapped and forwarded to the GVT device model in the
Service VM for emulation. The device model leverages i915 interfaces to access
the physical GPU.
@ -399,8 +399,8 @@ read-write; that is, the guest driver will read back the same value that was
programmed earlier. A common emulation handler (for example,
intel_gvt_emulate_read/write) is enough to handle such general
emulation requirements. However, some registers must be emulated with
-specific logic—for example, affected by change of other states or
-additional audit or translation when updating the virtual register.
+specific logic (for example, affected by change of other states or
+additional audit or translation when updating the virtual register).
Therefore, a specific emulation handler must be installed for those
special registers.
@ -653,7 +653,7 @@ buffers for the IPU and others can also be shared with it. However, it
does require that the Service VM port the Hyper DMA Buffer importer driver. Also,
the Service VM must comprehend and implement the DMA buffer sharing model.
-For detailed information about this model, please refer to the `Linux
+For detailed information about this model, refer to the `Linux
HYPER_DMABUF Driver High Level Design
<https://github.com/downor/linux_hyper_dmabuf/blob/hyper_dmabuf_integration_v4/Documentation/hyper-dmabuf-sharing.txt>`_.
@ -842,7 +842,7 @@ Because the User VM always uses the host-based command submission (ELSP) model
and it never accesses the GPU or the Graphic Micro Controller (:term:`GuC`)
directly, its scheduler cannot do any preemption by itself.
The i915 scheduler does ensure that batch buffers are
-submitted in dependency order—that is, if a compositor has to wait for
+submitted in dependency order. If a compositor has to wait for
an application buffer to finish before its workload can be submitted to
the GPU, then the i915 scheduler of the User VM ensures that this happens.


@ -333,7 +333,7 @@ power operations.
VM Manager creates the User VM based on DM application, and does User VM state
management by interacting with lifecycle service in ACRN service.
-Please refer to VM management chapter for more details.
+Refer to VM management chapter for more details.
ACRN Service
============


@ -1034,7 +1034,7 @@ Note that there are some security considerations in this design:
other User VM.
Keeping the Service VM system as secure as possible is a very important goal in
-the system security design, please follow the recommendations in
+the system security design. Follow the recommendations in
:ref:`sos_hardening`.
SEED Derivation
@ -1058,7 +1058,7 @@ the non-secure OS issues this power event) is about to enter S3. While
the restore state hypercall is called only by vBIOS when User VM is ready to
resume from suspend state.
-For security design consideration of handling secure world S3, please
+For security design consideration of handling secure world S3,
read the previous section: :ref:`uos_suspend_resume`.
Platform Security Feature Virtualization and Enablement


@ -116,7 +116,7 @@ any pCPU that is not included in it.
CPU Assignment Management in HV
===============================
-The physical CPU assignment is pre-defined by ``cpu_affinity`` in
+The physical CPU assignment is predefined by ``cpu_affinity`` in
``vm config``, while post-launched VMs could be launched on pCPUs that are
a subset of it.
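
As a purely hypothetical illustration (the element names below are assumed
for illustration and are not part of this change), a predefined assignment
in the scenario XML can look something like:

   <cpu_affinity>
       <pcpu_id>0</pcpu_id>
       <pcpu_id>1</pcpu_id>
   </cpu_affinity>

Here the VM would be limited to pCPU 0 and pCPU 1, and a post-launched VM
could then run on a subset of those pCPUs.
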
@ -1084,7 +1084,7 @@ ACRN always enables I/O bitmap in *VMX_PROC_VM_EXEC_CONTROLS* and EPT
in *VMX_PROC_VM_EXEC_CONTROLS2*. Based on them,
*pio_instr_vmexit_handler* and *ept_violation_vmexit_handler* are
used for IO/MMIO emulation for a emulated device. The emulated device
-could locate in hypervisor or DM in the Service VM. Please refer to the "I/O
+could locate in hypervisor or DM in the Service VM. Refer to the "I/O
Emulation" section for more details.
For an emulated device done in the hypervisor, ACRN provide some basic


@ -83,7 +83,7 @@ one the following 4 cases:
debug purpose, so the UART device is owned by hypervisor and is not visible
to any VM. For now, UART is the only pci device could be owned by hypervisor.
- **Pre-launched VM**: The passthrough devices will be used in a pre-launched VM is
-pre-defined in VM configuration. These passthrough devices are owned by the
+predefined in VM configuration. These passthrough devices are owned by the
pre-launched VM after the VM is created. These devices will not be removed
from the pre-launched VM. There could be pre-launched VM(s) in logical partition
mode and hybrid mode.
@ -381,7 +381,7 @@ GSI Sharing Violation Check
All the PCI devices that are sharing the same GSI should be assigned to
the same VM to avoid physical GSI sharing between multiple VMs.
In logical partition mode or hybrid mode, the PCI devices assigned to
-pre-launched VM is statically pre-defined. Developers should take care not to
+pre-launched VM is statically predefined. Developers should take care not to
violate the rule.
For post-launched VM, devices that don't support MSI, ACRN DM puts the devices
sharing the same GSI pin to a GSI
@ -404,7 +404,7 @@ multiple PCI components with independent local time clocks within the same
system. Intel supports PTM on several of its systems and devices, such as PTM
root capabilities support on Whiskey Lake and Tiger Lake PCIe root ports, and
PTM device support on an Intel I225-V/I225-LM family Ethernet controller. For
-further details on PTM, please refer to the `PCIe specification
+further details on PTM, refer to the `PCIe specification
<https://pcisig.com/specifications>`_.
ACRN adds PCIe root port emulation in the hypervisor to support the PTM feature
@ -473,7 +473,7 @@ hypervisor startup. The Device Model (DM) then checks whether the pass-through d
supports PTM requestor capabilities and whether the corresponding root port
supports PTM root capabilities, as well as some other sanity checks. If an
error is detected during these checks, the error will be reported and ACRN will
-not enable PTM in the Guest VM. This doesn’t prevent the user from launching the Guest
+not enable PTM in the Guest VM. This doesn't prevent the user from launching the Guest
VM and passing through the device to the Guest VM. If no error is detected,
the device model will use ``add_vdev`` hypercall to add a virtual root port (VRP),
acting as the PTM root, to the Guest VM before passing through the device to the Guest VM.


@ -28,7 +28,7 @@ In the software modules view shown in :numref:`interrupt-sw-modules`,
the ACRN hypervisor sets up the physical interrupt in its basic
interrupt modules (e.g., IOAPIC/LAPIC/IDT). It dispatches the interrupt
in the hypervisor interrupt flow control layer to the corresponding
-handlers; this could be pre-defined IPI notification, timer, or runtime
+handlers; this could be predefined IPI notification, timer, or runtime
registered passthrough devices. The ACRN hypervisor then uses its VM
interfaces based on vPIC, vIOAPIC, and vMSI modules, to inject the
necessary virtual interrupt into the specific VM, or directly deliver
@ -246,9 +246,6 @@ ACRN hypervisor maintains a global IRQ Descriptor Table shared among the
physical CPUs, so the same vector will link to the same IRQ number for
all CPUs.
-.. note:: need to reference API doc for irq_desc
The *irq_desc[]* array's index represents IRQ number. A *handle_irq*
will be called from *interrupt_dispatch* to commonly handle edge/level
triggered IRQ and call the registered *action_fn*.


@ -46,19 +46,19 @@ to enforce the settings.
.. code-block:: none
:emphasize-lines: 2,4
-<RDT desc="Intel RDT (Resource Director Technology).">
-<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
-<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
-<CLOS_MASK desc="Cache Capacity Bitmask">0xF</CLOS_MASK>
+<RDT>
+<RDT_ENABLED>y</RDT_ENABLED>
+<CDP_ENABLED>n</CDP_ENABLED>
+<CLOS_MASK>0xF</CLOS_MASK>
Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under ``VM`` section. If user desires
-to use CDP feature, CDP_ENABLED should be set to ``y``.
+to use CDP feature, ``CDP_ENABLED`` should be set to ``y``.
.. code-block:: none
:emphasize-lines: 2
-<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
+<clos>
<vcpu_clos>0</vcpu_clos>
.. note::
@ -113,11 +113,11 @@ for non-root and root modes to enforce the settings.
.. code-block:: none
:emphasize-lines: 2,5
-<RDT desc="Intel RDT (Resource Director Technology).">
-<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
-<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
-<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
-<MBA_DELAY desc="Memory Bandwidth Allocation delay value">0</MBA_DELAY>
+<RDT>
+<RDT_ENABLED>y</RDT_ENABLED>
+<CDP_ENABLED>n</CDP_ENABLED>
+<CLOS_MASK></CLOS_MASK>
+<MBA_DELAY>0</MBA_DELAY>
Once the cache mask is set of each individual CPU, the respective CLOS ID
needs to be set in the scenario XML file under ``VM`` section.
@ -125,7 +125,7 @@ needs to be set in the scenario XML file under ``VM`` section.
.. code-block:: none
:emphasize-lines: 2
-<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
+<clos>
<vcpu_clos>0</vcpu_clos>
.. note::


@ -113,8 +113,8 @@ initial states, including IDT and physical PICs.
After the BSP detects that all APs are up, it will continue to enter guest mode; similar, after one AP
complete its initialization, it will start entering guest mode as well.
-When BSP & APs enter guest mode, they will try to launch pre-defined VMs whose vBSP associated with
-this physical core; these pre-defined VMs are static configured in ``vm config`` and they could be
+When BSP & APs enter guest mode, they will try to launch predefined VMs whose vBSP associated with
+this physical core; these predefined VMs are static configured in ``vm config`` and they could be
pre-launched Safety VM or Service VM; the VM startup will be explained in next section.
.. _vm-startup:


@ -32,8 +32,8 @@ VM powers off, the VM returns to a 'powered off' state again.
A VM can be paused to wait for some operation when it is running, so there is
also a 'paused' state.
-:numref:`hvvm-state` illustrates the state-machine of a VM state transition,
-please refer to :ref:`hv-cpu-virt` for related VCPU state.
+:numref:`hvvm-state` illustrates the state-machine of a VM state transition.
+Refer to :ref:`hv-cpu-virt` for related vCPU state.
.. figure:: images/hld-image108.png
:align: center
@ -49,7 +49,7 @@ Pre-Launched and Service VM
The hypervisor is the owner to control pre-launched and Service VM's state
by calling VM APIs directly, following the design of system power
-management. Please refer to ACRN power management design for more details.
+management. Refer to ACRN power management design for more details.
Post-Launched User VMs
@ -59,5 +59,5 @@ DM takes control of post-launched User VMs' state transition after the Service V
boots, by calling VM APIs through hypercalls.
Service VM user level service such as Life-Cycle-Service and tools such
-as Acrnd may work together with DM to launch or stop a User VM. Please
-refer to ACRN tool introduction for more details.
+as ``acrnd`` may work together with DM to launch or stop a User VM.
+Refer to :ref:`acrnctl` documentation for more details.


@ -82,12 +82,12 @@ The device model configuration command syntax for virtio-console is::
- The ``stdio/tty/pty`` is TTY capable, which means :kbd:`TAB` and
:kbd:`BACKSPACE` are supported, as on a regular terminal
-- When TTY is used, please make sure the redirected TTY is sleeping,
+- When TTY is used, make sure the redirected TTY is sleeping,
(e.g., by ``sleep 2d`` command), and will not read input from stdin before it
is used by virtio-console to redirect guest output.
-- When virtio-console socket_type is appointed to client, please make sure
-server VM(socket_type is appointed to server) has started.
+- When virtio-console socket_type is appointed to client, make sure
+server VM (socket_type is appointed to server) has started.
- Claiming multiple virtio-serial ports as consoles is supported,
however the guest Linux OS will only use one of them, through the
@ -222,7 +222,7 @@ SOCKET
The virtio-console socket-type can be set as socket server or client. Device model will
create a Unix domain socket if appointed the socket_type as server, then server VM or
another user VM can bind and listen for communication requirement. If appointed to
-client, please make sure the socket server is ready prior to launch device model.
+client, make sure the socket server is ready prior to launch device model.
1. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting
the ``</path/to/file.sock>`` to your use case in the VM1 configuration::


@ -9,7 +9,7 @@ Introduction
`Trusty`_ is a set of software components supporting a Trusted Execution
Environment (TEE). TEE is commonly known as an isolated processing environment
in which applications can be securely executed irrespective of the rest of the
-system. For more information about TEE, please visit the
+system. For more information about TEE, visit the
`Trusted Execution Environment wiki page <https://en.wikipedia.org/wiki/Trusted_execution_environment>`_.
Trusty consists of:


@ -228,7 +228,7 @@ Configure Target BIOS Settings
#. Boot your target and enter the BIOS configuration editor.
-Tip: When you are booting your target, you’ll see an option (quickly) to
+Tip: When you are booting your target, you'll see an option (quickly) to
enter the BIOS configuration editor, typically by pressing :kbd:`F2` during
the boot and before the GRUB menu (or Ubuntu login screen) appears.


@ -345,7 +345,7 @@ can define your own configuration scenarios.
In this example, one post-launched User VM provides Human Machine Interface
(HMI) capability, another provides Artificial Intelligence (AI) capability,
-some compute function is run the Kata Container, d the RTVM runs the soft
+some compute function is run the Kata Container, and the RTVM runs the soft
Programmable Logic Controller (PLC) that requires hard real-time
characteristics.


@ -199,7 +199,7 @@ Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM).
.. code-block:: none
:emphasize-lines: 2,3
-<IVSHMEM desc="IVSHMEM configuration">
+<IVSHMEM>
<IVSHMEM_ENABLED>y</IVSHMEM_ENABLED>
<IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION>
</IVSHMEM>


@ -75,7 +75,7 @@ Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with:
make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml RELEASE=0
-After the build completes, please update ACRN on NVMe. It is
+After the build completes, update ACRN on NVMe. It is
/boot/EFI/BOOT/acrn.bin, if /dev/nvme0n1p1 is mounted at /boot.
Add Pre-Launched RT Kernel Image to GRUB Config


@ -155,14 +155,14 @@ Configure RDT for VM Using VM Configuration
:emphasize-lines: 6
<FEATURES>
-<RELOC desc="Enable hypervisor relocation">y</RELOC>
-<SCHEDULER desc="The CPU scheduler to be used by the hypervisor.">SCHED_BVT</SCHEDULER>
-<MULTIBOOT2 desc="Support boot ACRN from multiboot2 protocol.">y</MULTIBOOT2>
-<RDT desc="Intel RDT (Resource Director Technology).">
-<RDT_ENABLED desc="Enable RDT">*y*</RDT_ENABLED>
-<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
-<CLOS_MASK desc="Cache Capacity Bitmask"></CLOS_MASK>
-<MBA_DELAY desc="Memory Bandwidth Allocation delay value"></MBA_DELAY>
+<RELOC>y</RELOC>
+<SCHEDULER>SCHED_BVT</SCHEDULER>
+<MULTIBOOT2>y</MULTIBOOT2>
+<RDT>
+<RDT_ENABLED>y</RDT_ENABLED>
+<CDP_ENABLED>n</CDP_ENABLED>
+<CLOS_MASK></CLOS_MASK>
+<MBA_DELAY></MBA_DELAY>
</RDT>
#. Once RDT is enabled in the scenario XML file, the next step is to program
@ -177,17 +177,17 @@ Configure RDT for VM Using VM Configuration
:emphasize-lines: 8,9,10,11,12
<FEATURES>
-<RELOC desc="Enable hypervisor relocation">y</RELOC>
-<SCHEDULER desc="The CPU scheduler to be used by the hypervisor.">SCHED_BVT</SCHEDULER>
-<MULTIBOOT2 desc="Support boot ACRN from multiboot2 protocol.">y</MULTIBOOT2>
-<RDT desc="Intel RDT (Resource Director Technology).">
-<RDT_ENABLED desc="Enable RDT">y</RDT_ENABLED>
-<CDP_ENABLED desc="CDP (Code and Data Prioritization). CDP is an extension of CAT.">n</CDP_ENABLED>
-<CLOS_MASK desc="Cache Capacity Bitmask">*0xff*</CLOS_MASK>
-<CLOS_MASK desc="Cache Capacity Bitmask">*0x3f*</CLOS_MASK>
-<CLOS_MASK desc="Cache Capacity Bitmask">*0xf*</CLOS_MASK>
-<CLOS_MASK desc="Cache Capacity Bitmask">*0x3*</CLOS_MASK>
-<MBA_DELAY desc="Memory Bandwidth Allocation delay value">*0*</MBA_DELAY>
+<RELOC>y</RELOC>
+<SCHEDULER>SCHED_BVT</SCHEDULER>
+<MULTIBOOT2>y</MULTIBOOT2>
+<RDT>
+<RDT_ENABLED>y</RDT_ENABLED>
+<CDP_ENABLED>n</CDP_ENABLED>
+<CLOS_MASK>0xff</CLOS_MASK>
+<CLOS_MASK>0x3f</CLOS_MASK>
+<CLOS_MASK>0xf</CLOS_MASK>
+<CLOS_MASK>0x3</CLOS_MASK>
+<MBA_DELAY>0</MBA_DELAY>
</RDT>
.. note::
@ -206,12 +206,12 @@ Configure RDT for VM Using VM Configuration
:emphasize-lines: 5,6,7,8
<vm id="0">
-<vm_type desc="Specify the VM type" readonly="true">PRE_STD_VM</vm_type>
-<name desc="Specify the VM name which will be shown in hypervisor console command: vm_list.">ACRN PRE-LAUNCHED VM0</name>
-<uuid configurable="0" desc="vm uuid">26c5e0d8-8f8a-47d8-8109-f201ebd61a5e</uuid>
-<clos desc="Class of Service for Cache Allocation Technology. Please refer SDM 17.19.2 for details and use with caution.">
-<vcpu_clos>*0*</vcpu_clos>
-<vcpu_clos>*1*</vcpu_clos>
+<vm_type readonly="true">PRE_STD_VM</vm_type>
+<name>ACRN PRE-LAUNCHED VM0</name>
+<uuid configurable="0">26c5e0d8-8f8a-47d8-8109-f201ebd61a5e</uuid>
+<clos>
+<vcpu_clos>0</vcpu_clos>
+<vcpu_clos>1</vcpu_clos>
</clos>
</vm>


@ -148,7 +148,7 @@ Tip: Do not share CPUs allocated to the RTVM with other RT or non-RT VMs.
However, for an RT VM, CPUs should be dedicatedly allocated for determinism.
Tip: Use RDT such as CAT and MBA to allocate dedicated resources to the RTVM.
-ACRN enables Intel® Resource Director Technology such as CAT, and MBA
+ACRN enables Intel Resource Director Technology such as CAT, and MBA
components such as the GPU via the memory hierarchy. The availability of RDT is
hardware-specific. Refer to the :ref:`rdt_configuration`.


@ -3,8 +3,8 @@
Enable SGX Virtualization
#########################
-SGX refers to `Intel® Software Guard Extensions <https://software.intel.com/
-en-us/sgx>`_ (Intel® SGX). This is a set of instructions that can be used by
+SGX refers to `Intel Software Guard Extensions <https://software.intel.com/
+en-us/sgx>`_ (Intel SGX). This is a set of instructions that can be used by
applications to set aside protected areas for select code and data in order to
prevent direct attacks on executing code or data stored in memory. SGX allows
an application to instantiate a protected container, referred to as an


@ -94,16 +94,16 @@ Convert the BDF to Hex Format
Refer this :ref:`hv-parameters` to change bdf 01:00.1 to Hex format: 0x101;
then add it to the grub menu:
-.. Note::
+.. code-block:: bash
-multiboot2 /boot/acrn.bin root=PARTUUID="b1bebafc-2b06-43e2-bf6a-323337daebc0 uart=bdf@0x101
+multiboot2 /boot/acrn.bin root=PARTUUID="b1bebafc-2b06-43e2-bf6a-323337daebc0" uart=bdf@0x101
.. Note::
-uart=bdf@0x100 for port 1
+``uart=bdf@0x100`` for port 1
-uart=bdf@0x101 for port 2
+``uart=bdf@0x101`` for port 2
-uart=bdf@0x101 is preferred for the industry scenario; otherwise, it can't
+``uart=bdf@0x101`` is preferred for the industry scenario; otherwise, it can't
input in the Hypervisor console after the Service VM boots up.
There is no such limitation for the hybrid or hybrid_rt scenarios.
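
As a cross-check on the 0x101 value used above, and assuming the
conventional PCI BDF bit layout (bus in bits 15:8, device in bits 7:3,
function in bits 2:0), the conversion for 01:00.1 works out as:

   bdf = (bus << 8) | (device << 3) | function
       = (0x01 << 8) | (0x00 << 3) | 0x1
       = 0x101

and 01:00.0 (port 1) likewise packs to 0x100, matching the note above.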


@ -22,7 +22,7 @@ the OEM can generate their own PK.
Here we show two ways to generate a PK: ``openssl`` and Microsoft tools.
-Generate PK Using Openssl
+Generate PK Using OpenSSL
=========================
- Generate a Self-Signed Certificate as PK from a new key using the
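
(The remainder of this step is outside the hunk.) As an illustrative sketch
only, a self-signed PK certificate and key can be generated with OpenSSL
along these lines; the key size, validity period, file names, and subject
below are placeholders:

   openssl req -new -x509 -newkey rsa:2048 -nodes -sha256 -days 365 \
       -subj "/CN=Example Platform Key/" -keyout PK.key -out PK.crt
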
@ -139,7 +139,7 @@ which we'll summarize below.
(CSP)
For the detailed information of each Microsoft Cryptographic Service
-Provider, please check the `Microsoft CRP document
+Provider, check the `Microsoft CRP document
<https://docs.microsoft.com/en-us/windows/desktop/seccrypto/microsoft-cryptographic-service-providers>`_
Here, we chose "Microsoft Strong Cryptographic Provider" for example::
@ -393,12 +393,12 @@ Download KEK and DB From Microsoft
KEK (Key Exchange Key):
`Microsoft Corporation KEK CA 2011
<https://go.microsoft.com/fwlink/p/?linkid=321185>`_:
-allows updates to db and dbx.
+allows updates to DB and DBX.
DB (Allowed Signature database):
`Microsoft Windows Production CA 2011
<https://go.microsoft.com/fwlink/?LinkId=321192>`_:
-This CA in the Signature Database (db) allows Windows to boot.
+This CA in the Signature Database (DB) allows Windows to boot.
`Microsoft Corporation UEFI CA 2011
<https://go.microsoft.com/fwlink/p/?LinkID=321194>`_:
@ -407,6 +407,7 @@ DB (Allowed Signature database):
Compile OVMF With Secure Boot Support
*************************************
+.. code-block:: bash
git clone https://github.com/projectacrn/acrn-edk2.git
@ -419,12 +420,14 @@ Compile OVMF With Secure Boot Support
source edksetup.sh
make -C BaseTools
-vim Conf/target.txt
+Edit the ``Conf/target.txt`` file and set these values::
ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc
TARGET_ARCH = X64
TOOL_CHAIN_TAG = GCC5
+Then continue doing the build::
build -DFD_SIZE_2MB -DDEBUG_ON_SERIAL_PORT=TRUE -DSECURE_BOOT_ENABLE