doc: spelling and grammar improvements

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

parent f300d1ff77
commit 576a3b5947
@@ -323,11 +323,11 @@ Remapping of (virtual) PIC interrupts are set up in a similar sequence:

    Initialization of remapping of virtual MSI for Service VM

-This figure illustrates how mappings of MSI or MSIX are set up for
+This figure illustrates how mappings of MSI or MSI-X are set up for
 Service VM. Service VM is responsible for issuing a hypercall to notify the
 hypervisor before it configures the PCI configuration space to enable an
 MSI. The hypervisor takes this opportunity to set up a remapping for the
-given MSI or MSIX before it is actually enabled by Service VM.
+given MSI or MSI-X before it is actually enabled by Service VM.

 When the User VM needs to access the physical device by passthrough, it uses
 the following steps:
@@ -842,7 +842,7 @@ space as shown in :numref:`virtio-framework-userland`:

 In the Virtio user-land framework, the implementation is compatible with
 Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model,
 and communicates with the Device Model through the PCIe interface: PIO/MMIO
-or MSI/MSIx. VBS-U accesses Virtio APIs through the user space ``vring`` service
+or MSI/MSI-X. VBS-U accesses Virtio APIs through the user space ``vring`` service
 API helpers. User space ``vring`` service API helpers access shared ring
 through a remote memory map (mmap). VHM maps User VM memory with the help of
 ACRN Hypervisor.
@@ -212,7 +212,7 @@ Additional scenario XML elements:

   Specify the maximum number of Interrupt Remapping Entries.

 ``MAX_IOAPIC_NUM`` (a child node of ``CAPACITIES``):
-  Specify the maximum number of IO-APICs.
+  Specify the maximum number of IOAPICs.

 ``MAX_PCI_DEV_NUM`` (a child node of ``CAPACITIES``):
   Specify the maximum number of PCI devices.

@@ -221,7 +221,7 @@ Additional scenario XML elements:

   Specify the maximum number of interrupt lines per IOAPIC.

 ``MAX_PT_IRQ_ENTRIES`` (a child node of ``CAPACITIES``):
-  Specify the maximum number of interrupt source for PT devices.
+  Specify the maximum number of interrupt sources for PT devices.

 ``MAX_MSIX_TABLE_NUM`` (a child node of ``CAPACITIES``):
   Specify the maximum number of MSI-X tables per device.
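For orientation only, a ``CAPACITIES`` block that sets the elements named above might look like this sketch; the values are purely hypothetical and the real ones come from the board and scenario XMLs:

.. code-block:: xml

   <CAPACITIES>
       <MAX_IOAPIC_NUM>1</MAX_IOAPIC_NUM>
       <MAX_PCI_DEV_NUM>96</MAX_PCI_DEV_NUM>
       <MAX_PT_IRQ_ENTRIES>64</MAX_PT_IRQ_ENTRIES>
       <MAX_MSIX_TABLE_NUM>64</MAX_MSIX_TABLE_NUM>
   </CAPACITIES>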
@@ -233,7 +233,7 @@ Additional scenario XML elements:

   Specify the Segment, Bus, Device, and function of the GPU.

 ``vm``:
-  Specify the VM with VMID by its "id" attribute.
+  Specify the VM with VMID by its ``id`` attribute.

 ``vm_type``:
   Current supported VM types are:

@@ -263,10 +263,10 @@ Additional scenario XML elements:

   List of pCPU: the guest VM is allowed to create vCPU from all or a subset of this list.

 ``base`` (a child node of ``epc_section``):
-  SGX EPC section base; must be page aligned.
+  SGX Enclave Page Cache section base; must be page aligned.

 ``size`` (a child node of ``epc_section``):
-  SGX EPC section size in bytes; must be page aligned.
+  SGX Enclave Page Cache section size in bytes; must be page aligned.

 ``clos``:
   Class of Service for Cache Allocation Technology settings. Refer to :ref:`hv_rdt` for details.

@@ -293,7 +293,7 @@ Additional scenario XML elements:

   must exactly match the module tag in the GRUB multiboot cmdline.

 ``bootargs`` (a child node of ``os_config``):
-  For internal use and is not configurable. Specify the kernel boot arguments
+  For internal use only and is not configurable. Specify the kernel boot arguments
   in bootargs under the parent of board_private.

 ``kern_load_addr`` (a child node of ``os_config``):

@@ -303,7 +303,7 @@ Additional scenario XML elements:

   The entry address in host memory for the VM kernel.

 ``vuart``:
-  Specify the vuart (aka COM) with the vUART ID by its "id" attribute.
+  Specify the vUART (aka COM) with the vUART ID by its ``id`` attribute.
   Refer to :ref:`vuart_config` for detailed vUART settings.

 ``type`` (a child node of ``vuart``):
@@ -367,7 +367,7 @@ Attributes of the ``uos_launcher`` specify the number of User VMs that the

 current scenario has:

 ``uos``:
-  Specify the User VM with its relative ID to Service VM by the "id" attribute.
+  Specify the User VM with its relative ID to Service VM by the ``id`` attribute.

 ``uos_type``:
   Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``,

@@ -378,11 +378,11 @@ current scenario has:

   Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them.

 ``mem_size``:
-  Specify the User VM memory size in Mbyte.
+  Specify the User VM memory size in megabytes.

 ``gvt_args``:
-  GVT arguments for the VM. Set it to ``gvtd`` for GVTd, otherwise stand
-  for GVTg arguments. The GVTg Input format: ``low_gm_size high_gm_size fence_sz``,
+  GVT arguments for the VM. Set it to ``gvtd`` for GVT-d, otherwise it's
+  for GVT-g arguments. The GVT-g input format: ``low_gm_size high_gm_size fence_sz``,
   The recommendation is ``64 448 8``. Leave it blank to disable the GVT.

 ``vbootloader``:

@@ -396,7 +396,7 @@ current scenario has:

 ``poweroff_channel``:
   Specify whether the User VM power off channel is through the IOC,
-  Powerbutton, or vUART.
+  power button, or vUART.

 ``usb_xhci``:
   USB xHCI mediator configuration. Input format:
@@ -407,13 +407,14 @@ current scenario has:

   List of shared memory regions for inter-VM communication.

 ``shm_region`` (a child node of ``shm_regions``):
-  configure the shm regions for current VM, input format: hv:/<shm name>,
-  <shm size in MB>. Refer to :ref:`ivshmem-hld` for details.
+  configure the shared memory regions for current VM, input format:
+  ``hv:/<shm name>, <shm size in MB>``. Refer to :ref:`ivshmem-hld` for details.
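As a concrete illustration of that input format, a 2 MB region named ``shm_region_0`` (both the name and size are hypothetical) would be written as:

.. code-block:: xml

   <shm_region>hv:/shm_region_0, 2</shm_region>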
 ``passthrough_devices``:
   Select the passthrough device from the lspci list. Currently we support:
-  usb_xdci, audio, audio_codec, ipu, ipu_i2c, cse, wifi, Bluetooth, sd_card,
-  Ethernet, wifi, sata, and nvme.
+  ``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``,
+  ``cse``, ``wifi``, ``bluetooth``, ``sd_card``,
+  ``ethernet``, ``sata``, and ``nvme``.

 ``network`` (a child node of ``virtio_devices``):
   The virtio network device setting.
@@ -445,8 +446,8 @@ Hypervisor configuration workflow

 The hypervisor configuration is based on the ``Kconfig``
 mechanism. Begin by creating a board-specific ``defconfig`` file to
 set up the default ``Kconfig`` values for the specified board.
-Next, configure the hypervisor build options using the ``make
-menuconfig`` graphical interface or ``make defconfig`` to generate
+Next, configure the hypervisor build options using the ``make menuconfig``
+graphical interface or ``make defconfig`` to generate
 a ``.config`` file. The resulting ``.config`` file is
 used by the ACRN build process to create a configured scenario- and
 board-specific hypervisor image.

@@ -459,7 +460,7 @@ board-specific hypervisor image.

 .. figure:: images/GUI_of_menuconfig.png
    :align: center

-   menuconfig interface sample
+   ``menuconfig`` interface sample

 Refer to :ref:`getting-started-hypervisor-configuration` for detailed
 configuration steps.

@@ -634,7 +635,7 @@ Instructions

    menu, and then select one default scenario setting to load a default
    scenario setting for the current board.

-   The default scenario configuration xmls are located at
+   The default scenario configuration XMLs are located at
    ``misc/vm_configs/xmls/config-xmls/[board]/``.
    We can edit the scenario name when creating or loading a scenario. If the
    current scenario name is duplicated with an existing scenario setting
@@ -648,8 +649,8 @@ Instructions

    XML**. The configuration app automatically directs to the new scenario
    XML once the import is complete.

-#. The configurable items display after one scenario is created/loaded/
-   selected. Following is an industry scenario:
+#. The configurable items display after one scenario is created, loaded,
+   or selected. Following is an industry scenario:

    .. figure:: images/configure_scenario.png
       :align: center

@@ -659,9 +660,9 @@ Instructions

    - Read-only items are marked as gray.

-   - Hover the mouse pointer over the item to display the description.
+   - Hover the mouse cursor over the item to display the description.

-#. To dynamically add or delete VMs:
+#. Dynamically add or delete VMs:

    - Click **Add a VM below** in one VM setting, and then select one VM type
      to add a new VM under the current VM.

@@ -675,14 +676,14 @@ Instructions

    .. figure:: images/configure_vm_add.png
       :align: center

-#. Click **Export XML** to save the scenario xml; you can rename it in the
+#. Click **Export XML** to save the scenario XML; you can rename it in the
    pop-up model.

    .. note::
-      All customized scenario xmls will be in user-defined groups, which are
+      All customized scenario XMLs will be in user-defined groups, which are
       located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.

-   Before saving the scenario xml, the configuration app validates the
+   Before saving the scenario XML, the configuration app validates the
    configurable items. If errors exist, the configuration app lists all
    incorrect configurable items and shows the errors as below:

@@ -690,7 +691,7 @@ Instructions

       :align: center

    After the scenario is saved, the page automatically directs to the saved
-   scenario xmls. Delete the configured scenario by clicking **Export XML** -> **Remove**.
+   scenario XMLs. Delete the configured scenario by clicking **Export XML** -> **Remove**.

 #. Click **Generate configuration files** to save the current scenario
    setting and then generate files for the board-related configuration
@@ -6,8 +6,8 @@ Enable ACRN over QEMU/KVM

 Goal of this document is to bring-up ACRN as a nested Hypervisor on top of QEMU/KVM
 with basic functionality such as running Service VM (SOS) and User VM (UOS) for primarily 2 reasons,

-1. In order for the users to evaluate ACRN.
-2. Make ACRN platform agnostic and remove per hardware platform configurations setup overhead.
+1. Allow users to evaluate ACRN.
+2. Make ACRN platform agnostic and remove hardware-specific platform configurations setup overhead.

 This setup was tested with the following configuration,

@@ -15,12 +15,13 @@ This setup was tested with the following configuration,

 - ACRN Kernel: 5.4.28 (Commit ID: a8cd22f49f0b2b56e526150fe0aaa9118edfcede)
 - QEMU emulator version 4.2.0
 - Service VM/User VM is ubuntu18.04
-- Platforms Tested: ApolloLake, KabyLake, CoffeeLake
+- Platforms Tested: Apollo Lake, Kaby Lake, Coffee Lake


 Prerequisites
 *************
-1. Make sure the platform supports Intel VMX as well as VT-d technologies. On ubuntu18.04, this
+1. Make sure the platform supports Intel VMX as well as VT-d
+   technologies. On Ubuntu 18.04, this
    can be checked by installing ``cpu-checker`` tool. If the output displays **KVM acceleration can be used**
    the platform supports it.
@@ -174,8 +175,10 @@ Install ACRN Hypervisor

 6. Update GRUB ``sudo update-grub``.

-7. Shutdown the guest and relaunch using, ``virsh start ACRNSOS --console`` and select ACRN hypervisor from GRUB menu to launch Service VM running on top of ACRN.
-   This can be verified from ``dmesg`` as shown below,
+7. Shut down the guest and relaunch using, ``virsh start ACRNSOS --console``
+   and select ACRN hypervisor from GRUB menu to launch Service
+   VM running on top of ACRN.
+   This can be verified using ``dmesg``, as shown below,

    .. code-block:: console
@@ -45,7 +45,7 @@ pCPU is fixed at the time the VM is launched. The statically configured

 cpu_affinity in the VM configuration defines a superset of pCPUs that
 the VM is allowed to run on. One bit in this bitmap indicates that one pCPU
 could be assigned to this VM, and the bit number is the pCPU ID. A pre-launched
-VM is supposed to be launched on exact number of pCPUs that are assigned in
+VM is launched on exactly the number of pCPUs assigned in
 this bitmap. The vCPU to pCPU mapping is implicitly indicated: vCPU0 maps
 to the pCPU with lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and
 so on.
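A minimal sketch of that implicit mapping rule (illustrative only, not ACRN source): walk the ``cpu_affinity`` bitmap from the lowest bit and hand out vCPU IDs in ascending order.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
       /* Hypothetical bitmap: pCPU1 and pCPU2 are assigned to this VM. */
       uint64_t cpu_affinity = 0x6;
       int vcpu_id = 0;

       for (int pcpu_id = 0; pcpu_id < 64; pcpu_id++) {
           if (cpu_affinity & (1ULL << pcpu_id)) {
               /* vCPU0 -> lowest set bit, vCPU1 -> next lowest, ... */
               printf("vCPU%d -> pCPU%d\n", vcpu_id++, pcpu_id);
           }
       }
       return 0;
   }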
@@ -96,13 +96,13 @@ and BVT (Borrowed Virtual Time) scheduler.

   1-1 mapping previously used; every pCPU can run only two thread objects:
   one is the idle thread, and another is the thread of the assigned vCPU.
   With this scheduler, vCPU works in Work-Conserving mode, which always
-  try to keep resource busy, and will run once it is ready. Idle thread
+  tries to keep resources busy, and will run once it is ready. The idle thread
   can run when the vCPU thread is blocked.

 - **Borrowed Virtual Time scheduler**:

   BVT (Borrowed Virtual time) is a virtual time based scheduling
-  algorithm, it dispatching the runnable thread with the earliest
+  algorithm, it dispatches the runnable thread with the earliest
   effective virtual time.

   TODO: BVT scheduler will be built on top of prioritized scheduling

@@ -173,7 +173,7 @@ Use the following settings to support this configuration in the industry scenari

   +---------+--------+-------+-------+
   |pCPU0    |pCPU1   |pCPU2  |pCPU3  |
   +=========+========+=======+=======+
-  |Service VM + Waag |RT Linux       |
+  |Service VM + WaaG |RT Linux       |
   +------------------+---------------+

 - offline pcpu2-3 in Service VM.
@@ -50,7 +50,7 @@ interrupt vector to find the interrupt number (909) on CPU3.

 ACRN Log
 ********

-ACRN log provides console log and mem log for a user to analyze.
+ACRN log provides a console log and a mem log for a user to analyze.
 We can use console log to debug directly, while mem log is a userland tool
 used to capture an ACRN hypervisor log.

@@ -68,7 +68,7 @@ To enable and start the mem log::

 Set and grab log
 ================

-We have 1-6 log levels for console log and mem log. The following
+We have six (1-6) log levels for console log and mem log. The following
 functions print different levels of console log and mem log::

    pr_dbg("debug...level %d", LOG_DEBUG); //level 6

@@ -127,9 +127,9 @@ ACRN Trace

 ACRN trace is a tool running on the Service VM to capture trace
 data. We can use the existing trace information to analyze, and we can
-add self-defined tracing to analyze code which we care about.
+add self-defined tracing to analyze code that we care about.

-Using Existing trace event id to analyze trace
+Using Existing trace event ID to analyze trace
 ==============================================

 As an example, we can use the existing vm_exit trace to analyze the
@@ -159,19 +159,19 @@ reason and times of each vm_exit after we have done some operations.

    vmexit summary information

-Using Self-defined trace event id to analyze trace
+Using Self-defined trace event ID to analyze trace
 ==================================================

-For some undefined trace event id, we can define it by ourselves as
+For some undefined trace event ID, we can define it by ourselves as
 shown in the following example:

-1. Add the following new event id into
+1. Add the following new event ID into
    ``acrn-hypervisor/hypervisor/include/debug/trace.h``:

    .. figure:: images/debug_image25.png
       :align: center

-      trace event id
+      trace event ID

 2. Add the following format to
    ``misc/tools/acrntrace/scripts/formats``:

@@ -184,18 +184,18 @@ shown in the following example:

    .. note::

       Formats:
-      0x00000005: event id for trace test
+      ``0x00000005``: event ID for trace test

-      %(cpu)d: corresponding CPU index with 'decimal' format
+      ``%(cpu)d``: corresponding CPU index with decimal format

-      %(event)016x: corresponding event id with 'hex' format
+      ``%(event)016x``: corresponding event id with hex format

-      %(tsc)d: corresponding event time stamp with 'decimal' format
+      ``%(tsc)d``: corresponding event time stamp with decimal format

-      %(1)08x: corresponding first 'Long' data in TRACE_2L
+      ``%(1)08x``: corresponding first Long data in TRACE_2L

 3. Add trace into function ``emulate_io`` in
-   ``acrn-hypervisor/hypervisor/arch/x86/guest/io_emul.c`` which we want to
+   ``acrn-hypervisor/hypervisor/arch/x86/guest/io_emul.c`` that we want to
    trace for the calling times of function ``emulate_io``:

    .. figure:: images/debug_image2.png
@@ -29,7 +29,7 @@ The project's documentation contains the following items:

   folders (such as ``misc/``) by the build scripts.

 * Doxygen-generated material used to create all API-specific documents
-  found at http://projectacrn.github.io/latest/api/. The doc build
+  found at http://projectacrn.github.io/latest/api/. The documentation build
   process uses doxygen to scan source files in the hypervisor and
   device-model folders, and from sources in the acrn-kernel repo (as
   explained later).

@@ -71,13 +71,13 @@ folder setup for documentation contributions and generation:

      acrn-kernel/

 The parent projectacrn folder is there because we'll also be creating a
-publishing area later in these steps. For API doc generation, we'll also
+publishing area later in these steps. For API documentation generation, we'll also
 need the acrn-kernel repo contents in a sibling folder to the
 acrn-hypervisor repo contents.

 It's best if the acrn-hypervisor
 folder is an ssh clone of your personal fork of the upstream project
-repos (though https clones work too):
+repos (though ``https`` clones work too):

 #. Use your browser to visit https://github.com/projectacrn and do a
    fork of the **acrn-hypervisor** repo to your personal GitHub account.)

@@ -103,7 +103,7 @@ repos (though https clones work too):

       cd acrn-hypervisor
       git remote add upstream git@github.com:projectacrn/acrn-hypervisor.git

-#. For API doc generation we'll also need the acrn-kernel repo available
+#. For API documentation generation we'll also need the acrn-kernel repo available
    locally:

    .. code-block:: bash

@@ -111,7 +111,7 @@ repos (though https clones work too):

       cd ..
       git clone git@github.com:projectacrn/acrn-kernel.git

-.. note:: We assume for doc generation that ``origin`` is pointed to
+.. note:: We assume for documentation generation that ``origin`` is pointed to
    the upstream repo. If you're a developer and have the acrn-kernel
    repo already set up as a sibling folder to the acrn-hypervisor,
    you can skip this clone step.
@@ -177,7 +177,7 @@ And with that you're ready to generate the documentation.

 .. note::

    We've provided a script you can run to show what versions of the
-   doc building tools you have installed ::
+   documentation building tools you have installed ::

       doc/scripts/show-versions.py

@@ -188,7 +188,7 @@ Sphinx supports easy customization of the generated documentation

 appearance through the use of themes. Replace the theme files and do
 another ``make html`` and the output layout and style is changed. The
 sphinx build system creates document cache information that attempts to
-expedite doc rebuilds, but occasionally can cause an unexpected error or
+expedite documentation rebuilds, but occasionally can cause an unexpected error or
 warning to be generated. Doing a ``make clean`` to create a clean doc
 generation and a ``make html`` again generally cleans this up.

@@ -201,8 +201,8 @@ theme template overrides found in ``doc/_templates``.

 Running the documentation processors
 ************************************

-The acrn-hypervisor/doc directory has all the .rst source files, extra
-tools, and Makefile for generating a local copy of the ACRN technical
+The ``acrn-hypervisor/doc`` directory has all the ``.rst`` source files, extra
+tools, and ``Makefile`` for generating a local copy of the ACRN technical
 documentation. For generating all the API documentation, there is a
 dependency on having the ``acrn-kernel`` repo's contents available too
 (as described previously). You'll get a sphinx warning if that repo is

@@ -253,7 +253,7 @@ This will delete everything in the publishing repo's **latest** folder

 (in case the new version has deleted files) and push a copy of the
 newly-generated HTML content directly to the GitHub pages publishing
 repo. The public site at https://projectacrn.github.io will be updated
-(nearly) immediately so it's best to verify the locally generated html
+(nearly) immediately so it's best to verify the locally generated HTML
 before publishing.

 Document Versioning

@@ -279,7 +279,7 @@ list should be updated to include the version number and publishing

 folder. Note that there's no direct selection to go to a newer version
 from an older one, without going to ``latest`` first.

-By default, doc build and publishing assumes we're generating
+By default, documentation build and publishing assumes we're generating
 documentation for the main branch and publishing to the ``/latest/``
 area on https://projectacrn.github.io. When we're generating the
 documentation for a tagged version (e.g., 0.2), check out that version
@@ -117,7 +117,7 @@ Linux-based post-launched VMs (VM1 and VM2).

 - For VM2, it shows ``00:05.0 RAM memory: Red Hat, Inc. Inter-VM shared memory (rev 01)``

 3. As recorded in the `PCI ID Repository <https://pci-ids.ucw.cz/read/PC/1af4>`_,
-   the ``ivshmem`` device vendor ID is ``1af4`` (RedHat) and device ID is ``1110``
+   the ``ivshmem`` device vendor ID is ``1af4`` (Red Hat) and device ID is ``1110``
    (Inter-VM shared memory). Use these commands to probe the device::

       $ sudo modprobe uio
@@ -54,15 +54,15 @@ On the Service VM side, it uses the ``acrnctl`` tool to trigger the User VM's S5

 ``acrnctl stop user-vm-name``. Then, the Device Model sends a ``shutdown`` command
 to the User VM through a channel. If the User VM receives the command, it will send an "ACK"
 to the Device Model. It is the Service VM's responsibility to check if the User VMs
-shutdown successfully or not, and decides when to power off itself.
+shut down successfully or not, and decides when to power off itself.

-User VM "life-cycle manager"
-============================
+User VM "lifecycle manager"
+===========================

-As part of the current S5 reference design, a life-cycle manager daemon (life_mngr) runs in the
+As part of the current S5 reference design, a lifecycle manager daemon (life_mngr) runs in the
 User VM to implement S5. It waits for the command from the Service VM on the
 paired serial port. The simple protocol between the Service VM and User VM is as follows:
-When the daemon receives ``shutdown``, it sends "acked" to the Service VM;
+When the daemon receives ``shutdown``, it sends "ACKed" to the Service VM;
 then it can power off the User VM. If the User VM is not ready to power off,
 it can ignore the ``shutdown`` command.
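A simplified guest-side sketch of this protocol (not the actual ``life_mngr`` source; the serial device path is an assumption for illustration): wait for ``shutdown`` on the paired serial port, acknowledge, then power off.

.. code-block:: c

   #include <fcntl.h>
   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>
   #include <unistd.h>

   int main(void)
   {
       char buf[32];
       int fd = open("/dev/ttyS1", O_RDWR | O_NOCTTY); /* assumed vUART node */

       if (fd < 0) {
           perror("open");
           return 1;
       }
       for (;;) {
           ssize_t n = read(fd, buf, sizeof(buf) - 1);

           if (n <= 0)
               continue;
           buf[n] = '\0';
           if (strncmp(buf, "shutdown", 8) == 0) {
               write(fd, "acked", 5);  /* tell the Service VM we heard it */
               system("poweroff");     /* then power off the User VM */
               break;
           }
       }
       close(fd);
       return 0;
   }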
@@ -96,9 +96,9 @@ The procedure for enabling S5 is specific to the particular OS:

 .. note:: For RT-Linux, the vUART is emulated in the hypervisor; expose the node as ``/dev/ttySn``.

-#. For LaaG and RT-Linux VMs, run the life-cycle manager daemon:
+#. For LaaG and RT-Linux VMs, run the lifecycle manager daemon:

-   a. Use these commands to build the life-cycle manager daemon, ``life_mngr``.
+   a. Use these commands to build the lifecycle manager daemon, ``life_mngr``.

       .. code-block:: none

@@ -120,7 +120,7 @@ The procedure for enabling S5 is specific to the particular OS:

          # systemctl enable life_mngr.service
          # reboot

-#. For the WaaG VM, run the life-cycle manager daemon:
+#. For the WaaG VM, run the lifecycle manager daemon:

    a) Build the ``life_mngr_win.exe`` application::

@@ -140,7 +140,8 @@ The procedure for enabling S5 is specific to the particular OS:

       .. figure:: images/Microsoft-Visual-C-install-option-2.png

-   #) In WaaG, use the :kbd:`WIN + R` shortcut key, input "shell:startup", click :kbd:`OK`
+   #) In WaaG, use the :kbd:`Windows + R` shortcut key, input
+      ``shell:startup``, click :kbd:`OK`
       and then copy the ``life_mngr_win.exe`` application into this directory.

       .. figure:: images/run-shell-startup.png
@@ -6,8 +6,7 @@ Enable GVT-d in ACRN

 This tutorial describes how to enable GVT-d in ACRN.

 .. note:: After GVT-d is enabled, have either a serial port
-   or SSH session open in the Service VM to be able to
-   continue interact with it.
+   or SSH session open in the Service VM to interact with it.

 Introduction
 ************

@@ -99,7 +98,7 @@ Enable the GVT-d GOP driver

 When enabling GVT-d, the Guest OS cannot light up the physical screen
 before the OS driver loads. As a result, the Guest BIOS and the Grub UI
-is not visible on the physical screen. The occurs because the physical
+are not visible on the physical screen. This occurs because the physical
 display is initialized by the GOP driver or VBIOS before the OS driver
 loads, and the Guest BIOS doesn't have them.

@@ -131,7 +130,7 @@ Steps

    Confirm that these binaries names match the board manufacturer names.

-#. Git apply the following two patches:
+#. Use ``git apply`` to add the following two patches:

    * `Use-the-default-vbt-released-with-GOP-driver.patch <../_static/downloads/Use-the-default-vbt-released-with-GOP-driver.patch>`_
@@ -14,8 +14,8 @@ Using RDT includes three steps:

 1. Detect and enumerate RDT allocation capabilities on supported
    resources such as cache and memory bandwidth.
-#. Set up resource mask array MSRs (Model-Specific Registers) for each
-   CLOS (Class of Service, which is a resource allocation), basically to
+#. Set up resource mask array MSRs (model-specific registers) for each
+   CLOS (class of service, which is a resource allocation), basically to
    limit or allow access to resource usage.
 #. Select the CLOS for the CPU associated with the VM that will apply
    the resource mask on the CP.
@@ -40,12 +40,14 @@ RDT detection and resource capabilities

 From the ACRN HV debug shell, use ``cpuid`` to detect and identify the
 resource capabilities. Use the platform's serial port for the HV shell.

-Check if the platform supports RDT with ``cpuid``. First, run ``cpuid 0x7 0x0``; the return value ebx [bit 15] is set to 1 if the platform supports
-RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX [3-1] bits. EBX [bit 1]
-indicates that L3 CAT is supported. EBX [bit 2] indicates that L2 CAT is
-supported. EBX [bit 3] indicates that MBA is supported. To query the
-capabilities of the supported resources, use the bit position as a subleaf
-index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.
+Check if the platform supports RDT with ``cpuid``. First, run
+``cpuid 0x7 0x0``; the return value EBX [bit 15] is set to 1 if the
+platform supports RDT. Next, run ``cpuid 0x10 0x0`` and check the EBX
+[3-1] bits. EBX [bit 1] indicates that L3 CAT is supported. EBX [bit 2]
+indicates that L2 CAT is supported. EBX [bit 3] indicates that MBA is
+supported. To query the capabilities of the supported resources, use the
+bit position as a subleaf index. For example, run ``cpuid 0x10 0x2`` to
+query the L2 CAT capability.
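The same checks can be scripted outside the HV shell; here is a small sketch using GCC's ``<cpuid.h>`` (an illustration, not part of ACRN):

.. code-block:: c

   #include <cpuid.h>
   #include <stdio.h>

   int main(void)
   {
       unsigned int eax, ebx, ecx, edx;

       /* Leaf 0x7 subleaf 0: EBX bit 15 advertises RDT allocation. */
       __cpuid_count(0x7, 0x0, eax, ebx, ecx, edx);
       printf("RDT:    %s\n", (ebx & (1u << 15)) ? "yes" : "no");

       /* Leaf 0x10 subleaf 0: EBX bits 1-3 enumerate L3 CAT, L2 CAT, MBA. */
       __cpuid_count(0x10, 0x0, eax, ebx, ecx, edx);
       printf("L3 CAT: %s\n", (ebx & (1u << 1)) ? "yes" : "no");
       printf("L2 CAT: %s\n", (ebx & (1u << 2)) ? "yes" : "no");
       printf("MBA:    %s\n", (ebx & (1u << 3)) ? "yes" : "no");
       return 0;
   }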

 .. code-block:: none
@@ -101,14 +103,14 @@ Tuning RDT resources in HV debug shell

 This section explains how to configure the RDT resources from the HV debug
 shell.

-#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
-   running on PCPU0, and VM1 is running on PCPU1:
+#. Check the pCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
+   running on pCPU0, and VM1 is running on pCPU1:

    .. code-block:: none

       ACRN:\>vcpu_list

-      VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE
+      VM ID    pCPU ID    VCPU ID    VCPU ROLE    VCPU STATE
       =====    =======    =======    =========    ==========
         0         0          0       PRIMARY      Running
         1         1          0       PRIMARY      Running
@@ -127,8 +129,8 @@ shell.

       ACRN:\>wrmsr -p1 0xc90 0x7f0
       ACRN:\>wrmsr -p1 0xc91 0xf

-#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
-   (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by
+#. Assign CLOS1 to pCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
+   (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to pCPU 0 by
    programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that
    IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
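To make the ``0x100000000`` value less magic, here is how it decomposes (a sketch; per the text above, the CLOS ID occupies IA32_PQR_ASSOC bits 63:32):

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   int main(void)
   {
       uint32_t clos = 1;                         /* CLOS1 for pCPU1 */
       uint64_t pqr_assoc = (uint64_t)clos << 32; /* bits 63:32 hold the CLOS */

       /* Prints 0x100000000, the value written with wrmsr above. */
       printf("IA32_PQR_ASSOC = 0x%llx\n", (unsigned long long)pqr_assoc);
       return 0;
   }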
@@ -190,7 +192,7 @@ Configure RDT for VM using VM Configuration

 .. note::
    Users can change the mask values, but the cache mask must have
-   **continuous bits** or a #GP fault can be triggered. Similary, when
+   **continuous bits** or a #GP fault can be triggered. Similarly, when
    programming an MBA delay value, be sure to set the value to less than or
    equal to the MAX delay value.

@@ -214,7 +216,7 @@ Configure RDT for VM using VM Configuration

       </vm>

 .. note::
-   In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2> ...clos n).
+   In ACRN, Lower CLOS always means higher priority (CLOS 0 > CLOS 1 > CLOS 2 > ... CLOS n).
    So, carefully program each VM's CLOS accordingly.

 #. Careful consideration should be made when assigning vCPU affinity. In

@@ -249,7 +251,7 @@ Configure RDT for VM using VM Configuration

 #. Based on our scenario, build the ACRN hypervisor and copy the
    artifact ``acrn.efi`` to the
-   ``/boot/EFI/acrn`` directory. If needed, update the devicemodel
+   ``/boot/EFI/acrn`` directory. If needed, update the device model
    ``acrn-dm`` as well in ``/usr/bin`` directory. see
    :ref:`getting-started-building` for building instructions.
@@ -49,9 +49,9 @@ Here is example pseudocode of a cyclictest implementation.

 Time point ``now`` is the actual point at which the cyclictest app is woken up
 and scheduled. Time point ``next`` is the expected point at which we want
 the cyclictest to be awakened and scheduled. Here we can get the latency by
-``now - next``. We don't want to see any ``vmexit`` in between ``next`` and ``now``.
-So, we define the start point of the critical section as ``next`` and the end
-point as ``now``.
+``now - next``. We don't want to see a ``vmexit`` in between ``next`` and ``now``.
+So, we define the starting point of the critical section as ``next`` and
+the ending point as ``now``.
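A minimal sketch of that measurement loop (illustrative, in the spirit of cyclictest rather than its actual source):

.. code-block:: c

   #include <stdio.h>
   #include <time.h>

   #define NSEC_PER_SEC 1000000000L
   #define INTERVAL_NS  1000000L  /* 1 ms period, chosen for illustration */

   int main(void)
   {
       struct timespec now, next;

       clock_gettime(CLOCK_MONOTONIC, &next);
       for (int i = 0; i < 10; i++) {
           next.tv_nsec += INTERVAL_NS;
           while (next.tv_nsec >= NSEC_PER_SEC) {
               next.tv_nsec -= NSEC_PER_SEC;
               next.tv_sec++;
           }
           /* Sleep until the absolute deadline `next`, then read `now`;
            * the difference is the wakeup latency discussed above. */
           clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
           clock_gettime(CLOCK_MONOTONIC, &now);
           printf("latency: %ld ns\n",
                  (now.tv_sec - next.tv_sec) * NSEC_PER_SEC +
                  (now.tv_nsec - next.tv_nsec));
       }
       return 0;
   }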

 Log and trace data collection
 =============================
@@ -153,15 +153,15 @@ Perf/PMU tools in performance analysis

 ======================================

 After exposing PMU-related CPUID/MSRs to the VM, performance analysis tools
-such as **perf** and **pmu** can be used inside the VM to locate
+such as ``perf`` and ``PMU`` can be used inside the VM to locate
 the bottleneck of the application.

-**Perf** is a profiler tool for Linux 2.6+ based systems that abstracts away
+``Perf`` is a profiler tool for Linux 2.6+ based systems that abstracts away
 CPU hardware differences in Linux performance measurements and presents a
 simple command line interface. Perf is based on the ``perf_events`` interface
 exported by recent versions of the Linux kernel.

-**PMU** tools is a collection of tools for profile collection and
+``PMU tools`` is a collection of tools for profile collection and
 performance analysis on Intel CPUs on top of Linux Perf. Refer to the
 following links for perf usage:

@@ -170,11 +170,11 @@ following links for perf usage:

 Refer to https://github.com/andikleen/pmu-tools for pmu usage.

-Top-down Micro-Architecture Analysis Method (TMAM)
+Top-down Microarchitecture Analysis Method (TMAM)
 ==================================================

-The Top-down Micro-Architecture Analysis Method (TMAM), based on Top-Down
-Characterization methodology, aims to provide an insight into whether you
+The top-down microarchitecture analysis method (TMAM), based on top-down
+characterization methodology, aims to provide an insight into whether you
 have made wise choices with your algorithms and data structures. See the
 Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual
 <http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,

@@ -182,10 +182,10 @@ Appendix B.1 for more details on TMAM. Refer to this `technical paper

 <https://fd.io/docs/whitepapers/performance_analysis_sw_data_planes_dec21_2017.pdf>`_
 which adopts TMAM for systematic performance benchmarking and analysis
 of compute-native Network Function data planes that are executed on
-Commercial-Off-The-Shelf (COTS) servers using available open-source
+commercial-off-the-shelf (COTS) servers using available open-source
 measurement tools.

-Example: Using Perf to analyze TMAM level 1 on CPU core 1
+Example: Using Perf to analyze TMAM level 1 on CPU core 1:

 .. code-block:: console
|
@ -59,7 +59,7 @@ Avoid VM-exit latency
|
||||
*********************
|
||||
|
||||
VM-exit has a significant negative impact on virtualization performance.
|
||||
A single VM-exit causes a several micro-second or longer latency,
|
||||
A single VM-exit causes several micro-seconds or longer latency,
|
||||
depending on what's done in VMX-root mode. VM-exit is classified into two
|
||||
types: triggered by external CPU events or triggered by operations initiated
|
||||
by the vCPU.
|
||||
@ -90,7 +90,7 @@ Tip: Do not use CPUID in a real-time critical section.
|
||||
subsequent instructions after the RDTSCP normally have data dependency
|
||||
on it, so they must wait until the RDTSCP has been executed.
|
||||
|
||||
RDMSR or WRMSR are instructions that cause VM-exits conditionally. On the
|
||||
RDMSR and WRMSR are instructions that cause VM-exits conditionally. On the
|
||||
ACRN RTVM, most MSRs are not intercepted by the HV, so they won't cause a
|
||||
VM-exit. But there are exceptions for security consideration:
|
||||
|
||||
@@ -114,7 +114,7 @@ Tip: Utilize Preempt-RT Linux mechanisms to reduce the access of ICR from the RT

 #. Add ``domain`` to ``isolcpus`` ( ``isolcpus=nohz,domain,1`` ) to the kernel parameters.
 #. Add ``idle=poll`` to the kernel parameters.
 #. Add ``rcu_nocb_poll`` along with ``rcu_nocbs=1`` to the kernel parameters.
-#. Disable the logging service like journald, syslogd if possible.
+#. Disable the logging service such as ``journald`` or ``syslogd`` if possible.

 The parameters shown above are recommended for the guest Preempt-RT
 Linux. For an UP RTVM, ICR interception is not a problem. But for an SMP
@@ -173,9 +173,9 @@ Tip: Disable timer migration on Preempt-RT Linux.

 Tip: Add ``mce=off`` to RT VM kernel parameters.
    This parameter disables the mce periodic timer and avoids a VM-exit.

-Tip: Disable the Intel processor C-State and P-State of the RTVM.
+Tip: Disable the Intel processor C-state and P-state of the RTVM.
    Power management of a processor could save power, but it could also impact
-   the RT performance because the power state is changing. C-State and P-State
+   the RT performance because the power state is changing. C-state and P-state
    PM mechanism can be disabled by adding ``processor.max_cstate=0
    intel_idle.max_cstate=0 intel_pstate=disable`` to the kernel parameters.

@@ -193,4 +193,4 @@ Tip: Disable the software workaround for Machine Check Error on Page Size Change

    :option:`CONFIG_MCE_ON_PSC_WORKAROUND_DISABLED` option could be set for performance.

 .. note::
-   The tips for preempt-RT Linux are mostly applicable to the Linux-based RT OS as well, such as Xenomai.
+   The tips for preempt-RT Linux are mostly applicable to the Linux-based RTOS as well, such as Xenomai.

@@ -33,7 +33,7 @@ RTVM with HV Emulated Device

 ****************************

 ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as
-logging output, or command send/receive. Currently, the vUART only works in polling mode, but
+logging output or command send/receive. Currently, the vUART only works in polling mode, but
 may be extended to support interrupt mode in a future release. In the meantime, for better RT
 behavior, the RT application using the vUART shall reserve a margin of CPU cycles to accommodate
 for the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles

@@ -42,4 +42,4 @@ per register access).

 DM emulated device (Except PMD)
 *******************************

-We recommend **not** using DM-emulated devices in a RTVM.
+We recommend **not** using DM-emulated devices in an RTVM.
@ -6,18 +6,17 @@ Run Kata Containers on a Service VM
|
||||
This tutorial describes how to install, configure, and run `Kata Containers
|
||||
<https://katacontainers.io/>`_ on the Ubuntu based Service VM with the ACRN
|
||||
hypervisor. In this configuration,
|
||||
Kata Containers leverage the ACRN hypervisor instead of QEMU which is used by
|
||||
default. Refer to the `Kata Containers with ACRN
|
||||
Kata Containers leverage the ACRN hypervisor instead of QEMU, which is used by
|
||||
default. Refer to the `Kata Containers with ACRN presentation
|
||||
<https://www.slideshare.net/ProjectACRN/acrn-kata-container-on-acrn>`_
|
||||
presentation from a previous ACRN Project Technical Community Meeting for
|
||||
more details on Kata Containers and how the integration with ACRN has been
|
||||
done.
|
||||
for more details on Kata Containers and how the integration with ACRN
|
||||
has been done.
|
||||
|
||||
Prerequisites
|
||||
**************
|
||||
|
||||
#. Refer to the :ref:`ACRN supported hardware <hardware>`.
|
||||
#. For a default prebuilt ACRN binary in the E2E package, you must have 4
|
||||
#. For a default prebuilt ACRN binary in the end-to-end (E2E) package, you must have 4
|
||||
CPU cores or enable "CPU Hyper-threading" in order to have 4 CPU threads for 2 CPU cores.
|
||||
#. Follow the :ref:`rt_industry_ubuntu_setup` to set up the ACRN Service VM
|
||||
based on Ubuntu.
|
||||
@ -184,7 +183,7 @@ Run a Kata Container with ACRN
|
||||
The system is now ready to run a Kata Container on ACRN. Note that a reboot
|
||||
is recommended after the installation.
|
||||
|
||||
Before running a Kata Container on ACRN, you must offline at least one CPU:
|
||||
Before running a Kata Container on ACRN, you must take at least one CPU offline:
|
||||
|
||||
.. code-block:: none
|
||||
|
||||
|
@ -68,7 +68,7 @@ Install ACRN on the Debian VM
|
||||
|
||||
#. Install the hypervisor.
|
||||
The ACRN Device Model and tools were installed as part of a previous
|
||||
step. However, make install does not install the hypervisor (acrn.efi)
|
||||
step. However, ``make install`` does not install the hypervisor (acrn.efi)
|
||||
on your EFI System Partition (ESP), nor does it configure your EFI
|
||||
firmware to boot it automatically. Follow the steps below to perform
|
||||
these operations and complete the ACRN installation. Note that we are
|
||||
|
@@ -107,7 +107,8 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian

    Create a New Virtual Machine

-#. Choose **Use ISO image** and click **Browse** - **Browse Local**. Select the ISO which you get from Step 1 above.
+#. Choose **Use ISO image** and click **Browse** - **Browse Local**.
+   Select the ISO image you get from Step 1 above.

 #. Choose the **OS type:** Linux, **Version:** Debian Stretch and then click **Forward**.

@@ -119,7 +120,7 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian

 #. Rename the image if you desire. You must check the **customize
    configuration before install** option before you finish all stages.

-#. Verify that you can see the Overview screen as set up, shown in :numref:`debian10-setup` below:
+#. Verify that you can see the Overview screen has been set up, shown in :numref:`debian10-setup` below:

    .. figure:: images/debian-uservm-3.png
       :align: center

@@ -127,8 +128,8 @@ steps will detail how to use the Debian CD-ROM (ISO) image to install Debian

       Debian Setup Overview

-#. Complete the Debian installation. Verify that you have set up a VDA
-   disk partition, as shown in :numref:`partition-vda` below:
+#. Complete the Debian installation. Verify that you have set up a
+   Virtual Disk (VDA) partition, as shown in :numref:`partition-vda` below:

    .. figure:: images/debian-uservm-4.png
       :align: center

@@ -250,7 +251,7 @@ console so you can make command-line entries directly from it.

       $ sudo update-initramfs -u
       $ sudo poweroff

-#. Log in to the Service VM and the modify the launch script to add the
+#. Log in to the Service VM and modify the launch script to add the
    `virtio-console` parameter to the Device Model for the Debian VM:

    .. code-block:: none
@@ -93,7 +93,7 @@ This tutorial uses the Ubuntu 18.04 desktop ISO as the base image.

 #. Right-click **QEMU/KVM** and select **New**.

-   a. Choose **Local install media (ISO image or CDROM)** and then click
+   a. Choose **Local install media (ISO image or CD-ROM)** and then click
       **Forward**. A **Create a new virtual machine** box displays, as shown
      in :numref:`newVM-ubun` below.

@@ -19,7 +19,7 @@ Install ACRN

 #. Install ACRN using Ubuntu 16.04 or 18.04 as its Service VM.

-   .. important:: Need instructions from deleted document (using ubuntu
+   .. important:: Need instructions from deleted document (using Ubuntu
      as SOS)

 #. Make the acrn-kernel using the `kernel_config_uefi_sos

@@ -42,7 +42,7 @@ Install ACRN

 #. Make sure the networking bridge ``acrn-br0`` is created. If not,
    create it using the instructions in XXX.

-   .. important:: need instructions from deleted document (using ubuntu
+   .. important:: need instructions from deleted document (using Ubuntu
      as SOS)

 Set up and launch LXC/LXD
@@ -62,14 +62,14 @@ Set up and launch LXC/LXD

    - Before launching a container, make sure ``lxc-checkconfig | grep missing`` does not show any missing
      kernel features.

-2. Create an Ubuntu 18.04 container named **openstack**::
+2. Create an Ubuntu 18.04 container named ``openstack``::

       $ lxc init ubuntu:18.04 openstack

 3. Export the kernel interfaces necessary to launch a Service VM in the
-   **openstack** container:
+   ``openstack`` container:

-   a. Edit the **openstack** config file using the command::
+   a. Edit the ``openstack`` config file using the command::

         $ lxc config edit openstack

@@ -89,18 +89,18 @@ Set up and launch LXC/LXD

       Save and exit the editor.

-   b. Run the following commands to configure OpenStack::
+   b. Run the following commands to configure ``openstack``::

         $ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0
         $ lxc config device add openstack acrn_vhm unix-char path=/dev/acrn_vhm
         $ lxc config device add openstack loop-control unix-char path=/dev/loop-control
         $ for n in {0..15}; do lxc config device add openstack loop$n unix-block path=/dev/loop$n; done;

-4. Launch the **openstack** container::
+4. Launch the ``openstack`` container::

       $ lxc start openstack

-5. Log in to the **openstack** container::
+5. Log in to the ``openstack`` container::

       $ lxc exec openstack -- su -l
@@ -122,15 +122,15 @@ Set up and launch LXC/LXD

          route-metric: 200

-7. Log out and restart the **openstack** container::
+7. Log out and restart the ``openstack`` container::

       $ lxc restart openstack

-8. Log in to the **openstack** container again::
+8. Log in to the ``openstack`` container again::

       $ lxc exec openstack -- su -l

-9. If needed, set up the proxy inside the **openstack** container via
+9. If needed, set up the proxy inside the ``openstack`` container via
    ``/etc/environment`` and make sure ``no_proxy`` is properly set up.
    Both IP addresses assigned to **eth0** and
    **eth1** and their subnets must be included. For example::
@@ -142,18 +142,18 @@ Set up and launch LXC/LXD

       $ sudo useradd -s /bin/bash -d /opt/stack -m stack
       $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

-11. Log out and restart the **openstack** container::
+11. Log out and restart the ``openstack`` container::

       $ lxc restart openstack

-The **openstack** container is now properly configured for OpenStack.
+The ``openstack`` container is now properly configured for OpenStack.
 Use the ``lxc list`` command to verify that both **eth0** and **eth1**
 appear in the container.

 Set up ACRN prerequisites inside the container
 **********************************************

-1. Log in to the **openstack** container as the **stack** user::
+1. Log in to the ``openstack`` container as the **stack** user::

       $ lxc exec openstack -- su -l stack

@@ -174,7 +174,7 @@ Set up ACRN prerequisites inside the container

 3. Download, compile, and install ``iasl``. Refer to XXX.

-   .. important:: need instructions from deleted document (using ubuntu
+   .. important:: need instructions from deleted document (using Ubuntu
      as SOS)

 Set up libvirt
@@ -409,7 +409,7 @@ instance.

       :name: os-06b-create-image

    Give the image a name (**acrnImage**), select the **QCOW2 - QEMU
-   Emulator** format, and click on **Create Image** :
+   Emulator** format, and click on **Create Image**:

    .. figure:: images/OpenStack-06e-create-image.png
       :align: center

@@ -541,7 +541,7 @@ instance.

    Click on the **Security Groups** tab and select
    the **acrnSecuGroup** security group you created earlier. Remove the
-   **default** security group if its in the "Allocated" list:
+   **default** security group if it's in the "Allocated" list:

    .. figure:: images/OpenStack-10d-only-acrn-security-group.png
       :align: center

@@ -614,7 +614,7 @@ Hypervisors**:

 .. note::
    OpenStack logs to the systemd journal and libvirt logs to
-   ``/var/log/libvirt/libvirtd.log``
+   ``/var/log/libvirt/libvirtd.log``.

 Here are some other tasks you can try when the instance is created and
 running:
@@ -112,7 +112,7 @@ enable SGX support in the BIOS and in ACRN:

 SGX Capability Exposure
 ***********************

 ACRN exposes SGX capability and EPC resource to a guest VM via CPUIDs and
-Processor Model-Specific Registers (MSRs), as explained in the following
+Processor model-specific registers (MSRs), as explained in the following
 sections.

 CPUID Virtualization

@@ -140,7 +140,7 @@ CPUID Leaf 12H

 * CPUID_12H.0.EAX[1] SGX2: If 1, indicates that Intel SGX supports the
   collection of SGX2 leaf functions. If hardware supports it and SGX enabled
   for the VM, this bit will be set.
-* Other fields of CPUID_12H.0.EAX aligns with the physical CPUID.
+* Other fields of CPUID_12H.0.EAX align with the physical CPUID.

 **Intel SGX Attributes Enumeration**
@@ -163,7 +163,7 @@ MSR Virtualization

 IA32_FEATURE_CONTROL
 --------------------

-The hypervisor will opt-in to SGX for VM if SGX is enabled for VM.
+The hypervisor will opt in to SGX for VM if SGX is enabled for VM.

 * MSR_IA32_FEATURE_CONTROL_LOCK is set
 * MSR_IA32_FEATURE_CONTROL_SGX_GE is set
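As a sketch of what that opt-in implies (the architectural bit positions come from the Intel SDM, not from this document: LOCK is bit 0, the SGX global enable is bit 18):

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   #define FEATURE_CONTROL_LOCK   (1ULL << 0)  /* MSR_IA32_FEATURE_CONTROL_LOCK */
   #define FEATURE_CONTROL_SGX_GE (1ULL << 18) /* MSR_IA32_FEATURE_CONTROL_SGX_GE */

   int main(void)
   {
       uint64_t msr = FEATURE_CONTROL_LOCK | FEATURE_CONTROL_SGX_GE;

       /* Both bits set: SGX opted in and the MSR locked. */
       printf("IA32_FEATURE_CONTROL = 0x%llx\n", (unsigned long long)msr);
       return 0;
   }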
@@ -11,7 +11,7 @@ Function), a "lightweight" PCIe function which is a passthrough device for

 VMs.

 For details, refer to Chapter 9 of PCI-SIG's
-`PCI Express Base SpecificationRevision 4.0, Version 1.0
+`PCI Express Base Specification Revision 4.0, Version 1.0
 <https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.

 SR-IOV Architectural Overview

@@ -42,7 +42,7 @@ SR-IOV Extended Capability

 The SR-IOV Extended Capability defined here is a PCIe extended
 capability that must be implemented in each PF device that supports the
 SR-IOV feature. This capability is used to describe and control a PF's
-SR-IOV Capabilities.
+SR-IOV capabilities.

 .. figure:: images/sriov-image2.png
    :align: center

@@ -114,18 +114,18 @@ SR-IOV Architecture in ACRN

    device enumeration phase.

 2. The hypervisor intercepts the PF's SR-IOV capability and accesses whether
-   to enable/disable VF devices based on the *VF\_ENABLE* state. All
+   to enable/disable VF devices based on the ``VF_ENABLE`` state. All
    read/write requests for a PF device passthrough to the PF physical
    device.

-3. The hypervisor waits for 100ms after *VF\_ENABLE* is set and initializes
+3. The hypervisor waits for 100ms after ``VF_ENABLE`` is set and initializes
    VF devices. The differences between a normal passthrough device and
-   SR-IOV VF device are physical device detection, BARs, and MSIx
-   initialization. The hypervisor uses *Subsystem Vendor ID* to detect the
-   SR-IOV VF physical device instead of *Vendor ID* since no valid
-   *Vendor ID* exists for the SR-IOV VF physical device. The VF BARs are
+   SR-IOV VF device are physical device detection, BARs, and MSI-X
+   initialization. The hypervisor uses ``Subsystem Vendor ID`` to detect the
+   SR-IOV VF physical device instead of ``Vendor ID`` since no valid
+   ``Vendor ID`` exists for the SR-IOV VF physical device. The VF BARs are
    initialized by its associated PF's SR-IOV capabilities, not PCI
-   standard BAR registers. The MSIx mapping base address is also from the
+   standard BAR registers. The MSI-X mapping base address is also from the
    PF's SR-IOV capabilities, not PCI standard BAR registers.

 SR-IOV Passthrough VF Architecture In ACRN
@ -144,11 +144,11 @@ SR-IOV Passthrough VF Architecture In ACRN
|
||||
SR-IOV VF device. When the User VM starts, ``acrn-dm`` invokes a
|
||||
hypercall to set the *vdev-VF0* device in the User VM.
|
||||
|
||||
3. The hypervisor emulates *Device ID/Vendor ID* and *Memory Space Enable
|
||||
(MSE)* in the configuration space for an assigned SR-IOV VF device. The
|
||||
assigned VF *Device ID* comes from its associated PF's capability. The
|
||||
*Vendor ID* is the same as the PF's *Vendor ID* and the *MSE* is always
|
||||
set when reading the SR-IOV VF device's *CONTROL* register.
|
||||
3. The hypervisor emulates ``Device ID/Vendor ID`` and ``Memory Space Enable
|
||||
(MSE)`` in the configuration space for an assigned SR-IOV VF device. The
|
||||
assigned VF ``Device ID`` comes from its associated PF's capability. The
|
||||
``Vendor ID`` is the same as the PF's ``Vendor ID`` and the ``MSE`` is always
|
||||
set when reading the SR-IOV VF device's control register.
|
||||
|
||||
4. The vendor-specific VF driver in the target VM probes the assigned SR-IOV
|
||||
VF device.
|
||||
@ -177,13 +177,13 @@ SR-IOV VF Enable Flow
|
||||
|
||||
SR-IOV VF Enable Flow
|
||||
|
||||
The application enables n VF devices via a SR-IOV PF device *sysfs* node.
|
||||
The application enables n VF devices via a SR-IOV PF device ``sysfs`` node.
|
||||
The hypervisor intercepts all SR-IOV capability access and checks the
|
||||
*VF\_ENABLE* state. If *VF\_ENABLE* is set, the hypervisor creates n
|
||||
``VF_ENABLE`` state. If ``VF_ENABLE`` is set, the hypervisor creates n
|
||||
virtual devices after 100ms so that VF physical devices have enough time to
|
||||
be created. The Service VM waits 100ms and then only accesses the first VF
|
||||
device's configuration space including *Class Code, Reversion ID, Subsystem
|
||||
Vendor ID, Subsystem ID*. The Service VM uses the first VF device
|
||||
device's configuration space including Class Code, Reversion ID, Subsystem
|
||||
Vendor ID, Subsystem ID. The Service VM uses the first VF device
|
||||
information to initialize subsequent VF devices.
|
||||
|
||||
SR-IOV VF Disable Flow

@ -196,11 +196,11 @@ SR-IOV VF Disable Flow

SR-IOV VF Disable Flow

The application disables SR-IOV VF devices by writing zero to the SR-IOV PF
device *sysfs* node. The hypervisor intercepts all SR-IOV capability
accesses and checks the *VF\_ENABLE* state. If *VF\_ENABLE* is clear, the
device ``sysfs`` node. The hypervisor intercepts all SR-IOV capability
accesses and checks the ``VF_ENABLE`` state. If ``VF_ENABLE`` is clear, the
hypervisor makes VF virtual devices invisible from the Service VM so that all
access to VF devices will return 0xFFFFFFFF as an error. The VF physical
devices are removed within 1s of when *VF\_ENABLE* is clear.
access to VF devices will return ``0xFFFFFFFF`` as an error. The VF physical
devices are removed within 1s of when ``VF_ENABLE`` is clear.
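The disable path is the mirror image of the enable flow; a minimal sketch,
again assuming the ``enp109s0f0`` interface from the later example::

   # Writing zero clears VF_ENABLE in the PF's SR-IOV capability; the
   # hypervisor hides the VF virtual devices (reads return 0xFFFFFFFF)
   # and the VF physical devices are removed within 1s.
   echo 0 > /sys/class/net/enp109s0f0/device/sriov_numvfs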
SR-IOV VF Assignment Policy
---------------------------

@ -226,9 +226,9 @@ We use the Intel 82576 NIC as an example in the following instructions. We

only support LaaG (Linux as a Guest).

1. Ensure that the 82576 VF driver is compiled into the User VM Kernel
(set *CONFIG\_IGBVF=y* in the Kernel Config).
(set ``CONFIG_IGBVF=y`` in the Kernel Config).

#. When the Service VM boots up, the ``\ *lspci -v*\`` command indicates
#. When the Service VM boots, the ``lspci -v`` command indicates
that the Intel 82576 NIC devices have SR-IOV capability and their PF
drivers are ``igb``.

@ -238,7 +238,7 @@ only support LaaG (Linux as a Guest).

82576 SR-IOV PF devices

#. Input the ``\ *echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs*\``
#. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs``
command in the Service VM to enable n VF devices for the first PF
device (\ *enp109s0f0)*. The number *n* can't be more than *TotalVFs*
which comes from the return value of command
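The command itself is cut off at this hunk boundary; on a Linux Service VM
it is presumably the PF's ``sriov_totalvfs`` sysfs attribute, shown here as
a hedged sketch::

   # Upper bound on n for this PF (the igb driver allows up to 7 VFs
   # per 82576 port).
   cat /sys/class/net/enp109s0f0/device/sriov_totalvfs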
@ -261,17 +261,15 @@ only support LaaG (Linux as a Guest).

a. Unbind the igbvf driver in the Service VM.

i. *modprobe pci\_stub*
i. ``modprobe pci\_stub``

ii. *echo "8086 10ca" > /sys/bus/pci/drivers/pci-stub/new\_id*
ii. ``echo "8086 10ca" > /sys/bus/pci/drivers/pci-stub/new\_id``

iii. *echo "0000:6d:10.0" >
/sys/bus/pci/devices/0000:6d:10.0/driver/unbind*
iii. ``echo "0000:6d:10.0" > /sys/bus/pci/devices/0000:6d:10.0/driver/unbind``

iv. *echo "0000:6d:10.0" >
/sys/bus/pci/drivers/pci-stub/bind*
iv. ``echo "0000:6d:10.0" > /sys/bus/pci/drivers/pci-stub/bind``

b. Add the SR-IOV VF device parameter ("*-s X, passthru,6d/10/0*\ ") in
b. Add the SR-IOV VF device parameter (``-s X, passthru,6d/10/0``) in
the launch User VM script
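Taken together, steps i through iv are the following shell sequence (the
RST escape backslashes removed; ``8086 10ca`` is the 82576 VF
vendor:device pair and ``0000:6d:10.0`` is the first VF's BDF from this
example, so adjust both for your platform)::

   modprobe pci_stub
   # Let pci-stub claim 82576 VF devices (vendor 8086, device 10ca).
   echo "8086 10ca" > /sys/bus/pci/drivers/pci-stub/new_id
   # Detach the VF from igbvf, then hand it to pci-stub.
   echo "0000:6d:10.0" > /sys/bus/pci/devices/0000:6d:10.0/driver/unbind
   echo "0000:6d:10.0" > /sys/bus/pci/drivers/pci-stub/bind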
.. figure:: images/sriov-image12.png

@ -287,9 +285,9 @@ SR-IOV Limitations In ACRN

1. The SR-IOV migration feature is not supported.

2. If one SR-IOV PF device is detected during the enumeration phase, but
2. If an SR-IOV PF device is detected during the enumeration phase, but
not enough room exists for its total VF devices, the PF device will be
dropped. The platform uses the *MAX_PCI_DEV_NUM* ACRN configuration to
support the maximum number of PCI devices. Make sure *MAX_PCI_DEV_NUM* is
dropped. The platform uses the ``MAX_PCI_DEV_NUM`` ACRN configuration to
support the maximum number of PCI devices. Make sure ``MAX_PCI_DEV_NUM`` is
more than the number of all PCI devices, including the total SR-IOV VF
devices.

@ -3,7 +3,7 @@

Using GRUB to boot ACRN
#######################

`GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot boot loader
`GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot bootloader
used by many popular Linux distributions. It also supports booting the
ACRN hypervisor. See
`<http://www.gnu.org/software/grub/grub-download.html>`_ to get the

@ -47,9 +47,9 @@ ELF format when ``CONFIG_RELOC`` is not set, or RAW format when

Using pre-installed GRUB
************************

Most Linux distributions use GRUB version 2 by default. We can re-use
pre-installed GRUB to load ACRN hypervisor if its version 2.02 or
higher.
Most Linux distributions use GRUB version 2 by default. If its version
2.02 or higher, we can re-use the pre-installed GRUB to load the ACRN
hypervisor.

Here's an example using Ubuntu to load ACRN on a scenario with two
pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
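The GRUB menu entry itself is elided at the next hunk boundary. A hedged
sketch of what such an Ubuntu entry typically looks like follows; the file
paths, entry title, and module tags are illustrative assumptions, not text
from this commit::

   sudo tee -a /etc/grub.d/40_custom <<'EOF'
   menuentry "ACRN hypervisor" {
       # Load the hypervisor, then each pre-launched VM kernel as a
       # multiboot2 module tagged with its kernel_mod_tag.
       multiboot2 /boot/acrn.bin
       module2 /boot/bzImage_vm0 Linux_bzImage_vm0
       module2 /boot/bzImage Linux_bzImage
   }
   EOF
   sudo update-grub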
@ -104,7 +104,7 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):

``kernel_mod_tag`` of VM1 in the
``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.

The guest kernel command-line arguments is configured in the
The guest kernel command-line arguments are configured in the
hypervisor source code by default if no ``$(VMx bootargs)`` is present.
If ``$(VMx bootargs)`` is present, the default command-line arguments
are overridden by the ``$(VMx bootargs)`` parameters.

@ -112,12 +112,12 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):

The ``$(Service VM bootargs)`` parameter in the multiboot command
is appended to the end of the Service VM kernel command line. This is
useful to override some Service VM kernel cmdline parameters because the
later one would win if the same parameters were configured in the Linux
later one would be used if the same parameters were configured in the Linux
kernel cmdline. For example, adding ``root=/dev/sda3`` will override the
original root device to ``/dev/sda3`` for the Service VM kernel.

All parameters after a ``#`` character are ignored since GRUB
treat them as comments.
treats them as comments.

``\``, ``$``, ``#`` are special characters in GRUB. An escape
character ``\`` must be added before these special characters if they

@ -147,7 +147,7 @@ to build and install your own GRUB, and then follow the steps described

earlier in `pre-installed-grub`_.

Here we provide another simple method to build GRUB in efi application format:
Here we provide another simple method to build GRUB in EFI application format:

#. Make GRUB efi application:
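The actual build commands fall outside this hunk; a hedged sketch of
producing a standalone GRUB EFI application with ``grub-mkimage`` (the
output name, prefix path, and module list are illustrative assumptions)::

   # Build a 64-bit EFI GRUB image with multiboot2 support baked in.
   grub-mkimage -O x86_64-efi -o grub_x86_64.efi -p /EFI/BOOT \
       part_gpt part_msdos fat ext2 normal boot configfile \
       multiboot multiboot2 search efi_gop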
@ -4,7 +4,7 @@ Getting Started Guide for ACRN logical partition mode

#####################################################

The ACRN hypervisor supports a logical partition scenario in which the User
OS (such as Ubuntu OS) running in a pre-launched VM can bypass the ACRN
OS, running in a pre-launched VM, can bypass the ACRN
hypervisor and directly access isolated PCI devices. The following
guidelines provide step-by-step instructions on how to set up the ACRN
hypervisor logical partition scenario on Intel NUC while running two

@ -63,9 +63,9 @@ Update kernel image and modules of pre-launched VM

#. The current ACRN logical partition scenario implementation requires a
multi-boot capable bootloader to boot both the ACRN hypervisor and the
bootable kernel image built from the previous step. Install the Ubuntu OS
on the on-board NVMe SSD by following the `Ubuntu desktop installation
on the onboard NVMe SSD by following the `Ubuntu desktop installation
instructions <https://tutorials.ubuntu.com/tutorial/tutorial-install-ubuntu-desktop>`_. The
Ubuntu installer creates 3 disk partitions on the on-board NVMe SSD. By
Ubuntu installer creates 3 disk partitions on the onboard NVMe SSD. By
default, the GRUB bootloader is installed on the EFI System Partition
(ESP) that's used to bootstrap the ACRN hypervisor.

@ -101,7 +101,7 @@ Update ACRN hypervisor image

****************************

#. Before building the ACRN hypervisor, find the I/O address of the serial
port and the PCI BDF addresses of the SATA controller nd the USB
port and the PCI BDF addresses of the SATA controller and the USB
controllers on the Intel NUC. Enter the following command to get the
I/O addresses of the serial port. The Intel NUC supports one serial port, **ttyS0**.
Connect the serial port to the development workstation in order to access
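The command referred to here is outside this hunk; a hedged sketch of what
is typically run on the Service VM to collect this information::

   # I/O address of the serial port (ttyS0).
   dmesg | grep ttyS0

   # PCI BDF addresses of the SATA and USB controllers.
   lspci | grep -i "SATA controller"
   lspci | grep -i "USB controller"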
@ -155,11 +155,11 @@ Update ACRN hypervisor image

The ``acrn.bin`` will be generated to ``./build/hypervisor/acrn.bin``.
The ``ACPI_VM0.bin`` and ``ACPI_VM1.bin`` will be generated to ``./build/hypervisor/acpi/``.

#. Check the Ubuntu boot loader name.
#. Check the Ubuntu bootloader name.

In the current design, the logical partition depends on the GRUB boot
loader; otherwise, the hypervisor will fail to boot. Verify that the
default boot loader is GRUB:
default bootloader is GRUB:

.. code-block:: none

@ -207,7 +207,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image

(or use the device node directly) of the root partition (e.g. ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
The kernel command-line arguments used to boot the pre-launched VMs is
located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file
and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM id number and ``*`` are arguments).
and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM ID number and ``*`` are arguments).
The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` file.
The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0, the parameter ``ACPI_VM0`` is

@ -94,7 +94,7 @@ Steps for Using VxWorks as User VM

#. Follow XXX to boot the ACRN Service VM.

.. important:: need instructions from deleted document (using sdc
.. important:: need instructions from deleted document (using SDC
mode on the Intel NUC)

#. Boot VxWorks as User VM.

@ -62,9 +62,9 @@ Download Win10 ISO and drivers

- Check **I accept the terms in the license agreement**. Click **Continue**.
- From the list, right-click the item labeled **Oracle VirtIO Drivers
Version for Microsoft Windows 1.x.x, yy MB**, and then **Save link as
...**. Currently, it is named **V982789-01.zip**.
...**. Currently, it is named ``V982789-01.zip``.
- Click **Download**. When the download is complete, unzip the file. You
will see an ISO named **winvirtio.iso**.
will see an ISO named ``winvirtio.iso``.

Create a raw disk
-----------------
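The commands under this heading fall outside this diff; a hedged sketch of
one way to preallocate a raw disk for the Windows image (the path and the
30 GB size are illustrative assumptions)::

   # Create an empty 30 GB raw disk image for the Windows guest.
   dd if=/dev/zero of=/root/img/win10.img bs=1G count=30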
@ -230,21 +230,22 @@ Explanation for acrn-dm popular command lines

.. note:: Use these acrn-dm command line entries according to your
real requirements.

* **-s 2,passthru,0/2/0,gpu**:
* ``-s 2,passthru,0/2/0,gpu``:
This is GVT-d to passthrough the VGA controller to Windows.
You may need to change 0/2/0 to match the bdf of the VGA controller on your platform.

* **-s 3,ahci,hd:/root/img/win10.img**:
* ``-s 3,ahci,hd:/root/img/win10.img``:
This is the hard disk where Windows 10 should be installed.
Make sure that the slot ID **3** points to your win10 img path.

* **-s 4,virtio-net,tap0**:
* ``-s 4,virtio-net,tap0``:
This is for the network virtualization.

* **-s 5,fbuf,tcp=0.0.0.0:5900,w=800,h=600**:
This opens port 5900 on the Service VM which can be connected to via vncviewer.
* ``-s 5,fbuf,tcp=0.0.0.0:5900,w=800,h=600``:
This opens port 5900 on the Service VM, which can be connected to via
``vncviewer``.

* **-s 6,virtio-input,/dev/input/event4**:
* ``-s 6,virtio-input,/dev/input/event4``:
This is to passthrough the mouse/keyboard to Windows via virtio.
Change ``event4`` accordingly. Use the following command to check
the event node on your Service VM::

@ -252,19 +253,19 @@ Explanation for acrn-dm popular command lines

<To get the input event of mouse>
# cat /proc/bus/input/devices | grep mouse

* **-s 7,ahci,cd:/root/img/Windows10.iso**:
* ``-s 7,ahci,cd:/root/img/Windows10.iso``:
This is the ISO image used to install Windows 10. It appears as a CD-ROM
device. Make sure that the slot ID **7** points to your win10 ISO path.

* **-s 8,ahci,cd:/root/img/winvirtio.iso**:
* ``-s 8,ahci,cd:/root/img/winvirtio.iso``:
This is the CD-ROM device to install the virtio Windows driver. Make sure it points to your VirtIO ISO path.

* **-s 9,passthru,0/14/0**:
* ``-s 9,passthru,0/14/0``:
This is to passthrough the USB controller to Windows.
You may need to change 0/14/0 to match the bdf of the USB controller on
You may need to change ``0/14/0`` to match the BDF of the USB controller on
your platform.

* **--ovmf /usr/share/acrn/bios/OVMF.fd**:
* ``--ovmf /usr/share/acrn/bios/OVMF.fd``:
Make sure it points to your OVMF binary path.
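Assembled into one launch command, the slots above look roughly like the
following sketch (a hypothetical invocation for illustration; the memory
size, ``-s 0:0,hostbridge`` slot, VM name, BDFs, and paths are assumptions
that must match your own platform and images)::

   acrn-dm -A -m 4G -s 0:0,hostbridge \
      -s 2,passthru,0/2/0,gpu \
      -s 3,ahci,hd:/root/img/win10.img \
      -s 4,virtio-net,tap0 \
      -s 5,fbuf,tcp=0.0.0.0:5900,w=800,h=600 \
      -s 6,virtio-input,/dev/input/event4 \
      -s 7,ahci,cd:/root/img/Windows10.iso \
      -s 8,ahci,cd:/root/img/winvirtio.iso \
      -s 9,passthru,0/14/0 \
      --ovmf /usr/share/acrn/bios/OVMF.fd \
      win10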
Secure boot enabling

@ -281,5 +282,3 @@ obtain a licensed version of Windows.

For Windows 10 activation steps, refer to
`Activate Windows 10 <https://support.microsoft.com/en-us/help/12440/windows-10-activate>`__.

.. comment Reviewed for grammatical content on 20 May 2020.

@ -64,7 +64,7 @@ Launch the RTVM

$ wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2020w01.1-140000p/preempt-rt-32030.img.xz

#. Decompress the xz image::
#. Decompress the ``image.xz`` image::

$ xz -d preempt-rt-32030.img.xz

@ -13,7 +13,7 @@ needed.

Yocto Project layers support the inclusion of technologies, hardware
components, and software components. Layers are repositories containing
related sets of instructions which tell the Yocto Project build system
related sets of instructions that tell the Yocto Project build system
what to do.

The meta-acrn layer

@ -15,7 +15,7 @@ Other :ref:`ACRN supported platforms <hardware>` should work as well.

Introduction to Zephyr
**********************

The Zephyr RTOS is a scalable real-time operating-system supporting multiple hardware architectures,
The Zephyr RTOS is a scalable real-time operating system supporting multiple hardware architectures,
optimized for resource constrained devices, and built with safety and security in mind.

Steps for Using Zephyr as User VM

@ -38,7 +38,7 @@ Steps for Using Zephyr as User VM

This will build the application ELF binary in ``samples/hello_world/build/zephyr/zephyr.elf``.

#. Build grub2 boot loader image
#. Build grub2 bootloader image

We can build the grub2 bootloader for Zephyr using ``boards/x86/common/scripts/build_grub.sh``
found in the `Zephyr source code <https://github.com/zephyrproject-rtos/zephyr>`_.
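A hedged usage sketch for that helper script, run from the Zephyr source
tree (the ``x86_64`` argument is an assumption based on the 64-bit target
used elsewhere in these steps)::

   cd zephyr
   # Produce a GRUB image suitable for booting the Zephyr ELF binary.
   ./boards/x86/common/scripts/build_grub.sh x86_64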
@ -90,7 +90,7 @@ Steps for Using Zephyr as User VM

You now have a virtual disk image with a bootable Zephyr in ``zephyr.img``. If the Zephyr build system is not
the ACRN Service VM, then you will need to transfer this image to the
ACRN Service VM (via, e.g, a USB drive or network )
ACRN Service VM (via, e.g, a USB drive or network)

#. Follow XXX to boot "The ACRN Service OS" based on Clear Linux OS 28620
(ACRN tag: acrn-2019w14.3-140000p)

@ -6,7 +6,7 @@ Enable vUART Configurations

Introduction
============

The virtual universal asynchronous receiver-transmitter (vUART) supports
The virtual universal asynchronous receiver/transmitter (vUART) supports
two functions: one is the console, the other is communication. vUART
only works on a single function.

@ -36,16 +36,16 @@ Console enable list

| Scenarios       | vm0                   | vm1                | vm2            | vm3            |
+=================+=======================+====================+================+================+
| SDC             | Service VM            | Post-launched      | Post-launched  |                |
|                 | (vuart enable)        |                    |                |                |
|                 | (vUART enable)        |                    |                |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Hybrid          | Pre-launched (Zephyr) | Service VM         | Post-launched  |                |
|                 | (vuart enable)        | (vuart enable)     |                |                |
|                 | (vUART enable)        | (vUART enable)     |                |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Industry        | Service VM            | Post-launched      | Post-launched  | Post-launched  |
|                 | (vuart enable)        |                    | (vuart enable) |                |
|                 | (vUART enable)        |                    | (vUART enable) |                |
+-----------------+-----------------------+--------------------+----------------+----------------+
| Logic_partition | Pre-launched          | Pre-launched RTVM  | Post-launched  |                |
|                 | (vuart enable)        | (vuart enable)     | RTVM           |                |
|                 | (vUART enable)        | (vUART enable)     | RTVM           |                |
+-----------------+-----------------------+--------------------+----------------+----------------+

How to configure a console port

@ -78,10 +78,10 @@ To enable the communication port, configure ``vuart[1]`` in the two VMs that wan

The port_base and IRQ should differ from the ``vuart[0]`` in the same VM.

**t_vuart.vm_id** is the target VM's vm_id, start from 0. (0 means VM0)
``t_vuart.vm_id`` is the target VM's vm_id, start from 0. (0 means VM0)

**t_vuart.vuart_id** is the target vuart index in the target VM. start
from 1. (1 means ``vuart[1]``)
``t_vuart.vuart_id`` is the target vuart index in the target VM. Start
from ``1``. (``1`` means ``vuart[1]``)

Example:
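The example itself is cut off at this hunk boundary. Based on the fields
named just above, a ``vm_configuration.c`` entry presumably looks like this
sketch (the COM2 constants and the target VM/vuart IDs are illustrative
assumptions):

.. code-block:: c

   /* Communication vUART (COM2) in the source VM's configuration.
    * It routes to vuart[1] of the VM whose vm_id is 1; the peer VM
    * needs the mirror-image entry pointing back at this VM. */
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = COM2_BASE,
           .irq = COM2_IRQ,
           .t_vuart.vm_id = 1U,
           .t_vuart.vuart_id = 1U,
   },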
@ -116,10 +116,10 @@ Communication vUART enable list

| SDC             | Service VM            | Post-launched      | Post-launched       |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Hybrid          | Pre-launched (Zephyr) | Service VM         | Post-launched       |                |
|                 | (vuart enable COM2)   | (vuart enable COM2)|                     |                |
|                 | (vUART enable COM2)   | (vUART enable COM2)|                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Industry        | Service VM            | Post-launched      | Post-launched RTVM  | Post-launched  |
|                 | (vuart enable COM2)   |                    | (vuart enable COM2) |                |
|                 | (vUART enable COM2)   |                    | (vUART enable COM2) |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+
| Logic_partition | Pre-launched          | Pre-launched RTVM  |                     |                |
+-----------------+-----------------------+--------------------+---------------------+----------------+

@ -149,17 +149,17 @@ access the corresponding port. For example, in Clear Linux:

You can find the message from VM1 ``/dev/ttyS1``.

If you are not sure which port is the communication port, you can run
If you are not sure which one is the communication port, you can run
``dmesg | grep ttyS`` under the Linux shell to check the base address.
If it matches what you have set in the ``vm_configuration.c`` file, it
is the correct port.

#. With minicom
#. With Minicom

Run ``minicom -D /dev/ttyS1`` on both VM1 and VM2 and enter ``test``
in VM1's minicom. The message should appear in VM2's minicom. Disable
flow control in minicom.
in VM1's Minicom. The message should appear in VM2's Minicom. Disable
flow control in Minicom.

#. Limitations

@ -210,7 +210,7 @@ started, as shown in the diagram below:

.. note::
For operating systems such as VxWorks and Windows that depend on the
ACPI table to probe the uart driver, adding the vUART configuration in
ACPI table to probe the UART driver, adding the vUART configuration in
the hypervisor is not sufficient. Currently, we recommend that you use
the configuration in the figure 3 data flow. This may be refined in the
future.

@ -20,7 +20,7 @@ between the platform owner and the platform firmware. According to

section 1.5, the PK is a self-signed certificate owned by the OEM, and
the OEM can generate their own PK.

Here we show two ways to generate a PK: openssl and Microsoft tools.
Here we show two ways to generate a PK: ``openssl`` and Microsoft tools.

Generate PK Using openssl
=========================
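The openssl steps themselves lie in a part of the file this diff does not
touch; a hedged sketch of the usual flow with ``openssl`` plus the
``efitools`` helpers (the subject name and GUID are placeholders)::

   # Self-signed PK certificate and private key.
   openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 -sha256 \
       -subj "/CN=Example Platform Key/" -keyout PK.key -out PK.crt

   # Wrap the certificate in an EFI signature list and sign it with itself.
   GUID=$(uuidgen)
   cert-to-efi-sig-list -g "$GUID" PK.crt PK.esl
   sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth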
@ -184,7 +184,7 @@ which we'll summarize below.

Provider Name: Microsoft Smart Card Key Storage Provider
CertUtil: -csplist command completed successfully.

- Create request inf file, for example::
- Create request ``inf`` file, for example::

[Version]
Signature= "$Windows NT$"

@ -429,7 +429,7 @@ DB (Allowed Signature database):

`Microsoft Corporation UEFI CA 2011
<https://go.microsoft.com/fwlink/p/?LinkID=321194>`_:
Microsoft signer for 3rd party UEFI binaries via DevCenter program.
Microsoft signer for third party UEFI binaries via DevCenter program.

Compile OVMF with secure boot support
*************************************

@ -461,7 +461,7 @@ Notes:

- ``source edksetup.sh``, this step is needed for compilation every time
a shell is created.

- This will generate the fw section at
- This will generate the ``fw`` section at
``Build/OvmfX64/DEBUG_GCC5/FV/OVMF_CODE.fd`` or
``Build/OvmfX64/RELEASE_GCC5/FV/OVMF_CODE.fd``

@ -476,7 +476,7 @@ Notes:

Use QEMU to inject secure boot keys into OVMF
*********************************************

We follow the `OpenSUSE: UEFI Secure boot using qemu-kvm document
We follow the `openSUSE: UEFI Secure boot using qemu-kvm document
<https://en.opensuse.org/openSUSE:UEFI_Secure_boot_using_qemu-kvm>`_
to import PK, KEK, and DB into OVMF, Ubuntu 16.04 used.
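The QEMU invocation is outside this diff; a hedged sketch of booting OVMF
under qemu-kvm so the keys can be enrolled from the firmware setup menu
(file names follow the OVMF build output mentioned above; the ``keys/``
directory is assumed to hold the ``.auth`` files)::

   # Writable copy of the variable store that will receive PK/KEK/DB.
   cp OVMF_VARS.fd my_vars.fd

   qemu-system-x86_64 -enable-kvm -machine q35 -m 2048 \
       -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
       -drive if=pflash,format=raw,file=my_vars.fd \
       -drive format=raw,file=fat:rw:keys/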