doc: clean up utf8 characters

Stray non-ASCII characters can creep in when pasting from Word or Google
Docs, particularly for "smart" single and double quotes and non-breaking
spaces.  Change these to their ASCII equivalents.  Also fix some very
long lines of text to wrap at 80-ish characters.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
David B. Kinder
2020-04-13 15:29:10 -07:00
committed by David Kinder
parent 138c3aeadd
commit f5f16f4e64
19 changed files with 249 additions and 134 deletions

View File

@@ -6,7 +6,8 @@ Platform S5 Enable Guide
Introduction
************
S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_ that refers to the system being shut down (although some power may still be supplied to
S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_
that refers to the system being shut down (although some power may still be supplied to
certain devices). In this document, S5 means the function to shut down the
**User VMs**, **the Service VM**, the hypervisor, and the hardware. In most cases,
directly shutting down the power of a computer system is not advisable because it can
@@ -30,14 +31,16 @@ The diagram below shows the overall architecture:
- **Scenario I**:
The User VM's serial port device (``ttySn``) is emulated in the Device Model, the channel from the Service VM to the User VM:
The User VM's serial port device (``ttySn``) is emulated in the
Device Model, the channel from the Service VM to the User VM:
.. graphviz:: images/s5-scenario-1.dot
:name: s5-scenario-1
- **Scenario II**:
The User VM's (like RT-Linux or other RT-VMs) serial port device (``ttySn``) is emulated in the Hypervisor,
The serial port device (``ttySn``) of the User VM (such as RT-Linux or
another RT-VM) is emulated in the Hypervisor,
the channel from the Service VM to the User VM:
.. graphviz:: images/s5-scenario-2.dot
@@ -186,7 +189,7 @@ How to test
Active: active (running) since Tue 2019-09-10 07:15:06 UTC; 1min 11s ago
Main PID: 840 (life_mngr)
.. note:: For WaaG, we need to close ``windbg`` by using the ``"bcdedit /set debug off`` command
.. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
if you executed ``bcdedit /set debug on`` when you set up WaaG, because it occupies ``COM2``.
#. Use the ``acrnctl stop`` command on the Service VM to trigger S5 for the User VMs:
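
   For illustration, the sequence might look like this (the VM name ``vm1``
   below is a placeholder; check the actual name with ``acrnctl list``):

   .. code-block:: none

      # List the VMs managed from the Service VM and note the VM name
      $ sudo acrnctl list
      # Trigger S5 (shutdown) for that User VM
      $ sudo acrnctl stop vm1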

View File

@@ -12,9 +12,13 @@ higher priorities VMs (such as RTVMs) are not impacted.
Using RDT includes three steps:
1. Detect and enumerate RDT allocation capabilities on supported resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each CLOS (Class of Service, which is a resource allocation), basically to limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply the resource mask on the CP.
1. Detect and enumerate RDT allocation capabilities on supported
resources such as cache and memory bandwidth.
#. Set up resource mask array MSRs (Model-Specific Registers) for each
CLOS (Class of Service, which is a resource allocation), basically to
limit or allow access to resource usage.
#. Select the CLOS for the CPU associated with the VM that will apply
the resource mask on the CPU.
Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
@@ -24,7 +28,10 @@ Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
The following sections discuss how to detect, enumerate capabilities, and
configure RDT resources for VMs in the ACRN hypervisor.
For further details, refer to the ACRN RDT high-level design :ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) <https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_
For further details, refer to the ACRN RDT high-level design
:ref:`hv_rdt` and `Intel 64 and IA-32 Architectures Software Developer's
Manual, (Section 17.19 Intel Resource Director Technology Allocation Features)
<https://software.intel.com/en-us/download/intel-64-and-ia-32-architectures-sdm-combined-volumes-3a-3b-3c-and-3d-system-programming-guide>`_
.. _rdt_detection_capabilities:
@@ -48,10 +55,16 @@ index. For example, run ``cpuid 0x10 0x2`` to query the L2 CAT capability.
L3/L2 bit encoding:
* EAX [bit 4:0] reports the length of the cache mask minus one. For example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the corresponding unit of the cache allocation that can be used by other entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource minus one. For example, a value of 0xf means the max CLOS supported is 0x10.
* EAX [bit 4:0] reports the length of the cache mask minus one. For
example, a value 0xa means the cache mask is 0x7ff.
* EBX [bit 31:0] reports a bit mask. Each set bit indicates the
corresponding unit of the cache allocation that can be used by other
entities in the platform (e.g. integrated graphics engine).
* ECX [bit 2] if set, indicates that cache Code and Data Prioritization
Technology is supported.
* EDX [bit 15:0] reports the maximum CLOS supported for the resource
minus one. For example, a value of 0xf means the max CLOS supported
is 0x10.
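
As a quick illustration of the EAX encoding, the reported mask length can be
expanded into the full cache mask with a small calculation (shell arithmetic,
shown here only as an example):

.. code-block:: none

   # EAX [bit 4:0] = 0xa means a mask length of 11 bits
   $ printf '0x%x\n' $(( (1 << (0xa + 1)) - 1 ))
   0x7ff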
.. code-block:: none
@@ -82,7 +95,8 @@ Tuning RDT resources in HV debug shell
This section explains how to configure the RDT resources from the HV debug
shell.
#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is running on PCPU0, and VM1 is running on PCPU1:
#. Check the PCPU IDs of each VM; the ``vcpu_list`` below shows that VM0 is
running on PCPU0, and VM1 is running on PCPU1:
.. code-block:: none
@@ -93,14 +107,24 @@ shell.
0 0 0 PRIMARY Running
1 1 0 PRIMARY Running
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``. For example, if you want to restrict VM1 to use the lower 4 ways of LLC cache and you want to allocate the upper 7 ways of LLC to access to VM0, you must first assign a CLOS for each VM (e.g. VM0 is assigned CLOS0 and VM1 CLOS1). Next, resource mask the MSR that corresponds to the CLOS0. In our example, IA32_L3_MASK_BASE + 0 is programmed to 0x7f0. Finally, resource mask the MSR that corresponds to CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.
#. Set the resource mask array MSRs for each CLOS with a ``wrmsr <reg_num> <value>``.
For example, if you want to restrict VM1 to use the
lower 4 ways of the LLC cache and allocate the upper 7 ways of the
LLC to VM0, you must first assign a CLOS to each VM (e.g. VM0
is assigned CLOS0 and VM1 CLOS1). Next, program the resource mask MSR that
corresponds to CLOS0. In our example, IA32_L3_MASK_BASE + 0 is
programmed to 0x7f0. Finally, program the resource mask MSR that corresponds to
CLOS1. In our example, IA32_L3_MASK_BASE + 1 is set to 0xf.
.. code-block:: none
ACRN:\>wrmsr -p1 0xc90 0x7f0
ACRN:\>wrmsr -p1 0xc91 0xf
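
   To double-check the programming, the same MSRs can be read back from the
   debug shell (assuming its ``rdmsr`` command; register numbers as above):

   .. code-block:: none

      ACRN:\>rdmsr -p1 0xc90
      ACRN:\>rdmsr -p1 0xc91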
#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32] (0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that IA32_PQR_ASSOC is per LP MSR and CLOS must be programmed on each LP.
#. Assign CLOS1 to PCPU1 by programming the MSR IA32_PQR_ASSOC [bit 63:32]
(0xc8f) to 0x100000000 to use CLOS1 and assign CLOS0 to PCPU 0 by
programming MSR IA32_PQR_ASSOC [bit 63:32] to 0x0. Note that
IA32_PQR_ASSOC is a per-LP MSR and the CLOS must be programmed on each LP.
.. code-block:: none
@@ -112,7 +136,12 @@ shell.
Configure RDT for VM using VM Configuration
*******************************************
#. RDT on ACRN is enabled by default on supported platforms. This information can be found using an offline tool that generates a platform-specific xml file that helps ACRN identify RDT-supported platforms. This feature can be also be toggled using the CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first step is to clone the ACRN source code (if you haven't already done so):
#. RDT on ACRN is enabled by default on supported platforms. This
information can be found using an offline tool that generates a
platform-specific xml file that helps ACRN identify RDT-supported
platforms. This feature can also be toggled using the
CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first
step is to clone the ACRN source code (if you haven't already done so):
.. code-block:: none
@@ -122,7 +151,9 @@ Configure RDT for VM using VM Configuration
.. figure:: images/menuconfig-rdt.png
:align: center
#. The predefined cache masks can be found at ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards. For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
#. The predefined cache masks can be found at
``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards.
For example, apl-up2 can be found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
.. code-block:: none
:emphasize-lines: 3,7,11,15
@@ -147,9 +178,17 @@ Configure RDT for VM using VM Configuration
};
.. note::
Users can change the mask values, but the cache mask must have **continuous bits** or a #GP fault can be triggered. Similary, when programming an MBA delay value, be sure to set the value to less than or equal to the MAX delay value.
Users can change the mask values, but the cache mask must have
**continuous bits** or a #GP fault can be triggered. Similarly, when
programming an MBA delay value, be sure to set the value to less than or
equal to the MAX delay value.
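
   To illustrate the continuous-bits requirement (example values only):

   .. code-block:: none

      0x7f0 = 0b011111110000   # contiguous set bits: a valid cache mask
      0x5f0 = 0b010111110000   # a hole in the set bits: triggers a #GP fault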
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_ to identify the MAX CLOS that can be used. ACRN uses the **the lowest common MAX CLOS** value among all RDT resources to avoid resource misconfigurations. For example, configuration data for the Service VM sharing mode can be found at ``hypervisor/arch/x86/configs/vm_config.c``
#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_
to identify the MAX CLOS that can be used. ACRN uses the
**lowest common MAX CLOS** value among all RDT resources to avoid
resource misconfigurations. For example, configuration data for the
Service VM sharing mode can be found at
``hypervisor/arch/x86/configs/vm_config.c``
.. code-block:: none
:emphasize-lines: 6
@@ -171,9 +210,15 @@ Configure RDT for VM using VM Configuration
};
.. note::
In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2>...clos n). So, carefully program each VM's CLOS accordingly.
In ACRN, a lower CLOS always means higher priority (clos 0 > clos 1 > clos 2 > ... > clos n).
So, carefully program each VM's CLOS accordingly.
#. Careful consideration should be made when assigning vCPU affinity. In a cache isolation configuration, in addition to isolating CAT-capable caches, you must also isolate lower-level caches. In the following example, logical processor #0 and #2 share L1 and L2 caches. In this case, do not assign LP #0 and LP #2 to different VMs that need to do cache isolation. Assign LP #1 and LP #3 with similar consideration:
#. Careful consideration should be given when assigning vCPU affinity. In
a cache isolation configuration, in addition to isolating CAT-capable
caches, you must also isolate lower-level caches. In the following
example, logical processor #0 and #2 share L1 and L2 caches. In this
case, do not assign LP #0 and LP #2 to different VMs that need to do
cache isolation. Assign LP #1 and LP #3 with similar consideration:
.. code-block:: none
:emphasize-lines: 3
@@ -194,10 +239,15 @@ Configure RDT for VM using VM Configuration
PU L#2 (P#1)
PU L#3 (P#3)
#. Bandwidth control is per-core (not per LP), so max delay values of per-LP CLOS is applied to the core. If HT is turned on, dont place high priority threads on sibling LPs running lower priority threads.
#. Bandwidth control is per-core (not per LP), so the max delay value of
a per-LP CLOS is applied to the whole core. If HT is turned on, don't place high
priority threads on sibling LPs running lower priority threads.
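
   One way to check which LPs are siblings on the same physical core is the
   Linux CPU topology in sysfs (a generic check; the ``0,2`` output matches
   the example above where LP #0 and LP #2 share L1/L2 caches):

   .. code-block:: none

      $ cat /sys/devices/system/cpu/cpu0/topology/thread_siblings_list
      0,2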
#. Based on our scenario, build the ACRN hypervisor and copy the artifact ``acrn.efi`` to the
``/boot/EFI/acrn`` directory. If needed, update the devicemodel ``acrn-dm`` as well in ``/usr/bin`` directory. see :ref:`getting-started-building` for building instructions.
#. Based on our scenario, build the ACRN hypervisor and copy the
artifact ``acrn.efi`` to the
``/boot/EFI/acrn`` directory. If needed, update the Device Model
``acrn-dm`` in the ``/usr/bin`` directory as well. See
:ref:`getting-started-building` for building instructions.
.. code-block:: none

View File

@@ -38,11 +38,11 @@ Here is example pseudocode of a cyclictest implementation.
.. code-block:: none
while (!shutdown) {
...
clock_nanosleep(&next)
clock_gettime(&now)
latency = calcdiff(now, next)
...
next += interval
}
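
In practice, the ``cyclictest`` tool from the rt-tests suite implements this
measurement loop; a typical invocation might look like the following (the
parameters are illustrative, not prescriptive):

.. code-block:: none

   # SCHED_FIFO priority 80, 1 ms interval, 100000 loops, locked memory, summary only
   $ sudo cyclictest -p 80 -i 1000 -l 100000 -m -q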
@@ -161,7 +161,9 @@ CPU hardware differences in Linux performance measurements and presents a
simple command line interface. Perf is based on the ``perf_events`` interface
exported by recent versions of the Linux kernel.
**PMU** tools is a collection of tools for profile collection and performance analysis on Intel CPUs on top of Linux Perf. Refer to the following links for perf usage:
**PMU** tools is a collection of tools for profile collection and
performance analysis on Intel CPUs on top of Linux Perf. Refer to the
following links for perf usage:
- https://perf.wiki.kernel.org/index.php/Main_Page
- https://perf.wiki.kernel.org/index.php/Tutorial
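
As a minimal example of perf usage (generic Linux perf, not specific to this
document), cycle and instruction counts for one CPU can be collected with:

.. code-block:: none

   # Count cycles and instructions on CPU 1 for 10 seconds
   $ sudo perf stat -a -C 1 -e cycles,instructions -- sleep 10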
@@ -174,7 +176,8 @@ Top-down Micro-Architecture Analysis Method (TMAM)
The Top-down Micro-Architecture Analysis Method (TMAM), based on Top-Down
Characterization methodology, aims to provide an insight into whether you
have made wise choices with your algorithms and data structures. See the
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual <http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual
<http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
Appendix B.1 for more details on TMAM. Refer to this `technical paper
<https://fd.io/docs/whitepapers/performance_analysis_sw_data_planes_dec21_2017.pdf>`_
which adopts TMAM for systematic performance benchmarking and analysis
@@ -197,4 +200,3 @@ Example: Using Perf to analyze TMAM level 1 on CPU core 1
S0-C1 1 10.6% 1.5% 3.9% 84.0%
0.006737123 seconds time elapsed

View File

@@ -35,7 +35,9 @@ Install Kata Containers
The Kata Containers installation from Clear Linux's official repository does
not work with ACRN at the moment. Therefore, you must install Kata
Containers using the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_ instructions (using a ``rootfs`` image).
Containers using the `manual installation
<https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
instructions (using a ``rootfs`` image).
#. Install the build dependencies.
@@ -45,7 +47,8 @@ Containers using the `manual installation <https://github.com/kata-containers/do
#. Install Kata Containers.
At a high level, the `manual installation <https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
At a high level, the `manual installation
<https://github.com/kata-containers/documentation/blob/master/Developer-Guide.md>`_
steps are:
#. Build and install the Kata runtime.
@@ -89,7 +92,7 @@ outputs:
$ kata-runtime kata-env | awk -v RS= '/\[Hypervisor\]/'
[Hypervisor]
MachineType = ""
Version = "DM version is: 1.5-unstable-2020w02.5.140000p_261 (daily tag:2020w02.5.140000p), build by mockbuild@2020-01-12 08:44:52"
Version = "DM version is: 1.5-unstable-"2020w02.5.140000p_261" (daily tag:"2020w02.5.140000p"), build by mockbuild@2020-01-12 08:44:52"
Path = "/usr/bin/acrn-dm"
BlockDeviceDriver = "virtio-blk"
EntropySource = "/dev/urandom"

View File

@@ -10,7 +10,9 @@ extended capability and manages entire physical devices; and VF (Virtual
Function), a "lightweight" PCIe function which is a passthrough device for
VMs.
For details, refer to Chapter 9 of PCI-SIG's `PCI Express Base SpecificationRevision 4.0, Version 1.0 <https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.
For details, refer to Chapter 9 of PCI-SIG's
`PCI Express Base Specification Revision 4.0, Version 1.0
<https://pcisig.com/pci-express-architecture-configuration-space-test-specification-revision-40-version-10>`_.
SR-IOV Architectural Overview
-----------------------------
@@ -31,7 +33,7 @@ SR-IOV Architectural Overview
- **PF** - A PCIe Function that supports the SR-IOV capability
and is accessible to an SR-PCIM, a VI, or an SI.
- **VF** - A light-weight PCIe Function that is directly accessible by an
- **VF** - A "light-weight" PCIe Function that is directly accessible by an
SI.
SR-IOV Extended Capability
@@ -39,7 +41,7 @@ SR-IOV Extended Capability
The SR-IOV Extended Capability defined here is a PCIe extended
capability that must be implemented in each PF device that supports the
SR-IOV feature. This capability is used to describe and control a PFs
SR-IOV feature. This capability is used to describe and control a PF's
SR-IOV Capabilities.
.. figure:: images/sriov-image2.png
@@ -84,17 +86,17 @@ SR-IOV Capabilities.
supported by the PF.
- **System Page Size** - The field that defines the page size the system
will use to map the VFs memory addresses. Software must set the
will use to map the VFs' memory addresses. Software must set the
value of the *System Page Size* to one of the page sizes set in the
*Supported Page Sizes* field.
- **VF BARs** - Fields that must define the VFs Base Address
- **VF BARs** - Fields that must define the VF's Base Address
Registers (BARs). These fields behave as normal PCI BARs.
- **VF Migration State Array Offset** - Register that contains a
PF BAR relative pointer to the VF Migration State Array.
- **VF Migration State Array** Located using the VF Migration
- **VF Migration State Array** - Located using the VF Migration
State Array Offset register of the SR-IOV Capability block.
For details, refer to the *PCI Express Base Specification Revision 4.0, Version 1.0 Chapter 9.3.3*.
@@ -111,7 +113,7 @@ SR-IOV Architecture In ACRN
1. A hypervisor detects a SR-IOV capable PCIe device in the physical PCI
device enumeration phase.
2. The hypervisor intercepts the PFs SR-IOV capability and accesses whether
2. The hypervisor intercepts the PF's SR-IOV capability and decides whether
to enable/disable VF devices based on the *VF\_ENABLE* state. All
read/write requests for a PF device pass through to the PF physical
device.
@@ -122,9 +124,9 @@ SR-IOV Architecture In ACRN
initialization. The hypervisor uses *Subsystem Vendor ID* to detect the
SR-IOV VF physical device instead of *Vendor ID* since no valid
*Vendor ID* exists for the SR-IOV VF physical device. The VF BARs are
initialized by its associated PFs SR-IOV capabilities, not PCI
initialized by its associated PF's SR-IOV capabilities, not PCI
standard BAR registers. The MSIx mapping base address is also from the
PFs SR-IOV capabilities, not PCI standard BAR registers.
PF's SR-IOV capabilities, not PCI standard BAR registers.
SR-IOV Passthrough VF Architecture In ACRN
------------------------------------------
@@ -144,8 +146,8 @@ SR-IOV Passthrough VF Architecture In ACRN
3. The hypervisor emulates *Device ID/Vendor ID* and *Memory Space Enable
(MSE)* in the configuration space for an assigned SR-IOV VF device. The
assigned VF *Device ID* comes from its associated PFs capability. The
*Vendor ID* is the same as the PFs *Vendor ID* and the *MSE* is always
assigned VF *Device ID* comes from its associated PF's capability. The
*Vendor ID* is the same as the PF's *Vendor ID* and the *MSE* is always
set when reading the SR-IOV VF device's *CONTROL* register.
4. The vendor-specific VF driver in the target VM probes the assigned SR-IOV
@@ -180,7 +182,7 @@ The hypervisor intercepts all SR-IOV capability access and checks the
*VF\_ENABLE* state. If *VF\_ENABLE* is set, the hypervisor creates n
virtual devices after 100ms so that VF physical devices have enough time to
be created. The Service VM waits 100ms and then only accesses the first VF
devices configuration space including *Class Code, Reversion ID, Subsystem
device's configuration space including *Class Code, Revision ID, Subsystem
Vendor ID, Subsystem ID*. The Service VM uses the first VF device
information to initialize subsequent VF devices.
@@ -238,8 +240,10 @@ only support LaaG (Linux as a Guest).
#. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov_numvfs``
command in the Service VM to enable n VF devices for the first PF
device (\ *enp109s0f0)*. The number *n* cant be more than *TotalVFs*
which comes from the return value of command ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we use *n = 2* as an example.
device (*enp109s0f0*). The number *n* can't be more than *TotalVFs*,
which comes from the return value of the command
``cat /sys/class/net/enp109s0f0/device/sriov_totalvfs``. Here we
use *n = 2* as an example.
.. figure:: images/sriov-image10.png
:align: center
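
   In plain text, the commands from this step might look like the following
   (the interface name and VF count follow the example above):

   .. code-block:: none

      # How many VFs does this PF support?
      cat /sys/class/net/enp109s0f0/device/sriov_totalvfs
      # Create two VF devices on the first PF (run as root)
      echo 2 > /sys/class/net/enp109s0f0/device/sriov_numvfs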
@@ -267,7 +271,7 @@ only support LaaG (Linux as a Guest).
iv. *echo "0000:6d:10.0" >
/sys/bus/pci/drivers/pci-stub/bind*
b. Add the SR-IOV VF device parameter (*-s X, passthru,6d/10/0*\ ) in
b. Add the SR-IOV VF device parameter ("*-s X, passthru,6d/10/0*\ ") in
the launch User VM script
.. figure:: images/sriov-image12.png
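
      For reference, in the launch script this shows up as one more ``-s``
      slot entry passed to ``acrn-dm``; a deliberately minimal sketch (the
      slot number ``7`` is only an example, and real launch scripts pass
      many more options such as console and rootfs devices):

      .. code-block:: none

         acrn-dm -A -m 2048M \
            -s 0:0,hostbridge \
            -s 7,passthru,6d/10/0 \
            --ovmf /usr/share/acrn/bios/OVMF.fd \
            <vm-name>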

View File

@@ -47,12 +47,13 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
.. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
The param ``xxxxxx`` is VM0s kernel file tag and must exactly match the
The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
``kernel_mod_tag`` of VM0 which is configured in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
``kernel_mod_tag`` of VM1 in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
file. The kernel command line arguments used to boot the Service VM are located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
file. The kernel command line arguments used to boot the Service VM are
located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
and are configured by the `SOS_VM_BOOTARGS` macro.
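
   For orientation, the GRUB entry is structured roughly like the sketch
   below (paths, tags, and the exact multiboot command depend on your setup;
   ``xxxxxx`` and ``yyyyyy`` stand for the tags described above):

   .. code-block:: none

      menuentry 'ACRN hypervisor Hybrid Scenario' {
         # Load the ACRN hypervisor image
         multiboot /boot/acrn.bin
         # VM0 (Zephyr) kernel; the tag must match VM0's kernel_mod_tag
         module /boot/zephyr.bin xxxxxx
         # Service VM (VM1) kernel; the tag must match VM1's kernel_mod_tag
         module /boot/bzImage yyyyyy
      }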
#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
@@ -68,7 +69,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
$ sudo update-grub
#. Reboot the NUC. Select the **ACRN hypervisor Hybrid Scenario** entry to boot
the ACRN hypervisor on the NUCs display. The GRUB loader will boot the
the ACRN hypervisor on the NUC's display. The GRUB loader will boot the
hypervisor, and the hypervisor will start the VMs automatically.
Hybrid Scenario Startup Checking
@@ -83,7 +84,7 @@ Hybrid Scenario Startup Checking
a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
#. Enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
#. Verify that the VM1s Service VM can boot up and you can log in.
#. Verify that VM1 (the Service VM) can boot up and you can log in.
#. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
#. Go to the Service VM console, and enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
#. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.

View File

@@ -161,7 +161,7 @@ Update ACRN hypervisor Image
* Set ACRN Scenario as "Logical Partition VMs";
* Set Maximum number of VCPUs per VM as "2";
* Set Maximum number of PCPU as "4";
* Clear/Disable Enable hypervisor relocation.
* Clear/Disable "Enable hypervisor relocation".
We recommend keeping the default values of items not mentioned above.
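
These options are set from the hypervisor's ``make menuconfig`` interface;
a minimal sketch of invoking it (assuming you start from the top of the
acrn-hypervisor source tree):

.. code-block:: none

   $ cd hypervisor
   $ make menuconfig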

View File

@@ -72,7 +72,7 @@ Build the Service VM Kernel
$ WORKDIR=`pwd`;
$ JOBS=`nproc`
$ git clone -b master https://github.com/projectacrn/acrn-kernel.git
$ git clone -b master https://github.com/projectacrn/acrn-kernel.git
$ cd acrn-kernel && mkdir -p ${WORKDIR}/{build,build-rootfs}
$ cp kernel_config_uefi_sos ${WORKDIR}/build/.config
$ make olddefconfig O=${WORKDIR}/build && make -j${JOBS} O=${WORKDIR}/build
@@ -256,7 +256,7 @@ ACRN Windows verified feature list
, "Virtio input - keyboard", "Working"
, "GOP & VNC remote display", "Working"
"GVT-g", "GVT-g without local display", "Working with 3D benchmark"
, "GVT-g  with local display", "Working with 3D benchmark"
, "GVT-g with local display", "Working with 3D benchmark"
"Tools", "WinDbg", "Working"
"Test cases", "Install Windows 10 from scratch", "OK"
, "Windows reboot", "OK"