doc: doc spelling and grammar fixing
Continuing with additional spelling and grammar fixes missed during regular reviews.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
This commit is contained in:
parent 22c5dd2c58
commit 62d0088565
@@ -314,7 +314,7 @@ Additional scenario XML elements:
 (SOS_COM_BASE for Service VM); disable by returning INVALID_COM_BASE.
 
 ``irq`` (a child node of ``vuart``):
-  vCOM irq.
+  vCOM IRQ.
 
 ``target_vm_id`` (a child node of ``vuart1``):
   COM2 is used for VM communications. When it is enabled, specify which
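For context, the ``irq`` and ``target_vm_id`` elements this hunk touches live under a ``vuart`` node in the scenario XML. A minimal sketch, with element names as used by scenario schemas of this era and illustrative values:

.. code-block:: none

   <vuart id="1">
       <type>VUART_LEGACY_PIO</type>
       <base>COM2_BASE</base>
       <irq>COM2_IRQ</irq>
       <target_vm_id>2</target_vm_id>
   </vuart>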
@@ -56,7 +56,7 @@ You'll need git installed to get the working folders set up:
 
    sudo dnf install git
 
-We use the source header files to generate API docs and we use github.io
+We use the source header files to generate API documentation and we use github.io
 for publishing the generated documentation. Here's the recommended
 folder setup for documentation contributions and generation:
 
@@ -88,7 +88,7 @@ repos (though ``https`` clones work too):
 #. At a command prompt, create the working folder and clone the acrn-hypervisor
    repository to your local computer (and if you have publishing rights, the
    projectacrn.github.io repo). If you don't have publishing rights
-   you'll still be able to generate the docs locally, but not publish them:
+   you'll still be able to generate the documentation files locally, but not publish them:
 
    .. code-block:: bash
 
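A sketch of the clone step this hunk describes, assuming SSH remotes as the surrounding text suggests (the working-folder name is illustrative):

.. code-block:: bash

   mkdir ~/projectacrn && cd ~/projectacrn
   git clone git@github.com:projectacrn/acrn-hypervisor.git
   # only needed if you have publishing rights:
   git clone git@github.com:projectacrn/projectacrn.github.io.git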
@@ -194,7 +194,7 @@ generation and a ``make html`` again generally cleans this up.
 
 The ``read-the-docs`` theme is installed as part of the
 ``requirements.txt`` list above. Tweaks to the standard
-``read-the-docs`` look and feel are added by using CSS
+``read-the-docs`` appearance are added by using CSS
 and JavaScript customization found in ``doc/static``, and
 theme template overrides found in ``doc/_templates``.
 
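The theme install and HTML build referenced above amount to roughly the following (the exact location of ``requirements.txt`` inside the repo may differ):

.. code-block:: bash

   cd ~/projectacrn/acrn-hypervisor/doc
   pip3 install --user -r requirements.txt   # path within the repo may differ
   make html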
@@ -7,7 +7,7 @@ You can use inter-VM communication based on the ``ivshmem`` dm-land
 solution or hv-land solution, according to the usage scenario needs.
 (See :ref:`ivshmem-hld` for a high-level description of these solutions.)
 While both solutions can be used at the same time, VMs using different
-solutions can not communicate with each other.
+solutions cannot communicate with each other.
 
 ivshmem dm-land usage
 *********************
@@ -75,7 +75,8 @@ dm-land example
 This example uses dm-land inter-VM communication between two
 Linux-based post-launched VMs (VM1 and VM2).
 
-.. note:: An ``ivshmem`` Windows driver exists and can be found `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_
+.. note:: An ``ivshmem`` Windows driver exists and can be found
+   `here <https://github.com/virtio-win/kvm-guest-drivers-windows/tree/master/ivshmem>`_.
 
 1. Add a new virtual PCI device for both VMs: the device type is
    ``ivshmem``, shared memory name is ``dm:/test``, and shared memory
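In dm-land, that virtual PCI device is added on the ``acrn-dm`` command line. A sketch only, with an illustrative slot number and region size; the exact option syntax should be checked against the acrn-dm parameter reference:

.. code-block:: bash

   acrn-dm ... -s 6,ivshmem,dm:/test,2 ...   # 2 MB shared region named dm:/test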
@@ -6,9 +6,9 @@ Enable RDT Configuration
 On x86 platforms that support Intel Resource Director Technology (RDT)
 allocation features such as Cache Allocation Technology (CAT) and Memory
 Bandwidth Allocation (MBA), the ACRN hypervisor can be used to limit regular
-VMs which may be over-utilizing common resources such as cache and memory
+VMs that may be over-utilizing common resources such as cache and memory
 bandwidth relative to their priorities so that the performance of other
-higher priorities VMs (such as RTVMs) are not impacted.
+higher priority VMs (such as RTVMs) is not impacted.
 
 Using RDT includes three steps:
 
@@ -22,7 +22,7 @@ Using RDT includes three steps:
 
 Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
 
-* Using a HV debug shell (See `Tuning RDT resources in HV debug shell`_)
+* Using an HV debug shell (See `Tuning RDT resources in HV debug shell`_)
 * Using a VM configuration (See `Configure RDT for VM using VM Configuration`_)
 
 The following sections discuss how to detect, enumerate capabilities, and
@@ -94,7 +94,7 @@ MBA bit encoding:
 ACRN takes the lowest common CLOS max value between the supported
 resources as maximum supported CLOS ID. For example, if max CLOS
 supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
-to 8. ACRN recommends to have consistent capabilities across all RDT
+to 8. ACRN recommends having consistent capabilities across all RDT
 resources by using a common subset CLOS. This is done in order to minimize
 misconfiguration errors.
 
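The HV debug shell route referenced above comes down to programming the CAT/MBA mask MSRs per CLOS. A sketch, assuming the shell's ``wrmsr`` command; 0xC90 is IA32_L3_QOS_MASK_0, and the mask value and pCPU are illustrative:

.. code-block:: none

   ACRN:\>wrmsr -p1 0xc90 0xff    # program the L3 CAT mask for CLOS 0 on pCPU 1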
@@ -149,7 +149,7 @@ Configure RDT for VM using VM Configuration
 platform-specific XML file that helps ACRN identify RDT-supported
 platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
 sub-section of the scenario XML file as in the below example. For
-details on building ACRN with scenario refer to :ref:`build-with-acrn-scenario`.
+details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
 
 .. code-block:: none
    :emphasize-lines: 6
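The example the hunk points at is elided from this diff; a minimal sketch of that ``FEATURES`` fragment, with element names as commonly seen in scenario files of this vintage and illustrative values:

.. code-block:: none

   <FEATURES>
       <RDT>
           <RDT_ENABLED>y</RDT_ENABLED>
           <CDP_ENABLED>n</CDP_ENABLED>
           <CLOS_MASK>0xff</CLOS_MASK>
       </RDT>
   </FEATURES>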
@@ -198,7 +198,7 @@ Configure RDT for VM using VM Configuration
 
 #. Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the
    scenario file. Follow `RDT detection and resource capabilities`_
-   to identify the maximum supported CLOS ID that can be used. ACRN uses the
+   to identify the maximum supported CLOS ID that can be used. ACRN uses
    **the lowest common MAX CLOS** value among all RDT resources to avoid
    resource misconfigurations.
 
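A sketch of that per-vCPU CLOS assignment inside the scenario file's ``VM`` section (one ``vcpu_clos`` entry per vCPU; the IDs are illustrative):

.. code-block:: none

   <clos>
       <vcpu_clos>0</vcpu_clos>
       <vcpu_clos>1</vcpu_clos>
   </clos>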
@@ -56,15 +56,15 @@ the ending point as ``now``.
 Log and trace data collection
 =============================
 
-#. Add timestamps (in TSC) at ``next`` and ``now``.
-#. Capture the log with the above timestamps in the RTVM.
+#. Add time stamps (in TSC) at ``next`` and ``now``.
+#. Capture the log with the above time stamps in the RTVM.
 #. Capture the ``acrntrace`` log in the Service VM at the same time.
 
 Offline analysis
 ================
 
 #. Convert the raw trace data to human readable format.
-#. Merge the logs in the RTVM and the ACRN hypervisor trace based on timestamps (in TSC).
+#. Merge the logs in the RTVM and the ACRN hypervisor trace based on time stamps (in TSC).
 #. Check to see if any ``vmexit`` occurred within the critical sections. The pattern is as follows:
 
 .. figure:: images/vm_exits_log.png
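Capturing the hypervisor trace in the Service VM looks roughly like this; a sketch only, as the exact ``acrntrace`` options should be checked against its reference page:

.. code-block:: bash

   sudo acrntrace -c    # -c assumed here to clear stale buffers before capture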
@@ -158,7 +158,7 @@ the bottleneck of the application.
 
 ``Perf`` is a profiler tool for Linux 2.6+ based systems that abstracts away
 CPU hardware differences in Linux performance measurements and presents a
-simple command line interface. Perf is based on the ``perf_events`` interface
+simple command-line interface. Perf is based on the ``perf_events`` interface
 exported by recent versions of the Linux kernel.
 
 ``PMU tools`` is a collection of tools for profile collection and
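Typical perf invocations for this kind of analysis (the workload binary name is illustrative):

.. code-block:: bash

   perf stat -e cycles,instructions ./rt_workload    # quick counter summary
   perf record -g ./rt_workload && perf report       # sampled profile with call graphs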
@@ -168,7 +168,7 @@ following links for perf usage:
 - https://perf.wiki.kernel.org/index.php/Main_Page
 - https://perf.wiki.kernel.org/index.php/Tutorial
 
-Refer to https://github.com/andikleen/pmu-tools for pmu usage.
+Refer to https://github.com/andikleen/pmu-tools for PMU usage.
 
 Top-down Microarchitecture Analysis Method (TMAM)
 ==================================================
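pmu-tools' ``toplev`` is the usual entry point for the TMAM breakdown discussed next; a sketch, with an illustrative workload name:

.. code-block:: bash

   git clone https://github.com/andikleen/pmu-tools
   cd pmu-tools
   ./toplev.py -l1 ./rt_workload   # level-1 top-down breakdown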
@@ -180,7 +180,7 @@ Intel |reg| 64 and IA-32 `Architectures Optimization Reference Manual
 <http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-optimization-manual.pdf>`_,
 Appendix B.1 for more details on TMAM. Refer to this `technical paper
 <https://fd.io/docs/whitepapers/performance_analysis_sw_data_planes_dec21_2017.pdf>`_
-which adopts TMAM for systematic performance benchmarking and analysis
+that adopts TMAM for systematic performance benchmarking and analysis
 of compute-native Network Function data planes that are executed on
 commercial-off-the-shelf (COTS) servers using available open-source
 measurement tools.
@@ -114,7 +114,7 @@ Tip: Utilize Preempt-RT Linux mechanisms to reduce the access of ICR from the RT
 #. Add ``domain`` to ``isolcpus`` ( ``isolcpus=nohz,domain,1`` ) to the kernel parameters.
 #. Add ``idle=poll`` to the kernel parameters.
 #. Add ``rcu_nocb_poll`` along with ``rcu_nocbs=1`` to the kernel parameters.
-#. Disable the logging service isuch as ``journald`` or ``syslogd`` if possible.
+#. Disable the logging service such as ``journald`` or ``syslogd`` if possible.
 
 The parameters shown above are recommended for the guest Preempt-RT
 Linux. For an UP RTVM, ICR interception is not a problem. But for an SMP
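Combined, the kernel-parameter tips in that list appear on the guest command line roughly as:

.. code-block:: none

   isolcpus=nohz,domain,1 idle=poll rcu_nocbs=1 rcu_nocb_poll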
@@ -171,7 +171,7 @@ Tip: Disable timer migration on Preempt-RT Linux.
    echo 0 > /proc/kernel/timer_migration
 
 Tip: Add ``mce=off`` to RT VM kernel parameters.
-This parameter disables the mce periodic timer and avoids a VM-exit.
+This parameter disables the MCE periodic timer and avoids a VM-exit.
 
 Tip: Disable the Intel processor C-state and P-state of the RTVM.
 Power management of a processor could save power, but it could also impact
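For the C-state/P-state tip, the usual kernel parameters look like the following sketch; the exact set depends on the kernel and platform:

.. code-block:: none

   mce=off processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable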
@@ -103,7 +103,8 @@ This tutorial uses the Ubuntu 18.04 desktop ISO as the base image.
 
 Create a New Virtual Machine
 
-#. Choose **Use ISO image** and click **Browse** - **Browse Local**. Select the ISO which you get from Step 2 above.
+#. Choose **Use ISO image** and click **Browse** - **Browse Local**.
+   Select the ISO that you get from Step 2 above.
 
 #. Choose the **OS type:** Linux, **Version:** Ubuntu 18.04 LTS and then click **Forward**.
 
@@ -26,7 +26,7 @@ Install ACRN
 <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_
 configuration file (from the ``acrn-kernel`` repo).
 
-#. Add the following kernel bootarg to give the Service VM more loop
+#. Add the following kernel boot arg to give the Service VM more loop
    devices. Refer to `Kernel Boot Parameters
    <https://wiki.ubuntu.com/Kernel/KernelBootParameters>`_ documentation::
 
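The boot arg itself is elided from this hunk; raising the loop-device count is commonly done with the loop driver's ``max_loop`` parameter (the value here is illustrative, not taken from the source):

.. code-block:: none

   max_loop=16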
@@ -122,7 +122,7 @@ Set up and launch LXC/LXD
    route-metric: 200
 
 
-7. Log out and restart the ``openstack`` container::
+7. Log off and restart the ``openstack`` container::
 
    $ lxc restart openstack
 
@@ -142,7 +142,7 @@ Set up and launch LXC/LXD
    $ sudo useradd -s /bin/bash -d /opt/stack -m stack
    $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 
-11. Log out and restart the ``openstack`` container::
+11. Log off and restart the ``openstack`` container::
 
     $ lxc restart openstack
 
@@ -170,7 +170,8 @@ Set up ACRN prerequisites inside the container
    $ make
    $ cd misc/acrn-manager/; make
 
-   Install only the user-space components: acrn-dm, acrnctl, and acrnd
+   Install only the user-space components: ``acrn-dm``, ``acrnctl``, and
+   ``acrnd``
 
 3. Download, compile, and install ``iasl``. Refer to XXX.
 
@@ -286,8 +287,8 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
 
 a. Inside the container, use the command ``ip a`` to identify the ``br-ex`` bridge
    interface. ``br-ex`` should have two IPs. One should be visible to
-   the native Ubuntu's ``acrn-br0`` interface (e.g. inet 192.168.1.104/24).
-   The other one is internal to OpenStack (e.g. inet 172.24.4.1/24). The
+   the native Ubuntu's ``acrn-br0`` interface (e.g. iNet 192.168.1.104/24).
+   The other one is internal to OpenStack (e.g. iNet 172.24.4.1/24). The
    latter corresponds to the public network in OpenStack.
 
 b. Set up SNAT to establish a link between ``acrn-br0`` and OpenStack.
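The SNAT rule in step b is not shown in this hunk; a sketch using the subnet and bridge named in step a (the rule form is an assumption, not quoted from the source):

.. code-block:: bash

   sudo iptables -t nat -A POSTROUTING -s 172.24.4.0/24 -o acrn-br0 -j MASQUERADE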
@@ -479,9 +480,9 @@ instance.
    :width: 1200px
    :name: os-08d-security-group
 
-#. Create a public/private key pair used to access the created instance.
+#. Create a public/private keypair used to access the created instance.
    Go to **Project / Compute / Key Pairs** and click on **+Create Key
-   Pair**, give the key pair a name (**acrnKeyPair**) and Key Type
+   Pair**, give the keypair a name (**acrnKeyPair**) and Key Type
    (**SSH Key**) and click on **Create Key Pair**:
 
 .. figure:: images/OpenStack-09a-create-key-pair.png
@@ -489,7 +490,7 @@ instance.
    :width: 1200px
    :name: os-09a-key-pair
 
-   You should save the **private** key pair file safely,
+   You should save the **private** keypair file safely,
    for future use:
 
 .. figure:: images/OpenStack-09c-key-pair-private-key.png
@@ -613,7 +614,7 @@ Hypervisors**:
    :name: os-12d-running
 
 .. note::
-   OpenStack logs to the systemd journal and libvirt logs to
+   OpenStack logs to the ``systemd`` journal and ``libvirt`` logs to
    ``/var/log/libvirt/libvirtd.log``.
 
 Here are some other tasks you can try when the instance is created and
@@ -41,8 +41,8 @@ No Enclave in a Hypervisor
 
 ACRN does not support running an enclave in a hypervisor since the whole
 hypervisor is currently running in VMX root mode, ring 0, and an enclave must
-run in ring 3. ACRN SGX virtualization in provides the capability to (non-SOS)
-VMs.
+run in ring 3. ACRN SGX virtualization provides the capability to
+non-Service VMs.
 
 Enable SGX on Host
 ------------------
@@ -7,7 +7,7 @@ SR-IOV (Single Root Input/Output Virtualization) can isolate PCIe devices
 to improve performance that is similar to bare-metal levels. SR-IOV consists
 of two basic units: PF (Physical Function), which supports SR-IOV PCIe
 extended capability and manages entire physical devices; and VF (Virtual
-Function), a "lightweight" PCIe function which is a passthrough device for
+Function), a "lightweight" PCIe function that is a passthrough device for
 VMs.
 
 For details, refer to Chapter 9 of PCI-SIG's
@@ -58,14 +58,14 @@ SR-IOV capabilities.
 
 - **SR-IOV Status** - VF Migration Status.
 
-- **InitialVFs** - Indicates to the SR-PCIM the number of VFs that are
+- **Initial VFs** - Indicates to the SR-PCIM the number of VFs that are
   initially associated with the PF.
 
-- **TotalVFs** - Indicates the maximum number of VFs that can be
+- **Total VFs** - Indicates the maximum number of VFs that can be
   associated with the PF.
 
-- **NumVFs** - Controls the number of VFs that are visible. *NumVFs* <=
-  *InitialVFs* = *TotalVFs*.
+- **Num VFs** - Controls the number of VFs that are visible. *Num VFs* <=
+  *Initial VFs* = *Total VFs*.
 
 - **Function Link Dependency** - The field used to describe
   dependencies between PFs. VF dependencies are the same as the
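These capability fields can be inspected from the Service VM with ``lspci``; the BDF below is illustrative:

.. code-block:: bash

   sudo lspci -s 6d:00.0 -vvv | grep -A4 SR-IOV   # shows Initial/Total/Number of VFs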
@@ -110,7 +110,7 @@ SR-IOV Architecture in ACRN
 
    SR-IOV Architectural in ACRN
 
-1. A hypervisor detects a SR-IOV capable PCIe device in the physical PCI
+1. A hypervisor detects an SR-IOV capable PCIe device in the physical PCI
    device enumeration phase.
 
 2. The hypervisor intercepts the PF's SR-IOV capability and accesses whether
@@ -162,10 +162,10 @@ SR-IOV Initialization Flow
 
    SR-IOV Initialization Flow
 
-When a SR-IOV capable device is initialized, all access to the
+When an SR-IOV capable device is initialized, all access to the
 configuration space will passthrough to the physical device directly.
 The Service VM can identify all capabilities of the device from the SR-IOV
-extended capability and then create an *sysfs* node for SR-IOV management.
+extended capability and then create a *sysfs* node for SR-IOV management.
 
 SR-IOV VF Enable Flow
 ---------------------
@@ -177,7 +177,7 @@ SR-IOV VF Enable Flow
 
    SR-IOV VF Enable Flow
 
-The application enables n VF devices via a SR-IOV PF device ``sysfs`` node.
+The application enables ``n`` VF devices via an SR-IOV PF device ``sysfs`` node.
 The hypervisor intercepts all SR-IOV capability access and checks the
 ``VF_ENABLE`` state. If ``VF_ENABLE`` is set, the hypervisor creates n
 virtual devices after 100ms so that VF physical devices have enough time to
@@ -241,7 +241,7 @@ only support LaaG (Linux as a Guest).
 #. Input the ``echo n > /sys/class/net/enp109s0f0/device/sriov\_numvfs``
    command in the Service VM to enable n VF devices for the first PF
    device (\ *enp109s0f0)*. The number *n* can't be more than *TotalVFs*
-   which comes from the return value of command
+   coming from the return value of command
    ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``. Here we
    use *n = 2* as an example.
 
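Concretely, for the *n = 2* example in this step (paths taken from the text; the backslash-escaped underscores in the source are reST escapes, not part of the path):

.. code-block:: bash

   cat /sys/class/net/enp109s0f0/device/sriov_totalvfs    # upper bound for n
   echo 2 | sudo tee /sys/class/net/enp109s0f0/device/sriov_numvfs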
@@ -257,7 +257,7 @@ only support LaaG (Linux as a Guest).
 
    82576 SR-IOV VF NIC
 
-#. Passthrough a SR-IOV VF device to guest.
+#. Passthrough an SR-IOV VF device to guest.
 
    a. Unbind the igbvf driver in the Service VM.
 
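Step a amounts to a sysfs unbind; a sketch, with an illustrative VF BDF that would in practice be read from ``lspci``:

.. code-block:: bash

   echo "0000:6d:10.0" | sudo tee /sys/bus/pci/drivers/igbvf/unbind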