From 62d00885651b0b540afc0c5207b1044acd09df31 Mon Sep 17 00:00:00 2001
From: "David B. Kinder"
Date: Thu, 29 Oct 2020 16:16:20 -0700
Subject: [PATCH] doc: fix doc spelling and grammar

Continuing with additional spelling and grammar fixes missed during
regular reviews.

Signed-off-by: David B. Kinder
---
 doc/tutorials/acrn_configuration_tool.rst     |  2 +-
 doc/tutorials/docbuild.rst                    |  6 ++---
 doc/tutorials/enable_ivshmem.rst              |  5 +++--
 doc/tutorials/rdt_configuration.rst           | 12 +++++-----
 doc/tutorials/realtime_performance_tuning.rst | 12 +++++-----
 doc/tutorials/rtvm_performance_tips.rst       |  4 ++--
 doc/tutorials/running_ubun_as_user_vm.rst     |  3 ++-
 doc/tutorials/setup_openstack_libvirt.rst     | 21 +++++++++---------
 doc/tutorials/sgx_virtualization.rst          |  4 ++--
 doc/tutorials/sriov_virtualization.rst        | 22 +++++++++----------
 10 files changed, 47 insertions(+), 44 deletions(-)

diff --git a/doc/tutorials/acrn_configuration_tool.rst b/doc/tutorials/acrn_configuration_tool.rst
index 3f23b1f7e..f0f9be79e 100644
--- a/doc/tutorials/acrn_configuration_tool.rst
+++ b/doc/tutorials/acrn_configuration_tool.rst
@@ -314,7 +314,7 @@ Additional scenario XML elements:
    (SOS_COM_BASE for Service VM); disable by returning INVALID_COM_BASE.
 
 ``irq`` (a child node of ``vuart``):
-   vCOM irq.
+   vCOM IRQ.
 
 ``target_vm_id`` (a child node of ``vuart1``):
    COM2 is used for VM communications. When it is enabled, specify which
diff --git a/doc/tutorials/docbuild.rst b/doc/tutorials/docbuild.rst
index 5ebe10037..1dbf9d315 100644
--- a/doc/tutorials/docbuild.rst
+++ b/doc/tutorials/docbuild.rst
@@ -56,7 +56,7 @@ You'll need git installed to get the working folders set up:
 
    sudo dnf install git
 
-We use the source header files to generate API docs and we use github.io
+We use the source header files to generate API documentation and we use github.io
 for publishing the generated documentation. Here's the recommended folder
 setup for documentation contributions and generation:
 
@@ -88,7 +88,7 @@ repos (though ``https`` clones work too):
 
 #. At a command prompt, create the working folder and clone the acrn-hypervisor
    repository to your local computer (and if you have publishing rights, the
    projectacrn.github.io repo). If you don't have publishing rights
-   you'll still be able to generate the docs locally, but not publish them:
+   you'll still be able to generate the documentation files locally, but not publish them:
 
    .. code-block:: bash
@@ -194,7 +194,7 @@ generation and a ``make html`` again generally cleans this up.
 
 The ``read-the-docs`` theme is installed as part of the
 ``requirements.txt`` list above. Tweaks to the standard
-``read-the-docs`` look and feel are added by using CSS
+``read-the-docs`` appearance are added by using CSS
 and JavaScript customization found in ``doc/static``,
 and theme template overrides found in ``doc/_templates``.
diff --git a/doc/tutorials/enable_ivshmem.rst b/doc/tutorials/enable_ivshmem.rst
index c84c0f4a5..9330749c7 100644
--- a/doc/tutorials/enable_ivshmem.rst
+++ b/doc/tutorials/enable_ivshmem.rst
@@ -7,7 +7,7 @@ You can use inter-VM communication based on the ``ivshmem`` dm-land
 solution or hv-land solution, according to the usage scenario needs.
 (See :ref:`ivshmem-hld` for a high-level description of these solutions.)
 While both solutions can be used at the same time, VMs using different
-solutions can not communicate with each other.
+solutions cannot communicate with each other.
 
 ivshmem dm-land usage
 *********************
@@ -75,7 +75,8 @@ dm-land example
 This example uses dm-land inter-VM communication between two
 Linux-based post-launched VMs (VM1 and VM2).
 
-.. note:: An ``ivshmem`` Windows driver exists and can be found `here `_
+.. note:: An ``ivshmem`` Windows driver exists and can be found
+   `here `_.
 
 1. Add a new virtual PCI device for both VMs: the device type is
    ``ivshmem``, shared memory name is ``dm:/test``, and shared memory
diff --git a/doc/tutorials/rdt_configuration.rst b/doc/tutorials/rdt_configuration.rst
index bfd3976ff..608d1aa1c 100644
--- a/doc/tutorials/rdt_configuration.rst
+++ b/doc/tutorials/rdt_configuration.rst
@@ -6,9 +6,9 @@ Enable RDT Configuration
 On x86 platforms that support Intel Resource Director Technology (RDT)
 allocation features such as Cache Allocation Technology (CAT) and Memory
 Bandwidth Allocation (MBA), the ACRN hypervisor can be used to limit regular
-VMs which may be over-utilizing common resources such as cache and memory
+VMs that may be over-utilizing common resources such as cache and memory
 bandwidth relative to their priorities so that the performance of other
-higher priorities VMs (such as RTVMs) are not impacted.
+higher priority VMs (such as RTVMs) is not impacted.
 
 Using RDT includes three steps:
 
@@ -22,7 +22,7 @@ Using RDT includes three steps:
 
 Steps #2 and #3 configure RDT resources for a VM and can be done in two ways:
 
-* Using a HV debug shell (See `Tuning RDT resources in HV debug shell`_)
+* Using an HV debug shell (See `Tuning RDT resources in HV debug shell`_)
 * Using a VM configuration (See `Configure RDT for VM using VM Configuration`_)
 
 The following sections discuss how to detect, enumerate capabilities, and
@@ -94,7 +94,7 @@ MBA bit encoding:
    ACRN takes the lowest common CLOS max value between the supported
    resources as maximum supported CLOS ID. For example, if max CLOS
    supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
-   to 8. ACRN recommends to have consistent capabilities across all RDT
+   to 8. ACRN recommends having consistent capabilities across all RDT
    resources by using a common subset CLOS. This is done in order to minimize
    misconfiguration errors.
 
@@ -149,7 +149,7 @@ Configure RDT for VM using VM Configuration
    platform-specific XML file that helps ACRN identify RDT-supported
    platforms. RDT on ACRN is enabled by configuring the ``FEATURES``
    sub-section of the scenario XML file as in the below example. For
-   details on building ACRN with scenario refer to :ref:`build-with-acrn-scenario`.
+   details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`.
 
    .. code-block:: none
      :emphasize-lines: 6
@@ -198,7 +198,7 @@ Configure RDT for VM using VM Configuration
 
 #. Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the
    scenario file. Follow `RDT detection and resource capabilities`_
-   to identify the maximum supported CLOS ID that can be used. ACRN uses the
+   to identify the maximum supported CLOS ID that can be used. ACRN uses **the
    lowest common MAX CLOS** value among all RDT resources to avoid
    resource misconfigurations.
 
diff --git a/doc/tutorials/realtime_performance_tuning.rst b/doc/tutorials/realtime_performance_tuning.rst
index 33947726a..95e0708dd 100644
--- a/doc/tutorials/realtime_performance_tuning.rst
+++ b/doc/tutorials/realtime_performance_tuning.rst
@@ -56,15 +56,15 @@ the ending point as ``now``.
 Log and trace data collection
 =============================
 
-#. Add timestamps (in TSC) at ``next`` and ``now``.
-#. Capture the log with the above timestamps in the RTVM.
+#. Add time stamps (in TSC) at ``next`` and ``now``.
+#. Capture the log with the above time stamps in the RTVM.
 #. Capture the ``acrntrace`` log in the Service VM at the same time.
 
 Offline analysis
 ================
 
 #. Convert the raw trace data to human readable format.
-#. Merge the logs in the RTVM and the ACRN hypervisor trace based on timestamps (in TSC).
+#. Merge the logs in the RTVM and the ACRN hypervisor trace based on time stamps (in TSC).
 #. Check to see if any ``vmexit`` occurred within the critical sections. The pattern is as follows:
 
 .. figure:: images/vm_exits_log.png
@@ -158,7 +158,7 @@ the bottleneck of the application.
 
 ``Perf`` is a profiler tool for Linux 2.6+ based systems that abstracts away
 CPU hardware differences in Linux performance measurements and presents a
-simple command line interface. Perf is based on the ``perf_events`` interface
+simple command-line interface. Perf is based on the ``perf_events`` interface
 exported by recent versions of the Linux kernel.
 
 ``PMU tools`` is a collection of tools for profile collection and
@@ -168,7 +168,7 @@ following links for perf usage:
 - https://perf.wiki.kernel.org/index.php/Main_Page
 - https://perf.wiki.kernel.org/index.php/Tutorial
 
-Refer to https://github.com/andikleen/pmu-tools for pmu usage.
+Refer to https://github.com/andikleen/pmu-tools for PMU usage.
 
 Top-down Microarchitecture Analysis Method (TMAM)
 ==================================================
@@ -180,7 +180,7 @@ Intel |reg| 64 and IA-32
 `Architectures Optimization Reference Manual `_,
 Appendix B.1 for more details on TMAM. Refer to this
 `technical paper `_
-which adopts TMAM for systematic performance benchmarking and analysis
+that adopts TMAM for systematic performance benchmarking and analysis
 of compute-native Network Function data planes that are executed on
 commercial-off-the-shelf (COTS) servers using available open-source
 measurement tools.
diff --git a/doc/tutorials/rtvm_performance_tips.rst b/doc/tutorials/rtvm_performance_tips.rst
index 4fdf03186..f9080af69 100644
--- a/doc/tutorials/rtvm_performance_tips.rst
+++ b/doc/tutorials/rtvm_performance_tips.rst
@@ -114,7 +114,7 @@ Tip: Utilize Preempt-RT Linux mechanisms to reduce the access of ICR from the RT
    #. Add ``domain`` to ``isolcpus`` ( ``isolcpus=nohz,domain,1`` ) to the kernel parameters.
    #. Add ``idle=poll`` to the kernel parameters.
    #. Add ``rcu_nocb_poll`` along with ``rcu_nocbs=1`` to the kernel parameters.
-   #. Disable the logging service isuch as ``journald`` or ``syslogd`` if possible.
+   #. Disable the logging service such as ``journald`` or ``syslogd`` if possible.
 
    The parameters shown above are recommended for the guest Preempt-RT
    Linux. For an UP RTVM, ICR interception is not a problem. But for an SMP
@@ -171,7 +171,7 @@ Tip: Disable timer migration on Preempt-RT Linux.
      echo 0 > /proc/kernel/timer_migration
 
 Tip: Add ``mce=off`` to RT VM kernel parameters.
-   This parameter disables the mce periodic timer and avoids a VM-exit.
+   This parameter disables the MCE periodic timer and avoids a VM-exit.
 
 Tip: Disable the Intel processor C-state and P-state of the RTVM.
    Power management of a processor could save power, but it could also impact
diff --git a/doc/tutorials/running_ubun_as_user_vm.rst b/doc/tutorials/running_ubun_as_user_vm.rst
index 9fa44d38d..317778762 100644
--- a/doc/tutorials/running_ubun_as_user_vm.rst
+++ b/doc/tutorials/running_ubun_as_user_vm.rst
@@ -103,7 +103,8 @@ This tutorial uses the Ubuntu 18.04 desktop ISO as the base image.
 
       Create a New Virtual Machine
 
-   #. Choose **Use ISO image** and click **Browse** - **Browse Local**. Select the ISO which you get from Step 2 above.
+   #. Choose **Use ISO image** and click **Browse** - **Browse Local**.
+      Select the ISO that you get from Step 2 above.
 
    #. Choose the **OS type:** Linux, **Version:** Ubuntu 18.04 LTS and
       then click **Forward**.
diff --git a/doc/tutorials/setup_openstack_libvirt.rst b/doc/tutorials/setup_openstack_libvirt.rst
index 10f72571b..e4622b852 100644
--- a/doc/tutorials/setup_openstack_libvirt.rst
+++ b/doc/tutorials/setup_openstack_libvirt.rst
@@ -26,7 +26,7 @@ Install ACRN
 `_ configuration file (from the ``acrn-kernel`` repo).
 
-#. Add the following kernel bootarg to give the Service VM more loop
+#. Add the following kernel boot arg to give the Service VM more loop
    devices. Refer to `Kernel Boot Parameters
    `_ documentation::
 
@@ -122,7 +122,7 @@ Set up and launch LXC/LXD
 
          route-metric: 200
 
-7. Log out and restart the ``openstack`` container::
+7. Log off and restart the ``openstack`` container::
 
       $ lxc restart openstack
 
@@ -142,7 +142,7 @@ Set up and launch LXC/LXD
       $ sudo useradd -s /bin/bash -d /opt/stack -m stack
       $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
 
-11. Log out and restart the ``openstack`` container::
+11. Log off and restart the ``openstack`` container::
 
       $ lxc restart openstack
 
@@ -170,7 +170,8 @@ Set up ACRN prerequisites inside the container
       $ make
      $ cd misc/acrn-manager/; make
 
-   Install only the user-space components: acrn-dm, acrnctl, and acrnd
+   Install only the user-space components: ``acrn-dm``, ``acrnctl``, and
+   ``acrnd``
 
 3. Download, compile, and install ``iasl``. Refer to XXX.
@@ -286,8 +287,8 @@ Use DevStack to install OpenStack.
 Refer to the `DevStack instructions
diff --git a/doc/tutorials/sriov_virtualization.rst b/doc/tutorials/sriov_virtualization.rst
--- a/doc/tutorials/sriov_virtualization.rst
+++ b/doc/tutorials/sriov_virtualization.rst
    /sys/class/net/enp109s0f0/device/sriov\_numvfs`` command in the Service
    VM to enable n VF devices for the first PF device (\ *enp109s0f0)*. The
    number *n* can't be more than *TotalVFs*
-   which comes from the return value of command
+   coming from the return value of command
    ``cat /sys/class/net/enp109s0f0/device/sriov\_totalvfs``.
    Here we use *n = 2* as an example.
 
@@ -257,7 +257,7 @@ only support LaaG (Linux as a Guest).
 
       82576 SR-IOV VF NIC
 
-#. Passthrough a SR-IOV VF device to guest.
+#. Passthrough an SR-IOV VF device to guest.
 
    a. Unbind the igbvf driver in the Service VM.
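
Reviewer note: the SR-IOV hunks above only reword the tutorial prose, but the
workflow that prose describes can be exercised directly from a Service VM
shell. The following is a minimal sketch, not part of the patch itself; it
assumes the tutorial's example PF interface name ``enp109s0f0`` (yours will
differ) and the tutorial's example value *n = 2*::

   # Query how many VFs this PF supports (TotalVFs); n must not exceed this.
   cat /sys/class/net/enp109s0f0/device/sriov_totalvfs

   # Enable two VF devices on the PF (run as root in the Service VM).
   echo 2 > /sys/class/net/enp109s0f0/device/sriov_numvfs

   # Verify that the VFs enumerated as new PCI devices.
   lspci | grep "Virtual Function"

``sriov_totalvfs`` and ``sriov_numvfs`` are the standard Linux SR-IOV sysfs
attributes; writing ``0`` to ``sriov_numvfs`` disables the VFs again.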