From 72a9b7bae366a0961bf7745281019908eaa07c81 Mon Sep 17 00:00:00 2001 From: "Reyes, Amy" Date: Thu, 24 Feb 2022 16:56:37 -0800 Subject: [PATCH] doc: Style cleanup in usercrash, trusty, vuart docs - Minor style changes per Acrolinx recommendations and for consistency Signed-off-by: Reyes, Amy --- doc/developer-guides/hld/vuart-virt-hld.rst | 46 +++++----- doc/developer-guides/trusty.rst | 18 ++-- doc/tutorials/setup_openstack_libvirt.rst | 88 +++++++++---------- .../acrn_crashlog/usercrash/README.rst | 43 ++++----- 4 files changed, 97 insertions(+), 98 deletions(-) diff --git a/doc/developer-guides/hld/vuart-virt-hld.rst b/doc/developer-guides/hld/vuart-virt-hld.rst index 15ff69ac0..c86bd7cdd 100644 --- a/doc/developer-guides/hld/vuart-virt-hld.rst +++ b/doc/developer-guides/hld/vuart-virt-hld.rst @@ -17,7 +17,7 @@ port base and IRQ. UART virtualization architecture -Each vUART has two FIFOs: 8192 bytes Tx FIFO and 256 bytes Rx FIFO. +Each vUART has two FIFOs: 8192 bytes TX FIFO and 256 bytes RX FIFO. Currently, we only provide 4 ports for use. - COM1 (port base: 0x3F8, irq: 4) @@ -34,14 +34,14 @@ Console vUART ************* A vUART can be used as a console port, and it can be activated by -a ``vm_console `` command in the hypervisor console. From -:numref:`console-uart-arch`, there is only one physical UART, but four -console vUARTs (green color blocks). A hypervisor console is implemented -above the physical UART, and it works in polling mode. There is a timer -in the hv console. The timer handler dispatches the input from physical UART -to the vUART or the hypervisor shell process and gets data from vUART's -Tx FIFO and sends it to the physical UART. The data in vUART's FIFOs will be -overwritten when it is not taken out in time. +a ``vm_console `` command in the hypervisor console. +:numref:`console-uart-arch` shows only one physical UART, but four console +vUARTs (green color blocks). A hypervisor console is implemented above the +physical UART, and it works in polling mode. The hypervisor console has a +timer. The timer handler sends input from the physical UART to the +vUART or the hypervisor shell process. The timer handler also gets data from +the vUART's TX FIFO and sends it to the physical UART. The data in the vUART's +FIFOs is overwritten if it is not taken out in time. .. figure:: images/uart-virt-hld-2.png :align: center @@ -66,7 +66,7 @@ Operations in VM0 - VM traps to hypervisor, and the vUART PIO handler is called. -- Puts the data to its target vUART's Rx FIFO. +- Puts the data to its target vUART's RX FIFO. - Injects a Data Ready interrupt to VM1. @@ -80,9 +80,9 @@ Operations in VM1 - Reads LSR register, finds a Data Ready interrupt. -- Reads data from Rx FIFO. +- Reads data from RX FIFO. -- If Rx FIFO is not full, injects THRE interrupt to VM0. +- If RX FIFO is not full, injects THRE interrupt to VM0. .. figure:: images/uart-virt-hld-3.png :align: center @@ -120,7 +120,7 @@ Usage } The kernel bootargs ``console=ttySx`` should be the same with - vuart[0]; otherwise, the kernel console log can not be captured by + vuart[0]; otherwise, the kernel console log cannot be captured by the hypervisor. Then, after bringing up the system, you can switch the console to the target VM by: @@ -131,8 +131,8 @@ Usage - For communication vUART - To enable the communication port, you should configure vuart[1] in - the two VMs which want to communicate. The port_base and IRQ should + To enable the communication port, configure vuart[1] in + the two VMs that need to communicate. 
The port_base and IRQ should not repeat with the vuart[0] in the same VM. t_vuart.vm_id is the target VM's vm_id, start from 0 (0 means VM0). t_vuart.vuart_id is the target vUART index in the target VM, start from 1 (1 means vuart[1]). @@ -159,13 +159,13 @@ Usage .t_vuart.vuart_id = 1U, }, -.. note:: The device mode also has a virtual UART, and also uses 0x3F8 +.. note:: The Device Model also has a virtual UART and uses 0x3F8 and 0x2F8 as port base. If you add ``-s , lpc`` in the launch - script, the device model will create COM0 and COM1 for the post - launched VM. It will also add the port info to the ACPI table. This is - useful for Windows and vxworks as they probe the driver according to the ACPI - table. + script, the Device Model will create COM0 and COM1 for the post-launched VM. + It will also add the port information to the ACPI table. This configuration + is useful for Windows and VxWorks as they probe the driver according to the + ACPI table. - If the user enables both the device model UART and the hypervisor vUART at the - same port address, access to the port address will be responded to - by the hypervisor vUART directly, and will not pass to the device model. + If you enable the Device Model UART and the hypervisor vUART at the + same port address, access to the port address will be responded to by the + hypervisor vUART directly, and will not pass to the Device Model. diff --git a/doc/developer-guides/trusty.rst b/doc/developer-guides/trusty.rst index b17d5bb7f..33440179a 100644 --- a/doc/developer-guides/trusty.rst +++ b/doc/developer-guides/trusty.rst @@ -21,10 +21,9 @@ Trusty consists of: communication with trusted applications executed within the Trusty OS using the kernel drivers -LK (`Little Kernel`_) is a tiny operating system suited for small embedded -devices, bootloaders, and other environments where OS primitives such as -threads, mutexes, and timers are needed, but there's a desire to keep things -small and lightweight. LK has been chosen as the Trusty OS kernel. +LK (`Little Kernel`_) is a tiny operating system for small embedded +devices, bootloaders, and other environments that need OS primitives such as +threads, mutexes, and timers. LK has been chosen as the Trusty OS kernel. Trusty Architecture ******************* @@ -45,7 +44,7 @@ Trusty Architecture Trusty Specific Hypercalls ************************** -There are a few :ref:`hypercall_apis` that are related to Trusty. +The following :ref:`hypercall_apis` are related to Trusty. .. doxygengroup:: trusty_hypercall :project: Project ACRN @@ -96,7 +95,7 @@ EPT Hierarchy ************* As per the Trusty design, Trusty can access the Normal World's memory, but the -Normal World cannot access the Secure World's memory. Hence it means the Secure +Normal World cannot access the Secure World's memory. The Secure World EPTP page table hierarchy must contain the Normal World GPA address space, while the Trusty world's GPA address space must be removed from the Normal World EPTP page table hierarchy. @@ -113,10 +112,9 @@ PD and PT for high memory (>= 511 GB) are valid for the Trusty World's EPT only. Benefit ======= -This design will benefit the EPT changes of the Normal World. There are -requirements to modify the Normal World's EPT during runtime such as increasing -memory and changing attributes. If such behavior happens, only PD and PT -for the Normal World need to be updated. +The Normal World's EPT can be modified during runtime. Examples include +increasing memory and changing attributes. 
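As a point of reference for these shared tables, one EPT PML4 entry spans 512 GB
and one PDPT entry spans 1 GB, so a GPA at the 511 GB boundary lands in PML4
slot 0, PDPT slot 511; the Trusty-only PD and PT hang off that last PDPTE while
everything below it is shared by both worlds. The short calculation below is
purely illustrative and is not taken from the hypervisor sources.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Illustration only: page-table indices of the 511 GB boundary for a
    * 4-level EPT (9 index bits per level, 1 GB per PDPT entry).
    */
   int main(void)
   {
           uint64_t gpa = 511ULL << 30;                    /* 511 GB */
           unsigned int pml4_index = (gpa >> 39) & 0x1FFU; /* bits 47:39 */
           unsigned int pdpt_index = (gpa >> 30) & 0x1FFU; /* bits 38:30 */

           printf("GPA 511 GB -> PML4 index %u, PDPT index %u\n",
                  pml4_index, pdpt_index);                 /* prints 0, 511 */
           return 0;
   }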
If such behavior happens, only PD and +PT for the Normal World need to be updated. .. figure:: images/ept-hierarchy.png :align: center diff --git a/doc/tutorials/setup_openstack_libvirt.rst b/doc/tutorials/setup_openstack_libvirt.rst index c8a8ece4e..5841d9c63 100644 --- a/doc/tutorials/setup_openstack_libvirt.rst +++ b/doc/tutorials/setup_openstack_libvirt.rst @@ -7,11 +7,11 @@ Introduction ************ This document provides instructions for setting up libvirt to configure -ACRN. We use OpenStack to use libvirt and we'll install OpenStack in a container -to avoid crashing your system and to take advantage of easy -snapshots/restores so that you can quickly roll back your system in the +ACRN. We use OpenStack to use libvirt. We'll show how to install OpenStack in a +container to avoid crashing your system and to take advantage of easy +snapshots and restores so that you can quickly roll back your system in the event of setup failure. (You should only install OpenStack directly on Ubuntu if -you have a dedicated testing machine). This setup utilizes LXC/LXD on +you have a dedicated testing machine.) This setup utilizes LXC/LXD on Ubuntu 20.04. Install ACRN @@ -81,8 +81,8 @@ Set Up and Launch LXC/LXD .. note:: Make sure to respect the indentation as to keep these options within - the **config** section. It is a good idea after saving your changes - to check that they have been correctly recorded (``lxc config show openstack``). + the **config** section. After saving your changes, + check that they have been correctly recorded (``lxc config show openstack``). b. Run the following commands to configure ``openstack``:: @@ -102,7 +102,7 @@ Set Up and Launch LXC/LXD 6. Let ``systemd`` manage **eth1** in the container, with **eth0** as the default route: - Edit ``/etc/netplan/50-cloud-init.yaml`` + Edit ``/etc/netplan/50-cloud-init.yaml`` as follows: .. code-block:: none @@ -132,7 +132,7 @@ Set Up and Launch LXC/LXD no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16 -10. Add a new user named **stack** and set permissions +10. Add a new user named **stack** and set permissions: .. code-block:: none @@ -203,7 +203,7 @@ Set Up Libvirt $ make $ sudo make install - .. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial. It is + .. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial and is the default branch. 4. Edit and enable these options in ``/etc/libvirt/libvirtd.conf``:: @@ -293,8 +293,8 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions ** next to the Allocated **UbuntuCloud** flavor and see + Click **>** next to the Allocated **UbuntuCloud** flavor and see details about your choice: .. figure:: images/OpenStack-10d-flavor-selected.png @@ -533,7 +533,7 @@ instance. :width: 900px :name: os-10d-launch - Click on the **Networks** tab, and select the internal **shared** + Click the **Networks** tab, and select the internal **shared** network from the "Available" list: .. figure:: images/OpenStack-10e-select-network.png @@ -541,7 +541,7 @@ instance. :width: 1200px :name: os-10e-launch - Click on the **Security Groups** tab and select + Click the **Security Groups** tab and select the **acrnSecuGroup** security group you created earlier. Remove the **default** security group if it's in the "Allocated" list: @@ -550,8 +550,8 @@ instance. 
:width: 1200px :name: os-10d-security - Click on the **Key Pair** tab and verify the **acrnKeyPair** you - created earlier is in the "Allocated" list, and click on **Launch + Click the **Key Pair** tab and verify the **acrnKeyPair** you + created earlier is in the "Allocated" list, and click **Launch Instance**: .. figure:: images/OpenStack-10g-show-keypair-launch.png @@ -561,7 +561,7 @@ instance. It will take a few minutes to complete launching the instance. -#. Click on the **Project / Compute / Instances** tab to monitor +#. Click the **Project / Compute / Instances** tab to monitor progress. When the instance status is "Active" and power state is "Running", associate a floating IP to the instance so you can access it: @@ -571,7 +571,7 @@ instance. :width: 1200px :name: os-11-running - On the **Manage Floating IP Associations** screen, click on the **+** + On the **Manage Floating IP Associations** screen, click the **+** to add an association: .. figure:: images/OpenStack-11a-manage-floating-ip.png @@ -579,7 +579,7 @@ instance. :width: 700px :name: os-11a-running - Select **public** pool, and click on **Allocate IP**: + Select **public** pool, and click **Allocate IP**: .. figure:: images/OpenStack-11b-allocate-floating-ip.png :align: center @@ -597,8 +597,8 @@ instance. Final Steps *********** -With that, the OpenStack instance is running and connected to the -network. You can graphically see this by returning to the **Project / +The OpenStack instance is now running and connected to the +network. You can confirm by returning to the **Project / Network / Network Topology** view: .. figure:: images/OpenStack-12b-running-topology-instance.png @@ -606,7 +606,7 @@ Network / Network Topology** view: :width: 1200px :name: os-12b-running -You can also see a hypervisor summary by clicking on **Admin / Compute / +You can also see a hypervisor summary by clicking **Admin / Compute / Hypervisors**: .. figure:: images/OpenStack-12d-compute-hypervisor.png diff --git a/misc/debug_tools/acrn_crashlog/usercrash/README.rst b/misc/debug_tools/acrn_crashlog/usercrash/README.rst index b42fdbc92..053fc5733 100644 --- a/misc/debug_tools/acrn_crashlog/usercrash/README.rst +++ b/misc/debug_tools/acrn_crashlog/usercrash/README.rst @@ -6,22 +6,22 @@ Usercrash Description *********** -The ``usercrash`` tool gets the crash info for the crashing process in -userspace. The collected information is saved as usercrash_xx under +The ``usercrash`` tool gets the crash information for the crashing process in +user space. The collected information is saved as usercrash_xx under ``/var/log/usercrashes/``. Design ****** -``usercrash`` is designed using a Client/Server model. The server is +``usercrash`` is designed using a client/server model. The server is autostarted at boot. The client is configured in ``core_pattern``, which -will be triggered when a crash occurs in userspace. The client then +will be triggered when a crash occurs in user space. The client then sends the crash event to the server. The server checks the files under -``/var/log/usercrashes/`` and creates a new file usercrash_xx (xx means +``/var/log/usercrashes/`` and creates a file usercrash_xx (xx means the index of the crash file). Then it sends the file descriptor (fd) to -the client. The client is responsible for collecting crash information -and saving it in the crashlog file. After the saving work is done, the -client notifies server and the server will clean up. +the client. 
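The ``core_pattern`` hook named earlier in this paragraph is the kernel's
standard pipe-handler mechanism: a leading ``|`` makes the kernel execute the
named program and feed it the core dump, and specifiers such as ``%p`` (PID),
``%e`` (executable name), and ``%s`` (signal number) are passed as arguments.
The line below only sketches that mechanism; the actual client binary name,
path, and argument order are defined by the installed ``usercrash`` build.

.. code-block:: none

   # Hypothetical registration of the usercrash client as the core handler (run as root)
   echo '|/usr/bin/usercrash_c %p %e %s' > /proc/sys/kernel/core_pattern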
The client collects the crash information +and saves it in the crash file. After the saving work is done, the +client notifies the server. The server cleans up. The workflow diagram: @@ -60,9 +60,9 @@ Usage client and default app. Once a crash occurs in user space, the client and default app will be invoked separately. -- The ``debugger`` is an independent tool to dump the debug information of the - specific process, including backtrace, stack, opened files, registers value, - memory content around registers, and etc. +- The ``debugger`` is an independent tool to dump the debugging information of the + specific process, including backtrace, stack, opened files, register values, + and memory content around registers. .. code-block:: none @@ -75,14 +75,15 @@ Usage Source Code *********** -- client.c : This file is the implementation for client of ``usercrash``, which - is responsible for delivering the ``usercrash`` event to the server, and - collecting crash information and saving it to the crashfile. +- client.c : This file is the implementation for the client of ``usercrash``. + The client is responsible for delivering the ``usercrash`` event to the + server, and collecting crash information and saving it to the crash file. - crash_dump.c : This file is the implementation for dumping the crash - information, including backtrace stack, opened files, registers value, memory - content around registers, and etc. -- debugger.c : This file is to implement a tool, which runs in command line to - dump the process information list above. -- protocol.c : This file is the socket protocol implement file. -- server.c : This file is the implement file for server of ``usercrash``, which - is responsible for creating the crashfile and handle the events from client. + information, including backtrace stack, opened files, register values, and + memory content around registers. +- debugger.c : This file implements a tool, which runs in command line to + dump the process information listed above. +- protocol.c : This file is the socket protocol implementation file. +- server.c : This file is the implementation file for the server of + ``usercrash``. The server is responsible for creating the crash file and + handling the events from the client.
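The fd handoff described in the Design section (the server opens ``usercrash_xx``
and passes the descriptor back to the client over their socket) is the kind of
job normally done with ``SCM_RIGHTS`` ancillary data on a Unix-domain socket.
The sketch below shows that general technique only; it is not the project's
``protocol.c``, and the function name and reduced error handling are for
illustration.

.. code-block:: c

   #include <string.h>
   #include <sys/socket.h>
   #include <sys/uio.h>

   /* Send one open file descriptor across a connected Unix-domain socket
    * using SCM_RIGHTS ancillary data. General-purpose sketch; errors are
    * reduced to a -1 return for brevity.
    */
   static int send_fd(int sock, int fd_to_send)
   {
           char data = 'F';   /* ancillary data must ride on >= 1 byte of payload */
           struct iovec iov = { .iov_base = &data, .iov_len = sizeof(data) };

           union {
                   struct cmsghdr align;   /* ensures proper alignment */
                   char buf[CMSG_SPACE(sizeof(int))];
           } u;
           memset(&u, 0, sizeof(u));

           struct msghdr msg = {
                   .msg_iov        = &iov,
                   .msg_iovlen     = 1,
                   .msg_control    = u.buf,
                   .msg_controllen = sizeof(u.buf),
           };

           struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
           cmsg->cmsg_level = SOL_SOCKET;
           cmsg->cmsg_type  = SCM_RIGHTS;
           cmsg->cmsg_len   = CMSG_LEN(sizeof(int));
           memcpy(CMSG_DATA(cmsg), &fd_to_send, sizeof(int));

           return (sendmsg(sock, &msg, 0) < 0) ? -1 : 0;
   }

On the receiving side, the client would call ``recvmsg()`` with a control buffer
of the same ``CMSG_SPACE(sizeof(int))`` size and read the descriptor back out of
``CMSG_DATA()``.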