doc: Style cleanup in usercrash, trusty, vuart docs

- Minor style changes per Acrolinx recommendations and for consistency

Signed-off-by: Reyes, Amy <amy.reyes@intel.com>
Author: Reyes, Amy, 2022-02-24 16:56:37 -08:00 (committed by David Kinder)
Parent: 85fe6d7d1a
Commit: 72a9b7bae3
4 changed files with 97 additions and 98 deletions

File 1 of 4: vUART documentation

@@ -17,7 +17,7 @@ port base and IRQ.
 UART virtualization architecture
-Each vUART has two FIFOs: 8192 bytes Tx FIFO and 256 bytes Rx FIFO.
+Each vUART has two FIFOs: 8192 bytes TX FIFO and 256 bytes RX FIFO.
 Currently, we only provide 4 ports for use.
 - COM1 (port base: 0x3F8, irq: 4)
@@ -34,14 +34,14 @@ Console vUART
 *************
 A vUART can be used as a console port, and it can be activated by
-a ``vm_console <vm_id>`` command in the hypervisor console. From
-:numref:`console-uart-arch`, there is only one physical UART, but four
-console vUARTs (green color blocks). A hypervisor console is implemented
-above the physical UART, and it works in polling mode. There is a timer
-in the hv console. The timer handler dispatches the input from physical UART
-to the vUART or the hypervisor shell process and gets data from vUART's
-Tx FIFO and sends it to the physical UART. The data in vUART's FIFOs will be
-overwritten when it is not taken out in time.
+a ``vm_console <vm_id>`` command in the hypervisor console.
+:numref:`console-uart-arch` shows only one physical UART, but four console
+vUARTs (green color blocks). A hypervisor console is implemented above the
+physical UART, and it works in polling mode. The hypervisor console has a
+timer. The timer handler sends input from the physical UART to the
+vUART or the hypervisor shell process. The timer handler also gets data from
+the vUART's TX FIFO and sends it to the physical UART. The data in the vUART's
+FIFOs is overwritten if it is not taken out in time.
 .. figure:: images/uart-virt-hld-2.png
    :align: center
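The polling flow just described can be sketched in C. Everything here is illustrative; all helper names (``phys_uart_getc``, ``vuart_rx_fifo_put``, and the rest) are hypothetical stand-ins, not the hypervisor's actual API:

.. code-block:: c

   /* Hypothetical helpers; assumed to exist for this sketch only. */
   extern int  phys_uart_getc(void);        /* returns -1 when no data */
   extern void phys_uart_putc(int ch);
   extern int  vuart_tx_fifo_get(void);     /* returns -1 when empty */
   extern void vuart_rx_fifo_put(int ch);
   extern void hv_shell_input(int ch);
   extern int  console_attached_to_vuart;   /* set by vm_console */

   void console_timer_handler(void)
   {
       int ch;

       /* Input: physical UART -> active vUART or the hypervisor shell */
       while ((ch = phys_uart_getc()) >= 0) {
           if (console_attached_to_vuart)
               vuart_rx_fifo_put(ch);
           else
               hv_shell_input(ch);
       }

       /* Output: drain the vUART TX FIFO into the physical UART */
       while ((ch = vuart_tx_fifo_get()) >= 0)
           phys_uart_putc(ch);
   }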
@@ -66,7 +66,7 @@ Operations in VM0
 - VM traps to hypervisor, and the vUART PIO handler is called.
-- Puts the data to its target vUART's Rx FIFO.
+- Puts the data to its target vUART's RX FIFO.
 - Injects a Data Ready interrupt to VM1.
@@ -80,9 +80,9 @@ Operations in VM1
 - Reads LSR register, finds a Data Ready interrupt.
-- Reads data from Rx FIFO.
-- If Rx FIFO is not full, injects THRE interrupt to VM0.
+- Reads data from RX FIFO.
+- If RX FIFO is not full, injects THRE interrupt to VM0.
 .. figure:: images/uart-virt-hld-3.png
    :align: center
@@ -120,7 +120,7 @@ Usage
   }
 The kernel bootargs ``console=ttySx`` should be the same with
-vuart[0]; otherwise, the kernel console log can not be captured by
+vuart[0]; otherwise, the kernel console log cannot be captured by the
 hypervisor. Then, after bringing up the system, you can switch the console
 to the target VM by:
@@ -131,8 +131,8 @@ Usage
 - For communication vUART
-  To enable the communication port, you should configure vuart[1] in
-  the two VMs which want to communicate. The port_base and IRQ should
+  To enable the communication port, configure vuart[1] in
+  the two VMs that need to communicate. The port_base and IRQ should
   not repeat with the vuart[0] in the same VM. t_vuart.vm_id is the
   target VM's vm_id, start from 0 (0 means VM0). t_vuart.vuart_id is the
   target vUART index in the target VM, start from 1 (1 means vuart[1]).
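To make the cross-wiring concrete, here is a hedged sketch of paired ``vuart[1]`` entries. The field names follow the configuration snippet in the next hunk; the enclosing structure, the ``VUART_LEGACY_PIO`` enumerator, and the port/IRQ values are assumptions for illustration:

.. code-block:: c

   /* Sketch only: VM0's vuart[1] targets VM1's vuart[1]. */
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,    /* assumed enumerator */
           .addr.port_base = 0x9000U,   /* must differ from vuart[0] */
           .irq = 6U,
           .t_vuart.vm_id = 1U,         /* peer is VM1 */
           .t_vuart.vuart_id = 1U,      /* peer's vuart[1] */
   },

   /* And in VM1's configuration, mirror it back to VM0. */
   .vuart[1] = {
           .type = VUART_LEGACY_PIO,
           .addr.port_base = 0x9000U,
           .irq = 6U,
           .t_vuart.vm_id = 0U,         /* peer is VM0 */
           .t_vuart.vuart_id = 1U,
   },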
@@ -159,13 +159,13 @@ Usage
         .t_vuart.vuart_id = 1U,
       },
-.. note:: The device mode also has a virtual UART, and also uses 0x3F8
+.. note:: The Device Model also has a virtual UART and uses 0x3F8
    and 0x2F8 as port base. If you add ``-s <slot>, lpc`` in the launch
-   script, the device model will create COM0 and COM1 for the post
-   launched VM. It will also add the port info to the ACPI table. This is
-   useful for Windows and vxworks as they probe the driver according to the ACPI
-   table.
-   If the user enables both the device model UART and the hypervisor vUART at the
-   same port address, access to the port address will be responded to
-   by the hypervisor vUART directly, and will not pass to the device model.
+   script, the Device Model will create COM0 and COM1 for the post-launched VM.
+   It will also add the port information to the ACPI table. This configuration
+   is useful for Windows and VxWorks as they probe the driver according to the
+   ACPI table.
+   If you enable the Device Model UART and the hypervisor vUART at the
+   same port address, access to the port address will be responded to by the
+   hypervisor vUART directly, and will not pass to the Device Model.

File 2 of 4: Trusty documentation

@@ -21,10 +21,9 @@ Trusty consists of:
   communication with trusted applications executed within the Trusty OS using
   the kernel drivers
-LK (`Little Kernel`_) is a tiny operating system suited for small embedded
-devices, bootloaders, and other environments where OS primitives such as
-threads, mutexes, and timers are needed, but there's a desire to keep things
-small and lightweight. LK has been chosen as the Trusty OS kernel.
+LK (`Little Kernel`_) is a tiny operating system for small embedded
+devices, bootloaders, and other environments that need OS primitives such as
+threads, mutexes, and timers. LK has been chosen as the Trusty OS kernel.
 Trusty Architecture
 *******************
@@ -45,7 +44,7 @@ Trusty Architecture
 Trusty Specific Hypercalls
 **************************
-There are a few :ref:`hypercall_apis` that are related to Trusty.
+The following :ref:`hypercall_apis` are related to Trusty.
 .. doxygengroup:: trusty_hypercall
    :project: Project ACRN
@@ -96,7 +95,7 @@ EPT Hierarchy
 *************
 As per the Trusty design, Trusty can access the Normal World's memory, but the
-Normal World cannot access the Secure World's memory. Hence it means the Secure
+Normal World cannot access the Secure World's memory. The Secure
 World EPTP page table hierarchy must contain the Normal World GPA address space,
 while the Trusty world's GPA address space must be removed from the Normal World
 EPTP page table hierarchy.
@@ -113,10 +112,9 @@ PD and PT for high memory (>= 511 GB) are valid for the Trusty World's EPT only.
 Benefit
 =======
-This design will benefit the EPT changes of the Normal World. There are
-requirements to modify the Normal World's EPT during runtime such as increasing
-memory and changing attributes. If such behavior happens, only PD and PT
-for the Normal World need to be updated.
+The Normal World's EPT can be modified during runtime. Examples include
+increasing memory and changing attributes. If such behavior happens, only PD and
+PT for the Normal World need to be updated.
 .. figure:: images/ept-hierarchy.png
    :align: center

File 3 of 4: libvirt/OpenStack tutorial

@@ -7,11 +7,11 @@ Introduction
 ************
 This document provides instructions for setting up libvirt to configure
-ACRN. We use OpenStack to use libvirt and we'll install OpenStack in a container
-to avoid crashing your system and to take advantage of easy
-snapshots/restores so that you can quickly roll back your system in the
+ACRN. We use OpenStack to use libvirt. We'll show how to install OpenStack in a
+container to avoid crashing your system and to take advantage of easy
+snapshots and restores so that you can quickly roll back your system in the
 event of setup failure. (You should only install OpenStack directly on Ubuntu if
-you have a dedicated testing machine). This setup utilizes LXC/LXD on
+you have a dedicated testing machine.) This setup utilizes LXC/LXD on
 Ubuntu 20.04.
Install ACRN Install ACRN
@@ -81,8 +81,8 @@ Set Up and Launch LXC/LXD
    .. note::
       Make sure to respect the indentation as to keep these options within
-      the **config** section. It is a good idea after saving your changes
-      to check that they have been correctly recorded (``lxc config show openstack``).
+      the **config** section. After saving your changes,
+      check that they have been correctly recorded (``lxc config show openstack``).
 b. Run the following commands to configure ``openstack``::
@@ -102,7 +102,7 @@ Set Up and Launch LXC/LXD
 6. Let ``systemd`` manage **eth1** in the container, with **eth0** as the
    default route:
-   Edit ``/etc/netplan/50-cloud-init.yaml``
+   Edit ``/etc/netplan/50-cloud-init.yaml`` as follows:
    .. code-block:: none
@@ -132,7 +132,7 @@ Set Up and Launch LXC/LXD
    no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16
-10. Add a new user named **stack** and set permissions
+10. Add a new user named **stack** and set permissions:
    .. code-block:: none
@@ -203,7 +203,7 @@ Set Up Libvirt
    $ make
    $ sudo make install
-.. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial. It is
+.. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial and is
    the default branch.
 4. Edit and enable these options in ``/etc/libvirt/libvirtd.conf``::
@@ -293,8 +293,8 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://
 a. Inside the container, use the command ``ip a`` to identify the ``br-ex`` bridge
    interface. ``br-ex`` should have two IPs. One should be visible to
-   the native Ubuntu's ``acrn-br0`` interface (e.g. iNet 192.168.1.104/24).
-   The other one is internal to OpenStack (e.g. iNet 172.24.4.1/24). The
+   the native Ubuntu's ``acrn-br0`` interface (for example, iNet 192.168.1.104/24).
+   The other one is internal to OpenStack (for example, iNet 172.24.4.1/24). The
    latter corresponds to the public network in OpenStack.
 b. Set up SNAT to establish a link between ``acrn-br0`` and OpenStack.
@@ -315,7 +315,7 @@ Use the OpenStack management interface URL reported in a previous step
 to finish setting up the network and configure and create an OpenStack
 instance.
-1. Begin by using your browser to login as **admin** to the OpenStack management
+1. Begin by using your browser to log in as **admin** to the OpenStack management
    dashboard (using the URL reported previously). Use the admin
    password you set in the ``devstack/local.conf`` file:
@@ -324,7 +324,7 @@ instance.
    :width: 1200px
    :name: os-01-login
-   Click on the **Project / Network Topology** and then the **Topology** tab
+   Click **Project / Network Topology** and then the **Topology** tab
    to view the existing **public** (external) and **shared** (internal) networks:
 .. figure:: images/OpenStack-02-topology.png
@@ -342,7 +342,7 @@ instance.
    :name: os-03-router
    Give it a name (**acrn_router**), select **public** for the external network,
-   and select create router:
+   and select **Create Router**:
 .. figure:: images/OpenStack-03a-create-router.png
    :align: center
@@ -350,21 +350,21 @@ instance.
    :name: os-03a-router
    That added the external network to the router. Now add
-   the internal network too. Click on the acrn_router name:
+   the internal network too. Click the acrn_router name:
 .. figure:: images/OpenStack-03b-created-router.png
    :align: center
    :width: 1200px
    :name: os-03b-router
-   Go to the interfaces tab, and click on **+Add interface**:
+   Go to the **Interfaces** tab, and click **+Add interface**:
 .. figure:: images/OpenStack-04a-add-interface.png
    :align: center
    :width: 1200px
    :name: os-04a-add-interface
-   Select the subnet of the shared (private) network and click submit:
+   Select the subnet of the shared (private) network and click **Submit**:
 .. figure:: images/OpenStack-04b-add-interface.png
    :align: center
@@ -379,7 +379,7 @@ instance.
    :width: 1200px
    :name: os-04c-add-interface
-   View the router graphically by clicking on the "Network Topology" tab:
+   View the router graphically by clicking the **Network Topology** tab:
 .. figure:: images/OpenStack-05-topology.png
    :align: center
@@ -390,8 +390,8 @@ instance.
    networking.
 #. Next, we'll prepare for launching an OpenStack instance.
-   Click on the **Admin / Compute/ Image** tab and then the **+Create
-   image** button:
+   Click the **Admin / Compute / Image** tab and then the **+Create
+   Image** button:
 .. figure:: images/OpenStack-06-create-image.png
    :align: center
@@ -411,17 +411,17 @@ instance.
    :name: os-06b-create-image
    Give the image a name (**Ubuntu20.04**), select the **QCOW2 - QEMU
-   Emulator** format, and click on **Create Image**:
+   Emulator** format, and click **Create Image**:
 .. figure:: images/OpenStack-06e-create-image.png
    :align: center
    :width: 900px
    :name: os-063-create-image
-   This will take a few minutes to complete.
-#. Next, click on the **Admin / Computer / Flavors** tabs and then the
-   **+Create Flavor** button. This is where you'll define a machine flavor name
+   This task will take a few minutes to complete.
+#. Next, click the **Admin / Compute / Flavors** tab and then the
+   **+Create Flavor** button. Define a machine flavor name
    (**UbuntuCloud**), and specify its resource requirements: the number of vCPUs (**2**), RAM size
    (**512MB**), and root disk size (**4GB**):
@@ -430,7 +430,7 @@ instance.
    :width: 700px
    :name: os-07a-create-flavor
-   Click on **Create Flavor** and you'll return to see a list of
+   Click **Create Flavor** and you'll return to see a list of
    available flavors plus the new one you created (**UbuntuCloud**):
 .. figure:: images/OpenStack-07b-flavor-created.png
@@ -439,11 +439,11 @@ instance.
    :name: os-07b-create-flavor
 #. OpenStack security groups act as a virtual firewall controlling
-   connections between instances, allowing connections such as SSH, and
+   connections between instances, allowing connections such as SSH and
    HTTPS. These next steps create a security group allowing SSH and ICMP
    connections.
-   Go to **Project / Network / Security Groups** and click on the **+Create
+   Go to **Project / Network / Security Groups** and click the **+Create
    Security Group** button:
 .. figure:: images/OpenStack-08-security-group.png
@@ -460,7 +460,7 @@ instance.
    :name: os-08a-security-group
    You'll return to a rule management screen for this new group. Click
-   on the **+Add Rule** button:
+   the **+Add Rule** button:
 .. figure:: images/OpenStack-08b-add-rule.png
    :align: center
@@ -474,7 +474,7 @@ instance.
    :width: 1200px
    :name: os-08c-security-group
-   Similarly, add another rule to add a **All ICMP** rule too:
+   Similarly, add another rule to add an **All ICMP** rule too:
 .. figure:: images/OpenStack-08d-add-All-ICMP-rule.png
    :align: center
@@ -482,16 +482,16 @@ instance.
    :name: os-08d-security-group
 #. Create a public/private keypair used to access the created instance.
-   Go to **Project / Compute / Key Pairs** and click on **+Create Key
+   Go to **Project / Compute / Key Pairs** and click **+Create Key
    Pair**, give the keypair a name (**acrnKeyPair**) and Key Type
-   (**SSH Key**) and click on **Create Key Pair**:
+   (**SSH Key**) and click **Create Key Pair**:
 .. figure:: images/OpenStack-09a-create-key-pair.png
    :align: center
    :width: 1200px
    :name: os-09a-key-pair
-   You should save the **private** keypair file safely,
+   Save the **private** keypair file safely,
    for future use:
 .. figure:: images/OpenStack-09c-key-pair-private-key.png
@@ -500,7 +500,7 @@ instance.
    :name: os-09c-key-pair
 #. Now we're ready to launch an instance. Go to **Project / Compute /
-   Instance**, click on the **Launch Instance** button, give it a name
+   Instance**, click the **Launch Instance** button, give it a name
    (**UbuntuOnACRN**) and click **Next**:
 .. figure:: images/OpenStack-10a-launch-instance-name.png
@@ -525,7 +525,7 @@ instance.
    :width: 900px
    :name: os-10c-launch
-   Click on **>** next to the Allocated **UbuntuCloud** flavor and see
+   Click **>** next to the Allocated **UbuntuCloud** flavor and see
    details about your choice:
 .. figure:: images/OpenStack-10d-flavor-selected.png
@@ -533,7 +533,7 @@ instance.
    :width: 900px
    :name: os-10d-launch
-   Click on the **Networks** tab, and select the internal **shared**
+   Click the **Networks** tab, and select the internal **shared**
    network from the "Available" list:
 .. figure:: images/OpenStack-10e-select-network.png
@@ -541,7 +541,7 @@ instance.
    :width: 1200px
    :name: os-10e-launch
-   Click on the **Security Groups** tab and select
+   Click the **Security Groups** tab and select
    the **acrnSecuGroup** security group you created earlier. Remove the
    **default** security group if it's in the "Allocated" list:
@@ -550,8 +550,8 @@ instance.
    :width: 1200px
    :name: os-10d-security
-   Click on the **Key Pair** tab and verify the **acrnKeyPair** you
-   created earlier is in the "Allocated" list, and click on **Launch
+   Click the **Key Pair** tab and verify the **acrnKeyPair** you
+   created earlier is in the "Allocated" list, and click **Launch
    Instance**:
 .. figure:: images/OpenStack-10g-show-keypair-launch.png
@@ -561,7 +561,7 @@ instance.
    It will take a few minutes to complete launching the instance.
-#. Click on the **Project / Compute / Instances** tab to monitor
+#. Click the **Project / Compute / Instances** tab to monitor
    progress. When the instance status is "Active" and power state is
    "Running", associate a floating IP to the instance
    so you can access it:
@@ -571,7 +571,7 @@ instance.
    :width: 1200px
    :name: os-11-running
-   On the **Manage Floating IP Associations** screen, click on the **+**
+   On the **Manage Floating IP Associations** screen, click the **+**
    to add an association:
 .. figure:: images/OpenStack-11a-manage-floating-ip.png
@@ -579,7 +579,7 @@ instance.
    :width: 700px
    :name: os-11a-running
-   Select **public** pool, and click on **Allocate IP**:
+   Select **public** pool, and click **Allocate IP**:
 .. figure:: images/OpenStack-11b-allocate-floating-ip.png
    :align: center
@@ -597,8 +597,8 @@ instance.
 Final Steps
 ***********
-With that, the OpenStack instance is running and connected to the
-network. You can graphically see this by returning to the **Project /
+The OpenStack instance is now running and connected to the
+network. You can confirm by returning to the **Project /
 Network / Network Topology** view:
 .. figure:: images/OpenStack-12b-running-topology-instance.png
@@ -606,7 +606,7 @@ Network / Network Topology** view:
    :width: 1200px
    :name: os-12b-running
-You can also see a hypervisor summary by clicking on **Admin / Compute /
+You can also see a hypervisor summary by clicking **Admin / Compute /
 Hypervisors**:
 .. figure:: images/OpenStack-12d-compute-hypervisor.png

File 4 of 4: usercrash documentation

@@ -6,22 +6,22 @@ Usercrash
 Description
 ***********
-The ``usercrash`` tool gets the crash info for the crashing process in
-userspace. The collected information is saved as usercrash_xx under
+The ``usercrash`` tool gets the crash information for the crashing process in
+user space. The collected information is saved as usercrash_xx under
 ``/var/log/usercrashes/``.
 Design
 ******
-``usercrash`` is designed using a Client/Server model. The server is
+``usercrash`` is designed using a client/server model. The server is
 autostarted at boot. The client is configured in ``core_pattern``, which
-will be triggered when a crash occurs in userspace. The client then
+will be triggered when a crash occurs in user space. The client then
 sends the crash event to the server. The server checks the files under
-``/var/log/usercrashes/`` and creates a new file usercrash_xx (xx means
+``/var/log/usercrashes/`` and creates a file usercrash_xx (xx means
 the index of the crash file). Then it sends the file descriptor (fd) to
-the client. The client is responsible for collecting crash information
-and saving it in the crashlog file. After the saving work is done, the
-client notifies server and the server will clean up.
+the client. The client collects the crash information
+and saves it in the crash file. After the saving work is done, the
+client notifies the server. The server cleans up.
 The workflow diagram:
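The fd hand-off at the center of this design is ordinary Unix-domain ancillary data. A minimal sketch, assuming a connected ``AF_UNIX`` socket and using ``SCM_RIGHTS``; the function name and framing byte are illustrative, not the actual protocol.c API:

.. code-block:: c

   #include <string.h>
   #include <sys/socket.h>
   #include <sys/uio.h>

   /* Server side: pass the open crash-file fd to the client. */
   static int send_crashfile_fd(int conn, int crash_fd)
   {
       char frame = 'F';                     /* illustrative 1-byte payload */
       struct iovec iov = { .iov_base = &frame, .iov_len = 1 };
       union {
           struct cmsghdr align;             /* forces correct alignment */
           char buf[CMSG_SPACE(sizeof(int))];
       } u;
       struct msghdr msg = { 0 };

       memset(&u, 0, sizeof(u));
       msg.msg_iov = &iov;
       msg.msg_iovlen = 1;
       msg.msg_control = u.buf;
       msg.msg_controllen = sizeof(u.buf);

       struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
       cmsg->cmsg_level = SOL_SOCKET;
       cmsg->cmsg_type = SCM_RIGHTS;         /* kernel duplicates the fd */
       cmsg->cmsg_len = CMSG_LEN(sizeof(int));
       memcpy(CMSG_DATA(cmsg), &crash_fd, sizeof(int));

       return (sendmsg(conn, &msg, 0) == 1) ? 0 : -1;
   }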
@@ -60,9 +60,9 @@ Usage
 client and default app. Once a crash occurs in user space, the client and
 default app will be invoked separately.
-- The ``debugger`` is an independent tool to dump the debug information of the
-  specific process, including backtrace, stack, opened files, registers value,
-  memory content around registers, and etc.
+- The ``debugger`` is an independent tool to dump the debugging information of the
+  specific process, including backtrace, stack, opened files, register values,
+  and memory content around registers.
 .. code-block:: none
@@ -75,14 +75,15 @@ Usage
 Source Code
 ***********
-- client.c : This file is the implementation for client of ``usercrash``, which
-  is responsible for delivering the ``usercrash`` event to the server, and
-  collecting crash information and saving it to the crashfile.
+- client.c : This file is the implementation for the client of ``usercrash``.
+  The client is responsible for delivering the ``usercrash`` event to the
+  server, and collecting crash information and saving it to the crash file.
 - crash_dump.c : This file is the implementation for dumping the crash
-  information, including backtrace stack, opened files, registers value, memory
-  content around registers, and etc.
+  information, including backtrace stack, opened files, register values, and
+  memory content around registers.
-- debugger.c : This file is to implement a tool, which runs in command line to
-  dump the process information list above.
+- debugger.c : This file implements a tool, which runs in command line to
+  dump the process information listed above.
-- protocol.c : This file is the socket protocol implement file.
+- protocol.c : This file is the socket protocol implementation file.
-- server.c : This file is the implement file for server of ``usercrash``, which
-  is responsible for creating the crashfile and handle the events from client.
+- server.c : This file is the implementation file for the server of
+  ``usercrash``. The server is responsible for creating the crash file and
+  handling the events from the client.
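As a rough illustration of the kind of register dump crash_dump.c and debugger.c produce, here is a self-contained sketch that attaches to a process with ``ptrace`` and prints two registers. It is x86-64 specific and shares no code with ``usercrash``:

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>
   #include <sys/ptrace.h>
   #include <sys/types.h>
   #include <sys/user.h>
   #include <sys/wait.h>

   int main(int argc, char *argv[])
   {
       if (argc != 2) {
           fprintf(stderr, "usage: %s <pid>\n", argv[0]);
           return 1;
       }
       pid_t pid = (pid_t)atoi(argv[1]);
       struct user_regs_struct regs;

       if (ptrace(PTRACE_ATTACH, pid, NULL, NULL) == -1) {
           perror("PTRACE_ATTACH");
           return 1;
       }
       waitpid(pid, NULL, 0);                    /* wait for the stop */
       if (ptrace(PTRACE_GETREGS, pid, NULL, &regs) == -1) {
           perror("PTRACE_GETREGS");
       } else {
           printf("rip=0x%llx rsp=0x%llx\n", regs.rip, regs.rsp);
       }
       ptrace(PTRACE_DETACH, pid, NULL, NULL);
       return 0;
   }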