mirror of https://github.com/projectacrn/acrn-hypervisor.git
synced 2025-07-31 15:30:56 +00:00

doc: refine enable s5 document

The Lifecycle Manager will be refined in v2.7; this patch refines the enable-S5
document to align with the latest code.

v1-->v2: Remove the prompt from all instructions in this document.

Tracked-On: #6652
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>

parent 5bc0c97a12
commit 1d26695626
@@ -527,6 +527,7 @@ Build ACRN
    disk="/media/$USER/"$(ls /media/$USER)
    cp linux-5.10.52-acrn-sos-x86.tar.gz $disk/
    cp ~/acrn-work/acrn-hypervisor/build/hypervisor/acrn.bin $disk/
+   cp ~/acrn-work/acrn-hypervisor/build/hypervisor/serial.conf $disk/
    cp ~/acrn-work/my_board/output/launch_uos_id3.sh $disk/
    cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl $disk/
    cp ~/acrn-work/acrn-hypervisor/build/acrn-2.6-unstable.tar.gz $disk/
@@ -535,6 +536,9 @@ Build ACRN
    Even though our sample default scenario defines six User VMs, we're only
    going to launch one of them, so we'll only need the one launch script.

+   .. note:: The :file:`serial.conf` is only generated if non-standard vUARTs (not COM1~COM4)
+      are configured for the Service VM in the scenario XML file.
+
#. Insert the USB disk you just used into the target system and run these
   commands to copy the tar files locally:
@@ -567,6 +571,7 @@ Build ACRN

      sudo mkdir -p /boot/acrn/
      sudo cp $disk/acrn.bin /boot/acrn
+     sudo cp $disk/serial.conf /etc
      sudo cp $disk/iasl /usr/sbin/
      cp $disk/launch_uos_id3.sh ~/acrn-work
      sudo umount $disk/
@@ -576,7 +581,13 @@ Build ACRN
Install ACRN
************

-In the following steps, you will configure GRUB on the target system.
+In the following steps, you will install the serial configuration tool and configure GRUB on the target system.

+#. Install the serial configuration tool on the target system as follows:
+
+   .. code-block:: bash
+
+      sudo apt-get install setserial
+
#. On the target, find the root filesystem (rootfs) device name by using the
   ``lsblk`` command:
@@ -9,8 +9,8 @@ Introduction
S5 is one of the `ACPI sleep states <http://acpi.sourceforge.net/documentation/sleep.html>`_
that refers to the system being shut down (although some power may still be
supplied to certain devices). In this document, S5 means the function to
-shut down the **User VMs**, **the Service VM**, the hypervisor, and the
-hardware. In most cases, directly shutting down the power of a computer
+shut down the **User VMs**, the **Service VM**, the hypervisor, and the
+hardware. In most cases, directly powering off a computer
system is not advisable because it can damage some components. It can cause
corruption and put the system in an unknown or unstable state. On ACRN, the
User VM must be shut down before powering off the Service VM. Especially for
@@ -31,135 +31,200 @@ The diagram below shows the overall architecture:

   S5 overall architecture

-- **Scenario I**:
+- **vUART channel**:

  The User VM's serial port device (``ttySn``) is emulated in the
  Device Model, the channel from the Service VM to the User VM:

  .. graphviz:: images/s5-scenario-1.dot
     :name: s5-scenario-1

-- **Scenario II**:
-
-  The User VM's (like RT-Linux or other RT-VMs) serial port device
-  (``ttySn``) is emulated in the Hypervisor,
-  the channel from the Service OS to the User VM:
+  The User VM's serial port device (``/dev/ttySn``) is emulated in the
+  Hypervisor. The channel from the Service VM to the User VM:

  .. graphviz:: images/s5-scenario-2.dot
     :name: s5-scenario-2
-Initiate a system S5 from within a User VM (e.g. HMI)
-=====================================================
+Lifecycle Manager Overview
+==========================
+
+As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in Linux,
+``life_mngr_win.exe`` in Windows) runs in the Service VM and User VMs to implement S5.
+An operator or user can run the ``s5_trigger_linux.py`` or ``s5_trigger_win.py`` script to initiate
+a system S5 from the Service VM or User VMs. The Lifecycle Managers in the Service VM and
+User VMs wait for system S5 requests on a local socket port.
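The overview above says the Lifecycle Managers wait for S5 requests on a local socket port. As a rough sketch of that request/acknowledge handshake (the socket path, message strings, and helper names here are illustrative assumptions, not the actual ``life_mngr`` implementation, which is written in C):

```python
import os
import socket
import tempfile
import threading
import time

def serve_once(path):
    """Stand-in for the Lifecycle Manager's listener: accept one
    connection on a local Unix socket, read one request, acknowledge it."""
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(path)
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(64).decode()
    conn.sendall(b"acked")
    conn.close()
    srv.close()
    return request

def trigger(path, message="req_sys_shutdown"):
    """Stand-in for a trigger script: send one S5 request, wait for the ack."""
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    for _ in range(100):  # wait briefly for the daemon to start listening
        try:
            cli.connect(path)
            break
        except (FileNotFoundError, ConnectionRefusedError):
            time.sleep(0.01)
    cli.sendall(message.encode())
    reply = cli.recv(64).decode()
    cli.close()
    return reply

path = os.path.join(tempfile.mkdtemp(), "monitor.sock")
seen = []
server = threading.Thread(target=lambda: seen.append(serve_once(path)))
server.start()
reply = trigger(path)
server.join()
```

The real daemon listens at a fixed path (this document later mentions ``/var/lib/life_mngr/monitor.sock``); the sketch uses a temporary path so it can run anywhere.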

+Initiate a System S5 from within a User VM (e.g., HMI)
+======================================================
As shown in the :numref:`s5-architecture`, a request to the Service VM initiates the shutdown flow.
This could come from a User VM, most likely the HMI (running Windows or Linux).
-When a human operator initiates the flow, the Lifecycle Manager (``life_mngr``) running in that
-User VM will send the request via the vUART to the Lifecycle Manager in the Service VM which in
-turn acknowledges the request and triggers the following flow.
+When a human operator initiates the flow by running ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
+the Lifecycle Manager (``life_mngr``) running in that User VM sends the system S5 request via
+the vUART to the Lifecycle Manager in the Service VM, which in turn acknowledges the request.
+The Lifecycle Manager in the Service VM then sends a ``poweroff_cmd`` request to the User VMs; when the
+Lifecycle Manager in a User VM receives the ``poweroff_cmd`` request, it sends ``ack_poweroff`` to the
+Service VM and then shuts down that User VM. If a User VM is not ready to shut down, it can ignore the
+``poweroff_cmd`` request.
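The ``poweroff_cmd`` / ``ack_poweroff`` exchange described above is a small request/acknowledge protocol. A minimal model of the User VM side (a sketch only; the readiness check is an invented placeholder, and the real daemon is C code talking over a vUART):

```python
def handle_vuart_request(request, ready_to_shut_down):
    """Model the User VM side of the S5 protocol: acknowledge a
    poweroff_cmd only when the VM is ready to shut down; an unready
    VM simply ignores the request and keeps running."""
    if request != "poweroff_cmd":
        return None          # unknown requests are ignored
    if not ready_to_shut_down:
        return None          # not ready: ignore the request
    return "ack_poweroff"    # ack first, then the VM powers itself off
```

A ready VM answers ``ack_poweroff``; an unready one returns nothing, matching the "it can ignore the request" behavior above.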

-.. note:: The User VM need to be authorized to be able to request a Shutdown, this is achieved by adding
-   ``--pm_notify_channel uart,allow_trigger_s5`` in the launch script of that VM.
-   And, there is only one VM in the system can be configured to request a shutdown. If there is a second User
-   VM launched with ``--pm_notify_channel uart,allow_trigger_s5``, ACRN will stop launching it and throw
-   out below error message:
-   ``initiate a connection on a socket error``
-   ``create socket to connect life-cycle manager failed``
+.. note:: The User VM needs to be authorized to request a system S5; this is achieved
+   by configuring ``ALLOW_TRIGGER_S5`` in the Lifecycle Manager service configuration :file:`/etc/life_mngr.conf`
+   in the Service VM. Only one User VM in the system can be configured to request a shutdown.
+   If this configuration is wrong, the system S5 request from the User VM is rejected by the
+   Lifecycle Manager of the Service VM, and the following error message is recorded in the
+   Lifecycle Manager log :file:`/var/log/life_mngr.log` of the Service VM:
+   ``The user VM is not allowed to trigger system shutdown``
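The ``ALLOW_TRIGGER_S5`` authorization check described in the note can be modeled as follows (a sketch under the assumption that the option holds the serial device of the one authorized User VM; the real daemon's parsing and logging differ):

```python
def may_trigger_s5(conf_text, dev_name):
    """Return True if dev_name (the serial device the S5 request arrived
    on) is listed in the ALLOW_TRIGGER_S5 option of a life_mngr.conf-style
    key=value configuration text."""
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("ALLOW_TRIGGER_S5="):
            allowed = line.split("=", 1)[1].split(",")
            return dev_name in [d.strip() for d in allowed]
    return False  # option absent: no User VM may trigger S5

conf = "VM_TYPE=service_vm\nALLOW_TRIGGER_S5=/dev/ttyS8\n"
```

With this configuration, a request arriving on ``/dev/ttyS8`` is accepted; one arriving on any other device would be rejected with the log message quoted above.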
-Trigger the User VM's S5
-========================
+Initiate a System S5 within the Service VM
+==========================================
-On the Service VM side, it uses the ``acrnctl`` tool to trigger the User VM's S5 flow:
-``acrnctl stop user-vm-name``. Then, the Device Model sends a ``shutdown`` command
-to the User VM through a channel. If the User VM receives the command, it will send an ``ACKED``
-to the Device Model. It is the Service VM's responsibility to check whether the User VMs
-shut down successfully or not, and to decide when to shut the Service VM itself down.
+On the Service VM side, ``s5_trigger_linux.py`` is used to trigger the system S5 flow. Then,
+the Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to the Lifecycle Manager in each
+User VM through the vUART channel. If a User VM receives this request, it will send an ``ack_poweroff``
+to the Lifecycle Manager in the Service VM. It is the Service VM's responsibility to check whether the
+User VMs shut down successfully or not, and to decide when to shut the Service VM itself down.
-User VM "Lifecycle Manager"
-===========================
-
-As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in Linux,
-``life_mngr_win.exe`` in Windows) runs in the User VM to implement S5. It waits for the shutdown
-request from the Service VM on the serial port. The simple protocol between the Service VM and
-User VM is as follows: when the daemon receives ``shutdown``, it sends ``ACKED`` to the Service VM;
-then it shuts down the User VM. If the User VM is not ready to shut down,
-it can ignore the ``shutdown`` command.
+.. note:: The Service VM is always allowed to trigger a system S5 by default.
.. _enable_s5:

Enable S5
*********

The procedure for enabling S5 is specific to the particular OS:

+1. Configure the communication vUARTs for the Service VM and User VMs:
-   * For Linux (LaaG) or Windows (WaaG), include these lines in the launch script:
+   Add these lines in the hypervisor scenario XML file manually:

-     .. code-block:: bash
+   Example::

-        # Power Management (PM) configuration using vUART channel
-        pm_channel="--pm_notify_channel uart"
-        pm_by_vuart="--pm_by_vuart pty,/run/acrn/life_mngr_"$vm_name
-        pm_vuart_node="-s 1:0,lpc -l com2,/run/acrn/life_mngr_"$vm_name
+      /* VM0 */
+      <vm_type>SERVICE_VM</vm_type>
+      ...
+      <legacy_vuart id="1">
+         <type>VUART_LEGACY_PIO</type>
+         <base>CONFIG_COM_BASE</base>
+         <irq>0</irq>
+         <target_vm_id>1</target_vm_id>
+         <target_uart_id>1</target_uart_id>
+      </legacy_vuart>
+      <legacy_vuart id="2">
+         <type>VUART_LEGACY_PIO</type>
+         <base>CONFIG_COM_BASE</base>
+         <irq>0</irq>
+         <target_vm_id>2</target_vm_id>
+         <target_uart_id>2</target_uart_id>
+      </legacy_vuart>
+      ...
+      /* VM1 */
+      <vm_type>POST_STD_VM</vm_type>
+      ...
+      <legacy_vuart id="1">
+         <type>VUART_LEGACY_PIO</type>
+         <base>COM2_BASE</base>
+         <irq>COM2_IRQ</irq>
+         <target_vm_id>0</target_vm_id>
+         <target_uart_id>1</target_uart_id>
+      </legacy_vuart>
+      ...
+      /* VM2 */
+      <vm_type>POST_STD_VM</vm_type>
+      ...
+      <legacy_vuart id="1">
+         <type>VUART_LEGACY_PIO</type>
+         <base>INVALID_COM_BASE</base>
+         <irq>COM2_IRQ</irq>
+         <target_vm_id>0</target_vm_id>
+         <target_uart_id>2</target_uart_id>
+      </legacy_vuart>
+      <legacy_vuart id="2">
+         <type>VUART_LEGACY_PIO</type>
+         <base>COM2_BASE</base>
+         <irq>COM2_IRQ</irq>
+         <target_vm_id>0</target_vm_id>
+         <target_uart_id>2</target_uart_id>
+      </legacy_vuart>
+      ...
+      /* VM3 */
+      ...
-        acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-           ...
-           $pm_channel \
-           $pm_by_vuart \
-           $pm_vuart_node \
-           ...
+   .. note:: These vUARTs are emulated in the hypervisor and exposed as ``/dev/ttySn`` nodes.
+      For the User VM with the smallest VM ID, the communication vUART id should be 1.
+      For other User VMs, the vUART with id 1 should be configured as invalid, and the
+      communication vUART id should be 2 or higher.
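The id rule in the note above (the User VM with the smallest VM ID gets communication vUART id 1; every other User VM gets 2, 3, and so on, with its vUART id 1 set to ``INVALID_COM_BASE``) can be sketched as a small helper. This is one reading of the rule, offered for illustration only:

```python
def comm_vuart_id(vm_id, user_vm_ids):
    """Assign the communication vUART id for vm_id following the note:
    id 1 for the User VM with the smallest VM ID, then 2, 3, ... for the
    remaining User VMs in ascending VM-ID order."""
    ordered = sorted(user_vm_ids)
    return ordered.index(vm_id) + 1
```

For the three post-launched VMs in the XML example above (VM1 and VM2), VM1 would get id 1 and VM2 id 2, which matches the ``target_uart_id`` values shown.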
-   * For RT-Linux, include these lines in the launch script:
+2. Build the Lifecycle Manager daemon, ``life_mngr``:

-     .. code-block:: bash
+   .. code-block:: none

-        # Power Management (PM) configuration
-        pm_channel="--pm_notify_channel uart"
-        pm_by_vuart="--pm_by_vuart tty,/dev/ttyS1"
+      cd acrn-hypervisor
+      make life_mngr

-        /usr/bin/acrn-dm -A -m $mem_size -s 0:0,hostbridge \
-           ...
-           $pm_channel \
-           $pm_by_vuart \
-           ...
+#. For the Service VM, LaaG VM, and RT-Linux VM, run the Lifecycle Manager daemon:

-   .. note:: For RT-Linux, the vUART is emulated in the hypervisor; expose the node as ``/dev/ttySn``.
-
-#. For LaaG and RT-Linux VMs, run the lifecycle manager daemon:

-   a. Use these commands to build the lifecycle manager daemon, ``life_mngr``.
+   a. Copy ``life_mngr.conf``, ``s5_trigger_linux.py``, ``user_vm_shutdown.py``, ``life_mngr``,
+      and ``life_mngr.service`` into the Service VM and User VMs.

      .. code-block:: none

-         $ cd acrn-hypervisor
-         $ make life_mngr
+         scp build/misc/services/s5_trigger_linux.py root@<target board address>:~/
+         scp build/misc/services/life_mngr root@<target board address>:/usr/bin/
+         scp build/misc/services/life_mngr.conf root@<target board address>:/etc/life_mngr/
+         scp build/misc/services/life_mngr.service root@<target board address>:/lib/systemd/system/

-#. Copy ``life_mngr`` and ``life_mngr.service`` into the User VM:
+         scp misc/services/life_mngr/user_vm_shutdown.py root@<target board address>:~/

+      .. note:: :file:`user_vm_shutdown.py` only needs to be copied into the Service VM.
+   #. Edit the options in ``/etc/life_mngr/life_mngr.conf`` in the Service VM.

      .. code-block:: none

-         $ scp build/misc/services/life_mngr root@<test board address>:/usr/bin/life_mngr
-         $ scp build/misc/services/life_mngr.service root@<test board address>:/lib/systemd/system/life_mngr.service
+         VM_TYPE=service_vm
+         VM_NAME=Service_VM
+         DEV_NAME=tty:/dev/ttyS8,/dev/ttyS9,/dev/ttyS10,/dev/ttyS11,/dev/ttyS12,/dev/ttyS13,/dev/ttyS14
+         ALLOW_TRIGGER_S5=/dev/ttySn
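The ``DEV_NAME`` value above packs a transport type and a comma-separated list of serial devices (one per User VM channel) into one option. A sketch of how such a value splits apart (the helper name is hypothetical; the real daemon parses this in C):

```python
def parse_dev_name(value):
    """Split a DEV_NAME value like 'tty:/dev/ttyS8,/dev/ttyS9' into the
    transport type and the list of serial devices it names."""
    transport, _, devices = value.partition(":")
    return transport, [d.strip() for d in devices.split(",") if d.strip()]

transport, devs = parse_dev_name("tty:/dev/ttyS8,/dev/ttyS9,/dev/ttyS10")
```

For the Service VM configuration shown, this yields transport ``tty`` and seven devices, ``/dev/ttyS8`` through ``/dev/ttyS14``.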
-#. Use the below commands to enable ``life_mngr.service`` and restart the User VM.
+      .. note:: The mapping between User VM IDs and communication serial device names (``/dev/ttySn``)
+         is in :file:`/etc/serial.conf`. If a ``/dev/ttySn`` is configured in ``ALLOW_TRIGGER_S5``,
+         system shutdown is allowed to be triggered from the corresponding User VM.

+   #. Edit the options in ``/etc/life_mngr/life_mngr.conf`` in the User VM.

+      .. code-block:: none

-      # chmod +x /usr/bin/life_mngr
-      # systemctl enable life_mngr.service
-      # reboot
+         VM_TYPE=user_vm
+         VM_NAME=<User VM name>
+         DEV_NAME=tty:/dev/ttyS1
+         #ALLOW_TRIGGER_S5=/dev/ttySn

+      .. note:: The User VM name in this configuration file should be consistent with the VM name in the
+         launch script for a Post-launched User VM, or with the VM name specified in the hypervisor
+         scenario XML for a Pre-launched User VM.
+   #. Use the following commands to enable ``life_mngr.service`` and restart the Service VM and User VMs.

+      .. code-block:: none

+         sudo chmod +x /usr/bin/life_mngr
+         sudo systemctl enable life_mngr.service
+         sudo reboot

+   .. note:: For a Pre-launched User VM, the Lifecycle Manager service needs to be restarted manually
+      after the Lifecycle Manager in the Service VM starts.
#. For the WaaG VM, run the Lifecycle Manager daemon:

-   a) Build the ``life_mngr_win.exe`` application::
+   a) Build the ``life_mngr_win.exe`` application and ``s5_trigger_win.py``::

-         $ cd acrn-hypervisor
-         $ make life_mngr
+         cd acrn-hypervisor
+         make life_mngr

-      .. note:: If there is no ``x86_64-w64-mingw32-gcc`` compiler, you can run ``sudo apt install gcc-mingw-w64-x86-64``
-         on Ubuntu to install it.
+      .. note:: If there is no ``x86_64-w64-mingw32-gcc`` compiler, you can run
+         ``sudo apt install gcc-mingw-w64-x86-64`` on Ubuntu to install it.

+   #) Copy ``s5_trigger_win.py`` into the WaaG VM.

   #) Set up a Windows environment:

-      I) Download the :kbd:`Visual Studio 2019` tool from `<https://visualstudio.microsoft.com/downloads/>`_,
+      1) Download Python 3 from `<https://www.python.org/downloads/release/python-3810/>`_ and install
+         "Python 3.8.10" in WaaG.

+      #) If the Lifecycle Manager for WaaG will be built in Windows,
+         download the Visual Studio 2019 tool from `<https://visualstudio.microsoft.com/downloads/>`_,
+         and choose the two options in the below screenshots to install "Microsoft Visual C++ Redistributable
+         for Visual Studio 2015, 2017 and 2019 (x86 or X64)" in WaaG:
@@ -167,6 +232,8 @@ The procedure for enabling S5 is specific to the particular OS:

   .. figure:: images/Microsoft-Visual-C-install-option-2.png

+      .. note:: If the Lifecycle Manager for WaaG is built in Linux, the Visual Studio 2019 tool is not needed for WaaG.

   #) In WaaG, use the :kbd:`Windows + R` shortcut key, input
      ``shell:startup``, click :kbd:`OK`,
      and then copy the ``life_mngr_win.exe`` application into this directory.
@@ -179,15 +246,15 @@ The procedure for enabling S5 is specific to the particular OS:

   .. figure:: images/open-com-success.png

-#. If the Service VM is being shut down (transitioning to the S5 state), it can call
-   ``acrnctl stop vm-name`` to shut down the User VMs.
+#. If ``s5_trigger_linux.py`` is run in the Service VM, the Service VM will shut down (transitioning
+   to the S5 state), and it sends a poweroff request to shut down the User VMs.

   .. note:: The S5 state is not automatically triggered by a Service VM shutdown; this needs
-      to be run before powering off the Service VM.
+      to run ``s5_trigger_linux.py`` in the Service VM.

How to Test
***********
-As described in :ref:`vuart_config`, two vUARTs are defined in
+As described in :ref:`vuart_config`, two vUARTs are defined for the User VM in
pre-defined ACRN scenarios: vUART0/ttyS0 for the console and
vUART1/ttyS1 for S5-related communication (as shown in :ref:`s5-architecture`).
@@ -204,49 +271,45 @@ How to Test

#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the User VMs.

-   .. note:: RT-Linux's UUID must use ``495ae2e5-2603-4d64-af76-d4bc5a8ec0e5``. Also, the
-      shared EFI image is required for launching the RT-Linux VM.

   .. note:: Use the ``systemctl status life_mngr.service`` command to ensure the service is working on the LaaG or RT-Linux:

      .. code-block:: console

         * life_mngr.service - ACRN lifemngr daemon
-           Loaded: loaded (/usr/lib/systemd/system/life_mngr.service; enabled; vendor p>
-           Active: active (running) since Tue 2019-09-10 07:15:06 UTC; 1min 11s ago
-           Main PID: 840 (life_mngr)
+           Loaded: loaded (/lib/systemd/system/life_mngr.service; enabled; vendor preset: enabled)
+           Active: active (running) since Thu 2021-11-11 12:43:53 CST; 36s ago
+           Main PID: 197397 (life_mngr)

   .. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
      if you executed ``bcdedit /set debug on`` when you set up WaaG, because it occupies ``COM2``.

-#. Use the ``acrnctl stop`` command on the Service VM to trigger S5 to the User VMs:
+#. Use ``user_vm_shutdown.py`` in the Service VM to shut down the User VMs:

-   .. code-block:: console
+   .. code-block:: none

-      # acrnctl stop vm1
+      sudo python3 ~/user_vm_shutdown.py <User VM name>

+   .. note:: The User VM name is configured in the :file:`life_mngr.conf` of the User VM.

#. Use the ``acrnctl list`` command to check the User VM status.

-   .. code-block:: console
+   .. code-block:: none

-      # acrnctl list
-      vm1 stopped
+      sudo acrnctl list
+      <User VM name> stopped
System Shutdown
***************

-Using a coordinating script, ``misc/life_mngr/s5_trigger.sh``, in conjunction with
-the lifecycle manager in each VM, graceful system shutdown can be performed.
+Using a coordinating script, ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
+in conjunction with the Lifecycle Manager in each VM, graceful system shutdown
+can be performed.

-.. note:: Please install ``s5_trigger.sh`` manually to root's home directory.
-
-   .. code-block:: none
-
-      $ sudo install -p -m 0755 -t ~root misc/life_mngr/s5_trigger.sh

-In the ``hybrid_rt`` scenario, the script can send a shutdown command via ``ttyS1``
-in the Service VM, which is connected to ``ttyS1`` in the pre-launched VM. The
-lifecycle manager in the pre-launched VM receives the shutdown command, sends an
+In the ``hybrid_rt`` scenario, an operator can use the script to send a system shutdown
+request via ``/var/lib/life_mngr/monitor.sock`` to the User VM that is configured to be
+allowed to trigger system S5. This system shutdown request is forwarded to the Service VM,
+and the Service VM sends a poweroff request to each User VM (Pre-launched or Post-launched)
+through vUART. The Lifecycle Manager in the User VM receives the poweroff request, sends an
ack message, and proceeds to shut itself down accordingly.

.. figure:: images/system_shutdown.png
@@ -254,22 +317,23 @@ ack message, and proceeds to shut itself down accordingly.

   Graceful system shutdown flow

-#. The HMI Windows Guest uses the lifecycle manager to send a shutdown request to
-   the Service VM
-#. The lifecycle manager in the Service VM responds with an ack message and
-   executes ``s5_trigger.sh``
-#. After receiving the ack message, the lifecycle manager in the HMI Windows Guest
-   shuts down the guest
-#. The ``s5_trigger.sh`` script in the Service VM shuts down the Linux Guest by
-   using ``acrnctl`` to send a shutdown request
-#. After receiving the shutdown request, the lifecycle manager in the Linux Guest
-   responds with an ack message and shuts down the guest
-#. The ``s5_trigger.sh`` script in the Service VM shuts down the Pre-launched RTVM
-   by sending a shutdown request to its ``ttyS1``
-#. After receiving the shutdown request, the lifecycle manager in the Pre-launched
-   RTVM responds with an ack message
-#. The lifecycle manager in the Pre-launched RTVM shuts down the guest using
-   standard PM registers
-#. After receiving the ack message, the ``s5_trigger.sh`` script in the Service VM
-   shuts down the Service VM
-#. The hypervisor shuts down the system after all of its guests have shut down
+#. The HMI in the Windows VM uses ``s5_trigger_win.py`` to send a
+   system shutdown request to the Lifecycle Manager, which
+   forwards this request to the Lifecycle Manager in the Service VM.
+#. The Lifecycle Manager in the Service VM responds with an ack message and
+   sends a ``poweroff_cmd`` request to the Windows VM.
+#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the HMI
+   Windows VM responds with an ack message, then shuts down the VM.
+#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to the
+   Linux User VM.
+#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the
+   Linux User VM responds with an ack message, then shuts down the VM.
+#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to the
+   Pre-launched RTVM.
+#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in
+   the Pre-launched RTVM responds with an ack message.
+#. The Lifecycle Manager in the Pre-launched RTVM shuts down the VM using
+   ACPI PM registers.
+#. After receiving the ack messages from all User VMs, the Lifecycle Manager
+   in the Service VM shuts down the VM.
+#. The hypervisor shuts down the system after all VMs have shut down.
@@ -2,5 +2,5 @@ digraph G {
   node [shape=plaintext fontsize=12];
   rankdir=LR;
   bgcolor="transparent";
-  "ACRN-DM" -> "Service VM:/dev/ttyS1" -> "ACRN hypervisor" -> "User VM:/dev/ttyS1" [arrowsize=.5];
+  "Service VM:/dev/ttyS8" -> "ACRN hypervisor" -> "User VM:/dev/ttyS1" [arrowsize=.5];
}
Binary file not shown.
Before Size: 29 KiB | After Size: 20 KiB