diff --git a/doc/tutorials/enable_s5.rst b/doc/tutorials/enable_s5.rst
index 1d9418d41..766b7d28e 100644
--- a/doc/tutorials/enable_s5.rst
+++ b/doc/tutorials/enable_s5.rst
@@ -21,71 +21,77 @@ ACRN S5 function is required.
 S5 Architecture
 ***************
 
-ACRN provides a mechanism to trigger the S5 state transition throughout the system.
-It uses a vUART channel to communicate between the Service and User VMs.
-The diagram below shows the overall architecture:
+ACRN provides a mechanism to trigger the S5 state transition throughout the
+system. It uses a vUART channel to communicate between the Service VM and User
+VMs. The diagram below shows the overall architecture:
 
 .. figure:: images/s5_overall_architecture.png
    :align: center
    :name: s5-architecture
 
-   S5 overall architecture
+   S5 Overall Architecture
 
--  **vUART channel**:
+**vUART channel**:
 
-   The User VM's serial port device (``/dev/ttySn``) is emulated in the
-   Hypervisor. The channel from the Service VM to the User VM:
+The User VM's serial port device (``/dev/ttySn``) is emulated in the
+hypervisor. The channel from the Service VM to the User VM:
 
-   .. graphviz:: images/s5-scenario-2.dot
-      :name: s5-scenario-2
+.. graphviz:: images/s5-scenario-2.dot
+   :name: s5-scenario-2
 
 Lifecycle Manager Overview
 ==========================
 
-As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in Linux,
-``life_mngr_win.exe`` in Windows) runs in the Service VM and User VMs to implement S5.
-Operator or user can use ``s5_trigger_linux.py`` or ``s5_trigger_win.py`` script to initialize
-a system S5 in the Service VM or User VMs. The Lifecycle Manager in the Service VM and
-User VMs wait for system S5 request on the local socket port.
+As part of the S5 reference design, a Lifecycle Manager daemon (``life_mngr`` in
+Linux, ``life_mngr_win.exe`` in Windows) runs in the Service VM and User VMs to
+implement S5. You can use the ``s5_trigger_linux.py`` or
+``s5_trigger_win.py`` script to initiate a system S5 in the Service VM or User
+VMs. The Lifecycle Manager in each VM waits for the system S5 request on its
+local socket port.
 
 Initiate a System S5 from within a User VM (e.g., HMI)
 ======================================================
 
-As shown in the :numref:`s5-architecture`, a request to Service VM initiates the shutdown flow.
-This could come from a User VM, most likely the HMI (running Windows or Linux).
-When a human operator initiates the flow through running ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
-the Lifecycle Manager (``life_mngr``) running in that User VM sends the system S5 request via
-the vUART to the Lifecycle Manager in the Service VM which in turn acknowledges the request.
-The Lifecycle Manager in Service VM sends ``poweroff_cmd`` request to User VMs, when the Lifecycle Manager
-in User VMs receives ``poweroff_cmd`` request, it sends ``ack_poweroff`` to the Service VM;
-then it shuts down the User VMs. If the User VMs is not ready to shut down, it can ignore the
-``poweroff_cmd`` request.
+As shown in :numref:`s5-architecture`, a request to the Service VM initiates the
+shutdown flow. This request could come from a User VM, most likely the human
+machine interface (HMI) running Windows or Linux. When a human operator
+initiates the flow by running ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
+the Lifecycle Manager (``life_mngr``) running in that User VM sends the system
+S5 request via the vUART to the Lifecycle Manager in the Service VM, which in
+turn acknowledges the request. The Lifecycle Manager in the Service VM sends a
+``poweroff_cmd`` request to each User VM. When the Lifecycle Manager in a User
+VM receives the ``poweroff_cmd`` request, it sends ``ack_poweroff`` to the
+Service VM; then it shuts down the User VM. If a User VM is not ready to shut
+down, it can ignore the ``poweroff_cmd`` request.
 
-.. note:: The User VM need to be authorized to be able to request a system S5, this is achieved
-   by configuring ``ALLOW_TRIGGER_S5`` in the Lifecycle Manager service configuration :file:`/etc/life_mngr.conf`
-   in the Service VM. There is only one User VM in the system can be configured to request a shutdown.
-   If this configuration is wrong, the system S5 request from User VM is rejected by
-   Lifecycle Manager of Service VM, the following error message is recorded in Lifecycle Manager
-   log :file:`/var/log/life_mngr.log` of Service VM:
-   ``The user VM is not allowed to trigger system shutdown``
+.. note:: The User VM must be authorized to request a system S5. This is
+   achieved by configuring ``ALLOW_TRIGGER_S5`` in the Lifecycle Manager
+   service configuration :file:`/etc/life_mngr.conf` in the Service VM.
+   Only one User VM in the system can be configured to request a shutdown. If
+   this configuration is wrong, the Lifecycle Manager of the Service VM rejects
+   the system S5 request from the User VM. The following error message is
+   recorded in the Lifecycle Manager log :file:`/var/log/life_mngr.log` of the
+   Service VM: ``The user VM is not allowed to trigger system shutdown``.
 
 Initiate a System S5 within the Service VM
 ==========================================
 
-On the Service VM side, it uses the ``s5_trigger_linux.py`` to trigger the system S5 flow. Then,
-the Lifecycle Manager in service VM sends a ``poweroff_cmd`` request to the lifecycle manager in each
-User VM through the vUART channel. If the User VM receives this request, it will send an ``ack_poweroff``
-to the lifecycle manager in Service VM. It is the Service VM's responsibility to check whether the
-User VMs shut down successfully or not, and to decide when to shut the Service VM itself down.
+On the Service VM side, run the ``s5_trigger_linux.py`` script to trigger the
+system S5 flow. Then, the Lifecycle Manager in the Service VM sends a
+``poweroff_cmd`` request to the Lifecycle Manager in each User VM through the
+vUART channel. When a User VM receives this request, it sends an
+``ack_poweroff`` to the Lifecycle Manager in the Service VM. The Service VM
+checks whether the User VMs shut down successfully or not, and decides when to
+shut itself down.
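+
+For example, assuming the script has been copied to the operator's home
+directory (as described in the setup steps below), the flow can be triggered
+with:
+
+.. code-block:: none
+
+   sudo python3 ~/s5_trigger_linux.py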
 
-.. note:: Service VM is always allowed to trigger system S5 by default.
+.. note:: The Service VM is always allowed to trigger system S5 by default.
 
 .. _enable_s5:
 
 Enable S5
 *********
 
-1. Configure communication vUART for Service VM and User VMs:
+1. Configure communication vUARTs for the Service VM and User VMs:
 
    Add these lines in the hypervisor scenario XML file manually:
 
@@ -141,10 +147,10 @@ Enable S5
        /* VM3 */
        ...
 
-   .. note:: These vUART is emulated in the hypervisor; expose the node as ``/dev/ttySn``.
-      For the User VM with the minimal VM ID, the communication vUART id should be 1.
-      For other User VMs, the vUART (id is 1) shoulbe be configured as invalid, the communication
-      vUART id should be 2 or others.
+   .. note:: These vUARTs are emulated in the hypervisor; expose the node as
+      ``/dev/ttySn``. For the User VM with the lowest VM ID, the communication
+      vUART id should be 1. For other User VMs, the vUART (id is 1) should be
+      configured as invalid; the communication vUART id should be 2 or higher.
 
 2. Build the Lifecycle Manager daemon, ``life_mngr``:
 
@@ -153,10 +159,13 @@ Enable S5
       cd acrn-hypervisor
       make life_mngr
 
-#. For Service VM, LaaG VM and RT-Linux VM, run the Lifecycle Manager daemon:
+#. For the Service VM, LaaG VM, and RT-Linux VM, run the Lifecycle Manager
+   daemon:
 
-   a. Copy ``life_mngr.conf``, ``s5_trigger_linux.py``, ``user_vm_shutdown.py``, ``life_mngr``,
-      and ``life_mngr.service`` into the Service VM and User VMs.
+   a. Copy ``life_mngr.conf``, ``s5_trigger_linux.py``, ``life_mngr``,
+      and ``life_mngr.service`` into the Service VM and User VMs. These commands
+      assume you have a network connection between the development computer and
+      the target. You can also use a USB stick to transfer files.
 
       .. code-block:: none
 
@@ -165,9 +174,11 @@ Enable S5
          scp build/misc/services/life_mngr.conf root@:/etc/life_mngr/
         scp build/misc/services/life_mngr.service root@:/lib/systemd/system/
 
-         scp misc/services/life_mngr/user_vm_shutdown.py root@:~/
+   #. Copy ``user_vm_shutdown.py`` into the Service VM.
 
-      .. note:: :file:`user_vm_shutdown.py` is only needed to be copied into Service VM.
+      .. code-block:: none
+
+         scp misc/services/life_mngr/user_vm_shutdown.py root@:~/
 
    #. Edit options in ``/etc/life_mngr/life_mngr.conf`` in the Service VM.
 
@@ -178,9 +189,11 @@ Enable S5
          DEV_NAME=tty:/dev/ttyS8,/dev/ttyS9,/dev/ttyS10,/dev/ttyS11,/dev/ttyS12,/dev/ttyS13,/dev/ttyS14
         ALLOW_TRIGGER_S5=/dev/ttySn
 
-      .. note:: The mapping between User VM ID and communication serial device name (``/dev/ttySn``)
-         in the :file:`/etc/serial.conf`. If ``/dev/ttySn`` is configured in the ``ALLOW_TRIGGER_S5``,
-         this means system shutdown is allowed to be triggered in the corresponding User VM.
+      .. note:: The mapping between the User VM ID and the communication
+         serial device name (``/dev/ttySn``) is defined in
+         :file:`/etc/serial.conf`. If ``/dev/ttySn`` is listed in
+         ``ALLOW_TRIGGER_S5``, the corresponding User VM is allowed to
+         trigger system shutdown.
 
    #. Edit options in ``/etc/life_mngr/life_mngr.conf`` in the User VM.
 
@@ -191,9 +204,10 @@ Enable S5
          DEV_NAME=tty:/dev/ttyS1
         #ALLOW_TRIGGER_S5=/dev/ttySn
 
-      .. note:: The User VM name in this configuration file should be consistent with the VM name in the
-         launch script for the Post-launched User VM or the VM name which is specified in the hypervisor
-         scenario XML for the Pre-launched User VM.
+      .. note:: The User VM name in this configuration file should be
+         consistent with the VM name in the launch script for the
+         Post-launched User VM, or with the VM name specified in the
+         hypervisor scenario XML for the Pre-launched User VM.
 
    #. Use the following commands to enable ``life_mngr.service`` and restart
      the Service VM and User VMs.
 
@@ -203,12 +217,12 @@ Enable S5
         sudo systemctl enable life_mngr.service
        sudo reboot
 
-   .. note:: For the Pre-launched User VM, need restart Lifecycle Manager service manually
-      after Lifecycle Manager in Service VM starts.
+   .. note:: For the Pre-launched User VM, restart the Lifecycle Manager
+      service manually after the Lifecycle Manager in the Service VM starts.
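+
+      For example, a typical way to restart it from within the Pre-launched
+      User VM:
+
+      .. code-block:: none
+
+         sudo systemctl restart life_mngr.service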
 
-#. For the WaaG VM, run the lifecycle manager daemon:
+#. For the WaaG VM, run the Lifecycle Manager daemon:
 
-   a) Build the ``life_mngr_win.exe`` application and ``s5_trigger_win.py``::
+   a. Build the ``life_mngr_win.exe`` application and ``s5_trigger_win.py``::
 
        cd acrn-hypervisor
        make life_mngr
 
@@ -216,87 +230,101 @@ Enable S5
      .. note:: If there is no ``x86_64-w64-mingw32-gcc`` compiler, you can run
         ``sudo apt install gcc-mingw-w64-x86-64`` on Ubuntu to install it.
 
-   #) Copy ``s5_trigger_win.py`` into the WaaG VM.
+   #. Copy ``s5_trigger_win.py`` into the WaaG VM.
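+
+      You will later run this script in WaaG to request a system S5. A
+      hypothetical invocation from a Windows command prompt, assuming Python
+      (installed in the next step) is on the ``PATH`` and the script is in
+      the current directory:
+
+      .. code-block:: none
+
+         python s5_trigger_win.py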
 
-   #) Set up a Windows environment:
+   #. Set up a Windows environment:
 
-      1) Download the Python3 from ``_, install
+      1. Download Python 3 from ``_ and install
          "Python 3.8.10" in WaaG.
 
-      #) If Lifecycle Manager for WaaG will be built in Windows,
-         download the Visual Studio 2019 tool from ``_,
-         and choose the two options in the below screenshots to install "Microsoft Visual C++ Redistributable
-         for Visual Studio 2015, 2017 and 2019 (x86 or X64)" in WaaG:
+      #. If the Lifecycle Manager for WaaG will be built in Windows,
+         download the Visual Studio 2019 tool from
+         ``_, and choose the two
+         options shown in the screenshots below to install "Microsoft Visual
+         C++ Redistributable for Visual Studio 2015, 2017 and 2019 (x86 or
+         X64)" in WaaG:
 
         .. figure:: images/Microsoft-Visual-C-install-option-1.png
 
        .. figure:: images/Microsoft-Visual-C-install-option-2.png
 
-         .. note:: If Lifecycle Manager for WaaG is built in Linux, Visual Studio 2019 tool is not needed for WaaG.
+         .. note:: If the Lifecycle Manager for WaaG is built in Linux, the
+            Visual Studio 2019 tool is not needed for WaaG.
 
-      #) In WaaG, use the :kbd:`Windows + R` shortcut key, input
-         ``shell:startup``, click :kbd:`OK`
-         and then copy the ``life_mngr_win.exe`` application into this directory.
+      #. In WaaG, use the :kbd:`Windows` + :kbd:`R` shortcut key, input
+         ``shell:startup``, click :kbd:`OK`, and then copy the
+         ``life_mngr_win.exe`` application into this directory.
 
         .. figure:: images/run-shell-startup.png
 
        .. figure:: images/launch-startup.png
 
-      #) Restart the WaaG VM. The COM2 window will automatically open after reboot.
+      #. Restart the WaaG VM. The COM2 window will automatically open after
+         reboot.
 
-        .. figure:: images/open-com-success.png
+         .. figure:: images/open-com-success.png
 
-#. If ``s5_trigger_linux.py`` is run in the Service VM, the Service VM will shut down (transitioning to the S5 state),
-   it sends poweroff request to shut down the User VMs.
+#. If ``s5_trigger_linux.py`` is run in the Service VM, the Service VM sends a
+   poweroff request to shut down the User VMs and then shuts itself down
+   (transitioning the system to the S5 state).
 
-   .. note:: S5 state is not automatically triggered by a Service VM shutdown; this needs
-      to run ``s5_trigger_linux.py`` in the Service VM.
+   .. note:: The S5 state is not automatically triggered by a Service VM
+      shutdown; you need to run ``s5_trigger_linux.py`` in the Service VM.
 
 How to Test
 ***********
 
-   As described in :ref:`vuart_config`, two vUARTs are defined for User VM in
-   pre-defined ACRN scenarios: vUART0/ttyS0 for the console and
-   vUART1/ttyS1 for S5-related communication (as shown in :ref:`s5-architecture`).
-   For Yocto Project (Poky) or Ubuntu rootfs, the ``serial-getty``
-   service for ``ttyS1`` conflicts with the S5-related communication
-   use of ``vUART1``. We can eliminate the conflict by preventing
-   that service from being started
-   either automatically or manually, by masking the service
-   using this command
+As described in :ref:`vuart_config`, two vUARTs are defined for a User VM in
+pre-defined ACRN scenarios: ``vUART0/ttyS0`` for the console and
+``vUART1/ttyS1`` for S5-related communication (as shown in
+:ref:`s5-architecture`).
 
-   ::
+For Yocto Project (Poky) or Ubuntu rootfs, the ``serial-getty`` service for
+``ttyS1`` conflicts with the S5-related communication use of ``vUART1``. We can
+eliminate the conflict by preventing that service from being started, either
+automatically or manually, by masking it with this command:
 
-      systemctl mask serial-getty@ttyS1.service
+::
 
-#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the User VMs.
+   systemctl mask serial-getty@ttyS1.service
 
-   .. note:: Use the ``systemctl status life_mngr.service`` command to ensure the service is working on the LaaG or RT-Linux:
+#. Refer to the :ref:`enable_s5` section to set up the S5 environment for the
+   User VMs.
+
+   .. note:: Use the ``systemctl status life_mngr.service`` command to ensure
+      the service is working on the LaaG or RT-Linux:
 
       .. code-block:: console
 
-         * life_mngr.service - ACRN lifemngr daemon
-           Loaded: loaded (/lib/systemd/system/life_mngr.service; enabled; vendor preset: enabled)
-           Active: active (running) since Thu 2021-11-11 12:43:53 CST; 36s ago
-           Main PID: 197397 (life_mngr)
+         * life_mngr.service - ACRN lifemngr daemon
+           Loaded: loaded (/lib/systemd/system/life_mngr.service; enabled; vendor preset: enabled)
+           Active: active (running) since Thu 2021-11-11 12:43:53 CST; 36s ago
+           Main PID: 197397 (life_mngr)
 
-   .. note:: For WaaG, we need to close ``windbg`` by using the ``bcdedit /set debug off`` command
-      IF you executed the ``bcdedit /set debug on`` when you set up the WaaG, because it occupies the ``COM2``.
+   .. note:: For WaaG, close ``windbg`` by using the ``bcdedit /set debug off``
+      command if you executed ``bcdedit /set debug on`` when you set up WaaG,
+      because ``windbg`` occupies ``COM2``.
 
-#. Use the ``user_vm_shutdown.py`` in the Service VM to shut down the User VMs:
+#. Run ``user_vm_shutdown.py`` in the Service VM to shut down the User VMs:
 
    .. code-block:: none
 
      sudo python3 ~/user_vm_shutdown.py
 
-   .. note:: The User VM name is configured in the :file:`life_mngr.conf` of User VM.
-      For the WaaG VM, the User VM name is "windows".
+   .. note:: The User VM name is configured in the :file:`life_mngr.conf` of
+      the User VM. For the WaaG VM, the User VM name is "windows".
 
-#. Use the ``acrnctl list`` command to check the User VM status.
+#. Run the ``acrnctl list`` command to check the User VM status.
 
    .. code-block:: none
 
      sudo acrnctl list
+
+   Output example:
+
+   .. code-block:: console
+
+       stopped
 
 System Shutdown
 
@@ -306,40 +334,41 @@
 Using a coordinating script, ``s5_trigger_linux.py`` or ``s5_trigger_win.py``,
 in conjunction with the Lifecycle Manager in each VM, graceful system shutdown
 can be performed.
 
-In the ``hybrid_rt`` scenario, operator can use the script to send a system shutdown
-request via ``/var/lib/life_mngr/monitor.sock`` to User VM which is configured to be allowed to
-trigger system S5, this system shutdown request is forwarded to the Service VM, the
-Service VM sends poweroff request to each User VMs (Pre-launched VM or Post-launched VM)
-through vUART. The Lifecycle Manager in the User VM receives the poweroff request, sends an
-ack message, and proceeds to shut itself down accordingly.
+In the ``hybrid_rt`` scenario, an operator can use the script to send a system
+shutdown request via ``/var/lib/life_mngr/monitor.sock`` to a User VM that is
+configured to be allowed to trigger system S5. This system shutdown request is
+forwarded to the Service VM. The Service VM sends a poweroff request to each
+User VM (Pre-launched VM or Post-launched VM) through vUART. The Lifecycle
+Manager in the User VM receives the poweroff request, sends an ack message, and
+proceeds to shut itself down accordingly.
 
 .. figure:: images/system_shutdown.png
    :align: center
 
-   Graceful system shutdown flow
+   Graceful System Shutdown Flow
 
-#. The HMI in the Windows VM uses ``s5_trigger_win.py`` to send
-   system shutdown request to the Lifecycle Manager, Lifecycle Manager
-   forwards this request to Lifecycle Manager in the Service VM.
+#. The HMI in the Windows User VM uses ``s5_trigger_win.py`` to send a
+   system shutdown request to the Lifecycle Manager. The Lifecycle Manager
+   forwards this request to the Lifecycle Manager in the Service VM.
 #. The Lifecycle Manager in the Service VM responds with an ack message and
-   sends ``poweroff_cmd`` request to Windows VM.
-#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the HMI
-   Windows VM responds with an ack message, then shuts down VM.
-#. The Lifecycle Manager in the Service VM sends ``poweroff_cmd`` request to
-   Linux User VM.
+   sends a ``poweroff_cmd`` request to the Windows User VM.
+#. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the
+   Windows User VM responds with an ack message, then shuts down the VM.
+#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to
+   the Linux User VM.
 #. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the
-   Linux User VM responds with an ack message, then shuts down VM.
-#. The Lifecycle Manager in the Service VM sends ``poweroff_cmd`` request to
-   Pre-launched RTVM.
+   Linux User VM responds with an ack message, then shuts down the VM.
+#. The Lifecycle Manager in the Service VM sends a ``poweroff_cmd`` request to
+   the Pre-launched RTVM.
 #. After receiving the ``poweroff_cmd`` request, the Lifecycle Manager in the
    Pre-launched RTVM responds with an ack message.
 #. The Lifecycle Manager in the Pre-launched RTVM shuts down the VM using
    ACPI PM registers.
-#. After receiving the ack message from all user VMs, the Lifecycle Manager
-   in the Service VM shuts down VM.
+#. After receiving the ack message from all User VMs, the Lifecycle Manager
+   in the Service VM shuts down the Service VM.
 #. The hypervisor shuts down the system after all VMs have shut down.
 
-.. note:: If one or more virtual functions (VFs) of a SR-IOV device, e.g. GPU on Alder
-   Lake platform, are assigned to User VMs, extra steps should be taken by user to
-   disable all VFs before Service VM shuts down. Otherwise, Service VM may fail to
-   shut down due to some enabled VFs.
+.. note:: If one or more virtual functions (VFs) of an SR-IOV device, e.g., the
+   GPU on an Alder Lake platform, are assigned to User VMs, take extra steps to
+   disable all VFs before the Service VM shuts down. Otherwise, the Service VM
+   may fail to shut down due to some enabled VFs.
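+
+   For example, a hypothetical sketch that releases the VFs of one physical
+   function through the standard Linux sysfs interface (replace the PCI
+   address with the actual address of the physical function on your platform):
+
+   .. code-block:: none
+
+      # Disable all VFs of the PF at 0000:00:02.0 before shutting down
+      echo 0 | sudo tee /sys/bus/pci/devices/0000:00:02.0/sriov_numvfs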
\ No newline at end of file diff --git a/doc/tutorials/images/system_shutdown.png b/doc/tutorials/images/system_shutdown.png index 58f42eed3..e41300ef2 100644 Binary files a/doc/tutorials/images/system_shutdown.png and b/doc/tutorials/images/system_shutdown.png differ