From d8ee2f3303ab271771c6eba80b63996e96fd6cf9 Mon Sep 17 00:00:00 2001
From: "David B. Kinder"
Date: Fri, 7 Aug 2020 17:23:43 -0700
Subject: [PATCH] doc: update release_2.1 with new docs

Signed-off-by: David B. Kinder
---
 doc/asa.rst                                   |  17 ++
 doc/develop.rst                               |   1 +
 doc/developer-guides/hld/hld-devicemodel.rst  |   8 +-
 doc/developer-guides/hld/hld-hypervisor.rst   |   1 +
 doc/developer-guides/hld/hv-console.rst       |   2 +-
 doc/developer-guides/hld/hv-rdt.rst           | 118 ++++--------
 doc/developer-guides/hld/ivshmem-hld.rst      |  10 +-
 .../hld/mmio-dev-passthrough.rst              |  40 ++++
 doc/developer-guides/hld/virtio-net.rst       |  42 ++++-
 doc/developer-guides/hld/vuart-virt-hld.rst   |   2 +-
 doc/getting-started/building-from-source.rst  |  52 ++++--
 doc/introduction/index.rst                    |   2 +-
 doc/release_notes/release_notes_2.1.rst       | 111 +++++++++++
 doc/scripts/filter-known-issues.py            |   2 +
 doc/scripts/genrest.py                        |   2 +
 doc/tutorials/acrn_configuration_tool.rst     |  26 +--
 doc/tutorials/debug.rst                       |   6 +-
 doc/tutorials/docbuild.rst                    |   5 +-
 doc/tutorials/images/pre_launched_rt.png      | Bin 0 -> 19106 bytes
 doc/tutorials/pre-launched-rt.rst             | 118 ++++++++++++
 doc/tutorials/rdt_configuration.rst           | 109 +++++------
 doc/tutorials/rtvm_performance_tips.rst       |   2 +-
 doc/tutorials/using_grub.rst                  |   4 +-
 doc/tutorials/using_hybrid_mode_on_nuc.rst    |  24 +--
 doc/tutorials/using_partition_mode_on_nuc.rst |  59 +++---
 doc/tutorials/vuart_configuration.rst         |   3 +-
 doc/user-guides/acrn-shell.rst                |   4 +-
 doc/user-guides/kernel-parameters.rst         | 173 ------------------
 28 files changed, 537 insertions(+), 406 deletions(-)
 create mode 100644 doc/developer-guides/hld/mmio-dev-passthrough.rst
 create mode 100644 doc/release_notes/release_notes_2.1.rst
 create mode 100644 doc/tutorials/images/pre_launched_rt.png
 create mode 100644 doc/tutorials/pre-launched-rt.rst

diff --git a/doc/asa.rst b/doc/asa.rst
index 45c3ceef6..b285e008c 100644
--- a/doc/asa.rst
+++ b/doc/asa.rst
@@ -3,6 +3,23 @@
 Security Advisory
 #################
 
+Addressed in ACRN v2.1
+************************
+
+We recommend that all developers upgrade to this v2.1 release (or later), which
+addresses the following security issue that was discovered in previous releases:
+
+------
+
+- Missing access control restrictions in the Hypervisor component
+  A malicious entity with root access in the Service VM
+  userspace could abuse the PCIe assign/de-assign hypercalls via crafted
+  ioctls and payloads. This attack can result in a corrupted state and
+  Denial of Service (DoS) at runtime for PCIe devices previously assigned
+  to the Service VM.
+
+  **Affected Release:** v2.0 and v1.6.1.
+
 Addressed in ACRN v1.6.1
 ************************
diff --git a/doc/develop.rst b/doc/develop.rst
index 74bebd7a0..a70651504 100644
--- a/doc/develop.rst
+++ b/doc/develop.rst
@@ -79,6 +79,7 @@ Enable ACRN Features
    tutorials/setup_openstack_libvirt
    tutorials/acrn_on_qemu
    tutorials/using_grub
+   tutorials/pre-launched-rt
 
 Debug
 *****
diff --git a/doc/developer-guides/hld/hld-devicemodel.rst b/doc/developer-guides/hld/hld-devicemodel.rst
index 2545d6f03..5a2e340ae 100644
--- a/doc/developer-guides/hld/hld-devicemodel.rst
+++ b/doc/developer-guides/hld/hld-devicemodel.rst
@@ -56,6 +56,7 @@ options:
         [-l lpc] [-m mem] [-p vcpu:hostcpu] [-r ramdisk_image_path]
         [-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path]
         [--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting]
+        [--acpidev_pt HID] [--mmiodev_pt MMIO_regions]
         [--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string]
         [--ptdev_no_reset] [--debugexit] [--lapic_pt]
@@ -86,6 +87,8 @@
     --intr_monitor: enable interrupt storm monitor
         its params: threshold/s,probe-period(s),delay_time(ms),delay_duration(ms),
     --virtio_poll: enable virtio poll mode with poll interval with ns
+    --acpidev_pt: ACPI device ID; args: HID in ACPI Table
+    --mmiodev_pt: MMIO resources; args: physical MMIO regions
     --vtpm2: Virtual TPM2 args: sock_path=$PATH_OF_SWTPM_SOCKET
     --lapic_pt: enable local apic passthrough
     --rtvm: indicate that the guest is rtvm
@@ -104,6 +107,7 @@ Here's an example showing how to run a VM with:
 - GPU device on PCI 00:02.0
 - Virtio-block device on PCI 00:03.0
 - Virtio-net device on PCI 00:04.0
+- TPM2 MSFT0101
 
 .. code-block:: bash
 
@@ -113,6 +117,7 @@ Here's an example showing how to run a VM with:
    -s 5,virtio-console,@pty:pty_port \
    -s 3,virtio-blk,b,/data/clearlinux/clearlinux.img \
    -s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \
+   --acpidev_pt MSFT0101 \
    --intr_monitor 10000,10,1,100 \
    -B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \
    console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \
@@ -1193,4 +1198,5 @@ Passthrough in Device Model
 ****************************
 
 You may refer to :ref:`hv-device-passthrough` for passthrough realization
-in device model.
+in device model and :ref:`mmio-device-passthrough` for MMIO passthrough
+realization in device model and ACRN hypervisor.
diff --git a/doc/developer-guides/hld/hld-hypervisor.rst b/doc/developer-guides/hld/hld-hypervisor.rst
index 30945a94c..f5e4b01fe 100644
--- a/doc/developer-guides/hld/hld-hypervisor.rst
+++ b/doc/developer-guides/hld/hld-hypervisor.rst
@@ -18,6 +18,7 @@ Hypervisor high-level design
    Virtual Interrupt
    VT-d
    Device Passthrough
+   mmio-dev-passthrough
    hv-partitionmode
    Power Management
    Console, Shell, and vUART
diff --git a/doc/developer-guides/hld/hv-console.rst b/doc/developer-guides/hld/hv-console.rst
index a4dd203d3..c39c7fac8 100644
--- a/doc/developer-guides/hld/hv-console.rst
+++ b/doc/developer-guides/hld/hv-console.rst
@@ -70,7 +70,7 @@ Specifically:
   the hypervisor shell. Inputs to the physical UART will be
   redirected to the vUART starting from the next timer event.
 
-- The vUART is deactivated after a :kbd:`Ctrl + Space` hotkey is received
+- The vUART is deactivated after a :kbd:`Ctrl` + :kbd:`Space` hotkey is received
   from the physical UART. Inputs to the physical UART will be handled by
   the hypervisor shell starting from the next timer event.
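For reference, you can confirm the HID value to pass to the ``--acpidev_pt``
option described above by reading the device's entry in sysfs from the
Service VM. This is a sketch; the ``MSFT0101:00`` node name is an example
and varies by platform:

.. code-block:: none

   $ cat /sys/bus/acpi/devices/MSFT0101:00/hid
   MSFT0101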
diff --git a/doc/developer-guides/hld/hv-rdt.rst b/doc/developer-guides/hld/hv-rdt.rst
index 5850b5e44..73f3051cf 100644
--- a/doc/developer-guides/hld/hv-rdt.rst
+++ b/doc/developer-guides/hld/hv-rdt.rst
@@ -38,58 +38,36 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a
 resource allocator.) The user can check the cache capabilities such as cache
 mask and max supported CLOS as described in :ref:`rdt_detection_capabilities`
 and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a
-CLOS ID, to select a cache mask to take effect. ACRN uses
-VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to
-enforce the settings.
+CLOS ID, to select a cache mask to take effect. These configurations can be
+done in the scenario XML file under the ``FEATURES`` section, as shown in the
+example below. ACRN uses VMCS MSR loads on every VM Entry/VM Exit for
+non-root and root modes to enforce the settings.
 
 .. code-block:: none
-   :emphasize-lines: 3,7,11,15
+   :emphasize-lines: 2,4
 
-   struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
-       {
-           .clos_mask = 0xff,
-           .msr_index = MSR_IA32_L3_MASK_BASE + 0,
-       },
-       {
-           .clos_mask = 0xff,
-           .msr_index = MSR_IA32_L3_MASK_BASE + 1,
-       },
-       {
-           .clos_mask = 0xff,
-           .msr_index = MSR_IA32_L3_MASK_BASE + 2,
-       },
-       {
-           .clos_mask = 0xff,
-           .msr_index = MSR_IA32_L3_MASK_BASE + 3,
-       },
-   };
+   <RDT>
+       <RDT_ENABLED>y</RDT_ENABLED>
+       <CDP_ENABLED>n</CDP_ENABLED>
+       <CLOS_MASK>0xF</CLOS_MASK>
+   </RDT>
+
+Once the cache mask is set for each individual CPU, the respective CLOS ID
+needs to be set in the scenario XML file under the ``VM`` section. To use
+the CDP feature, set ``CDP_ENABLED`` to ``y``.
 
 .. code-block:: none
-   :emphasize-lines: 6
+   :emphasize-lines: 2
 
-   struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
-       {
-           .type = SOS_VM,
-           .name = SOS_VM_CONFIG_NAME,
-           .guest_flags = 0UL,
-           .clos = 0,
-           .memory = {
-               .start_hpa = 0x0UL,
-               .size = CONFIG_SOS_RAM_SIZE,
-           },
-           .os_config = {
-               .name = SOS_VM_CONFIG_OS_NAME,
-           },
-       },
-   };
+   <clos>
+       <vcpu_clos>0</vcpu_clos>
+   </clos>
 
 .. note::
    ACRN takes the lowest common CLOS max value between the supported
-   resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
-   supported by L3 is 16 and L2 is 8, ACRN programs MAX_PLATFORM_CLOS_NUM to
-   8. ACRN recommends consistent capabilities across all RDT
-   resources by using the common subset CLOS. This is done in order to
-   minimize misconfiguration errors.
+   resources as the maximum supported CLOS ID. For example, if the max CLOS
+   supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
+   to 8. ACRN recommends consistent capabilities across all RDT
+   resources by using a common subset CLOS. This is done in order to minimize
+   misconfiguration errors.
 
 
 Objective of MBA
@@ -128,53 +106,31 @@ that corresponds to each CLOS and then setting IA32_PQR_ASSOC MSR with CLOS
 users can check the MBA capabilities such as mba delay values and max
 supported CLOS as described in :ref:`rdt_detection_capabilities` and
 then program the IA32_MBA_MASK_n and IA32_PQR_ASSOC MSR with the CLOS ID.
-ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root
-modes to enforce the settings.
+These configurations can be done in the scenario XML file under the
+``FEATURES`` section, as shown in the example below. ACRN uses VMCS MSR loads
+on every VM Entry/VM Exit for non-root and root modes to enforce the settings.
 
 .. code-block:: none
-   :emphasize-lines: 3,7,11,15
+   :emphasize-lines: 2,5
 
-   struct platform_clos_info platform_mba_clos_array[MAX_PLATFORM_CLOS_NUM] = {
-       {
-           .mba_delay = 0,
-           .msr_index = MSR_IA32_MBA_MASK_BASE + 0,
-       },
-       {
-           .mba_delay = 0,
-           .msr_index = MSR_IA32_MBA_MASK_BASE + 1,
-       },
-       {
-           .mba_delay = 0,
-           .msr_index = MSR_IA32_MBA_MASK_BASE + 2,
-       },
-       {
-           .mba_delay = 0,
-           .msr_index = MSR_IA32_MBA_MASK_BASE + 3,
-       },
-   };
+   <RDT>
+       <RDT_ENABLED>y</RDT_ENABLED>
+       <CDP_ENABLED>n</CDP_ENABLED>
+       <CLOS_MASK></CLOS_MASK>
+       <MBA_DELAY>0</MBA_DELAY>
+   </RDT>
+
+Once the MBA delay is set for each individual CPU, the respective CLOS ID
+needs to be set in the scenario XML file under the ``VM`` section.
 
 .. code-block:: none
-   :emphasize-lines: 6
+   :emphasize-lines: 2
 
-   struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
-       {
-           .type = SOS_VM,
-           .name = SOS_VM_CONFIG_NAME,
-           .guest_flags = 0UL,
-           .clos = 0,
-           .memory = {
-               .start_hpa = 0x0UL,
-               .size = CONFIG_SOS_RAM_SIZE,
-           },
-           .os_config = {
-               .name = SOS_VM_CONFIG_OS_NAME,
-           },
-       },
-   };
+   <clos>
+       <vcpu_clos>0</vcpu_clos>
+   </clos>
 
 .. note::
    ACRN takes the lowest common CLOS max value between the supported
-   resources and sets the MAX_PLATFORM_CLOS_NUM. For example, if max CLOS
+   resources as the maximum supported CLOS ID. For example, if the max CLOS
    supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM
    to 8. ACRN recommends having consistent capabilities across all RDT
    resources by using a common subset CLOS. This is done in order to minimize
   misconfiguration errors.
diff --git a/doc/developer-guides/hld/ivshmem-hld.rst b/doc/developer-guides/hld/ivshmem-hld.rst
index 0ad597f50..9cc42a3d5 100644
--- a/doc/developer-guides/hld/ivshmem-hld.rst
+++ b/doc/developer-guides/hld/ivshmem-hld.rst
@@ -186,7 +186,7 @@ Inter-VM Communication Security hardening (BKMs)
 ************************************************
 
 As previously highlighted, ACRN 2.0 provides the capability to create shared
-memory regions between Post-Launch user VMs known as “Inter-VM Communication”.
+memory regions between Post-Launch user VMs known as "Inter-VM Communication".
 This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the
 shared regions (in Service VM's memory for this release). This feature adopts
 a community-approved design for shared memory between VMs, following same
@@ -194,7 +194,7 @@ specification for KVM/QEMU (`Link
 
      - ``/sys/class/uio/uioX/device/resource2`` --> shared memory base address
     - ``/sys/class/uio/uioX/device/config`` --> shared memory Size.
 
    - For Linux-based User VMs, we recommend using the standard ``UIO`` and
     ``UIO_PCI_GENERIC`` drivers through the device node (for example, ``/dev/uioX``).
   - Reference: `AppArmor `_, `SELinux `_, `UIO driver-API `_
 
 3. **Crypto Support and Secure Applied Crypto**
 
-   - According to the application’s threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms.Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for security engine (for example CSME services.
+   - According to the application's threat model and the defined assets that need to be shared securely, define the requirements for crypto algorithms. Those algorithms should enable operations such as authenticated encryption and decryption, secure key exchange, true random number generation, and seed extraction. In addition, consider the landscape of your attack surface and define the need for a security engine (for example, CSME services).
    - Don't implement your own crypto functions. Use available compliant crypto libraries as applicable, such as (`Intel IPP `_ or `TinyCrypt `_).
    - Utilize the platform/kernel infrastructure and services (e.g., :ref:`hld-security`, `Kernel Crypto backend/APIs `_, `keyring subsystem `_, etc.).
   - Implement necessary flows for key lifecycle management including wrapping, revocation, and migration, depending on the crypto key type used and if there are requirements for key persistence across system and power management events.
diff --git a/doc/developer-guides/hld/mmio-dev-passthrough.rst b/doc/developer-guides/hld/mmio-dev-passthrough.rst
new file mode 100644
index 000000000..9a18de1cc
--- /dev/null
+++ b/doc/developer-guides/hld/mmio-dev-passthrough.rst
@@ -0,0 +1,40 @@
+.. _mmio-device-passthrough:
+
+MMIO Device Passthrough
+########################
+
+The ACRN Hypervisor supports both PCI and MMIO device passthrough.
+However, there are some constraints on, and hypervisor assumptions about,
+MMIO devices: there can be no DMA access to the MMIO device, and the MMIO
+device may not use IRQs.
+
+Here is how ACRN supports MMIO device passthrough:
+
+* For a pre-launched VM, the VM configuration tells the ACRN hypervisor
+  the addresses of the physical MMIO device's regions and where they are
+  mapped to in the pre-launched VM. The hypervisor then removes these
+  MMIO regions from the Service VM and fills the vACPI table for this MMIO
+  device based on the device's physical ACPI table.
+
+* For a post-launched VM, the same actions are done as in a
+  pre-launched VM, plus the acrn-dm command line specifies which MMIO
+  device to pass through to the post-launched VM.
+
+  If the MMIO device has ACPI tables, use ``--acpidev_pt HID``;
+  if not, use ``--mmiodev_pt MMIO_regions``.
+
+.. note::
+   Currently, the vTPM and PT TPM in the ACRN-DM have the same HID, so we
+   can't support them both at the same time. The VM will fail to boot if
+   both are used.
+
+These items remain to be implemented:
+
+* Save the MMIO regions in a field of the VM structure in order to
+  release the resources when the post-launched VM shuts down abnormally.
+* Allocate the guest MMIO regions for the MMIO device in a guest-reserved
+  MMIO region instead of being hard-coded. With this, we could add more
+  passthrough MMIO devices.
+* De-assign the MMIO device from the Service VM first before passing
+  it through to the post-launched VM, instead of only removing the MMIO
+  regions from the Service VM.
diff --git a/doc/developer-guides/hld/virtio-net.rst b/doc/developer-guides/hld/virtio-net.rst
index 8668bcac3..72dea16f3 100644
--- a/doc/developer-guides/hld/virtio-net.rst
+++ b/doc/developer-guides/hld/virtio-net.rst
@@ -70,8 +70,8 @@ ACRN Device Model and virtio-net Backend Driver:
   the virtio-net backend driver to process the request. The backend driver
   receives the data in a shared virtqueue and sends it to the TAP device.
 
-Bridge and Tap Device:
-  Bridge and Tap are standard virtual network infrastructures. They play
+Bridge and TAP Device:
+  Bridge and TAP are standard virtual network infrastructures. They play
   an important role in communication among the Service VM, the User VM,
   and the outside world.
 
@@ -108,7 +108,7 @@ Initialization in Device Model
 - Present frontend for a virtual PCI based NIC
 - Setup control plane callbacks
 - Setup data plane callbacks, including TX, RX
-- Setup tap backend
+- Setup TAP backend
 
 Initialization in virtio-net Frontend Driver
 ============================================
@@ -365,7 +365,7 @@ cases.)
 
 .. code-block:: c
 
    vring_interrupt --> // virtio-net frontend driver interrupt handler
-     skb_recv_done --> //registed by virtnet_probe-->init_vqs-->virtnet_find_vqs
+     skb_recv_done --> // registered by virtnet_probe-->init_vqs-->virtnet_find_vqs
       virtqueue_napi_schedule -->
        __napi_schedule -->
         virtnet_poll -->
@@ -406,13 +406,13 @@ cases.)
          sk->sk_data_ready --> // application will get notified
 
-How to Use
-==========
+How to Use TAP Interface
+========================
 
 The network infrastructure shown in :numref:`net-virt-infra` needs to be
 prepared in the Service VM before we start. We need to create a bridge and at
-least one tap device (two tap devices are needed to create a dual
-virtual NIC) and attach a physical NIC and tap device to the bridge.
+least one TAP device (two TAP devices are needed to create a dual
+virtual NIC) and attach a physical NIC and TAP device to the bridge.
 
 .. figure:: images/network-virt-sos-infrastruct.png
    :align: center
 
@@ -509,6 +509,32 @@ is the virtual NIC created by acrn-dm:
           collisions:0 txqueuelen:1000
           RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
 
+How to Use MacVTap Interface
+============================
+
+In addition to the TAP interface, ACRN also supports the MacVTap interface.
+MacVTap replaces the combination of the TAP and bridge drivers with
+a single module based on the MacVLan driver. With MacVTap, each
+virtual network interface is assigned its own MAC and IP address
+and is directly attached to the physical interface of the host machine
+to improve throughput and latency.
+
+Create a MacVTap interface in the Service VM as shown here:
+
+.. code-block:: none
+
+   sudo ip link add link eth0 name macvtap0 type macvtap
+
+where ``eth0`` is the name of the physical network interface, and
+``macvtap0`` is the name of the MacVTap interface being created. (Make
+sure the MacVTap interface name includes the keyword ``tap``.)
+
+Once the MacVTap interface is created, the User VM can be launched by adding
+a PCI slot to the device model acrn-dm as shown below.
+
+.. code-block:: none
+
+   -s 4,virtio-net,<macvtap_name>,[mac=<XX:XX:XX:XX:XX:XX>]
+
 Performance Estimation
 ======================
diff --git a/doc/developer-guides/hld/vuart-virt-hld.rst b/doc/developer-guides/hld/vuart-virt-hld.rst
index a8b545ea7..faacb92b9 100644
--- a/doc/developer-guides/hld/vuart-virt-hld.rst
+++ b/doc/developer-guides/hld/vuart-virt-hld.rst
@@ -96,7 +96,7 @@ Usage
 
 - For console vUART
 
   To enable the console port for a VM, change the port_base and IRQ in
-  ``acrn-hypervisor/hypervisor/scenarios/<scenario>/vm_configurations.c``.
+  ``misc/vm_configs/scenarios/<scenario>/vm_configurations.c``.
   If the IRQ number has been used in your system (``cat /proc/interrupts``),
   you can choose another IRQ number. Set ``.irq = 0`` and the vUART will
   work in polling mode.
diff --git a/doc/getting-started/building-from-source.rst b/doc/getting-started/building-from-source.rst
index 595fe6be1..4a3d5b4b4 100644
--- a/doc/getting-started/building-from-source.rst
+++ b/doc/getting-started/building-from-source.rst
@@ -15,13 +15,6 @@ The hypervisor binary is generated based on Kconfig configuration
 settings. Instructions about these settings can be found in
 :ref:`getting-started-hypervisor-configuration`.
 
-.. note::
-   A generic configuration named ``hypervisor/arch/x86/configs/generic.config``
-   is provided to help developers try out ACRN more easily.
-   This configuration works for most x86-based platforms; it is supported
-   with limited features. It can be enabled by specifying ``BOARD=generic``
-   in the ``make`` command line.
-
 One binary for all platforms and all usage scenarios is currently not
 supported, primarily because dynamic configuration parsing is restricted in
 the ACRN hypervisor for the following reasons:
@@ -61,7 +54,9 @@ distribution. Refer to the :ref:`building-acrn-in-docker` user guide for
 instructions on how to build ACRN using a container.
 
 .. note::
-   ACRN uses ``menuconfig``, a python3 text-based user interface (TUI) for configuring hypervisor options and using python's ``kconfiglib`` library.
+   ACRN uses ``menuconfig``, a python3 text-based user interface (TUI)
+   for configuring hypervisor options and using python's ``kconfiglib``
+   library.
 
 Install the necessary tools for the following systems:
 
@@ -121,6 +116,8 @@ Enter the following to get the acrn-hypervisor source code:
 
    $ git clone https://github.com/projectacrn/acrn-hypervisor
 
+.. _build-with-acrn-scenario:
+
 .. rst-class:: numbered-step
 
 Build with the ACRN scenario
@@ -144,7 +141,12 @@ INDUSTRY:
 
 HYBRID:
    This scenario defines a hybrid use case with three VMs: one
-   pre-launched VM, one pre-launched Service VM, and one post-launched
+   pre-launched Safety VM, one pre-launched Service VM, and one post-launched
    Standard VM.
 
+HYBRID_RT:
+   This scenario defines a hybrid use case with three VMs: one
+   pre-launched RTVM, one pre-launched Service VM, and one post-launched
+   Standard VM.
+
 Assuming that you are at the top level of the acrn-hypervisor directory, perform the following:
 
@@ -164,11 +166,19 @@ Assuming that you are at the top level of the acrn-hypervisor directory, perform
 
       $ make all BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=0
 
+* Build the ``HYBRID_RT`` scenario on the ``whl-ipc-i7``:
+
+  .. code-block:: none
+
+     $ make all BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=0
+
 * Build the ``SDC`` scenario on the ``nuc6cayh``:
 
   .. code-block:: none
 
-     $ make all BOARD=nuc6cayh SCENARIO=sdc RELEASE=0
+     $ make all BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc6cayh.xml \
+          SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc6cayh/sdc.xml
+
 
 See the :ref:`hardware` document for information about platform needs for each scenario.
 
@@ -198,14 +208,14 @@ top level of the acrn-hypervisor directory. The configuration file, named
 
 .. code-block:: none
 
    $ cd hypervisor
-   $ make defconfig BOARD=nuc6cayh
+   $ make defconfig BOARD=nuc7i7dnb SCENARIO=industry
 
 The BOARD specified is used to select a ``defconfig`` under
-``arch/x86/configs/``. The other command line-based options (e.g.
+``misc/vm_configs/scenarios/``. The other command line-based options (e.g.
 ``RELEASE``) take no effect when generating a defconfig.
 
 To modify the hypervisor configurations, you can either edit ``.config``
-manually, or you can invoke a TUI-based menuconfig--powered by kconfiglib--by
+manually, or you can invoke a TUI-based menuconfig (powered by kconfiglib) by
 executing ``make menuconfig``. As an example, the following commands
 (assuming that you are at the top level of the acrn-hypervisor directory)
 generate a default configuration file for UEFI, allowing you to modify some
 configurations and build the hypervisor using the updated ``.config``:
 
    # Modify the configurations per your needs
    $ cd ../        # Enter top-level folder of acrn-hypervisor source
-   $ make menuconfig -C hypervisor BOARD=kbl-nuc-i7
-
+   $ make menuconfig -C hypervisor
+   # Modify the "ACRN Scenario" and "Target board" you want to build
+   # in the pop-up menu
 
 Note that ``menuconfig`` is python3 only.
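If ``make menuconfig`` fails with an import error for ``kconfiglib``, install
the library for python3 first. This is a typical invocation; adjust it to your
environment's preferred pip workflow:

.. code-block:: none

   $ sudo pip3 install kconfiglib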
@@ -239,7 +250,7 @@ Now you can build all these components at once as follows:
 
 The build results are found in the ``build`` directory. You can specify
 a different output folder by setting the ``O`` ``make`` parameter,
-for example: ``make O=build-nuc BOARD=nuc6cayh``.
+for example: ``make O=build-nuc BOARD=nuc7i7dnb``.
 
 If you only need the hypervisor, use this command:
 
@@ -259,8 +270,8 @@ of the acrn-hypervisor directory):
 
 .. code-block:: none
 
-   $ make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/nuc7i7dnb.xml \
-       SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx
+   $ make BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml \
+       SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml FIRMWARE=uefi TARGET_DIR=xxx
 
 .. note::
 
    information is retrieved from the corresponding ``BOARD_FILE`` and
    ``SCENARIO_FILE`` XML configuration files. The ``TARGET_DIR`` parameter
    specifies what directory is used to store configuration files imported
-   from XML files. If the ``TARGED_DIR`` is not specified, the original
+   from XML files. If the ``TARGET_DIR`` is not specified, the original
    configuration files of acrn-hypervisor will be overwritten.
+   In the 2.1 release, there is a known issue (:acrn-issue:`5157`) that
+   ``TARGET_DIR=xxx`` does not work.
+
 
 Follow the same instructions to boot and test the images you created from your build.
diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst
index bb8eb53e0..4bb0190e9 100644
--- a/doc/introduction/index.rst
+++ b/doc/introduction/index.rst
@@ -439,7 +439,7 @@ The Boot process proceeds as follows:
 
 In this boot mode, the boot options of pre-launched VM and service VM are defined
 in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config``
-in the source code ``hypervisor/$(SCENARIO)/vm_configurations.c`` by default.
+in the source code ``misc/vm_configs/$(SCENARIO)/vm_configurations.c`` by default.
 Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for
 details. The boot options of a post-launched VM are not covered by the hypervisor
 source code or GRUB menu; they are defined in the guest image file or specified by
diff --git a/doc/release_notes/release_notes_2.1.rst b/doc/release_notes/release_notes_2.1.rst
new file mode 100644
index 000000000..ecda9244e
--- /dev/null
+++ b/doc/release_notes/release_notes_2.1.rst
@@ -0,0 +1,111 @@
+.. _release_notes_2.1:
+
+ACRN v2.1 (August 2020)
+#######################
+
+We are pleased to announce the release of the Project ACRN
+hypervisor version 2.1.
+
+ACRN is a flexible, lightweight reference hypervisor that is built with
+real-time and safety-criticality in mind. It is optimized to streamline
+embedded development through an open source platform. Check out the
+:ref:`introduction` for more information. All project ACRN
+source code is maintained in the
+https://github.com/projectacrn/acrn-hypervisor repository and includes
+folders for the ACRN hypervisor, the ACRN device model, tools, and
+documentation. You can either download this source code as a zip or
+tar.gz file (see the `ACRN v2.1 GitHub release page
+`_) or
+use Git clone and checkout commands::
+
+   git clone https://github.com/projectacrn/acrn-hypervisor
+   cd acrn-hypervisor
+   git checkout v2.1
+
+The project's online technical documentation is also tagged to
+correspond with a specific release: generated v2.1 documents can be
+found at https://projectacrn.github.io/2.1/. Documentation for the
+latest under-development branch is found at
+https://projectacrn.github.io/latest/.
+
+ACRN v2.1 requires Ubuntu 18.04. Follow the instructions in the
+:ref:`rt_industry_ubuntu_setup` to get started with ACRN.
+
+We recommend that all developers upgrade to ACRN release v2.1.
+
+What's new in v2.1
+******************
+
+* Preempt-RT Linux has been validated as a pre-launched realtime VM. See
+  :ref:`pre_launched_rt` for more details.
+
+* A Trusted Platform Module (TPM) MMIO device can be passed through to a
+  pre-launched VM (with some limitations discussed in
+  :ref:`mmio-device-passthrough`). Previously, passthrough was only
+  supported for PCI devices.
+
+* Open Virtual Machine Firmware (OVMF) now uses a Local Advanced
+  Programmable Interrupt Controller (LAPIC) timer as its local time
+  instead of the High Precision Event Timer (HPET). This provides the
+  working timer service for the realtime virtual machine (RTVM) booting
+  process.
+
+* GRUB is the recommended bootloader for ACRN. For more information,
+  see :ref:`using_grub`.
+
+Improvements, updates, and corrections have been made throughout our documentation,
+including these:
+
+* :ref:`contribute_guidelines`
+* :ref:`hv_rdt`
+* :ref:`ivshmem-hld`
+* :ref:`mmio-device-passthrough`
+* :ref:`virtio-net`
+* :ref:`getting-started-building`
+* :ref:`acrn_configuration_tool`
+* :ref:`pre_launched_rt`
+* :ref:`rdt_configuration`
+* :ref:`using_hybrid_mode_on_nuc`
+* :ref:`using_partition_mode_on_nuc`
+* :ref:`using_windows_as_uos`
+* :ref:`debian_packaging`
+
+Fixed Issues Details
+********************
+
+- :acrn-issue:`4047` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it
+- :acrn-issue:`4691` - [WHL][Function][RTVM]without any virtio device, with only pass-through devices, RTVM can't boot from SATA
+- :acrn-issue:`4711` - [WHL][Stabilty][WaaG]Failed to boot up WaaG with core dumped in WaaG reboot test in GVT-d & CPU sharing env.
+- :acrn-issue:`4897` - [WHL][Yocto][GVT-d]WaaG reboot failed due to USB mediator trouble in WaaG reboot stability test.
+- :acrn-issue:`4937` - [EHL][Yocto] Fail to boot ACRN on Yocto +- :acrn-issue:`4958` - cleanup spin lock in hypervisor +- :acrn-issue:`4989` - [WHL][Yocto][acrn-configuration-tool] Fail to generate board xml on Yocto build +- :acrn-issue:`4991` - [WHL][acrn-configuration-tool] vuart1 of VM1 does not change correctly +- :acrn-issue:`4994` - Default max MSIx table is too small +- :acrn-issue:`5013` - [TGL][Yocto][YaaG] Can't enter console #1 via HV console +- :acrn-issue:`5015` - [EHL][TGL][acrn-configuration-tool] default industry xml is only support 2 user vms +- :acrn-issue:`5016` - [EHL][acrn-configuration-tool] Need update pci devices for ehl industry launch xmls +- :acrn-issue:`5029` - [TGL][Yocto][GVT] can not boot and login waag with GVT-D +- :acrn-issue:`5039` - [acrn-configuration-tool]minor fix for launch config tool +- :acrn-issue:`5041` - Pre-Launched VM boot not successful if SR-IOV PF is passed to +- :acrn-issue:`5049` - [WHL][Yocto][YaaG] Display stay on openembedded screen when launch YaaG with GVT-G +- :acrn-issue:`5056` - [EHL][Yocto]Can't enable SRIOV on EHL SOS +- :acrn-issue:`5062` - [EHL] WaaG cannot boot on EHL when CPU sharing is enabled +- :acrn-issue:`5066` - [WHL][Function] Fail to launch YaaG with usb mediator enabled +- :acrn-issue:`5067` - [WHL][Function][WaaG] passthru usb, Windows will hang when reboot it +- :acrn-issue:`5085` - [EHL][Function]Can't enable SRIOV when add memmap=64M$0xc0000000 in cmdline on EHL SOS +- :acrn-issue:`5091` - [TGL][acrn-configuration-tool] generate tgl launch script fail +- :acrn-issue:`5092` - [EHL][acrn-config-tool]After WebUI Enable CDP_ENABLED=y ,build hypervisor fail +- :acrn-issue:`5094` - [TGL][acrn-configuration-tool] Board xml does not contain SATA information +- :acrn-issue:`5095` - [TGL][acrn-configuration-tool] Missing some default launch script xmls +- :acrn-issue:`5107` - Fix size issue used for memset in create_vm +- :acrn-issue:`5115` - [REG][WHL][WAAG] Shutdown waag fails under CPU sharing status +- :acrn-issue:`5122` - [WHL][Stabilty][WaaG][GVT-g & GVT-d]Failed to boot up SOS in cold boot test. + +Known Issues +************ +- :acrn-issue:`4313` - [WHL][VxWorks] Failed to ping when VxWorks passthru network +- :acrn-issue:`5150` - [REG][WHL][[Yocto][Passthru] Launch RTVM fails with usb passthru +- :acrn-issue:`5151` - [WHL][VxWorks] Launch VxWorks fails due to no suitable video mode found +- :acrn-issue:`5152` - [WHL][Yocto][Hybrid] in hybrid mode ACRN HV env, can not shutdown pre-lanuched RTVM +- :acrn-issue:`5154` - [TGL][Yocto][PM] 148213_PM_SystemS5 with life_mngr fail +- :acrn-issue:`5157` - [build from source] during build HV with XML, “TARGET_DIR=xxx” does not work diff --git a/doc/scripts/filter-known-issues.py b/doc/scripts/filter-known-issues.py index 0f020e57c..459d79499 100755 --- a/doc/scripts/filter-known-issues.py +++ b/doc/scripts/filter-known-issues.py @@ -1,4 +1,6 @@ #! /usr/bin/env python3 +# Copyright (c) 2017, Intel Corporation +# SPDX-License-Identifier: Apache-2.0 """ Filters a file, classifying output in errors, warnings and discarding the rest. diff --git a/doc/scripts/genrest.py b/doc/scripts/genrest.py index 2c2b1f3b4..2d7b2a4c1 100644 --- a/doc/scripts/genrest.py +++ b/doc/scripts/genrest.py @@ -1,3 +1,5 @@ +# Copyright (c) 2017, Intel Corporation +# SPDX-License-Identifier: Apache-2.0 # Generates a Kconfig symbol reference in RST format, with a separate # CONFIG_FOO.rst file for each symbol, and an alphabetical index with links in # index.rst. 
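When hand-editing the board, scenario, or launch XML files described in the
next section, a quick well-formedness check catches malformed markup early.
This is a sketch using the stock ``xmllint`` tool; the file path is only an
example:

.. code-block:: none

   $ xmllint --noout misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid.xml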
diff --git a/doc/tutorials/acrn_configuration_tool.rst b/doc/tutorials/acrn_configuration_tool.rst
index d1f817466..d6641b6ca 100644
--- a/doc/tutorials/acrn_configuration_tool.rst
+++ b/doc/tutorials/acrn_configuration_tool.rst
@@ -26,7 +26,7 @@ The hypervisor configuration uses the ``Kconfig`` mechanism. The configuration
 file is located at ``acrn-hypervisor/hypervisor/arch/x86/Kconfig``.
 
 A board-specific ``defconfig`` file, for example
-``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD).config``
+``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/$(BOARD).config``
 is loaded first; it is the default ``Kconfig`` for the specified board.
 
 Board configuration
@@ -38,7 +38,7 @@ board settings, root device selection, and the kernel cmdline. It also includes
 **scenario-irrelevant** hardware-specific information such as ACPI/PCI
 and BDF information. The reference board configuration is organized as
 ``*.c/*.h`` files located in the
-``acrn-hypervisor/hypervisor/arch/x86/configs/$(BOARD)/`` folder.
+``misc/vm_configs/boards/$(BOARD)/`` folder.
 
 VM configuration
 =================
@@ -51,10 +51,12 @@ to launch post-launched User VMs.
 
 Scenario based VM configurations are organized as ``*.c/*.h`` files. The
 reference scenarios are located in the
-``acrn-hypervisor/hypervisor/scenarios/$(SCENARIO)/`` folder.
+``misc/vm_configs/scenarios/$(SCENARIO)/`` folder.
+The board-specific configurations for this scenario are stored in the
+``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder.
 
 User VM launch script samples are located in the
-``acrn-hypervisor/devicemodel/samples/`` folder.
+``misc/vm_configs/sample_launch_scripts/`` folder.
 
 ACRN configuration XMLs
 ***********************
@@ -77,7 +79,7 @@ Board XML format
 ================
 
 The board XMLs are located in the
-``acrn-hypervisor/misc/acrn-config/xmls/board-xmls/`` folder.
+``misc/vm_configs/xmls/board-xmls/`` folder.
 The board XML has an ``acrn-config`` root element and a ``board`` attribute:
 
 .. code-block:: xml
 
@@ -90,7 +92,7 @@ about the format of board XML and should not modify it.
 
 Scenario XML format
 ===================
 The scenario XMLs are located in the
-``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/`` folder. The
+``misc/vm_configs/xmls/config-xmls/`` folder. The
 scenario XML has an ``acrn-config`` root element as well as ``board``
 and ``scenario`` attributes:
 
@@ -326,7 +328,7 @@ Additional scenario XML elements:
 
 Launch XML format
 =================
 The launch XMLs are located in the
-``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/`` folder.
+``misc/vm_configs/xmls/config-xmls/`` folder.
 The launch XML has an ``acrn-config`` root element as well as ``board``,
 ``scenario`` and ``uos_launcher`` attributes:
 
@@ -435,7 +437,7 @@ Board and VM configuration workflow
 ===================================
 
 Python offline tools are provided to configure Board and VM configurations.
-The tool source folder is ``acrn-hypervisor/misc/acrn-config/``.
+The tool source folder is ``misc/acrn-config/``.
 
 Here is the offline configuration tool workflow:
 
@@ -599,7 +601,7 @@ Instructions
    scenario setting for the current board.
 
    The default scenario configuration xmls are located at
-   ``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/``.
+   ``misc/vm_configs/xmls/config-xmls/[board]/``.
    We can edit the scenario name when creating or loading a scenario. If the
    current scenario name duplicates an existing scenario setting name, rename
   the current scenario name or overwrite the existing one
@@ -644,7 +646,7 @@ Instructions
 
   .. note::
      All customized scenario xmls will be in user-defined groups which are
-     located in ``acrn-hypervisor/misc/acrn-config/xmls/config-xmls/[board]/user_defined/``.
+     located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``.
 
   Before saving the scenario xml, the configuration app validates the
   configurable items. If errors exist, the configuration app lists all
@@ -665,9 +667,9 @@ Instructions
   otherwise, the source code is generated into default folders and
   overwrite the old ones. The board-related configuration source code is
   located at
-   ``acrn-hypervisor/hypervisor/arch/x86/configs/[board]/`` and the
+   ``misc/vm_configs/boards/[board]/`` and the
   scenario-based VM configuration source code is located at
-   ``acrn-hypervisor/hypervisor/scenarios/[scenario]/``.
+   ``misc/vm_configs/scenarios/[scenario]/``.
 
 The **Launch Setting** is quite similar to the **Scenario Setting**:
diff --git a/doc/tutorials/debug.rst b/doc/tutorials/debug.rst
index d96b9a033..a189ebd7f 100644
--- a/doc/tutorials/debug.rst
+++ b/doc/tutorials/debug.rst
@@ -151,7 +151,7 @@ reason and times of each vm_exit after we have done some operations.
 
    # acrnalyze.py -i /home/trace/acrntrace/20190219-001529/1 -o vmexit --vm_exit
 
 .. note:: The acrnalyze.py script is in the
-   ``acrn-hypervisor/misc/tools/acrntrace/scripts`` folder. The location
+   ``misc/tools/acrntrace/scripts`` folder. The location
    of the trace files produced by ``acrntrace`` may be different in your system.
 
 .. figure:: images/debug_image28.png
    :align: center
 
@@ -174,7 +174,7 @@ shown in the following example:
 
    trace event id
 
 2. Add the following format to
-   ``acrn-hypervisor/misc/tools/acrntrace/scripts/formats``:
+   ``misc/tools/acrntrace/scripts/formats``:
 
 .. figure:: images/debug_image1.png
    :align: center
 
@@ -224,7 +224,7 @@ shown in the following example:
 
      formats /home/trace/acrntrace/20190219-001529/1 | grep "trace test"
 
 .. note:: The acrnalyze.py script is in the
-   ``acrn-hypervisor/misc/tools/acrntrace/scripts`` folder. The location
+   ``misc/tools/acrntrace/scripts`` folder. The location
    of the trace files produced by ``acrntrace`` may be different in your
    system.
 
 and we will get the following log:
diff --git a/doc/tutorials/docbuild.rst b/doc/tutorials/docbuild.rst
index dc9a03988..7544ffc08 100644
--- a/doc/tutorials/docbuild.rst
+++ b/doc/tutorials/docbuild.rst
@@ -25,8 +25,8 @@ The project's documentation contains the following items:
 
 * ReStructuredText source files used to generate documentation found at
   the http://projectacrn.github.io website. All of the reStructuredText sources
-  are found in the acrn-hypervisor/doc folder, or pulled in from sibling
-  folders (such as /misc/) by the build scripts.
+  are found in the ``acrn-hypervisor/doc`` folder, or pulled in from sibling
+  folders (such as ``misc/``) by the build scripts.
 
 * Doxygen-generated material used to create all API-specific documents
   found at http://projectacrn.github.io/latest/api/. The doc build
@@ -67,6 +67,7 @@ folder setup for documentation contributions and generation:
       devicemodel/
       doc/
       hypervisor/
+      misc/
    acrn-kernel/
 
 The parent projectacrn folder is there because we'll also be creating a
diff --git a/doc/tutorials/images/pre_launched_rt.png b/doc/tutorials/images/pre_launched_rt.png
new file mode 100644
index 0000000000000000000000000000000000000000..86e3756642b845836a3ee6d8d16670d5d8874228
GIT binary patch
[binary PNG image data, 19106 bytes, elided]
zq5JdoTyS7BJ=22%2c7DlPT|T2I!cSN`AFv2x%qDn=7{eFa5_D!Yx~w*?%tHM$u>j&jJ+RlVtG%tV?zCP(6g$6{)D{l z{P$c%q)}&d1$gx~nV@6)POvbvm*s%qKYz(^_;dPixy{Nky)6Rf}yL>~pqd zbHf=0Icc}gK7E^-BI=2+i*cR(Fn+6t)SAbgYdqmRWN~BY?Wwr(m=H~ZmzLYO@|z1D z+EkjE57~~yO3=nZ!GI|~+kQm1*j0>YgKt}gC~GHdTfU4rE6yj1McXXCRv}5_hqNRn z@V%l|fmeI>wmP67J(KYkv97YPMty-foa&C?GB2!ZKDyD7fB0o~^ z7gk$ptJJUg{Kn{}U_#9)D*Jgb&Z6xkk_L6%^>C_8*TE*!p7f<1r61aSD9d5_?mzlZY=tk z)m_m}8r03Ej@)Z&GBw+KScYQXH!O(^HT?{$=hJ?Qv`y}($aEoinP1Lw!a3=CtDVT2 z{@SK?y2@?g!l&|soNI8Bi>Mr!gqC1iXb`R>^)SdX_s(ZS=u&ITPvjOo74K>!8TqWBc zWD|W%$K&kqS4@@u@U3x7`r4T586RZvAE-iVFZCtOqoHG@4fHtJiEXxiQxNJ^=yvRE z+DvzGI9-h@2{kHuyAj!vilvfcMsbNo4{iR4oqI8tgM=l2(+SWnpx3D}F0x(LG3YO35jRe)lk3%TJdUY% zs8?|wZ^Rehw8c-$9)cUa{k1HIRWsG-U>_adLeGIi06r8rs7VD-Ef8u#2)v2tu8(m^s7ZyEgjyhl zYQ|h~Nij*NNrje#S`bxPQzPp%Cq)9a>-xqotf*iIkZJ)E8fr3KTbja#T2)om?L*RX zX3t>y{u)-0n)t%quIWfAn1-sL6C~f%2KI7O2(N*H>3p zZ{4~zv9Rd7!-n_GFO;8F#IW$DcV}wG*bSRDZ{B?5$Pp3$H7S{{El2*-)qooPhZIkp zI<;rdp4F>YPn|vc7pWtz@0XzT9w;4?Mj4+ntzhNKmAiKBI&tE}#fuk(`*%NOy0#qq zPge`nP|l&G$Gcx;WhHiAuwcRL*|SSZN{XdoOSq=9X3d&EfBy32%eQUYcIeQd^XJcB zxpGA;sL6C~AqqEL4X8orbouh-bLY+-K74rR&Yc@JY*?{k#p1<_D=I1$NyUnAHE~Um zAM4hw`}pIJ4;(mf=FAyXJp#==sbspg1Vxap7N|k!iX8yLnwpyZ`}cqH$tQRl+^!0Y zloDO^QsSEK-o1O@zI{iJ9zA>ZEGl_icY(?@T{2x`Tl7+fo31?6AnbtBuD-q=?}4XI zojQK}I5Os#R168%^yJBt$d3ybE}+;ElxS2kU1MJ1rmF+gAnZeP1#g4kB7~%oiVe{V ziYtrjd-dv7adD`, to +install Ubuntu on the NVMe drive, and use grub to launch the Service VM. + +Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe +=================================================================== + +The Pre-Launched Preempt RT Linux use Clearlinux as rootfs. Refer to +:ref:`Burn the Preempt-RT VM image onto the SATA disk ` to +download the RTVM image and burn it to the SATA drive. The Kernel should +be on the NVMe drive along with GRUB. You'll need to copy the RT kernel +to the NVMe drive. Once you have successfully installed and booted +Ubuntu from the NVMe drive, you'll then need to copy the RT kernel from +the SATA to the NVMe drive: + +.. code-block:: none + + # mount /dev/nvme0n1p1 /boot + # mount /dev/sda1 /mnt + # cp /mnt/bzImage /boot/EFI/BOOT/bzImage_RT + +Build ACRN with Pre-Launched RT Mode +==================================== + +The ACRN VM configuration framework can easily configure resources for +Pre-Launched VMs. On Whiskey Lake WHL-IPC-I5, to passthrough SATA and +ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with: + +.. code-block:: none + + make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml RELEASE=0 + +After the build completes, please update ACRN on NVMe. It is +/boot/EFI/BOOT/acrn.bin, if /dev/nvme0n1p1 is mounted at /boot. + +Add Pre-Launched RT Kernel Image to GRUB Config +=============================================== + +The last step is to modify the GRUB configuration file to load the Pre-Launched +kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM +`.) The grub config file will look something +like this: + +.. code-block:: none + + menuentry 'ACRN multiboot2 hybrid'{ + echo 'loading multiboot2 hybrid...' + multiboot2 /EFI/BOOT/acrn.bin + module2 /EFI/BOOT/bzImage_RT RT_bzImage + module2 /EFI/BOOT/bzImage Linux_bzImage + } + +Reboot the system, and it will boot into Pre-Launched RT Mode + +.. 
code-block:: none + + ACRN:\>vm_list + VM_UUID VM_ID VM_NAME VM_STATE + ================================ ===== ================================ ======== + 26c5e0d88f8a47d88109f201ebd61a5e 0 ACRN PRE-LAUNCHED VM0 Running + dbbbd4347a574216a12c2201f1ab0240 1 ACRN SOS VM Running + ACRN:\> + +Connect console of VM0, via 'vm_console' ACRN shell command (Press +:kbd:`Ctrl` + :kbd:`Space` to return to the ACRN shell.) + +.. code-block:: none + + ACRN:\>vm_console 0 + + ----- Entering VM 0 Shell ----- + + root@clr-85a5e9fbac604fbbb92644991f6315df ~ # diff --git a/doc/tutorials/rdt_configuration.rst b/doc/tutorials/rdt_configuration.rst index c8223b400..147e32d68 100644 --- a/doc/tutorials/rdt_configuration.rst +++ b/doc/tutorials/rdt_configuration.rst @@ -89,6 +89,13 @@ MBA bit encoding: ACRN:\>cpuid 0x10 **0x3** cpuid leaf: 0x10, subleaf: 0x3, 0x59:0x0:0x4:0x7 +.. note:: + ACRN takes the lowest common CLOS max value between the supported + resources as maximum supported CLOS ID. For example, if max CLOS + supported by L3 is 16 and MBA is 8, ACRN programs MAX_PLATFORM_CLOS_NUM + to 8. ACRN recommends to have consistent capabilities across all RDT + resources by using a common subset CLOS. This is done in order to minimize + misconfiguration errors. Tuning RDT resources in HV debug shell ************************************** @@ -136,46 +143,51 @@ shell. Configure RDT for VM using VM Configuration ******************************************* -#. RDT on ACRN is enabled by default on supported platforms. This +#. RDT hardware feature is enabled by default on supported platforms. This information can be found using an offline tool that generates a platform-specific xml file that helps ACRN identify RDT-supported - platforms. This feature can be also be toggled using the - CONFIG_RDT_ENABLED flag with the ``make menuconfig`` command. The first - step is to clone the ACRN source code (if you haven't already done so): + platforms. RDT on ACRN is enabled by configuring the ``FEATURES`` + sub-section of the scenario xml file as in the below example. For + details on building ACRN with scenario refer to :ref:`build-with-acrn-scenario`. .. code-block:: none + :emphasize-lines: 6 - $ git clone https://github.com/projectacrn/acrn-hypervisor.git - $ cd acrn-hypervisor/ + + y + SCHED_BVT + y + + *y* + n + + + - .. figure:: images/menuconfig-rdt.png - :align: center - -#. The predefined cache masks can be found at - ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards. - For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``. +#. Once RDT is enabled in the scenario xml file, the next step is to program + the desired cache mask or/and the MBA delay value as needed in the + scenario file. Each cache mask or MBA delay configuration corresponds + to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4 + cache mask settings needs to be in place where each setting corresponds + to a CLOS ID starting from 0. To set the cache masks for 4 CLOS ID and + use default delay value for MBA, it can be done as shown in the example below. .. 
 
-   .. figure:: images/menuconfig-rdt.png
-      :align: center
-
-#. The predefined cache masks can be found at
-   ``hypervisor/arch/x86/configs/$(CONFIG_BOARD)/board.c`` for respective boards.
-   For example, apl-up2 can found at ``hypervisor/arch/x86/configs/apl-up2/board.c``.
+#. Once RDT is enabled in the scenario xml file, the next step is to program
+   the desired cache mask and/or the MBA delay value as needed in the
+   scenario file. Each cache mask or MBA delay configuration corresponds
+   to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4
+   cache mask settings need to be in place, where each setting corresponds
+   to a CLOS ID starting from 0. To set the cache masks for 4 CLOS IDs and
+   use the default delay value for MBA, configure the scenario file as shown
+   in the example below.
 
    .. code-block:: none
-      :emphasize-lines: 3,7,11,15
+      :emphasize-lines: 8,9,10,11,12
 
-      struct platform_clos_info platform_l2_clos_array[MAX_PLATFORM_CLOS_NUM] = {
-         {
-            .clos_mask = 0xff,
-            .msr_index = MSR_IA32_L3_MASK_BASE + 0,
-         },
-         {
-            .clos_mask = 0xff,
-            .msr_index = MSR_IA32_L3_MASK_BASE + 1,
-         },
-         {
-            .clos_mask = 0xff,
-            .msr_index = MSR_IA32_L3_MASK_BASE + 2,
-         },
-         {
-            .clos_mask = 0xff,
-            .msr_index = MSR_IA32_L3_MASK_BASE + 3,
-         },
-      };
+      <FEATURES>
+         <RELOC>y</RELOC>
+         <SCHEDULER>SCHED_BVT</SCHEDULER>
+         <MULTIBOOT2>y</MULTIBOOT2>
+         <RDT>
+            <RDT_ENABLED>y</RDT_ENABLED>
+            <CDP_ENABLED>n</CDP_ENABLED>
+            <CLOS_MASK>0xff</CLOS_MASK>
+            <CLOS_MASK>0x3f</CLOS_MASK>
+            <CLOS_MASK>0xf</CLOS_MASK>
+            <CLOS_MASK>0x3</CLOS_MASK>
+            <MBA_DELAY>0</MBA_DELAY>
+         </RDT>
+      </FEATURES>
 
 .. note::
    Users can change the mask values, but the cache mask must have
@@ -183,31 +195,24 @@ Configure RDT for VM using VM Configuration
    programming an MBA delay value, be sure to set the value to less than
    or equal to the MAX delay value.
 
-#. Set up the CLOS in the VM config. Follow `RDT detection and resource capabilities`_
-   to identify the MAX CLOS that can be used. ACRN uses the
+#. Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the
+   scenario file. Follow `RDT detection and resource capabilities`_
+   to identify the maximum supported CLOS ID that can be used. ACRN uses
    **the lowest common MAX CLOS** value among all RDT resources to avoid
-   resource misconfigurations. For example, configuration data for the
-   Service VM sharing mode can be found at
-   ``hypervisor/arch/x86/configs/vm_config.c``
+   resource misconfigurations.
 
    .. code-block:: none
-      :emphasize-lines: 6
+      :emphasize-lines: 5,6,7,8
 
-      struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] __aligned(PAGE_SIZE) = {
-         {
-            .type = SOS_VM,
-            .name = SOS_VM_CONFIG_NAME,
-            .guest_flags = 0UL,
-            .clos = 1,
-            .memory = {
-               .start_hpa = 0x0UL,
-               .size = CONFIG_SOS_RAM_SIZE,
-            },
-            .os_config = {
-               .name = SOS_VM_CONFIG_OS_NAME,
-            },
-         },
-      };
+      <vm id="0">
+         <vm_type>PRE_STD_VM</vm_type>
+         <name>ACRN PRE-LAUNCHED VM0</name>
+         <uuid>26c5e0d8-8f8a-47d8-8109-f201ebd61a5e</uuid>
+         <clos>
+            <vcpu_clos>0</vcpu_clos>
+            <vcpu_clos>1</vcpu_clos>
+         </clos>
+      </vm>
 
 .. note::
    In ACRN, Lower CLOS always means higher priority (clos 0 > clos 1 > clos 2 > ... clos n).
 
diff --git a/doc/tutorials/rtvm_performance_tips.rst b/doc/tutorials/rtvm_performance_tips.rst
index f3578dcb8..5d6004672 100644
--- a/doc/tutorials/rtvm_performance_tips.rst
+++ b/doc/tutorials/rtvm_performance_tips.rst
@@ -177,7 +177,7 @@ Tip: Disable the Intel processor C-State and P-State of the RTVM.
    Power management of a processor could save power, but it could also impact
    the RT performance because the power state is changing. C-State and P-State
    PM mechanism can be disabled by adding ``processor.max_cstate=0
-   intel_idle.max_cstate=0 intel_pstate=disabled`` to the kernel parameters.
+   intel_idle.max_cstate=0 intel_pstate=disable`` to the kernel parameters.
 
diff --git a/doc/tutorials/using_grub.rst b/doc/tutorials/using_grub.rst
index fcba86ec2..25a6b689d 100644
--- a/doc/tutorials/using_grub.rst
+++ b/doc/tutorials/using_grub.rst
@@ -91,11 +91,11 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM):
 
    The module ``/boot/kernel4vm0`` is the VM0 kernel file. The param
    ``xxxxxx`` is VM0's kernel file tag and must exactly match the
    ``kernel_mod_tag`` of VM0 configured in the
-   ``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file.
+   ``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
    The multiboot module ``/boot/kernel4vm1`` is the VM1 kernel file and
    the param ``yyyyyy`` is its tag and must exactly match the
    ``kernel_mod_tag`` of VM1 in the
-   ``hypervisor/scenarios/$(SCENARIO)/vm_configurations.c`` file.
+   ``misc/vm_configs/scenarios/$(SCENARIO)/vm_configurations.c`` file.
 
    The guest kernel command line arguments is configured in the hypervisor
    source code by default if no ``$(VMx bootargs)`` is present.
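+
+   For illustration only (not verbatim source), the correspondence in
+   ``vm_configurations.c`` looks roughly like the sketch below; the tag and
+   bootargs names here are hypothetical placeholders:
+
+   .. code-block:: none
+
+      .os_config = {
+            .name = "YYYY",
+            .kernel_type = KERNEL_BZIMAGE,
+            .kernel_mod_tag = "yyyyyy",  /* must equal the tag after the GRUB module line */
+            .bootargs = VM1_CONFIG_OS_BOOTARGS,
+      },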
diff --git a/doc/tutorials/using_hybrid_mode_on_nuc.rst b/doc/tutorials/using_hybrid_mode_on_nuc.rst
index 3acd4b5eb..4fd02bdb4 100644
--- a/doc/tutorials/using_hybrid_mode_on_nuc.rst
+++ b/doc/tutorials/using_hybrid_mode_on_nuc.rst
@@ -3,7 +3,7 @@ Getting Started Guide for ACRN hybrid mode
 ##########################################
 
 ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr
-or Clear Linux) runs in a pre-launched VM or in a post-launched VM that is
+or Ubuntu) runs in a pre-launched VM or in a post-launched VM that is
 launched by a Device model in the Service VM. The following guidelines
 describe how to set up the ACRN hypervisor hybrid scenario on the Intel NUC,
 as shown in :numref:`hybrid_scenario_on_nuc`.
@@ -19,7 +19,7 @@ Prerequisites
 *************
 - Use the `Intel NUC Kit NUC7i7DNHE `_.
 - Connect to the serial port as described in :ref:`Connecting to the serial port `.
-- Install GRUB on your SATA device or on the NVME disk of your NUC.
+- Install Ubuntu 18.04 on your SATA device or on the NVME disk of your NUC.
 
 Update Ubuntu GRUB
 ******************
@@ -31,7 +31,7 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
 
 .. code-block:: bash
    :emphasize-lines: 10,11
 
-   menuentry 'ACRN hypervisor Hybird Scenario' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
+   menuentry 'ACRN hypervisor Hybrid Scenario' --id ACRN_Hybrid --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
      recordfail
      load_video
      gfxmode $linux_gfx_mode
@@ -39,21 +39,21 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
      insmod part_gpt
      insmod ext2
      echo 'Loading hypervisor Hybrid scenario ...'
-     multiboot --quirk-modules-after-kernel /boot/acrn.32.out
-     module /boot/zephyr.bin xxxxxx
-     module /boot/bzImage yyyyyy
+     multiboot2 /boot/acrn.bin
+     module2 /boot/zephyr.bin xxxxxx
+     module2 /boot/bzImage yyyyyy
    }
 
 .. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file.
    The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the
-   ``kernel_mod_tag`` of VM0 which is configured in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
+   ``kernel_mod_tag`` of VM0 which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
    file. The multiboot module ``/boot/bzImage`` is the Service VM kernel
    file. The param ``yyyyyy`` is the bzImage tag and must exactly match the
-   ``kernel_mod_tag`` of VM1 in the ``hypervisor/scenarios/hybrid/vm_configurations.c``
+   ``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c``
    file. The kernel command line arguments used to boot the Service VM are
-   located in the header file ``hypervisor/scenarios/hybrid/vm_configurations.h``
+   located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h``
    and are configured by the ``SOS_VM_BOOTARGS`` macro.
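+
+   A hypothetical sketch of such a macro definition (the real contents are
+   scenario- and board-specific; the arguments below are examples only):
+
+   .. code-block:: none
+
+      #define SOS_VM_BOOTARGS  "root=/dev/sda3 rw rootwait console=tty0 " \
+                               "console=ttyS0 consoleblank=0 no_timer_check"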
 
 #. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
@@ -61,6 +61,8 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo
    visible when booting:
 
    .. code-block:: bash
 
+      GRUB_DEFAULT=ACRN_Hybrid
+
       GRUB_TIMEOUT=5
       # GRUB_HIDDEN_TIMEOUT=0
       GRUB_HIDDEN_TIMEOUT_QUIET=false
@@ -82,11 +84,11 @@ Hybrid Scenario Startup Checking
 
 #. Use these steps to verify all VMs are running properly:
 
    a. Use the ``vm_console 0`` to switch to VM0 (Zephyr) console. It will display **Hello world! acrn**.
-   #. Enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
+   #. Enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
    #. Use the ``vm_console 1`` command to switch to the VM1 (Service VM) console.
    #. Verify that the VM1's Service VM can boot up and you can log in.
    #. ssh to VM1 and launch the post-launched VM2 using the ACRN device model launch script.
-   #. Go to the Service VM console, and enter :kbd:`Ctrl+Spacebar` to return to the ACRN hypervisor shell.
+   #. Go to the Service VM console, and enter :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
    #. Use the ``vm_console 2`` command to switch to the VM2 (User VM) console.
    #. Verify that VM2 can boot up and you can log in.
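+
+   For reference, launching the post-launched VM2 from the Service VM (the
+   ssh step above) typically uses one of the sample launch scripts; the
+   path and script name below are illustrative and vary by release:
+
+   .. code-block:: none
+
+      $ sudo /usr/share/acrn/samples/nuc/launch_uos.sh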
diff --git a/doc/tutorials/using_partition_mode_on_nuc.rst b/doc/tutorials/using_partition_mode_on_nuc.rst
index 0065f1f8f..4e9df78ea 100644
--- a/doc/tutorials/using_partition_mode_on_nuc.rst
+++ b/doc/tutorials/using_partition_mode_on_nuc.rst
@@ -4,7 +4,7 @@ Getting Started Guide for ACRN logical partition mode
 #####################################################
 
 The ACRN hypervisor supports a logical partition scenario in which the User
-OS (such as Clear Linux) running in a pre-launched VM can bypass the ACRN
+OS (such as Ubuntu) running in a pre-launched VM can bypass the ACRN
 hypervisor and directly access isolated PCI devices. The following
 guidelines provide step-by-step instructions on how to set up the ACRN
 hypervisor logical partition scenario on Intel NUC while running two
 
@@ -14,9 +14,8 @@ Validated Versions
 ******************
 
 - Ubuntu version: **18.04**
-- Clear Linux version: **32680**
-- ACRN hypervisor tag: **v1.6**
-- ACRN kernel commit: **8c9a8695966d8c5c8c7ccb296b9c48671b14aa70**
+- ACRN hypervisor tag: **v2.1**
+- ACRN kernel tag: **v2.1**
 
@@ -28,14 +27,12 @@ Prerequisites
 *************
   or SATA disk connected with a USB3.0 SATA converter).
 * Disable **Intel Hyper Threading Technology** in the BIOS to avoid
   interference from logical cores for the logical partition scenario.
-* In the logical partition scenario, two VMs (running Clear Linux)
+* In the logical partition scenario, two VMs (running Ubuntu)
   are started by the ACRN hypervisor. Each VM has its own root
-  filesystem. Set up each VM by following the `Install Clear Linux
-  OS on bare metal with live server
-  `_ instructions
-  and install Clear Linux OS (version: 32680) first on a SATA disk and then
-  again on a storage device with a USB interface. The two pre-launched
-  VMs will mount the root file systems via the SATA controller and
+  filesystem. Set up each VM by following the `Ubuntu desktop installation
+  `_ instructions
+  first on a SATA disk and then again on a storage device with a USB interface.
+  The two pre-launched VMs will mount the root file systems via the SATA controller and
   the USB controller respectively.
 
 Update kernel image and modules of pre-launched VM
@@ -84,11 +81,11 @@ Update kernel image and modules of pre-launched VM
 
    .. code-block:: none
 
-      # Mount the Clear Linux OS root filesystem on the SATA disk
+      # Mount the Ubuntu OS root filesystem on the SATA disk
       $ sudo mount /dev/sda3 /mnt
       $ sudo cp -r /lib/modules/* /mnt/lib/modules
       $ sudo umount /mnt
-      # Mount the Clear Linux OS root filesystem on the USB flash disk
+      # Mount the Ubuntu OS root filesystem on the USB flash disk
       $ sudo mount /dev/sdb3 /mnt
       $ sudo cp -r /lib/modules/* /mnt/lib/modules
       $ sudo umount /mnt
 
@@ -139,13 +136,13 @@ Update ACRN hypervisor image
 
    Refer to :ref:`getting-started-building` to set up the ACRN build
   environment on your development workstation.
 
-   Clone the ACRN source code and check out to the tag v1.6:
+   Clone the ACRN source code and check out the tag v2.1:
 
    .. code-block:: none
 
       $ git clone https://github.com/projectacrn/acrn-hypervisor.git
       $ cd acrn-hypervisor
-      $ git checkout v1.6
+      $ git checkout v2.1
 
    Build the ACRN hypervisor with default xmls:
 
    .. code-block:: none
 
      $ make hypervisor BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/logical_partition.xml RELEASE=0
 
    .. note::
-      The ``acrn.32.out`` will be generated to ``./build/hypervisor/acrn.32.out``.
+      The ``acrn.bin`` will be generated at ``./build/hypervisor/acrn.bin``.
 
 #. Check the Ubuntu boot loader name.
 
@@ -171,13 +168,13 @@ Update ACRN hypervisor image
 
 #. Check or update the BDF information of the PCI devices for each
    pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``.
 
-#. Copy the artifact ``acrn.32.out`` to the ``/boot`` directory:
+#. Copy the artifact ``acrn.bin`` to the ``/boot`` directory:
 
-   #. Copy ``acrn.32.out`` to a removable disk.
+   #. Copy ``acrn.bin`` to a removable disk.
 
    #. Plug the removable disk into the NUC's USB port.
 
-   #. Copy the ``acrn.32.out`` from the removable disk to ``/boot``
+   #. Copy the ``acrn.bin`` from the removable disk to ``/boot``
       directory.
 
 Update Ubuntu GRUB to boot hypervisor and load kernel image
@@ -187,7 +184,7 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
 
 .. code-block:: none
 
-   menuentry 'ACRN hypervisor Logical Partition Scenario' --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
+   menuentry 'ACRN hypervisor Logical Partition Scenario' --id ACRN_Logical_Partition --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' {
      recordfail
      load_video
      gfxmode $linux_gfx_mode
@@ -195,25 +192,27 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image
     insmod part_gpt
     insmod ext2
+    search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1
     echo 'Loading hypervisor logical partition scenario ...'
-    multiboot --quirk-modules-after-kernel /boot/acrn.32.out
-    module /boot/bzImage XXXXXX
+    multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"
+    module2 /boot/bzImage XXXXXX
    }
 
 .. note::
+   Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter)
+   (or use the device node directly) of the root partition
+   (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``.
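+   Illustrative output (the values echo the menuentry example above; yours
+   will differ):
+
+   .. code-block:: none
+
+      $ sudo blkid /dev/nvme0n1p2
+      /dev/nvme0n1p2: UUID="9bd58889-add7-410c-bdb7-1fbc2af9b0e1" TYPE="ext4" PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83"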
    The kernel command line arguments used to boot the pre-launched VMs is
-   located in the ``hypervisor/scenarios/logical_partition/vm_configurations.h`` header file and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM id
-   number and ``*`` are arguments). The multiboot module param ``XXXXXX``
-   is the bzImage tag and must exactly match the ``kernel_mod_tag``
-   configured in the
-   ``hypervisor/scenarios/logical_partition/vm_configurations.c`` file.
+   located in the ``misc/vm_configs/scenarios/logical_partition/vm_configurations.h`` header file
+   and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` macros (where x is the VM id number and ``*`` are arguments).
+   The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag``
+   configured in the ``misc/vm_configs/scenarios/logical_partition/vm_configurations.c`` file.
 
-#. Modify the `/etc/default/grub` file as follows to make the GRUB menu
+#. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu
    visible when booting:
 
    .. code-block:: none
 
-      GRUB_DEFAULT=3
+      GRUB_DEFAULT=ACRN_Logical_Partition
       GRUB_TIMEOUT=10
       GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
       GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
 
@@ -243,7 +242,7 @@ Logical partition scenario startup checking
 
 #. Use the ``vm_console 0`` to switch to VM0's console.
-#. The VM0's Clear Linux OS should boot up and log in.
+#. The VM0's Ubuntu OS should boot up and you can log in.
-   #. Use a ``Ctrl-Spacebar`` to return to the Acrn hypervisor shell.
+   #. Use a :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN hypervisor shell.
 #. Use the ``vm_console 1`` to switch to VM1's console.
-#. The VM1's Clear Linux OS should boot up and log in.
+#. The VM1's Ubuntu OS should boot up and you can log in.
 
diff --git a/doc/tutorials/vuart_configuration.rst b/doc/tutorials/vuart_configuration.rst
index d2a9aa57e..8c2334330 100644
--- a/doc/tutorials/vuart_configuration.rst
+++ b/doc/tutorials/vuart_configuration.rst
@@ -8,7 +8,8 @@ Introduction
 The virtual universal asynchronous receiver-transmitter (vUART) supports
 two functions: one is the console, the other is communication. vUART
 only works on a single function.
-Currently, only two vUART configurations are added to the ``hypervisor/scenarios/<scenario_name>/vm_configuration.c`` file, but you can change the value in it.
+Currently, only two vUART configurations are added to the
+``misc/vm_configs/scenarios/<scenario_name>/vm_configuration.c`` file, but you can change the value in it.
 
 .. code-block:: none
 
diff --git a/doc/user-guides/acrn-shell.rst b/doc/user-guides/acrn-shell.rst
index 9cb3b2f61..83705a80a 100644
--- a/doc/user-guides/acrn-shell.rst
+++ b/doc/user-guides/acrn-shell.rst
@@ -28,7 +28,7 @@ The ACRN hypervisor shell supports the following commands:
      - Dump a User VM (guest) memory region based on the VM ID (``vm_id``, in decimal),
        the start of the memory region ``gva`` (in hexadecimal) and its
        length ``length`` (in bytes, decimal number).
    * - vm_console
-     - Switch to the VM's console. Use :kbd:`Ctrl+Spacebar` to return to the ACRN
+     - Switch to the VM's console. Use :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN
        shell console
    * - int
      - List interrupt information per CPU
@@ -156,7 +156,7 @@ vm_console
 ===========
 
 The ``vm_console`` command switches the ACRN's console to become the VM's console.
-Use a :kbd:`Ctrl-Spacebar` to return to the ACRN shell console.
+Press :kbd:`Ctrl` + :kbd:`Space` to return to the ACRN shell console.
 
 vioapic
 =======
 
diff --git a/doc/user-guides/kernel-parameters.rst b/doc/user-guides/kernel-parameters.rst
index a357ee08e..cfb04cf7c 100644
--- a/doc/user-guides/kernel-parameters.rst
+++ b/doc/user-guides/kernel-parameters.rst
@@ -342,14 +342,6 @@ section below has more details on a few select parameters.
 
        i915.enable_gvt=1
 
-  * - i915.enable_pvmmio
-    - Service VM, User VM
-    - Control Para-Virtualized MMIO (PVMMIO). 
It batches sequential MMIO writes - into a shared buffer between the Service VM and User VM - - :: - - i915.enable_pvmmio=0x1F - * - i915.gvt_workload_priority - Service VM - Define the priority level of User VM graphics workloads @@ -373,20 +365,6 @@ section below has more details on a few select parameters. i915.nuclear_pageflip=1 - * - i915.avail_planes_per_pipe - - Service VM - - See :ref:`i915-avail-planes-owners`. - - :: - - i915.avail_planes_per_pipe=0x01010F - - * - i915.domain_plane_owners - - Service VM - - See :ref:`i915-avail-planes-owners`. - - :: - - i915.domain_plane_owners=0x011111110000 - * - i915.domain_scaler_owner - Service VM - See `i915.domain_scaler_owner`_ @@ -401,13 +379,6 @@ section below has more details on a few select parameters. i915.enable_guc=0x02 - * - i915.avail_planes_per_pipe - - User VM - - See :ref:`i915-avail-planes-owners`. - - :: - - i915.avail_planes_per_pipe=0x070F00 - * - i915.enable_guc - User VM - Disable GuC @@ -445,38 +416,6 @@ support in the host. By default, it's not enabled, so we need to add ``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service OS only parameter, and cannot be enabled in the User VM. -i915.enable_pvmmio ------------------- - -We introduce the feature named **Para-Virtualized MMIO** (PVMMIO) -to improve graphics performance of the GVT-g guest. -This feature batches sequential MMIO writes into a -shared buffer between the Service VM and User VM, and then submits a -para-virtualized command to notify to GVT-g in Service VM. This -effectively reduces the trap numbers of MMIO operations and improves -overall graphics performance. - -The ``i915.enable_pvmmio`` option controls -the optimization levels of the PVMMIO feature: each bit represents a -sub-feature of the optimization. By default, all -sub-features of PVMMIO are enabled. They can also be selectively -enabled or disabled.. - -The PVMMIO optimization levels are: - -* PVMMIO_ELSP_SUBMIT = 0x1 - Batch submission of the guest graphics - workloads -* PVMMIO_PLANE_UPDATE = 0x2 - Batch plane register update operations -* PVMMIO_PLANE_WM_UPDATE = 0x4 - Batch watermark registers update operations -* PVMMIO_MASTER_IRQ = 0x8 - Batch IRQ related registers -* PVMMIO_PPGTT_UPDATE = 0x10 - Use PVMMIO method to update the PPGTT table - of guest. - -.. note:: This parameter works in both the Service VM and User VM, but - changes to one will affect the other. For example, if either Service VM or User VM - disables the PVMMIO_PPGTT_UPDATE feature, this optimization will be - disabled for both. - i915.gvt_workload_priority -------------------------- @@ -522,118 +461,6 @@ In the current configuration, we will set This parameter is not used on UEFI platforms. -.. _i915-avail-planes-owners: - -i915.avail_planes_per_pipe and i915.domain_plane_owners -------------------------------------------------------- - -Both Service VM and User VM are provided a set of HW planes where they -can display their contents. Since each domain provides its content, -there is no need for any extra composition to be done through Service VM. -``i915.avail_planes_per_pipe`` and ``i915.domain_plane_owners`` work -together to provide the plane restriction (or plan-based domain -ownership) feature. - -* i915.domain_plane_owners - - On Intel's display hardware, each pipeline contains several planes, which are - blended - together by their Z-order and rendered to the display monitors. 
In - AcrnGT, we can control each planes' ownership so that the domains can - display contents on the planes they own. - - The ``i915.domain_plane_owners`` parameter controls the ownership of all - the planes in the system, as shown in :numref:`i915-planes-pipes`. Each - 4-bit nibble identifies the domain id owner for that plane and a group - of 4 nibbles represents a pipe. This is a Service VM only configuration - and cannot be modified at runtime. Domain ID 0x0 is for the Service VM, - the User VM use domain IDs from 0x1 to 0xF. - - .. figure:: images/i915-image1.png - :width: 900px - :align: center - :name: i915-planes-pipes - - i915.domain_plane_owners - - For example, if we set ``i915.domain_plane_owners=0x010001101110``, the - plane ownership will be as shown in :numref:`i915-planes-example1` - Service VM - (green) owns plane 1A, 1B, 4B, 1C, and 2C, and User VM #1 owns plane 2A, 3A, - 4A, 2B, 3B and 3C. - - .. figure:: images/i915-image2.png - :width: 900px - :align: center - :name: i915-planes-example1 - - i915.domain_plane_owners example - - Some other examples: - - * i915.domain_plane_owners=0x022211110000 - Service VM (0x0) owns planes on pipe A; - User VM #1 (0x1) owns all planes on pipe B; and User VM #2 (0x2) owns all - planes on pipe C (since, in the representation in - :numref:`i915-planes-pipes` above, there are only 3 planes attached to - pipe C). - - * i915.domain_plane_owners=0x000001110000 - Service VM owns all planes on pipe A - and pipe C; User VM #1 owns plane 1, 2 and 3 on pipe B. Plane 4 on pipe B - is owned by the Service VM so that if it wants to display notice message, it - can display on top of the User VM. - -* i915.avail_planes_per_pipe - - Option ``i915.avail_planes_per_pipe`` is a bitmask (shown in - :numref:`i915-avail-planes`) that tells the i915 - driver which planes are available and can be exposed to the compositor. - This is a parameter that must to be set in each domain. If - ``i915.avail_planes_per_pipe=0``, the plane restriction feature is disabled. - - .. figure:: images/i915-image3.png - :width: 600px - :align: center - :name: i915-avail-planes - - i915.avail_planes_per_pipe - - For example, if we set ``i915.avail_planes_per_pipe=0x030901`` in Service VM - and ``i915.avail_planes_per_pipe=0x04060E`` in User VM, the planes will be as - shown in :numref:`i915-avail-planes-example1` and - :numref:`i915-avail-planes-example1`: - - .. figure:: images/i915-image4.png - :width: 500px - :align: center - :name: i915-avail-planes-example1 - - Service VM i915.avail_planes_per_pipe - - .. figure:: images/i915-image5.png - :width: 500px - :align: center - :name: i915-avail-planes-example2 - - User VM i915.avail_planes_per_pipe - - ``i915.avail_planes_per_pipe`` controls the view of planes from i915 drivers - inside of every domain, and ``i915.domain_plane_owners`` is the global - arbiter controlling which domain can present its content onto the - real hardware. Generally, they are aligned. For example, we can set - ``i915.domain_plane_owners= 0x011111110000``, - ``i915.avail_planes_per_pipe=0x00000F`` in Service VM, and - ``i915.avail_planes_per_pipe=0x070F00`` in domain 1, so every domain will - only flip on the planes they owns. - - However, we don't force alignment: ``avail_planes_per_pipe`` might - not be aligned with the - setting of ``domain_plane_owners``. Consider this example: - ``i915.domain_plane_owners=0x011111110000``, - ``i915.avail_planes_per_pipe=0x01010F`` in Service VM and - ``i915.avail_planes_per_pipe=0x070F00`` in domain 1. 
- With this configuration, Service VM will be able to render on plane 1B and - plane 1C, however, the content of plane 1B and plane 1C will not be - flipped onto the real hardware. - i915.domain_scaler_owner ========================