Options for debugging the hypervisor.

Select the host serial device used for hypervisor debugging.

Select the default log level for log messages stored in memory. The value can be changed at runtime. Log messages with the selected value or lower are displayed.

Select the default log level for hypervisor log messages sent via the Intel Trace Hub. The Intel Trace Hub's memory is used to record log messages. The value can be changed at runtime. Log messages with the selected value or lower are displayed.

Select the default log level for log messages written to the serial console. Log messages with the selected value or lower are displayed.

Options for enabling hypervisor features.

Enable hypervisor relocation in memory. The bootloader may need to change the location of the hypervisor because of other firmware.

Select the scheduling algorithm for determining the priority of User VMs running on a shared virtual CPU.

Enable multiboot2 protocol support (with multiboot1 downward compatibility). If multiboot1 meets your requirements, disable this feature to reduce hypervisor code size.

Disable detection of split locks. A split lock can negatively affect an application's real-time performance. If a lock is detected, an alignment-check exception (#AC) occurs.

Disable detection of uncacheable-memory (UC) locks. A UC lock can negatively affect an application's real-time performance. If a lock is detected, a general-protection exception (#GP) occurs.

Enable fixups for TPM2 and SMBIOS for the Security VM. If there is no Security VM, set this option to ``n``.

If checked, permanently disables all interrupts in HV root mode.

Enable the Microsoft Hyper-V Hypervisor Top-Level Functional Specification (TLFS) for User VMs running Windows.

Specify whether the IOMMU enforces snoop behavior for DMA operations.

Enable ACPI runtime parsing to get DMAR (DMA remapping) configuration data from the ACPI tables. Otherwise, use the existing static information from the associated board configuration file.
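The memory, Intel Trace Hub, and serial-console log levels all share the same filtering rule: a message is displayed only when its severity value is at or below the configured threshold. A minimal sketch of that rule (the level numbers and names here are illustrative, not taken from the ACRN sources):

```python
# Illustrative severity scale: lower value = more severe.
LOG_FATAL, LOG_ERROR, LOG_WARNING, LOG_INFO, LOG_DEBUG = 1, 2, 3, 4, 5

def should_emit(msg_level: int, configured_level: int) -> bool:
    """A message is displayed when its level is at or below the threshold."""
    return msg_level <= configured_level

# With the threshold set to LOG_WARNING (3), errors pass but debug chatter does not.
assert should_emit(LOG_ERROR, LOG_WARNING) is True
assert should_emit(LOG_DEBUG, LOG_WARNING) is False
```

Because the memory and Intel Trace Hub thresholds can be changed at runtime, lowering the configured value at a quiet moment is a cheap way to cut logging overhead on a real-time system.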
Enable an L1 cache flush before VM entry to prevent L1 terminal fault. L1 terminal fault is a hardware vulnerability that could allow unauthorized disclosure of information residing in the L1 data cache.

Disable the software workaround for Machine Check Error on Page Size Change (an erratum in some processor families).

Intel Resource Director Technology (RDT) provides cache and memory bandwidth allocation features. These features can be used to improve an application's real-time performance.

Configure shared memory regions for inter-VM communication.

Configure Software SRAM. This feature reserves memory buffers as always-cached memory to improve an application's real-time performance.

Specify the size of the memory stack in bytes for each physical CPU. For example, if you specify 8 kilobytes, each CPU will get its own 8-kilobyte stack.

The 2MB-aligned starting physical address of the RAM region used by the hypervisor.

Capacity limits for statically assigned data structures and the maximum supported resources.

Maximum number of User VMs allowed.

Maximum number of IOAPICs. Integer from 1 to 10.

Specify the maximum number of PCI devices. This impacts the amount of memory used to maintain information about these PCI devices. The default value is calculated from the board configuration file. If you have PCI devices that were not detected by the Board Inspector, you may need to change this maximum value. Integer from 1 to 1024.

Maximum number of interrupt lines per IOAPIC. Integer from 1 to 120.

Specify the maximum number of interrupt request (IRQ) entries from all passthrough devices. Integer from 1 to 1024.

Specify the maximum number of Message Signaled Interrupt (MSI-X) tables per device. The default value is calculated from the board configuration file. Integer from 1 to 2048.

Specify the maximum number of emulated MMIO regions for device virtualization. The default value is calculated from the board configuration file. Integer from 1 to 128.
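Two of the numeric settings above have simple arithmetic consequences: the per-CPU stack size is multiplied by the number of physical CPUs, and the hypervisor's starting physical address must fall on a 2MB boundary. A hedged sketch of those two checks (helper names are illustrative, not from the ACRN sources):

```python
TWO_MB = 2 * 1024 * 1024

def total_stack_bytes(stack_size: int, num_pcpus: int) -> int:
    """Each physical CPU gets its own stack of stack_size bytes."""
    return stack_size * num_pcpus

def is_valid_hv_start(addr: int) -> bool:
    """The hypervisor RAM region must start on a 2MB boundary."""
    return addr % TWO_MB == 0

# 8 KB per CPU across 4 CPUs -> 32 KB of stack memory in total.
assert total_stack_bytes(8 * 1024, 4) == 32 * 1024
assert is_valid_hv_start(0x40000000) is True   # 1 GB is 2MB-aligned
assert is_valid_hv_start(0x40001000) is False  # 4 KB past it is not
```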
Segment, bus, device, and function of the GPU.

Select the build type:

* ``Debug`` enables the debug shell, prints, and logs.
* ``Release`` optimizes the ACRN binary for deployment and turns off all debug infrastructure.

These settings can only be changed at build time.

Specify the vUART connection settings. Refer to :ref:`vuart_config` for detailed vUART settings.

Configure the debug facilities.

Miscellaneous options for workarounds.

Specify the cache setting.

Specify the VM load order.

Specify the name used to identify this VM. The VM name is shown by the hypervisor console ``vm_list`` command.

Select the VM type. A standard VM (``STANDARD_VM``) is for general-purpose applications, such as a human-machine interface (HMI). A real-time VM (``RTVM``) offers special features for time-sensitive applications.

Select the console virtual UART (vUART) type. Add the console settings to the kernel command line by typing them in the "Linux kernel command-line parameters" text box (for example, ``console=ttyS0`` for COM port 1).

Select the OS type for this VM. This is required to run Windows in a User VM. See :ref:`acrn-dm_parameters` for how to include this in the Device Model arguments.

Enable the ACRN Device Model to emulate COM1 as User VM stdio I/O. Hypervisor global emulation takes priority over this VM setting.

Use the virtual bootloader OVMF (Open Virtual Machine Firmware) to boot this VM.

Select a subset of physical CPUs that this VM can use. More than one can be selected.

Enable LAPIC passthrough for this VM. This feature is required for VMs with stringent real-time performance needs.

Enable polling mode for I/O completion for this VM. This feature is required for VMs with stringent real-time performance needs.

Enable nested virtualization for KVM.

Maximum number of virtual CLOS masks. Integer value; must not be below zero.

Enable virtualization of the Cache Allocation Technology (CAT) feature in RDT.
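The physical-CPU selection above amounts to choosing a non-empty subset of the board's pCPU IDs for the VM. A minimal validation sketch under that assumption (not ACRN code; the function name is illustrative):

```python
def validate_cpu_affinity(selected, available):
    """Return the selected pCPU IDs, sorted, if every one exists on the board."""
    selected, available = set(selected), set(available)
    if not selected:
        raise ValueError("a VM needs at least one pCPU")
    unknown = selected - available
    if unknown:
        raise ValueError(f"unknown pCPU IDs: {sorted(unknown)}")
    return sorted(selected)

# A 4-core board; the VM is pinned to cores 2 and 3.
assert validate_cpu_affinity([2, 3], range(4)) == [2, 3]
```

Note that an RTVM with LAPIC passthrough typically wants its pCPUs dedicated, so the subset chosen here should not overlap with CPUs shared by other VMs when real-time performance matters.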
CAT enables you to allocate cache to VMs, providing isolation to avoid performance interference from other VMs.

Specify Secure World support for the Trusty OS.

Specify the MTRR capability to hide from the VM.

Specify the TPM2 fixup for the VM.

Specify the Intel Software Guard Extensions (SGX) enclave page cache (EPC) section settings.

Specify the VM's vCPU priority for scheduling.

Specify the companion VM ID of this VM.

General information for the host kernel, boot arguments, and memory.

Memory-mapped I/O (MMIO) resources to pass through.

Specify the pre-launched VM's owned IOAPIC pins and the corresponding mapping between physical GSI and virtual GSI.

Enable virtualization of the PCIe Precision Time Measurement (PTM) mechanism for devices with PTM capability and for real-time applications. The hypervisor provides PCIe root port emulation instead of host bridge emulation for the VM. PTM coordinates timing between the device and root port with the device's local timebases without relying on software.

Enable virtio devices in post-launched VMs.

The virtio GPU device presents a GPU device to the VM. This feature enables you to view the VM's GPU output in the Service VM.

Virtio console device for data input and output. The virtio console BE driver copies data from the frontend's transmitting virtqueue when it receives a kick on the virtqueue (implemented as a vmexit). The BE driver then writes the data to the backend, which can be implemented as a PTY, TTY, STDIO, or regular file. For details, see :ref:`virtio-console`.

The virtio network device emulates a virtual network interface card (NIC) for the VM. The frontend is the virtio network driver, simulating the virtual NIC. The backend could be a ``tap`` device (/dev/net/tun), a ``MacVTap`` device (/dev/tapx), or a ``vhost`` device (/dev/vhost-net).

The virtio input device creates a virtual human interface device such as a keyboard, mouse, or tablet. It sends Linux input layer events over virtio.

The virtio-blk device presents a block device to the VM.
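The virtio console data path described above reduces to a small pattern: the frontend places data in its transmit virtqueue and "kicks" the backend (a vmexit in the real device), which then drains the queue and writes to whatever backend object it was configured with (PTY, TTY, stdio, or a file). This is a conceptual model only, not ACRN Device Model code:

```python
import io

class ToyVirtioConsoleBE:
    """Conceptual model of a virtio console backend (not ACRN code)."""

    def __init__(self, backend):
        self.tx_virtqueue = []   # stands in for the frontend's TX virtqueue
        self.backend = backend   # PTY/TTY/stdio/file in the real device

    def kick(self):
        """On a kick (a vmexit in the real device), drain the queue to the backend."""
        while self.tx_virtqueue:
            self.backend.write(self.tx_virtqueue.pop(0))

be = ToyVirtioConsoleBE(io.StringIO())
be.tx_virtqueue.extend(["hello ", "guest"])  # frontend queues data...
be.kick()                                    # ...then notifies the backend
assert be.backend.getvalue() == "hello guest"
```

The key property the model captures is that the backend does no work until kicked, which is why each kick costs a vmexit and why batching writes on the frontend side reduces virtualization overhead.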
Each virtio-blk device appears as a disk inside the VM.

Specify the post-launched VM's unique context ID (CID) used by vsock (an integer greater than 2). vsock provides a way for the host system and applications running in a User VM to communicate with each other using the standard socket interface. vsock uses a (context ID, port) pair of integers to identify processes. The host system CID is always 2. The port is hardcoded in our implementation.

The hypervisor configuration defines a working scenario and target board by configuring the hypervisor image features and capabilities, such as setting up the log and the serial port.

The VM configuration includes **scenario-based** VM configuration information that describes the characteristics and attributes of all VMs in a user scenario. It also includes **launch script-based** VM configuration information, where parameters are passed to the Device Model to launch post-launched User VMs.
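The vsock addressing rules above (a (CID, port) pair, host CID fixed at 2, guest CIDs strictly greater than 2) can be sketched as a small validator; the function name is illustrative, not part of any ACRN tool:

```python
HOST_CID = 2  # the host system's CID is always 2

def validate_guest_cid(cid: int) -> int:
    """A post-launched VM's CID must be an integer greater than 2."""
    if not isinstance(cid, int) or cid <= HOST_CID:
        raise ValueError(f"guest CID must be an integer > {HOST_CID}, got {cid!r}")
    return cid

# (CID, port) pairs identify the two endpoints of a vsock connection.
host_endpoint = (HOST_CID, 1024)
guest_endpoint = (validate_guest_cid(3), 1024)
assert host_endpoint != guest_endpoint
```

Because every post-launched VM on a board must be distinguishable by its endpoint, each VM also needs a CID that is unique among the VMs sharing that host.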