diff --git a/doc/Makefile b/doc/Makefile index 48ec8cf77..c69851bab 100644 --- a/doc/Makefile +++ b/doc/Makefile @@ -52,18 +52,14 @@ content: copy-to-sourcedir $(Q)rsync -rt ../misc/config_tools/schema/*.xsd $(SOURCEDIR)/misc/config_tools/schema $(Q)xsltproc -xinclude ./scripts/configdoc.xsl $(SOURCEDIR)/misc/config_tools/schema/config.xsd > $(SOURCEDIR)/reference/configdoc.txt -kconfig: copy-to-sourcedir - $(Q)srctree=../hypervisor \ - python3 scripts/genrest.py Kconfig $(SOURCEDIR)/reference/kconfig/ - pullsource: $(Q)scripts/pullsource.sh -html: copy-to-sourcedir doxy content kconfig +html: copy-to-sourcedir doxy content -$(Q)$(SPHINXBUILD) -t $(DOC_TAG) -b html -d $(BUILDDIR)/doctrees $(SOURCEDIR) $(BUILDDIR)/html $(SPHINXOPTS) $(OPTS) >> $(BUILDDIR)/doc.log 2>&1 $(Q)./scripts/filter-doc-log.sh $(BUILDDIR)/doc.log -singlehtml: doxy content kconfig +singlehtml: doxy content -$(Q)$(SPHINXBUILD) -t $(DOC_TAG) -b singlehtml -d $(BUILDDIR)/doctrees $(SOURCEDIR) $(BUILDDIR)/html $(SPHINXOPTS) $(OPTS) >> $(BUILDDIR)/doc.log 2>&1 $(Q)./scripts/filter-doc-log.sh $(BUILDDIR)/doc.log @@ -72,9 +68,7 @@ singlehtml: doxy content kconfig clean: rm -fr $(BUILDDIR) @# Keeping these temporarily, but no longer strictly needed. - rm -fr doxygen - rm -fr misc - rm -fr reference/kconfig/*.rst + rm -fr doxygen misc reference/kconfig # Copy material over to the GitHub pages staging repo diff --git a/doc/api/GVT-g_api.rst b/doc/api/GVT-g_api.rst index 0c0118e69..10756bf91 100644 --- a/doc/api/GVT-g_api.rst +++ b/doc/api/GVT-g_api.rst @@ -28,8 +28,8 @@ and the `Graphics Execution Manager(GEM)`_ parts of `i915 driver`_. .. _i915 driver: https://01.org/linuxgraphics/gfx-docs/drm/gpu/i915.html -Intel GVT-g Guest Support(vGPU) -=============================== +Intel GVT-g Guest Support (vGPU) +================================ .. kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c :doc: Intel GVT-g guest support @@ -37,8 +37,8 @@ Intel GVT-g Guest Support(vGPU) .. 
kernel-doc:: drivers/gpu/drm/i915/i915_vgpu.c :internal: -Intel GVT-g Host Support(vGPU device model) -=========================================== +Intel GVT-g Host Support (vGPU Device Model) +============================================ .. kernel-doc:: drivers/gpu/drm/i915/intel_gvt.c :doc: Intel GVT-g host support @@ -47,7 +47,7 @@ Intel GVT-g Host Support(vGPU device model) :internal: -VHM APIs called from AcrnGT +VHM APIs Called From AcrnGT **************************** The Virtio and Hypervisor Service Module (VHM) is a kernel module in the @@ -83,7 +83,7 @@ responses to user space modules, notified by vIRQ injections. .. _MPT_interface: -AcrnGT mediated passthrough (MPT) interface +AcrnGT Mediated Passthrough (MPT) Interface ******************************************* AcrnGT receives request from GVT module through MPT interface. Refer to the @@ -145,7 +145,7 @@ This section describes the wrap functions: .. _intel_gvt_ops_interface: -GVT-g intel_gvt_ops interface +GVT-g intel_gvt_ops Interface ***************************** This section contains APIs for GVT-g intel_gvt_ops interface. Sources are found @@ -186,23 +186,23 @@ in the `ACRN kernel GitHub repo`_ .. _sysfs_interface: -AcrnGT sysfs interface -*********************** +AcrnGT sysfs Interface +********************** This section contains APIs for the AcrnGT sysfs interface. Sources are found in the `ACRN kernel GitHub repo`_ -sysfs nodes +sysfs Nodes =========== -In below examples all accesses to these interfaces are via bash command -``echo`` or ``cat``. This is a quick and easy way to get/control things. But -when these operations fails, it is impossible to get respective error code by +In the following examples, all accesses to these interfaces are via bash command +``echo`` or ``cat``. This is a quick and easy way to get or control things. But +when these operations fail, it is impossible to get respective error code by this way. 
-When accessing sysfs entries, people should use library functions such as -``read()`` or ``write()``. +When accessing sysfs entries, use library functions such as +``read()`` or ``write()`` instead. On **success**, the returned value of ``read()`` or ``write()`` indicates how many bytes have been transferred. On **error**, the returned value is ``-1`` @@ -210,33 +210,17 @@ and the global ``errno`` will be set appropriately. This is the only way to figure out what kind of error occurs. -/sys/kernel/gvt/ ----------------- +- The ``/sys/kernel/gvt/`` class sub-directory belongs to AcrnGT and provides a + centralized sysfs interface for configuring vGPU properties. -The ``/sys/kernel/gvt/`` class sub-directory belongs to AcrnGT and provides a -centralized sysfs interface for configuring vGPU properties. +- The ``/sys/kernel/gvt/control/`` sub-directory contains all the necessary + switches for different purposes. +- The ``/sys/kernel/gvt/control/create_gvt_instance`` node is used by ACRN-DM to + create/destroy a vGPU instance. -/sys/kernel/gvt/control/ ------------------------- +- After a VM is created, a new sub-directory ``/sys/kernel/gvt/vmN`` ("N" is the VM id) will be + created. -The ``/sys/kernel/gvt/control/`` sub-directory contains all the necessary -switches for different purposes. - -/sys/kernel/gvt/control/create_gvt_instance -------------------------------------------- - -The ``/sys/kernel/gvt/control/create_gvt_instance`` node is used by ACRN-DM to -create/destroy a vGPU instance. - -/sys/kernel/gvt/vmN/ --------------------- - -After a VM is created, a new sub-directory ``vmN`` ("N" is the VM id) will be -created. - -/sys/kernel/gvt/vmN/vgpu_id ---------------------------- - -The ``/sys/kernel/gvt/vmN/vgpu_id`` node is to get vGPU id from VM which id is -N. +- The ``/sys/kernel/gvt/vmN/vgpu_id`` node is used to get the vGPU id of the + VM whose id is N.
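The sysfs guidance in the hunk above — prefer ``read()``/``write()`` over ``echo``/``cat`` so the error code is recoverable from ``errno`` — can be sketched as follows. This is an illustrative helper, not part of the patch; the node path ``/sys/kernel/gvt/vm1/vgpu_id`` is taken from the interface description above and exists only on an AcrnGT host.

```python
import errno
import os

def read_sysfs_node(path, count=64):
    """Read up to `count` bytes from a sysfs node.

    Returns (data, 0) on success, or (None, err) where `err` is the
    errno value -- the only reliable way to learn what went wrong,
    unlike `cat`, which hides the error code.
    """
    try:
        fd = os.open(path, os.O_RDONLY)
    except OSError as e:
        return None, e.errno
    try:
        # read() returns the number of bytes actually transferred
        return os.read(fd, count), 0
    except OSError as e:
        return None, e.errno
    finally:
        os.close(fd)

# Example: query the vGPU id of VM 1 (fails cleanly on non-AcrnGT hosts).
data, err = read_sysfs_node("/sys/kernel/gvt/vm1/vgpu_id")
if err:
    print("read failed:", os.strerror(err))
else:
    print("vgpu_id:", data.decode().strip())
```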
diff --git a/doc/asa.rst b/doc/asa.rst index 0c2a2e7dd..41eee96f8 100644 --- a/doc/asa.rst +++ b/doc/asa.rst @@ -9,7 +9,7 @@ Addressed in ACRN v2.3 We recommend that all developers upgrade to this v2.3 release (or later), which addresses the following security issue that was discovered in previous releases: ------- +----- - NULL Pointer Dereference in ``devicemodel\hw\pci\virtio\virtio_mei.c`` ``vmei_proc_tx()`` function tries to find the ``iov_base`` by calling @@ -25,7 +25,7 @@ Addressed in ACRN v2.1 We recommend that all developers upgrade to this v2.1 release (or later), which addresses the following security issue that was discovered in previous releases: ------- +----- - Missing access control restrictions in the Hypervisor component A malicious entity with root access in the Service VM @@ -42,7 +42,7 @@ Addressed in ACRN v1.6.1 We recommend that all developers upgrade to this v1.6.1 release (or later), which addresses the following security issue that was discovered in previous releases: ------- +----- - Service VM kernel Crashes When Fuzzing HC_ASSIGN_PCIDEV and HC_DEASSIGN_PCIDEV NULL pointer dereference due to invalid address of PCI device to be assigned or @@ -58,7 +58,7 @@ Addressed in ACRN v1.6 We recommend that all developers upgrade to this v1.6 release (or later), which addresses the following security issues that were discovered in previous releases: ------- +----- - Hypervisor Crashes When Fuzzing HC_DESTROY_VM The input 'vdev->pdev' should be validated properly when handling @@ -90,7 +90,7 @@ Addressed in ACRN v1.4 We recommend that all developers upgrade to this v1.4 release (or later), which addresses the following security issues that were discovered in previous releases: ------- +----- - Mitigation for Machine Check Error on Page Size Change Improper invalidation for page table updates by a virtual guest operating diff --git a/doc/develop.rst b/doc/develop.rst index faffa618a..941d7471f 100644 --- a/doc/develop.rst +++ b/doc/develop.rst @@ 
-15,7 +15,6 @@ Configuration and Tools tutorials/acrn_configuration_tool reference/config-options - reference/kconfig/index user-guides/hv-parameters user-guides/kernel-parameters user-guides/acrn-shell diff --git a/doc/developer-guides/GVT-g-porting.rst b/doc/developer-guides/GVT-g-porting.rst index e55eecbf1..73ee7ed13 100644 --- a/doc/developer-guides/GVT-g-porting.rst +++ b/doc/developer-guides/GVT-g-porting.rst @@ -24,7 +24,7 @@ For simplicity, in the rest of this document, the term GVT is used to refer to the core device model component of GVT-g, specifically corresponding to ``gvt.ko`` when build as a module. -Purpose of this document +Purpose of This Document ************************ This document explains the relationship between components of GVT-g in @@ -94,11 +94,11 @@ VHM module GVT-g components and interfaces -Core scenario interaction sequences +Core Scenario Interaction Sequences *********************************** -vGPU creation scenario +vGPU Creation Scenario ====================== In this scenario, AcrnGT receives a create request from ACRN-DM. It @@ -111,14 +111,14 @@ configure space of the vGPU (virtual device 0:2:0) via VHM's APIs. Finally, the AcrnGT module launches an AcrnGT emulation thread to listen to I/O trap notifications from HVM and ACRN hypervisor. -vGPU destroy scenario +vGPU Destroy Scenario ===================== In this scenario, AcrnGT receives a destroy request from ACRN-DM. It calls GVT's :ref:`intel_gvt_ops_interface` to inform GVT of the vGPU destroy request, and cleans up all vGPU resources. -vGPU PCI configure space write scenario +vGPU PCI Configure Space Write Scenario ======================================= ACRN traps the vGPU's PCI config space write, notifies AcrnGT's @@ -133,26 +133,26 @@ config space write: corresponding part in the host's aperture. #. Otherwise, write to the virtual PCI configuration space of the vGPU. 
-PCI configure space read scenario +PCI Configure Space Read Scenario ================================= Call sequence is almost the same as the write scenario above, but instead it calls the GVT's :ref:`intel_gvt_ops_interface` ``emulate_cfg_read`` to emulate the vGPU PCI config space read. -GGTT read/write scenario +GGTT Read/Write Scenario ======================== GGTT's trap is set up in the PCI configure space write scenario above. -MMIO read/write scenario +MMIO Read/Write Scenario ======================== MMIO's trap is set up in the PCI configure space write scenario above. -PPGTT write-protection page set/unset scenario +PPGTT Write-Protection Page Set/Unset Scenario ============================================== PPGTT write-protection page is set by calling ``acrn_ioreq_add_iorange`` @@ -161,13 +161,13 @@ allowing read without trap. PPGTT write-protection page is unset by calling ``acrn_ioreq_del_range``. -PPGTT write-protection page write +PPGTT Write-Protection Page Write ================================= In the VHM module, ioreq for PPGTT WP and MMIO trap is the same. It will also be trapped into the routine ``intel_vgpu_emulate_mmio_write()``. -API details +API Details *********** APIs of each component interface can be found in the :ref:`GVT-g_api` diff --git a/doc/developer-guides/contribute_guidelines.rst b/doc/developer-guides/contribute_guidelines.rst index 6c6b6e235..77bcca7f8 100644 --- a/doc/developer-guides/contribute_guidelines.rst +++ b/doc/developer-guides/contribute_guidelines.rst @@ -114,7 +114,7 @@ platforms such as GitHub. If you haven't already done so, you'll need to create a (free) GitHub account on https://github.com and have Git tools available on your development system. -Repository layout +Repository Layout ***************** To clone the ACRN hypervisor repository (including the ``hypervisor``, @@ -166,7 +166,7 @@ Contribution Tools and Git Setup .. 
_Git send-email documentation: https://git-scm.com/docs/git-send-email -git-send-email +git send-email ============== If you'll be submitting code patches, you may need to install @@ -178,7 +178,7 @@ for example use:: and then configure Git` with your SMTP server information as described in the `Git send-email documentation`_. -Signed-off-by +Signed-off-by ============= The name in the commit message ``Signed-off-by:`` line and your email must diff --git a/doc/developer-guides/doc_guidelines.rst b/doc/developer-guides/doc_guidelines.rst index ab92e5df7..7751a6d95 100644 --- a/doc/developer-guides/doc_guidelines.rst +++ b/doc/developer-guides/doc_guidelines.rst @@ -162,7 +162,7 @@ Would be rendered as: Remove all generated output, restoring the folders to a clean state. -Multi-column lists +Multi-Column Lists ****************** If you have a long bullet list of items, where each item is short, you @@ -282,7 +282,7 @@ columns, you can specify ``:widths: 1 2 2``. If you'd like the browser to set the column widths automatically based on the column contents, you can use ``:widths: auto``. -File names and Commands +File Names and Commands *********************** Sphinx extends reST by supporting additional inline markup elements (called @@ -484,7 +484,7 @@ as needed, generally at least 500 px wide but no more than 1000 px, and no more than 250 KB unless a particularly large image is needed for clarity. -Tabs, spaces, and indenting +Tabs, Spaces, and Indenting *************************** Indenting is significant in reST file content, and using spaces is @@ -641,7 +641,7 @@ without this ``rst-class`` directive will not be numbered.) For example:: .. rst-class:: numbered-step -First instruction step +First Instruction Step ********************** This is the first instruction step material. You can do the usual paragraphs and @@ -651,7 +651,7 @@ can move steps around easily if needed). ..
rst-class:: numbered-step -Second instruction step +Second Instruction Step *********************** This is the second instruction step. diff --git a/doc/developer-guides/graphviz.rst b/doc/developer-guides/graphviz.rst index 243c75cd4..745d9647c 100644 --- a/doc/developer-guides/graphviz.rst +++ b/doc/developer-guides/graphviz.rst @@ -1,6 +1,6 @@ .. _graphviz-examples: -Drawings using graphviz +Drawings Using Graphviz ####################### We support using the Sphinx `graphviz extension`_ for creating simple @@ -35,7 +35,7 @@ and the generated output would appear as this: Let's look at some more examples and then we'll get into more details about the dot language and drawing options. -Simple directed graph +Simple Directed Graph ********************* For simple drawings with shapes and lines, you can put the graphviz @@ -77,7 +77,7 @@ colors, as shown. .. _standard HTML color names: https://www.w3schools.com/colors/colors_hex.asp -Adding edge labels +Adding Edge Labels ****************** Here's an example of a drawing with labels on the edges (arrows) diff --git a/doc/developer-guides/hld/atkbdc-virt-hld.rst b/doc/developer-guides/hld/atkbdc-virt-hld.rst index ace20e8ae..023a3acaf 100644 --- a/doc/developer-guides/hld/atkbdc-virt-hld.rst +++ b/doc/developer-guides/hld/atkbdc-virt-hld.rst @@ -1,6 +1,6 @@ .. _atkbdc_virt_hld: -AT keyboard controller emulation +AT Keyboard Controller Emulation ################################ This document describes the AT keyboard controller emulation implementation in the ACRN device model. The Atkbdc device emulates a PS2 keyboard and mouse. @@ -16,7 +16,7 @@ The PS2 port is a 6-pin mini-Din connector used for connecting keyboards and mic AT keyboard controller emulation architecture -PS2 keyboard emulation +PS2 Keyboard Emulation ********************** ACRN supports AT keyboard controller for PS2 keyboard that can be accessed through I/O ports(0x60 and 0x64). 
0x60 is used to access AT keyboard controller data register, 0x64 is used to access AT keyboard controller address register. @@ -45,7 +45,7 @@ The PS2 keyboard ACPI description as below:: }) } -PS2 mouse emulation +PS2 Mouse Emulation ******************* ACRN supports AT keyboard controller for PS2 mouse that can be accessed through I/O ports(0x60 and 0x64). diff --git a/doc/developer-guides/hld/hld-APL_GVT-g.rst b/doc/developer-guides/hld/hld-APL_GVT-g.rst index b0d683ed2..895fd984f 100644 --- a/doc/developer-guides/hld/hld-APL_GVT-g.rst +++ b/doc/developer-guides/hld/hld-APL_GVT-g.rst @@ -1,12 +1,12 @@ .. _APL_GVT-g-hld: -GVT-g high-level design +GVT-g High-Level Design ####################### Introduction ************ -Purpose of this Document +Purpose of This Document ======================== This high-level design (HLD) document describes the usage requirements @@ -919,7 +919,7 @@ OS and an Android Guest OS. Full picture of the AcrnGT -AcrnGT in kernel +AcrnGT in Kernel ================= The AcrnGT module in the Service VM kernel acts as an adaption layer to connect diff --git a/doc/developer-guides/hld/hld-devicemodel.rst b/doc/developer-guides/hld/hld-devicemodel.rst index 3e42a7899..8b552345f 100644 --- a/doc/developer-guides/hld/hld-devicemodel.rst +++ b/doc/developer-guides/hld/hld-devicemodel.rst @@ -1,6 +1,6 @@ .. _hld-devicemodel: -Device Model high-level design +Device Model High-Level Design ############################## Hypervisor Device Model (DM) is a QEMU-like application in Service VM @@ -51,18 +51,18 @@ options: .. 
code-block:: none - acrn-dm [-hAWYv] [-B bootargs] [-c vcpus] [-E elf_image_path] + acrn-dm [-hAWYv] [-B bootargs] [-E elf_image_path] [-G GVT_args] [-i ioc_mediator_parameters] [-k kernel_image_path] - [-l lpc] [-m mem] [-p vcpu:hostcpu] [-r ramdisk_image_path] + [-l lpc] [-m mem] [-r ramdisk_image_path] [-s pci] [-U uuid] [--vsbl vsbl_file_name] [--ovmf ovmf_file_path] [--part_info part_info_name] [--enable_trusty] [--intr_monitor param_setting] [--acpidev_pt HID] [--mmiodev_pt MMIO_regions] [--vtpm2 sock_path] [--virtio_poll interval] [--mac_seed seed_string] - [--ptdev_no_reset] [--debugexit] - [--lapic_pt] + [--cpu_affinity pCPUs] [--lapic_pt] [--rtvm] [--windows] + [--debugexit] [--logger-setting param_setting] [--pm_notify_channel] + [--pm_by_vuart vuart_node] [--psram] -A: create ACPI tables -B: bootargs for kernel - -c: # cpus (default 1) -E: elf image path -G: GVT args: low_gm_size, high_gm_size, fence_sz -h: help @@ -70,7 +70,6 @@ options: -k: kernel image path -l: LPC device configuration -m: memory size in MB - -p: pin 'vcpu' to 'hostcpu' -r: ramdisk image path -s: PCI slot config -U: uuid @@ -80,9 +79,10 @@ options: --mac_seed: set a platform unique string as a seed for generate mac address --vsbl: vsbl file path --ovmf: ovmf file path + --psram: Enable Pseudo (Software) SRAM passthrough + --cpu_affinity: list of pCPUs assigned to this VM --part_info: guest partition info file path --enable_trusty: enable trusty for guest - --ptdev_no_reset: disable reset check for ptdev --debugexit: enable debug exit function --intr_monitor: enable interrupt storm monitor its params: threshold/s,probe-period(s),delay_time(ms),delay_duration(ms), @@ -95,6 +95,8 @@ options: --logger_setting: params like console,level=4;kmsg,level=3 --pm_notify_channel: define the channel used to notify guest about power event --pm_by_vuart:pty,/run/acrn/vuart_vmname or tty,/dev/ttySn + --windows: support Oracle virtio-blk, virtio-net, and virtio-input devices + for windows guest with 
secure boot See :ref:`acrn-dm_parameters` for more detailed descriptions of these configuration options. @@ -111,7 +113,7 @@ Here's an example showing how to run a VM with: .. code-block:: bash - acrn-dm -A -m 2048M -c 3 \ + acrn-dm -A -m 2048M \ -s 0:0,hostbridge \ -s 1:0,lpc -l com1,stdio \ -s 5,virtio-console,@pty:pty_port \ @@ -121,10 +123,9 @@ Here's an example showing how to run a VM with: --intr_monitor 10000,10,1,100 \ -B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \ console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \ - consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=0x070F00 \ - i915.enable_guc_loading=0 \ + consoleblank=0 tsc=reliable \ i915.enable_hangcheck=0 i915.nuclear_pageflip=1 \ - i915.enable_guc_submission=0 i915.enable_guc=0" vm1 + i915.enable_guc=0" vm1 DM Initialization ***************** @@ -279,7 +280,7 @@ DM Initialization VHM *** -VHM overview +VHM Overview ============ Device Model manages User VM by accessing interfaces exported from VHM @@ -302,7 +303,7 @@ hypercall to the hypervisor. There are two exceptions: Architecture of ACRN VHM -VHM ioctl interfaces +VHM ioctl Interfaces ==================== .. note:: Reference API documents for General interface, VM Management, @@ -756,7 +757,7 @@ called from the PIO/MMIO handler. The PCI emulation device will make use of interrupt APIs as well for its interrupt injection. -PCI Host Bridge and hierarchy +PCI Host Bridge and Hierarchy ============================= There is PCI host bridge emulation in DM. The bus hierarchy is @@ -765,7 +766,7 @@ example: .. 
code-block:: bash - acrn-dm -A -m 2048M -c 3 \ + acrn-dm -A -m 2048M \ -s 0:0,hostbridge \ -s 1:0,lpc -l com1,stdio \ -s 5,virtio-console,@pty:pty_port \ @@ -773,10 +774,9 @@ example: -s 4,virtio-net,tap_LaaG --vsbl /usr/share/acrn/bios/VSBL.bin \ -B "root=/dev/vda2 rw rootwait maxcpus=3 nohpet console=hvc0 \ console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M \ - consoleblank=0 tsc=reliable i915.avail_planes_per_pipe=0x070F00 \ - i915.enable_guc_loading=0 \ + consoleblank=0 tsc=reliable \ i915.enable_hangcheck=0 i915.nuclear_pageflip=1 \ - i915.enable_guc_submission=0 i915.enable_guc=0" vm1 + i915.enable_guc=0" vm1 the bus hierarchy would be: @@ -892,7 +892,7 @@ shows a typical ACPI table layout in an Intel APL platform: Typical ACPI table layout on Intel APL platform -ACPI virtualization +ACPI Virtualization =================== Most modern OSes requires ACPI, so we need ACPI virtualization to diff --git a/doc/developer-guides/hld/hld-emulated-devices.rst b/doc/developer-guides/hld/hld-emulated-devices.rst index 204e75aab..b89d50d65 100644 --- a/doc/developer-guides/hld/hld-emulated-devices.rst +++ b/doc/developer-guides/hld/hld-emulated-devices.rst @@ -1,6 +1,6 @@ .. _hld-emulated-devices: -Emulated devices high-level design +Emulated Devices High-Level Design ################################## Full virtualization device models can typically diff --git a/doc/developer-guides/hld/hld-hypervisor.rst b/doc/developer-guides/hld/hld-hypervisor.rst index f5e4b01fe..dc4362958 100644 --- a/doc/developer-guides/hld/hld-hypervisor.rst +++ b/doc/developer-guides/hld/hld-hypervisor.rst @@ -1,6 +1,6 @@ .. _hld-hypervisor: -Hypervisor high-level design +Hypervisor High-Level Design ############################ diff --git a/doc/developer-guides/hld/hld-overview.rst b/doc/developer-guides/hld/hld-overview.rst index 7f875cfd3..8c30a73d8 100644 --- a/doc/developer-guides/hld/hld-overview.rst +++ b/doc/developer-guides/hld/hld-overview.rst @@ -1,6 +1,6 @@ .. 
_hld-overview: -ACRN high-level design overview +ACRN High-Level Design Overview ############################### ACRN is an open source reference hypervisor (HV) that runs on top of @@ -28,7 +28,7 @@ The Instrument Control (IC) system manages graphic displays of: - alerts of low fuel or tire pressure - rear-view camera (RVC) and surround-camera view for driving assistance -In-vehicle Infotainment +In-Vehicle Infotainment ======================= A typical In-vehicle Infotainment (IVI) system supports: @@ -419,7 +419,7 @@ to complete the User VM's host-to-guest mapping using this pseudo code: host2guest_map_for_uos(x.hpa, x.uos_gpa, x.size) end -Virtual Slim bootloader +Virtual Slim Bootloader ======================= The Virtual Slim bootloader (vSBL) is the virtual bootloader that supports @@ -451,7 +451,7 @@ For an Android VM, the vSBL will load and verify trusty OS first, and trusty OS will then load and verify Android OS according to the Android OS verification mechanism. -OVMF bootloader +OVMF Bootloader ======================= Open Virtual Machine Firmware (OVMF) is the virtual bootloader that supports @@ -536,7 +536,7 @@ Boot Flow Power Management **************** -CPU P-state & C-state +CPU P-State & C-State ===================== In ACRN, CPU P-state and C-state (Px/Cx) are controlled by the guest OS. @@ -562,7 +562,7 @@ This diagram shows CPU P/C-state management blocks: CPU P/C-state management block diagram -System power state +System Power State ================== ACRN supports ACPI standard defined power state: S3 and S5 in system diff --git a/doc/developer-guides/hld/hld-power-management.rst b/doc/developer-guides/hld/hld-power-management.rst index e381a79f8..8909e6cd7 100644 --- a/doc/developer-guides/hld/hld-power-management.rst +++ b/doc/developer-guides/hld/hld-power-management.rst @@ -1,12 +1,12 @@ .. 
_hld-power-management: -Power Management high-level design +Power Management High-Level Design ################################## -P-state/C-state management +P-State/C-State Management ************************** -ACPI Px/Cx data +ACPI Px/Cx Data =============== CPU P-state/C-state are controlled by the guest OS. The ACPI @@ -54,7 +54,7 @@ Hypervisor module named CPU state table: With these Px/Cx data, the Hypervisor is able to intercept the guest's P/C-state requests with desired restrictions. -Virtual ACPI table build flow +Virtual ACPI Table Build Flow ============================= :numref:`vACPItable` shows how to build the virtual ACPI table with the @@ -127,7 +127,7 @@ could customize it according to their hardware/software requirements. ACRN System S3/S5 diagram -System low power state entry process +System Low Power State Entry Process ==================================== Each time, when lifecycle manager of User VM starts power state transition, @@ -171,7 +171,7 @@ For system power state entry: 6. OSPM in ACRN hypervisor checks all guests are in S5 state and shuts down whole system. -System low power state exit process +System Low Power State Exit Process =================================== The low power state exit process is in reverse order. The ACRN diff --git a/doc/developer-guides/hld/hld-security.rst b/doc/developer-guides/hld/hld-security.rst index 98f648988..3844d1969 100644 --- a/doc/developer-guides/hld/hld-security.rst +++ b/doc/developer-guides/hld/hld-security.rst @@ -1,6 +1,6 @@ .. _hld-security: -Security high-level design +Security High-Level Design ########################## .. primary author: Bing Zhu @@ -131,7 +131,7 @@ Boot Flow --------- ACRN supports two verified boot sequences. 
-1) Verified Boot Sequence with SBL +1) Verified Boot Sequence With SBL ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ As shown in :numref:`security-bootflow-sbl`, the Converged Security Engine Firmware (CSE FW) behaves as the root of trust in this platform boot @@ -148,7 +148,7 @@ before launching. ACRN Boot Flow with SBL -2) Verified Boot Sequence with UEFI +2) Verified Boot Sequence With UEFI ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ As shown in :numref:`security-bootflow-uefi`, in this boot sequence, UEFI authenticates and starts the ACRN hypervisor firstly,and hypervisor will return @@ -193,7 +193,7 @@ partners are responsible for image signing, ensuring the key strength meets security requirements, and storing the secret RSA private key securely. -Guest Secure Boot with OVMF +Guest Secure Boot With OVMF --------------------------- Open Virtual Machine Firmware (OVMF) is an EDK II based project to enable UEFI support for virtual machines in a virtualized environment. In ACRN, OVMF is @@ -744,7 +744,7 @@ for secure-world is preserved too. The physical memory region of secure world is removed from EPT paging tables of any guest VM, even including the Service VM. -Third-party libraries +Third-Party Libraries --------------------- All the third-party libraries must be examined before use to verify @@ -754,7 +754,7 @@ can be used to search for known vulnerabilities. .. _platform_root_of_trust: -Platform Root of Trust Key/SEED Derivation +Platform Root of Trust Key/SEED Derivation ========================================== For security reason, each guest VM requires a root key, which is used to @@ -880,7 +880,7 @@ memory (>=511G) are valid for Trusty World's EPT only. Memory View for User VM non-secure World and Secure World -Trusty/TEE Hypercalls +Trusty/TEE Hypercalls --------------------- Two hypercalls are introduced to assist in secure world (Trusty/TEE) @@ -1039,7 +1039,7 @@ SEED Derivation Refer to the previous section: :ref:`platform_root_of_trust`.
-Trusty/TEE S3 (Suspend To RAM) +Trusty/TEE S3 (Suspend to RAM) ------------------------------ Secure world S3 design is not yet finalized. However, there is a diff --git a/doc/developer-guides/hld/hld-splitlock.rst b/doc/developer-guides/hld/hld-splitlock.rst index 0a9082769..4791378c6 100644 --- a/doc/developer-guides/hld/hld-splitlock.rst +++ b/doc/developer-guides/hld/hld-splitlock.rst @@ -1,6 +1,6 @@ .. _hld_splitlock: -Handling Split-locked Access in ACRN +Handling Split-Locked Access in ACRN #################################### A split lock is any atomic operation whose operand crosses two cache @@ -12,7 +12,7 @@ system performance. This document explains Split-locked Access, how to detect it, and how ACRN handles it. -Split-locked Access Introduction +Split-Locked Access Introduction ******************************** Intel-64 and IA32 multiple-processor systems support locked atomic operations on locations in system memory. For example, The LOCK instruction @@ -38,7 +38,7 @@ Split-locked Access can cause unexpected long latency to ordinary memory operations by other CPUs while the bus is locked. This degraded system performance can be hard to investigate. -Split-locked Access Detection +Split-Locked Access Detection ***************************** The `Intel Tremont Microarchitecture `_ @@ -70,7 +70,7 @@ MSR registers. - The 29th bit of TEST_CTL MSR(0x33) controls enabling and disabling #AC for Split-locked Access. -ACRN Handling Split-locked Access +ACRN Handling Split-Locked Access ********************************* Split-locked Access is not expected in the ACRN hypervisor itself, and should never happen. However, such access could happen inside a VM. ACRN @@ -92,7 +92,7 @@ support for handling split-locked access follows these design principles: native OS). The real-time (RT) guest must avoid a Split-locked Access and consider it a software bug.
-Enable Split-Locked Access handling early +Enable Split-Locked Access Handling Early ========================================== This feature is enumerated at the Physical CPU (pCPU) pre-initialization stage, where ACRN detects CPU capabilities. If the pCPU supports this @@ -128,7 +128,7 @@ problem by reporting a warning message that the VM tried writing to TEST_CTRL MSR. -Disable Split-locked Access Detection +Disable Split-Locked Access Detection ===================================== If the CPU supports Split-locked Access detection, the ACRN hypervisor uses it to prevent any VM running with potential system performance diff --git a/doc/developer-guides/hld/hld-trace-log.rst b/doc/developer-guides/hld/hld-trace-log.rst index 9e5070485..68c7ce11d 100644 --- a/doc/developer-guides/hld/hld-trace-log.rst +++ b/doc/developer-guides/hld/hld-trace-log.rst @@ -1,6 +1,6 @@ .. _hld-trace-log: -Tracing and Logging high-level design +Tracing and Logging High-Level Design ##################################### Both Trace and Log are built on top of a mechanism named shared @@ -128,7 +128,7 @@ kinds of logs: - Current runtime logs; - Logs remaining in the buffer, from the last crashed run. -Architectural diagram +Architectural Diagram ===================== Similar to the design of ACRN Trace, ACRN Log is built on top of @@ -149,7 +149,7 @@ up: Architectural diagram of ACRN Log -ACRN log support in Hypervisor +ACRN Log Support in Hypervisor ============================== To support ``acrnlog``, the following adaption was made to hypervisor log @@ -162,7 +162,7 @@ system: There are 6 different loglevels, as shown below. The specified severity loglevel is stored in ``mem_loglevel``, initialized -by :option:`CONFIG_MEM_LOGLEVEL_DEFAULT`. The loglevel can +by :option:`hv.DEBUG_OPTIONS.MEM_LOGLEVEL`. The loglevel can be set to a new value at runtime via hypervisor shell command ``loglevel``. 
diff --git a/doc/developer-guides/hld/hld-virtio-devices.rst b/doc/developer-guides/hld/hld-virtio-devices.rst index 69f100bc0..700d99549 100644 --- a/doc/developer-guides/hld/hld-virtio-devices.rst +++ b/doc/developer-guides/hld/hld-virtio-devices.rst @@ -1,7 +1,7 @@ .. _hld-virtio-devices: .. _virtio-hld: -Virtio devices high-level design +Virtio Devices High-Level Design ################################ The ACRN Hypervisor follows the `Virtual I/O Device (virtio) @@ -47,7 +47,7 @@ ACRN's virtio architectures, and elaborates on ACRN virtio APIs. Finally this section will introduce all the virtio devices currently supported by ACRN. -Virtio introduction +Virtio Introduction ******************* Virtio is an abstraction layer over devices in a para-virtualized @@ -268,7 +268,7 @@ Kernel-Land Virtio Framework ACRN supports two kernel-land virtio frameworks: VBS-K, designed from scratch for ACRN, the other called Vhost, compatible with Linux Vhost. -VBS-K framework +VBS-K Framework --------------- The architecture of ACRN VBS-K is shown in @@ -301,7 +301,7 @@ driver development. ACRN Kernel Land Virtio Framework -Vhost framework +Vhost Framework --------------- Vhost is similar to VBS-K. Vhost is a common solution upstreamed in the @@ -346,7 +346,7 @@ can be described as: virtqueue, which results a event_signal on kick fd by VHM ioeventfd. 5. vhost device in kernel signals on the irqfd to notify the guest. -Ioeventfd implementation +Ioeventfd Implementation ~~~~~~~~~~~~~~~~~~~~~~~~ Ioeventfd module is implemented in VHM, and can enhance a registered @@ -372,7 +372,7 @@ The workflow can be summarized as: corresponding eventfd. 7. trigger the signal to related eventfd. -Irqfd implementation +Irqfd Implementation ~~~~~~~~~~~~~~~~~~~~ The irqfd module is implemented in VHM, and can enhance a registered @@ -584,7 +584,7 @@ VBS-K APIs The VBS-K APIs are exported by VBS-K related modules. Users could use the following APIs to implement their VBS-K modules. 
-APIs provided by DM +APIs Provided by DM ~~~~~~~~~~~~~~~~~~~ .. doxygenfunction:: vbs_kernel_reset @@ -596,7 +596,7 @@ APIs provided by DM .. doxygenfunction:: vbs_kernel_stop :project: Project ACRN -APIs provided by VBS-K modules in service OS +APIs Provided by VBS-K Modules in Service OS ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .. kernel-doc:: include/linux/vbs/vbs.h @@ -608,10 +608,10 @@ APIs provided by VBS-K modules in service OS virtio_vqs_index_get virtio_dev_reset -VHOST APIS +VHOST APIs ========== -APIs provided by DM +APIs Provided by DM ------------------- .. doxygenfunction:: vhost_dev_init @@ -626,7 +626,7 @@ APIs provided by DM .. doxygenfunction:: vhost_dev_stop :project: Project ACRN -Linux vhost IOCTLs +Linux Vhost IOCTLs ------------------ ``#define VHOST_GET_FEATURES _IOR(VHOST_VIRTIO, 0x00, __u64)`` @@ -658,7 +658,7 @@ Linux vhost IOCTLs This IOCTL is used to set the eventfd which is used by vhost do inject virtual interrupt. -VHM eventfd IOCTLs +VHM Eventfd IOCTLs ------------------ .. doxygenstruct:: acrn_ioeventfd diff --git a/doc/developer-guides/hld/hld-vsbl.rst b/doc/developer-guides/hld/hld-vsbl.rst index cd77c99fb..d48fc9c88 100644 --- a/doc/developer-guides/hld/hld-vsbl.rst +++ b/doc/developer-guides/hld/hld-vsbl.rst @@ -1,4 +1,4 @@ .. _hld-vsbl: -Virtual Slim-Bootloader high-level design +Virtual Slim-Bootloader High-Level Design ######################################### diff --git a/doc/developer-guides/hld/hostbridge-virt-hld.rst b/doc/developer-guides/hld/hostbridge-virt-hld.rst index 1cbe85567..554b8a221 100644 --- a/doc/developer-guides/hld/hostbridge-virt-hld.rst +++ b/doc/developer-guides/hld/hostbridge-virt-hld.rst @@ -1,6 +1,6 @@ .. _hostbridge_virt_hld: -Hostbridge emulation +Hostbridge Emulation #################### Overview @@ -8,7 +8,7 @@ Overview Hostbridge emulation is based on PCI emulation; however, the hostbridge emulation only sets the PCI configuration space. 
The device model sets the PCI configuration space for hostbridge in the Service VM and then exposes it to the User VM to detect the PCI hostbridge. -PCI Host Bridge and hierarchy +PCI Host Bridge and Hierarchy ***************************** There is PCI host bridge emulation in DM. The bus hierarchy is determined by ``acrn-dm`` command line input. Using this command line, as an example:: diff --git a/doc/developer-guides/hld/hv-config.rst b/doc/developer-guides/hld/hv-config.rst index 587f5da84..121007361 100644 --- a/doc/developer-guides/hld/hv-config.rst +++ b/doc/developer-guides/hld/hv-config.rst @@ -1,51 +1,46 @@ .. _hv-config: -Compile-time Configuration +Compile-Time Configuration ########################## -The hypervisor provides a kconfig-like way for manipulating compile-time -configurations. Basically the hypervisor defines a set of configuration -symbols and declare their default value. A configuration file is -created, containing the values of each symbol, before building the -sources. +As described in :ref:`acrn_configuration_tool`, ACRN hypervisor configurations +are saved as XML files and used for compilation. At compile-time, configuration +data in the board and scenario XMLs are converted to C header and source files +that define macros, variables, and data structures to which the hypervisor can +refer. This conversion has two main steps: -Similar to Linux kconfig, there are three files involved: +1. **Static allocation of resources**, which statically reserves resources for + the VMs if only high-level requirements are given in the scenario + configurations. Examples include the runtime base address of the hypervisor + image and PCI BDF addresses of ivshmem virtual devices. -- **.config** This files stores the values of all configuration - symbols. +#. **Generation of C files**, which places the configuration data in the data + types and structures defined by the hypervisor. 
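As a rough illustration of step 2 (generation of C files), the flattening of scenario feature options into ``config.h`` macros can be modeled as below. This is a hypothetical Python sketch, not the actual generator; the option names and the exact macro-emission rules (e.g. booleans appearing only when enabled) are assumptions:

```python
def to_config_macros(options):
    """Sketch: flatten feature options (from the scenario XML) into the
    C macros that end up in include/config.h. Names are illustrative."""
    lines = []
    for name, value in options.items():
        if isinstance(value, bool):
            if value:
                # assumed convention: boolean options emitted only when enabled
                lines.append(f"#define CONFIG_{name} 1")
        else:
            lines.append(f"#define CONFIG_{name} {value}")
    return "\n".join(lines)

# e.g. a numeric option becomes a valued macro, a disabled bool is omitted:
print(to_config_macros({"MEM_LOGLEVEL_DEFAULT": 5, "RELEASE": False}))
```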
-- **config.mk** This file is a conversion of .config in Makefile - syntax, and can be included in makefiles so that the build - process can rely on the configurations. +Some key files, which can be found under the build directory of the hypervisor, +are as follows. -- **config.h** This file is a conversion of .config in C syntax, and is - automatically included in every source file so that the values of - the configuration symbols are available in the sources. +- **.board.xml** and **.scenario.xml** These files contain the configuration + data used by that build. -.. figure:: images/config-image103.png - :align: center - :name: config-build-workflow +- **configs/allocation.xml** contains the results of the static allocation. - Hypervisor configuration and build workflow +- **configs/config.mk** This file is a conversion of the hypervisor feature + configurations (specified in the scenario XML) in Makefile syntax, and can be + included in makefiles so that the build process can rely on the + configurations. -:numref:`config-build-workflow` shows the workflow of building the -hypervisor: +- **include/config.h** This file is a conversion of the hypervisor feature + configurations in C header syntax, and is automatically included in every + source file so that the values of the configuration symbols are available in + the sources. -1. Three targets are introduced for manipulating the configurations. +- **configs/boards** and **configs/scenarios** contain all the other generated C + headers and sources that encode the configuration data in the XML files. - a. **defconfig** creates a .config based on a predefined - configuration file. +Whenever ``.board.xml`` or ``.scenario.xml`` is modified, the hypervisor will be +rebuilt upon the next invocation of ``make``. - b. **oldconfig** updates an existing .config after creating one if it - does not exist. - - c. **menuconfig** presents a terminal UI to navigate and modify the - configurations in an interactive manner. - -2. 
The target oldconfig is also used to create a .config if a .config - file does not exist when building the source directly. - -3. The other two files for makefiles and C sources are regenerated after - .config changes. - -Refer to :ref:`configuration` for a complete list of configuration symbols. +For the concept and usage of the configuration toolset, refer to +:ref:`acrn_configuration_tool`. For a complete list of configuration symbols, +refer to :ref:`scenario-config-options`. diff --git a/doc/developer-guides/hld/hv-console.rst b/doc/developer-guides/hld/hv-console.rst index c39c7fac8..39001b6a7 100644 --- a/doc/developer-guides/hld/hv-console.rst +++ b/doc/developer-guides/hld/hv-console.rst @@ -1,11 +1,11 @@ .. _hv-console-shell-uart: -Hypervisor console, hypervisor shell, and virtual UART +Hypervisor Console, Hypervisor Shell, and Virtual UART ###################################################### .. _hv-console: -Hypervisor console +Hypervisor Console ****************** The hypervisor console is a text-based terminal accessible from UART. @@ -32,7 +32,7 @@ is active: configured at compile time. In the release version, the console is disabled and the physical UART is not used by the hypervisor or Service VM. 
-Hypervisor shell +Hypervisor Shell **************** For debugging, the hypervisor shell provides commands to list some diff --git a/doc/developer-guides/hld/hv-cpu-virt.rst b/doc/developer-guides/hld/hv-cpu-virt.rst index c7c9b855e..0ad6a0195 100644 --- a/doc/developer-guides/hld/hv-cpu-virt.rst +++ b/doc/developer-guides/hld/hv-cpu-virt.rst @@ -31,7 +31,7 @@ Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU - **simple schedule**: a well-designed scheduler framework that allows ACRN to adopt different scheduling policies, such as the **noop** and **round-robin**: - - **noop scheduler**: only two thread loops are maintained for a CPU: a + - **noop scheduler**: only two thread loops are maintained for a CPU: a vCPU thread and a default idle thread. A CPU runs most of the time in the vCPU thread for emulating a guest CPU, switching between VMX root mode and non-root mode. A CPU schedules out to default idle when an @@ -45,7 +45,7 @@ Based on Intel VT-x virtualization technology, ACRN emulates a virtual CPU itself as well, such as when it executes "PAUSE" instruction. -Static CPU partitioning +Static CPU Partitioning *********************** CPU partitioning is a policy for mapping a virtual @@ -75,7 +75,7 @@ VM. See :ref:`cpu_sharing` for more information. -CPU management in the Service VM under static CPU partitioning +CPU Management in the Service VM Under Static CPU Partitioning ============================================================== With ACRN, all ACPI table entries are passthrough to the Service VM, including @@ -96,7 +96,7 @@ Here is an example flow of CPU allocation on a multi-core platform. 
CPU allocation on a multi-core platform -CPU management in the Service VM under flexible CPU sharing +CPU Management in the Service VM Under Flexible CPU Sharing =========================================================== As all Service VM CPUs could share with different User VMs, ACRN can still passthrough @@ -105,7 +105,7 @@ MADT to Service VM, and the Service VM is still able to see all physical CPUs. But as under CPU sharing, the Service VM does not need offline/release the physical CPUs intended for User VM use. -CPU management in the User VM +CPU Management in the User VM ============================= ``cpu_affinity`` in ``vm config`` defines a set of pCPUs that a User VM @@ -113,7 +113,7 @@ is allowed to run on. acrn-dm could choose to launch on only a subset of the pCP or on all pCPUs listed in cpu_affinity, but it can't assign any pCPU that is not included in it. -CPU assignment management in HV +CPU Assignment Management in HV =============================== The physical CPU assignment is pre-defined by ``cpu_affinity`` in @@ -169,7 +169,7 @@ lifecycle: :project: Project ACRN -vCPU Scheduling under static CPU partitioning +vCPU Scheduling Under Static CPU Partitioning ********************************************* .. figure:: images/hld-image35.png @@ -225,7 +225,7 @@ Some example scenario flows are shown here: *hcall_notify_ioreq_finish->resume_vcpu* and makes the vCPU schedule back to *vcpu_thread* to continue its guest execution. -vCPU Scheduling under flexible CPU sharing +vCPU Scheduling Under Flexible CPU Sharing ****************************************** To be added. 
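The ``cpu_affinity`` rule described earlier — a User VM may launch on any subset of the pCPUs listed in its ``vm config``, but can never be assigned a pCPU outside that set — can be sketched as follows. This is an illustrative Python model, not the acrn-dm implementation; the helper name is hypothetical:

```python
def validate_pcpu_assignment(requested, cpu_affinity):
    """Return the set of pCPUs a User VM may actually use.

    A launch request may pick any subset of cpu_affinity; a pCPU outside
    cpu_affinity is rejected outright (hypothetical helper for illustration).
    """
    requested = set(requested)
    allowed = set(cpu_affinity)
    if not requested:
        # No explicit request: the VM may use every pCPU in cpu_affinity.
        return allowed
    if not requested <= allowed:
        raise ValueError(f"pCPUs {sorted(requested - allowed)} not in cpu_affinity")
    return requested

# A VM whose vm config lists pCPUs 1 and 2 may launch on just pCPU 2:
assert validate_pcpu_assignment([2], [1, 2]) == {2}
```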
diff --git a/doc/developer-guides/hld/hv-dev-passthrough.rst b/doc/developer-guides/hld/hv-dev-passthrough.rst index 4cabbd0e1..d090e8cf1 100644 --- a/doc/developer-guides/hld/hv-dev-passthrough.rst +++ b/doc/developer-guides/hld/hv-dev-passthrough.rst @@ -53,7 +53,7 @@ for post-launched VM: Passthrough devices initialization control flow -Passthrough Device status +Passthrough Device Status ************************* Most common devices on supported platforms are enabled for @@ -129,7 +129,7 @@ a passthrough device to/from a post-launched VM is shown in the following figure .. _vtd-posted-interrupt: -VT-d Interrupt-remapping +VT-d Interrupt-Remapping ************************ The VT-d interrupt-remapping architecture enables system software to @@ -252,7 +252,7 @@ There is one exception, MSI-X table is also in a MMIO BAR. Hypervisor needs to t accesses to MSI-X table. So the page(s) having MSI-X table should not be accessed by guest directly. EPT mapping is not built for these pages having MSI-X table. -Device configuration emulation +Device Configuration Emulation ****************************** The PCI configuration space can be accessed by a PCI-compatible @@ -260,7 +260,7 @@ Configuration Mechanism (IO port 0xCF8/CFC) and the PCI Express Enhanced Configuration Access Mechanism (PCI MMCONFIG). The ACRN hypervisor traps this PCI configuration space access and emulate it. Refer to :ref:`split-device-model` for details. -MSI-X table emulation +MSI-X Table Emulation ********************* VM accesses to MSI-X table should be trapped so that hypervisor has the @@ -386,7 +386,7 @@ The platform GSI information is in devicemodel/hw/pci/platform_gsi_info.c for limited platform (currently, only APL MRB). For other platforms, the platform specific GSI information should be added to activate the checking of GSI sharing violation. 
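The GSI sharing check mentioned above can be illustrated with a small sketch. This is a hypothetical Python model, not hypervisor code: the policy modeled is that devices sharing one physical GSI must all be assigned to the same VM, since a single interrupt line cannot be split between guests:

```python
def check_gsi_sharing_violation(assignments, gsi_of):
    """assignments: device -> VM; gsi_of: device -> platform GSI.

    Returns True when the assignment is legal, i.e. no GSI is shared by
    devices assigned to different VMs (illustrative model only).
    """
    vm_of_gsi = {}
    for dev, vm in assignments.items():
        gsi = gsi_of[dev]
        if gsi in vm_of_gsi and vm_of_gsi[gsi] != vm:
            return False  # violation: one interrupt line split across two VMs
        vm_of_gsi[gsi] = vm
    return True
```

With the platform GSI information populated for a board, such a check can reject a passthrough configuration before any device is actually assigned.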
-Data structures and interfaces +Data Structures and Interfaces ****************************** The following APIs are common APIs provided to initialize interrupt remapping for diff --git a/doc/developer-guides/hld/hv-hypercall.rst b/doc/developer-guides/hld/hv-hypercall.rst index f0b402d09..5d83ce12f 100644 --- a/doc/developer-guides/hld/hv-hypercall.rst +++ b/doc/developer-guides/hld/hv-hypercall.rst @@ -1,6 +1,6 @@ .. _hv-hypercall: -Hypercall / VHM upcall +Hypercall / VHM Upcall ###################### The hypercall/upcall is used to request services between the Guest VM and the hypervisor. @@ -28,7 +28,7 @@ injected to Service VM vCPU0. The Service VM will register the IRQ handler for v module in the Service VM once the IRQ is triggered. View the detailed upcall process at :ref:`ipi-management` -Hypercall APIs reference: +Hypercall APIs Reference: ************************* :ref:`hypercall_apis` for the Service VM diff --git a/doc/developer-guides/hld/hv-interrupt.rst b/doc/developer-guides/hld/hv-interrupt.rst index 1ba0f9f29..7b4e7c1ed 100644 --- a/doc/developer-guides/hld/hv-interrupt.rst +++ b/doc/developer-guides/hld/hv-interrupt.rst @@ -1,6 +1,6 @@ .. _interrupt-hld: -Physical Interrupt high-level design +Physical Interrupt High-Level Design #################################### Overview @@ -374,7 +374,7 @@ IPI vector 0xF3 upcall. The virtual interrupt injection uses IPI vector 0xF0. .. _hv_interrupt-data-api: -Data structures and interfaces +Data Structures and Interfaces ****************************** IOAPIC diff --git a/doc/developer-guides/hld/hv-io-emulation.rst b/doc/developer-guides/hld/hv-io-emulation.rst index f985fcdde..0d743317f 100644 --- a/doc/developer-guides/hld/hv-io-emulation.rst +++ b/doc/developer-guides/hld/hv-io-emulation.rst @@ -1,6 +1,6 @@ .. 
_hld-io-emulation: -I/O Emulation high-level design +I/O Emulation High-Level Design ############################### As discussed in :ref:`intro-io-emulation`, there are multiple ways and @@ -215,7 +215,7 @@ Note that there is no state to represent a 'failed' I/O request. Service VM should return all 1's for reads and ignore writes whenever it cannot handle the I/O request, and change the state of the request to COMPLETE. -Post-work +Post-Work ========= After an I/O request is completed, some more work needs to be done for diff --git a/doc/developer-guides/hld/hv-ioc-virt.rst b/doc/developer-guides/hld/hv-ioc-virt.rst index 9fec26284..6d301028c 100644 --- a/doc/developer-guides/hld/hv-ioc-virt.rst +++ b/doc/developer-guides/hld/hv-ioc-virt.rst @@ -1,6 +1,6 @@ .. _IOC_virtualization_hld: -IOC Virtualization high-level design +IOC Virtualization High-Level Design #################################### @@ -31,7 +31,7 @@ IOC Mediator Design Architecture Diagrams ===================== -IOC introduction +IOC Introduction ---------------- .. figure:: images/ioc-image12.png @@ -57,7 +57,7 @@ IOC introduction IOC for storing persistent data. The IOC is in charge of accessing NVM following the SoC's requirements. -CBC protocol introduction +CBC Protocol Introduction ------------------------- The Carrier Board Communication (CBC) protocol multiplexes and @@ -85,7 +85,7 @@ The CBC protocol is based on a four-layer system: and contains Multiplexer (MUX) and Priority fields. - The **Service Layer** contains the payload data. -Native architecture +Native Architecture ------------------- In the native architecture, the IOC controller connects to UART @@ -102,7 +102,7 @@ devices. IOC Native - Software architecture -Virtualization architecture +Virtualization Architecture --------------------------- In the virtualization architecture, the IOC Device Model (DM) is @@ -163,7 +163,7 @@ char devices and UART DM immediately. 
- Currently, IOC mediator only cares about lifecycle, signal, and raw data. Others, e.g. diagnosis, are not used by the IOC mediator. -State transfer +State Transfer -------------- IOC mediator has four states and five events for state transfer. @@ -190,7 +190,7 @@ IOC mediator has four states and five events for state transfer. sleep until a RESUME event is triggered to re-open the closed native CBC char devices and transition to the INIT state. -CBC protocol +CBC Protocol ------------ IOC mediator needs to pack/unpack the CBC link frame for IOC @@ -221,7 +221,7 @@ priority. Currently, priority is not supported by IOC firmware; the priority setting by the IOC mediator is based on the priority setting of the CBC driver. The Service VM and User VM use the same CBC driver. -Power management virtualization +Power Management Virtualization ------------------------------- In acrn-dm, the IOC power management architecture involves PM DM, IOC @@ -232,7 +232,7 @@ and wakeup reason flow is used to indicate IOC power state to the OS. UART DM transfers all IOC data between the Service VM and User VM. These modules complete boot/suspend/resume/shutdown functions. -Boot flow +Boot Flow +++++++++ .. figure:: images/ioc-image19.png @@ -251,7 +251,7 @@ Boot flow #. PM DM starts User VM. #. User VM lifecycle gets a "booting" wakeup reason. -Suspend & Shutdown flow +Suspend & Shutdown Flow +++++++++++++++++++++++ .. figure:: images/ioc-image21.png @@ -281,7 +281,7 @@ Suspend & Shutdown flow suspend/shutdown SUS_STAT, based on the Service VM's own lifecycle service policy. -Resume flow +Resume Flow +++++++++++ .. figure:: images/ioc-image22.png @@ -326,7 +326,7 @@ For RTC resume flow initial or active heartbeat. The User VM gets wakeup reason 0x800200 after resuming.. 
-System control data +System Control Data ------------------- IOC mediator has several emulated CBC commands, including wakeup reason, @@ -385,7 +385,7 @@ table: disable any watchdog on the CBC heartbeat messages during this period of time. -Wakeup reason +Wakeup Reason +++++++++++++ The wakeup reasons command contains a bit mask of all reasons, which is @@ -532,7 +532,7 @@ definition is as below. IOC Mediator - RTC flow -Signal data +Signal Data ----------- Signal channel is an API between the SOC and IOC for @@ -579,7 +579,7 @@ new multi signal, which contains the signals in the passlist. IOC Mediator - Multi-Signal passlist -Raw data +Raw Data -------- OEM raw channel only assigns to a specific User VM following that OEM @@ -613,7 +613,7 @@ for TTY line discipline in User VM:: -l com2,/run/acrn/ioc_$vm_name -Porting and adaptation to different platforms +Porting and Adaptation to Different Platforms ********************************************* TBD diff --git a/doc/developer-guides/hld/hv-memmgt.rst b/doc/developer-guides/hld/hv-memmgt.rst index 4c7c8bd45..ee8dba8c9 100644 --- a/doc/developer-guides/hld/hv-memmgt.rst +++ b/doc/developer-guides/hld/hv-memmgt.rst @@ -1,6 +1,6 @@ .. _memmgt-hld: -Memory Management high-level design +Memory Management High-Level Design ################################### This document describes memory management for the ACRN hypervisor. @@ -233,7 +233,7 @@ checking service and an EPT hugepage supporting checking service. Before the HV enables memory virtualization and uses the EPT hugepage, these services need to be invoked by other units. -Data Transfer between Different Address Spaces +Data Transfer Between Different Address Spaces ============================================== In ACRN, different memory space management is used in the hypervisor, @@ -244,7 +244,7 @@ transferring, or when the hypervisor does instruction emulation: the HV needs to access the guest instruction pointer register to fetch guest instruction data. 
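Data transfer between these address spaces must cope with guest ranges whose host mappings are only piecewise contiguous. A minimal sketch of splitting a copy at 2M huge-page boundaries follows (illustrative Python, not the ``copy_from_gpa``/``copy_to_gpa`` implementation; the function name is hypothetical and the 2M granularity is assumed from the EPT huge-page usage):

```python
PDE_SIZE = 2 * 1024 * 1024  # 2M huge-page granularity (assumed)

def split_by_hugepage(gpa, size):
    """Break a guest-physical range into chunks that never cross a 2M
    boundary: each 2M page may map to a discontiguous HPA, so each chunk
    needs its own EPT walk before the actual memcpy."""
    chunks = []
    while size > 0:
        step = min(size, PDE_SIZE - (gpa % PDE_SIZE))
        chunks.append((gpa, step))
        gpa += step
        size -= step
    return chunks

# An 8K copy starting 4K below a 2M boundary is split into two chunks:
print(split_by_hugepage(0x1FF000, 0x2000))
```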
-Access GPA from Hypervisor +Access GPA From Hypervisor -------------------------- When the hypervisor needs to access the GPA for data transfer, the caller from guest @@ -255,7 +255,7 @@ different 2M huge host-physical pages. The ACRN hypervisor must take care of this kind of data transfer by doing EPT page walking based on its HPA. -Access GVA from Hypervisor +Access GVA From Hypervisor -------------------------- When the hypervisor needs to access GVA for data transfer, it's likely both @@ -312,7 +312,7 @@ PAT entry in the PAT MSR (which is determined by PAT, PCD, and PWT bits from the guest paging structures) to determine the effective memory type. -VPID operations +VPID Operations =============== Virtual-processor identifier (VPID) is a hardware feature to optimize @@ -376,7 +376,7 @@ Interfaces Design The memory virtualization unit interacts with external units through VM exit and APIs. -VM Exit about EPT +VM Exit About EPT ================= There are two VM exit handlers for EPT violation and EPT @@ -395,7 +395,7 @@ Here is a list of major memory related APIs in the HV: EPT/VPID Capability Checking ---------------------------- -Data Transferring between hypervisor and VM +Data Transferring Between Hypervisor and VM ------------------------------------------- .. doxygenfunction:: copy_from_gpa diff --git a/doc/developer-guides/hld/hv-partitionmode.rst b/doc/developer-guides/hld/hv-partitionmode.rst index 27a05c9b8..cc4c8b4a8 100644 --- a/doc/developer-guides/hld/hv-partitionmode.rst +++ b/doc/developer-guides/hld/hv-partitionmode.rst @@ -1,6 +1,6 @@ .. _partition-mode-hld: -Partition mode +Partition Mode ############## ACRN is a type-1 hypervisor that supports running multiple guest operating @@ -44,7 +44,7 @@ example of two VMs with exclusive access to physical resources. 
Partition Mode example with two VMs -Guest info +Guest Info ********** ACRN uses multi-boot info passed from the platform bootloader to know @@ -57,7 +57,7 @@ configuration and copies them to the corresponding guest memory. .. figure:: images/partition-image18.png :align: center -ACRN setup for guests +ACRN Setup for Guests ********************* Cores @@ -96,7 +96,7 @@ for assigning host memory to the guests: ACRN creates EPT mapping for the guest between GPA (0, memory size) and HPA (starting address in guest configuration, memory size). -E820 and zero page info +E820 and Zero Page Info ======================= A default E820 is used for all the guests in partition mode. This table @@ -123,7 +123,7 @@ e820 info for all the guests. | RESERVED | +------------------------+ -Platform info - mptable +Platform Info - mptable ======================= ACRN, in partition mode, uses mptable to convey platform info to each @@ -132,7 +132,7 @@ guest, and whether the guest needs devices with INTX, ACRN builds mptable and copies it to the guest memory. In partition mode, ACRN uses physical APIC IDs to pass to the guests. -I/O - Virtual devices +I/O - Virtual Devices ===================== Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART @@ -141,7 +141,7 @@ Port I/O is supported for PCI device config space 0xcfc and 0xcf8, vUART host-bridge at BDF (Bus Device Function) 0.0:0 to each guest. Access to 256 bytes of config space for virtual host bridge is emulated. -I/O - Passthrough devices +I/O - Passthrough Devices ========================= ACRN, in partition mode, supports passing thru PCI devices on the @@ -153,7 +153,7 @@ expects the developer to provide the virtual BDF to BDF of the physical device mapping for all the passthrough devices as part of each guest configuration. 
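The emulated 0xCF8 port follows the standard PCI Configuration Mechanism #1 address layout, which can be decoded as in this Python sketch (for illustration only; the layout itself is the standard one, not ACRN-specific):

```python
def decode_cf8(addr):
    """Decode a 32-bit value written to I/O port 0xCF8 into (bus, dev, func, reg).

    Standard Configuration Mechanism #1 layout: bit 31 enable,
    bits 23:16 bus, 15:11 device, 10:8 function, 7:2 dword-aligned register.
    """
    assert addr & 0x80000000, "enable bit must be set"
    bus  = (addr >> 16) & 0xFF
    dev  = (addr >> 11) & 0x1F
    func = (addr >> 8) & 0x07
    reg  = addr & 0xFC          # low two bits are ignored (dword aligned)
    return bus, dev, func, reg

# e.g. the virtual host bridge at BDF 0:0.0, register 0:
print(decode_cf8(0x80000000))
```

A subsequent read/write on 0xCFC is then routed using the decoded BDF, either to an emulated device or to the configured passthrough device.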
-Runtime ACRN support for guests +Runtime ACRN Support for Guests ******************************* ACRN, in partition mode, supports an option to passthrough LAPIC of the @@ -170,7 +170,7 @@ will be discussed in detail in the corresponding sections. :align: center -Guest SMP boot flow +Guest SMP Boot Flow =================== The core APIC IDs are reported to the guest using mptable info. SMP boot @@ -178,18 +178,18 @@ flow is similar to sharing mode. Refer to :ref:`vm-startup` for guest SMP boot flow in ACRN. Partition mode guests startup is same as the Service VM startup in sharing mode. -Inter-processor Interrupt (IPI) Handling +Inter-Processor Interrupt (IPI) Handling ======================================== -Guests w/o LAPIC passthrough ----------------------------- +Guests Without LAPIC Passthrough +-------------------------------- For guests without LAPIC passthrough, IPIs between guest CPUs are handled in the same way as sharing mode in ACRN. Refer to :ref:`virtual-interrupt-hld` for more details. -Guests w/ LAPIC passthrough ---------------------------- +Guests With LAPIC Passthrough +----------------------------- ACRN supports passthrough if and only if the guest is using x2APIC mode for the vLAPIC. In LAPIC passthrough mode, writes to the Interrupt Command @@ -204,10 +204,10 @@ corresponding to the destination processor info in the ICR. :align: center -Passthrough device support +Passthrough Device Support ========================== -Configuration space access +Configuration Space Access -------------------------- ACRN emulates Configuration Space Address (0xcf8) I/O port and @@ -258,7 +258,7 @@ Interrupt Configuration ACRN supports both legacy (INTx) and MSI interrupts for passthrough devices. -INTx support +INTx Support ~~~~~~~~~~~~ ACRN expects developers to identify the interrupt line info (0x3CH) from @@ -271,7 +271,7 @@ IOAPIC. When guest masks the RTE in vIOAPIC, ACRN masks the interrupt RTE in the physical IOAPIC. 
Level triggered interrupts are not supported. -MSI support +MSI Support ~~~~~~~~~~~ Guest reads/writes to PCI configuration space for configuring MSI @@ -279,7 +279,7 @@ interrupts using an address. Data and control registers are passthrough to the physical BAR of the passthrough device. Refer to `Configuration space access`_ for details on how the PCI configuration space is emulated. -Virtual device support +Virtual Device Support ====================== ACRN provides read-only vRTC support for partition mode guests. Writes @@ -288,11 +288,11 @@ to the data port are discarded. For port I/O to ports other than vPIC, vRTC, or vUART, reads return 0xFF and writes are discarded. -Interrupt delivery +Interrupt Delivery ================== -Guests w/o LAPIC passthrough ----------------------------- +Guests Without LAPIC Passthrough +-------------------------------- In partition mode of ACRN, interrupts stay disabled after a vmexit. The processor does not take interrupts when it is executing in VMX root @@ -307,26 +307,26 @@ for device interrupts. :align: center -Guests w/ LAPIC passthrough ---------------------------- +Guests With LAPIC Passthrough +----------------------------- For guests with LAPIC passthrough, ACRN does not configure vmexit upon external interrupts. There is no vmexit upon device interrupts and they are handled by the guest IDT. -Hypervisor IPI service +Hypervisor IPI Service ====================== ACRN needs IPIs for events such as flushing TLBs across CPUs, sending virtual device interrupts (e.g. vUART to vCPUs), and others. -Guests w/o LAPIC passthrough ----------------------------- +Guests Without LAPIC Passthrough +-------------------------------- Hypervisor IPIs work the same way as in sharing mode. -Guests w/ LAPIC passthrough ---------------------------- +Guests With LAPIC Passthrough +----------------------------- Since external interrupts are passthrough to the guest IDT, IPIs do not trigger vmexit. 
ACRN uses NMI delivery mode and the NMI exiting is @@ -344,8 +344,8 @@ For a guest console in partition mode, ACRN provides an option to pass ``vmid`` as an argument to ``vm_console``. vmid is the same as the one developers use in the guest configuration. -Guests w/o LAPIC passthrough ----------------------------- +Guests Without LAPIC Passthrough +-------------------------------- Works the same way as sharing mode. diff --git a/doc/developer-guides/hld/hv-pm.rst b/doc/developer-guides/hld/hv-pm.rst index a7e4e18ac..6dfdfefcb 100644 --- a/doc/developer-guides/hld/hv-pm.rst +++ b/doc/developer-guides/hld/hv-pm.rst @@ -3,7 +3,7 @@ Power Management ################ -System PM module +System PM Module **************** The PM module in the hypervisor does three things: diff --git a/doc/developer-guides/hld/hv-rdt.rst b/doc/developer-guides/hld/hv-rdt.rst index fc1b2a630..6fd42fbc8 100644 --- a/doc/developer-guides/hld/hv-rdt.rst +++ b/doc/developer-guides/hld/hv-rdt.rst @@ -38,7 +38,7 @@ IA32_PQR_ASSOC MSR to CLOS 0. (Note that CLOS, or Class of Service, is a resource allocator.) The user can check the cache capabilities such as cache mask and max supported CLOS as described in :ref:`rdt_detection_capabilities` and then program the IA32_type_MASK_n and IA32_PQR_ASSOC MSR with a -CLOS ID, to select a cache mask to take effect. These configurations can be +CLOS ID, to select a cache mask to take effect. These configurations can be done in scenario XML file under ``FEATURES`` section as shown in the below example. ACRN uses VMCS MSR loads on every VM Entry/VM Exit for non-root and root modes to enforce the settings. @@ -137,17 +137,17 @@ needs to be set in the scenario XML file under ``VM`` section. misconfiguration errors. -CAT and MBA high-level design in ACRN +CAT and MBA High-Level Design in ACRN ************************************* -Data structures +Data Structures =============== The below figure shows the RDT data structure to store enumerated resources. .. 
figure:: images/mba_data_structures.png :align: center -Enabling CAT, MBA software flow +Enabling CAT, MBA Software Flow =============================== The hypervisor enumerates RDT capabilities and sets up mask arrays; it also diff --git a/doc/developer-guides/hld/hv-timer.rst b/doc/developer-guides/hld/hv-timer.rst index 475dee734..36c4d804c 100644 --- a/doc/developer-guides/hld/hv-timer.rst +++ b/doc/developer-guides/hld/hv-timer.rst @@ -11,7 +11,7 @@ limited timer management services: - A timer can only be added on the logical CPU for a process or thread. Timer scheduling or timer migrating is not supported. -How it works +How It Works ************ When the system boots, we check that the hardware supports lapic diff --git a/doc/developer-guides/hld/hv-virt-interrupt.rst b/doc/developer-guides/hld/hv-virt-interrupt.rst index d148ff1c7..a9d591656 100644 --- a/doc/developer-guides/hld/hv-virt-interrupt.rst +++ b/doc/developer-guides/hld/hv-virt-interrupt.rst @@ -113,7 +113,7 @@ These APIs will finish by making a vCPU request. .. doxygenfunction:: vlapic_receive_intr :project: Project ACRN -EOI processing +EOI Processing ============== EOI virtualization is enabled if APICv virtual interrupt delivery is @@ -129,7 +129,7 @@ indicate that is a level triggered interrupt. .. _lapic_passthru: -LAPIC passthrough based on vLAPIC +LAPIC Passthrough Based on vLAPIC ================================= LAPIC passthrough is supported based on vLAPIC, the guest OS first boots with @@ -280,7 +280,7 @@ window is not present, HV would enable VM Enter directly. The injection will be done on next VM Exit once Guest issues ``STI (GuestRFLAG.IF=1)``. 
-Data structures and interfaces +Data Structures and Interfaces ****************************** There is no data structure exported to the other components in the diff --git a/doc/developer-guides/hld/hv-vm-management.rst b/doc/developer-guides/hld/hv-vm-management.rst index 5dc6a4a0b..93ed4827f 100644 --- a/doc/developer-guides/hld/hv-vm-management.rst +++ b/doc/developer-guides/hld/hv-vm-management.rst @@ -8,7 +8,7 @@ running VM, and a series VM APIs like create_vm, start_vm, reset_vm, shutdown_vm etc are used to switch a VM to the right state, according to the requirements of applications or system power operations. -VM structure +VM Structure ************ The ``acrn_vm`` structure is defined to manage a VM instance, this structure @@ -22,7 +22,7 @@ platform level cpuid entries. The ``acrn_vm`` structure instance will be created by ``create_vm`` API, and then work as the first parameter for other VM APIs. -VM state +VM State ******** Generally, a VM is not running at the beginning: it is in a 'powered off' @@ -44,7 +44,7 @@ please refer to :ref:`hv-cpu-virt` for related VCPU state. VM State Management ******************* -Pre-launched and Service VM +Pre-Launched and Service VM =========================== The hypervisor is the owner to control pre-launched and Service VM's state @@ -52,7 +52,7 @@ by calling VM APIs directly, following the design of system power management. Please refer to ACRN power management design for more details. -Post-launched User VMs +Post-Launched User VMs ====================== DM takes control of post-launched User VMs' state transition after the Service VM diff --git a/doc/developer-guides/hld/hv-vt-d.rst b/doc/developer-guides/hld/hv-vt-d.rst index 0d7f2591c..1798c6754 100644 --- a/doc/developer-guides/hld/hv-vt-d.rst +++ b/doc/developer-guides/hld/hv-vt-d.rst @@ -28,7 +28,7 @@ First-level/nested translation. 
DMAR Engines Discovery ********************** -DMA Remapping Report ACPI table +DMA Remapping Report ACPI Table =============================== For generic platforms, the ACRN hypervisor retrieves DMAR information from @@ -43,13 +43,13 @@ the devices under the scope of a remapping hardware unit, as shown in DMA Remapping Reporting Structure -Pre-parsed DMAR information +Pre-Parsed DMAR Information =========================== For specific platforms, the ACRN hypervisor uses pre-parsed DMA remapping reporting information directly to save hypervisor bootup time. -DMA remapping unit for integrated graphics device +DMA Remapping Unit for Integrated Graphics Device ================================================= Generally, there is a dedicated remapping hardware unit for the Intel @@ -167,7 +167,7 @@ Other domains EPT table of the VM only allows devices to access the memory allocated for the Normal world of the VM. -Page-walk coherency +Page-Walk Coherency =================== For the VT-d hardware, which doesn't support page-walk coherency, the @@ -182,14 +182,14 @@ memory: ACRN flushes the related cache line after these structures are updated if the VT-d hardware doesn't support page-walk coherency. -Super-page support +Super-Page Support ================== The ACRN VT-d reuses the EPT table as the address translation table. VT-d capability or super-page support should be identical with the usage of the EPT table. -Snoop control +Snoop Control ============= If VT-d hardware supports snoop control, iVT-d can control the @@ -272,7 +272,7 @@ translation for DMAR unit(s) if they are not marked as ignored. .. _device-assignment: -Device assignment +Device Assignment ***************** All devices are initially added to the SOS_VM domain. 
To assign a device @@ -286,7 +286,7 @@ device is removed from the VM domain related to the User OS and then added back to the SOS_VM domain; this changes the address translation table from the EPT of the User OS to the EPT of the SOS_VM for the device. -Power Management support for S3 +Power Management Support for S3 ******************************* During platform S3 suspend and resume, the VT-d register values are @@ -309,10 +309,10 @@ registered for the IRQ. DMAR unit supports report fault event via MSI. When a fault event occurs, a MSI is generated, so that the DMAR fault handler will be called to report the error event. -Data structures and interfaces +Data Structures and Interfaces ****************************** -initialization and deinitialization +Initialization and Deinitialization =================================== The following APIs are provided during initialization and @@ -321,7 +321,7 @@ deinitialization: .. doxygenfunction:: init_iommu :project: Project ACRN -runtime +Runtime ======= The following API are provided during runtime: diff --git a/doc/developer-guides/hld/images/config-image103.png b/doc/developer-guides/hld/images/config-image103.png deleted file mode 100644 index 792015f23..000000000 Binary files a/doc/developer-guides/hld/images/config-image103.png and /dev/null differ diff --git a/doc/developer-guides/hld/ivshmem-hld.rst b/doc/developer-guides/hld/ivshmem-hld.rst index 768249754..1b051b4ef 100644 --- a/doc/developer-guides/hld/ivshmem-hld.rst +++ b/doc/developer-guides/hld/ivshmem-hld.rst @@ -129,11 +129,11 @@ Usage For usage information, see :ref:`enable_ivshmem` -Inter-VM Communication Security hardening (BKMs) +Inter-VM Communication Security Hardening (BKMs) ************************************************ As previously highlighted, ACRN 2.0 provides the capability to create shared -memory regions between Post-Launch user VMs known as "Inter-VM Communication". 
+memory regions between Post-Launched User VMs known as "Inter-VM Communication". This mechanism is based on ivshmem v1.0 exposing virtual PCI devices for the shared regions (in Service VM's memory for this release). This feature adopts a community-approved design for shared memory between VMs, following same diff --git a/doc/developer-guides/hld/system-timer-hld.rst b/doc/developer-guides/hld/system-timer-hld.rst index 2bdb9fdfa..6cac2d86e 100644 --- a/doc/developer-guides/hld/system-timer-hld.rst +++ b/doc/developer-guides/hld/system-timer-hld.rst @@ -1,6 +1,6 @@ .. _system-timer-hld: -System timer virtualization +System Timer Virtualization ########################### ACRN supports RTC (Real-time clock), HPET (High Precision Event Timer), @@ -20,7 +20,7 @@ System timer virtualization architecture timerfd\_create interfaces to set up native timers for the trigger timeout mechanism. -System Timer initialization +System Timer Initialization =========================== The device model initializes vRTC, vHEPT, and vPIT devices automatically when @@ -48,7 +48,7 @@ below code snippets.:: ... 
} -PIT emulation +PIT Emulation ============= The ACRN emulated Intel 8253 Programmable Interval Timer includes a chip @@ -83,7 +83,7 @@ I/O ports definition:: #define TIMER_CNTR2 (IO_TIMER1_PORT + TIMER_REG_CNTR2) #define TIMER_MODE (IO_TIMER1_PORT + TIMER_REG_MODE) -RTC emulation +RTC Emulation ============= ACRN supports RTC (real-time clock) that can only be accessed through @@ -114,7 +114,7 @@ The RTC ACPI description as below:: dsdt_line("}"); } -HPET emulation +HPET Emulation ============== ACRN supports HPET (High Precision Event Timer) which is a higher resolution diff --git a/doc/developer-guides/hld/usb-virt-hld.rst b/doc/developer-guides/hld/usb-virt-hld.rst index 50f013997..e6984f4ac 100644 --- a/doc/developer-guides/hld/usb-virt-hld.rst +++ b/doc/developer-guides/hld/usb-virt-hld.rst @@ -43,7 +43,7 @@ An xHCI register access from a User VM will induce EPT trap from the User VM to DM, and the xHCI DM or DRD DM will emulate hardware behaviors to make the subsystem run. -USB devices supported by USB mediator +USB Devices Supported by USB Mediator ************************************* The following USB devices are supported for the WaaG and LaaG operating systems. @@ -70,7 +70,7 @@ The following USB devices are supported for the WaaG and LaaG operating systems. The above information is current as of ACRN 1.4. -USB host virtualization +USB Host Virtualization *********************** USB host virtualization is implemented as shown in @@ -116,7 +116,7 @@ This configuration means the virtual xHCI will appear in PCI slot 7 in the User VM, and any physical USB device attached on 1-2 or 2-2 will be detected by a User VM and used as expected. 
-USB DRD virtualization +USB DRD Virtualization ********************** USB DRD (Dual Role Device) emulation works as shown in this figure: diff --git a/doc/developer-guides/hld/virtio-blk.rst b/doc/developer-guides/hld/virtio-blk.rst index 7d4cebd60..9bf18fed4 100644 --- a/doc/developer-guides/hld/virtio-blk.rst +++ b/doc/developer-guides/hld/virtio-blk.rst @@ -1,6 +1,6 @@ .. _virtio-blk: -Virtio-blk +Virtio-BLK ########## The virtio-blk device is a simple virtual block device. The FE driver @@ -35,7 +35,7 @@ The feature bits supported by the BE device are shown as follows: Device can toggle its cache between writeback and writethrough modes. -Virtio-blk-BE design +Virtio-BLK BE Design ******************** .. figure:: images/virtio-blk-image02.png diff --git a/doc/developer-guides/hld/virtio-console.rst b/doc/developer-guides/hld/virtio-console.rst index 7d9e94200..2d374a2e1 100644 --- a/doc/developer-guides/hld/virtio-console.rst +++ b/doc/developer-guides/hld/virtio-console.rst @@ -1,6 +1,6 @@ .. _virtio-console: -Virtio-console +Virtio-Console ############## The Virtio-console is a simple device for data input and output. The @@ -142,7 +142,7 @@ PTY .. code-block:: console # minicom -D /dev/pts/0 - + or: .. code-block:: console @@ -162,7 +162,7 @@ TTY /dev/pts/0 # sleep 2d - + - If you do not have network access to your device, use screen to create a new TTY: diff --git a/doc/developer-guides/hld/virtio-gpio.rst b/doc/developer-guides/hld/virtio-gpio.rst index 94ca29f25..a156fea26 100644 --- a/doc/developer-guides/hld/virtio-gpio.rst +++ b/doc/developer-guides/hld/virtio-gpio.rst @@ -1,6 +1,6 @@ .. _virtio-gpio: -Virtio-gpio +Virtio-GPIO ########### virtio-gpio provides a virtual GPIO controller, which will map part of @@ -33,7 +33,7 @@ irq_set_type of irqchip) will trigger a virtqueue_kick on its own virtqueue. If some gpio has been set to interrupt mode, the interrupt events will be handled within the IRQ virtqueue callback. 
-GPIO mapping +GPIO Mapping ************ .. figure:: images/virtio-gpio-2.png diff --git a/doc/developer-guides/hld/virtio-i2c.rst b/doc/developer-guides/hld/virtio-i2c.rst index d9b4616b0..4cba6a9bc 100644 --- a/doc/developer-guides/hld/virtio-i2c.rst +++ b/doc/developer-guides/hld/virtio-i2c.rst @@ -1,6 +1,6 @@ .. _virtio-i2c: -Virtio-i2c +Virtio-I2C ########## Virtio-i2c provides a virtual I2C adapter that supports mapping multiple diff --git a/doc/developer-guides/hld/virtio-input.rst b/doc/developer-guides/hld/virtio-input.rst index deaf1ac9c..d06303125 100644 --- a/doc/developer-guides/hld/virtio-input.rst +++ b/doc/developer-guides/hld/virtio-input.rst @@ -1,6 +1,6 @@ .. _virtio-input: -Virtio-input +Virtio-Input ############ The virtio input device can be used to create virtual human interface diff --git a/doc/developer-guides/hld/virtio-net.rst b/doc/developer-guides/hld/virtio-net.rst index ecd135bb3..1f8234814 100644 --- a/doc/developer-guides/hld/virtio-net.rst +++ b/doc/developer-guides/hld/virtio-net.rst @@ -1,6 +1,6 @@ .. _virtio-net: -Virtio-net +Virtio-Net ########## Virtio-net is the para-virtualization solution used in ACRN for @@ -110,7 +110,7 @@ Initialization in Device Model - Setup data plan callbacks, including TX, RX - Setup TAP backend -Initialization in virtio-net Frontend Driver +Initialization in Virtio-Net Frontend Driver ============================================ **virtio_pci_probe** diff --git a/doc/developer-guides/hld/virtio-rnd.rst b/doc/developer-guides/hld/virtio-rnd.rst index fdc15ad79..a5484a5bb 100644 --- a/doc/developer-guides/hld/virtio-rnd.rst +++ b/doc/developer-guides/hld/virtio-rnd.rst @@ -1,6 +1,6 @@ .. _virtio-rnd: -Virtio-rnd +Virtio-RND ########## Virtio-rnd provides a virtual hardware random source for the User VM. 
It simulates a PCI device diff --git a/doc/developer-guides/hld/vuart-virt-hld.rst b/doc/developer-guides/hld/vuart-virt-hld.rst index 1f4a72899..15ff69ac0 100644 --- a/doc/developer-guides/hld/vuart-virt-hld.rst +++ b/doc/developer-guides/hld/vuart-virt-hld.rst @@ -95,9 +95,11 @@ Usage - For console vUART - To enable the console port for a VM, change the - port_base and IRQ in ``misc/vm_configs/scenarios//vm_configurations.c``. If the IRQ number has been used in your + To enable the console port for a VM, change the ``port_base`` and ``irq`` + fields in + ``configs/scenarios//vm_configurations.c`` under the + hypervisor build directory using the combinations listed below. If the IRQ + number has been used in your system ( ``cat /proc/interrupt``), you can choose other IRQ number. Set the ``.irq =0``, the vUART will work in polling mode. diff --git a/doc/developer-guides/hld/watchdog-hld.rst b/doc/developer-guides/hld/watchdog-hld.rst index c06ae3a7b..1502dd97d 100644 --- a/doc/developer-guides/hld/watchdog-hld.rst +++ b/doc/developer-guides/hld/watchdog-hld.rst @@ -34,7 +34,7 @@ It receives read/write commands from the watchdog driver, does the actions, and returns. In ACRN, the commands are from User VM watchdog driver. -User VM watchdog workflow +User VM Watchdog Workflow ************************* When the User VM does a read or write operation on the watchdog device's @@ -58,7 +58,7 @@ from a User VM to the Service VM and return back: Watchdog operation workflow -Implementation in ACRN and how to use it +Implementation in ACRN and How to Use It **************************************** In ACRN, the Intel 6300ESB watchdog device emulation is added into the diff --git a/doc/developer-guides/l1tf.rst b/doc/developer-guides/l1tf.rst index d52803c1b..9c5214d44 100644 --- a/doc/developer-guides/l1tf.rst +++ b/doc/developer-guides/l1tf.rst @@ -67,7 +67,7 @@ to protect itself from malicious user space attack. 
Intel SGX/SMM related attacks are mitigated by using latest microcode. There is no additional action in ACRN hypervisor. -Guest -> hypervisor Attack +Guest -> Hypervisor Attack ========================== ACRN always enables EPT for all guests (Service VM and User VM), thus a malicious @@ -84,7 +84,7 @@ a malicious guest running on one logical processor can attack the data which is brought into L1D by the context which runs on the sibling thread of the same physical core. This context can be any code in hypervisor. -Guest -> guest Attack +Guest -> Guest Attack ===================== The possibility of guest -> guest attack varies on specific configuration, @@ -144,7 +144,7 @@ not all of them apply to a specific ACRN deployment. Check the 'Mitigation Status'_ and 'Mitigation Recommendations'_ sections for guidance. -L1D flush on VMENTRY +L1D Flush on VMENTRY ==================== ACRN may optionally flush L1D at VMENTRY, which ensures no @@ -175,7 +175,7 @@ is always enabled on all platforms. ACRN hypervisor doesn't set reserved bits in any EPT entry. -Put Secret Data into Uncached Memory +Put Secret Data Into Uncached Memory ==================================== It is hard to decide which data in ACRN hypervisor is secret or valuable @@ -204,7 +204,7 @@ useful to be attacked. However if such 100% identification is not possible, user should consider other mitigation options to protect hypervisor. -L1D flush on World Switch +L1D Flush on World Switch ========================= For L1D-affected platforms, ACRN writes to aforementioned MSR @@ -218,7 +218,7 @@ normal world is less privileged entity to secure world. This mitigation is always enabled. 
-Core-based scheduling +Core-Based Scheduling ===================== If Hyper-threading is enabled, it's important to avoid running diff --git a/doc/developer-guides/trusty.rst b/doc/developer-guides/trusty.rst index acf66996e..96c986578 100644 --- a/doc/developer-guides/trusty.rst +++ b/doc/developer-guides/trusty.rst @@ -35,7 +35,7 @@ Trusty Architecture .. _trusty-hypercalls: -Trusty specific Hypercalls +Trusty Specific Hypercalls ************************** There are a few :ref:`hypercall_apis` that are related to Trusty. @@ -44,7 +44,7 @@ There are a few :ref:`hypercall_apis` that are related to Trusty. :project: Project ACRN :content-only: -Trusty Boot flow +Trusty Boot Flow **************** By design, the User OS bootloader (``UOS_Loader``) will trigger the Trusty boot process. The complete boot flow is illustrated below. diff --git a/doc/faq.rst b/doc/faq.rst index 2c0bb21e7..f94e0d730 100644 --- a/doc/faq.rst +++ b/doc/faq.rst @@ -9,93 +9,31 @@ Here are some frequently asked questions about the ACRN project. :local: :backlinks: entry ------- -What hardware does ACRN support? +What Hardware Does ACRN Support? ******************************** -ACRN runs on Intel boards, as documented in +ACRN runs on Intel-based boards, as documented in our :ref:`hardware` documentation. .. _config_32GB_memory: -How do I configure ACRN's memory size? +How Do I Configure ACRN's Memory Size? ************************************** -It's important that the ACRN Kconfig settings are aligned with the physical memory -on your platform. Check the documentation for these option settings for -details: +It's important that the ACRN configuration settings are aligned with the +physical memory on your platform. 
Check the documentation for these +option settings for details: -* :option:`CONFIG_PLATFORM_RAM_SIZE` -* :option:`CONFIG_HV_RAM_SIZE` +* :option:`hv.MEMORY.PLATFORM_RAM_SIZE` +* :option:`hv.MEMORY.SOS_RAM_SIZE` +* :option:`hv.MEMORY.UOS_RAM_SIZE` +* :option:`hv.MEMORY.HV_RAM_SIZE` -For example, if the Intel NUC's physical memory size is 32G, you may follow these steps -to make the new UEFI ACRN hypervisor, and then deploy it onto the Intel NUC to boot -the ACRN Service VM with the 32G memory size. +Check the :ref:`acrn_configuration_tool` for more information on how +to adjust these settings. -#. Use ``make menuconfig`` to change the ``RAM_SIZE``:: - - $ cd acrn-hypervisor - $ make menuconfig -C hypervisor BOARD=nuc7i7dnb - -#. Navigate to these items and then change the value as given below:: - - (0x0f000000) Size of the RAM region used by the hypervisor - (0x800000000) Size of the physical platform RAM - -#. Press :kbd:`S` and then :kbd:`Enter` to save the ``.config`` to the default directory: - ``acrn-hypervisor/hypervisor/build/.config`` - -#. Press :kbd:`ESC` to leave the menu. - -#. Then continue building the ACRN Service VM as usual. - -How to modify the default display output for a User VM? -******************************************************* - -Apollo Lake HW has three pipes and each pipe can have three or four planes which -help to display the overlay video. The hardware can support up to 3 monitors -simultaneously. Some parameters are available to control how display monitors -are assigned between the Service VM and User VM(s), simplifying the assignment policy and -providing configuration flexibility for the pipes and planes for various IoT -scenarios. This is known as the **plane restriction** feature. 
- -* ``i915.avail_planes_per_pipe``: for controlling how planes are assigned to the - pipes -* ``i915.domain_plane_owners``: for controlling which domain (VM) will have - access to which plane - -Refer to :ref:`GVT-g-kernel-options` for detailed parameter descriptions. - -In the default configuration, pipe A is assigned to the Service VM and pipes B and C -are assigned to the User VM, as described by these parameters: - -* Service VM:: - - i915.avail_planes_per_pipe=0x01010F - i915.domain_plane_owners=0x011111110000 - -* User VM:: - - i915.avail_planes_per_pipe=0x0070F00 - -To assign pipes A and B to the User VM, while pipe C is assigned to the Service VM, use -these parameters: - -* Service VM:: - - i915.avail_planes_per_pipe=0x070101 - i915.domain_plane_owners=0x000011111111 - -* User VM:: - - i915.avail_planes_per_pipe=0x000F0F - -.. note:: The Service VM always has at least one plane per pipe. This is - intentional, and the driver will enforce this if the parameters do not - do this. - -Why does ACRN need to know how much RAM the system has? +Why Does ACRN Need to Know How Much RAM the System Has? ******************************************************* Configuring ACRN at compile time with the system RAM size is a tradeoff between diff --git a/doc/getting-started/building-from-source.rst b/doc/getting-started/building-from-source.rst index d9c967873..378929c46 100644 --- a/doc/getting-started/building-from-source.rst +++ b/doc/getting-started/building-from-source.rst @@ -1,6 +1,6 @@ .. _getting-started-building: -Build ACRN from Source +Build ACRN From Source ###################### Following a general embedded-system programming model, the ACRN @@ -8,13 +8,13 @@ hypervisor is designed to be customized at build time per hardware platform and per usage scenario, rather than one binary for all scenarios. -The hypervisor binary is generated based on Kconfig configuration -settings. 
Instructions about these settings can be found in +The hypervisor binary is generated based on configuration settings in XML +files. Instructions about customizing these settings can be found in :ref:`getting-started-hypervisor-configuration`. -One binary for all platforms and all usage scenarios is currently not -supported, primarily because dynamic configuration parsing is restricted in -the ACRN hypervisor for the following reasons: +One binary for all platforms and all usage scenarios is not +supported. Dynamic configuration parsing is not used in +the ACRN hypervisor for these reasons: - **Maintain functional safety requirements.** Implementing dynamic parsing introduces dynamic objects, which violate functional safety requirements. @@ -45,22 +45,15 @@ these steps. .. rst-class:: numbered-step -Install build tools and dependencies +Install Build Tools and Dependencies ************************************ -ACRN development is supported on popular Linux distributions, each with -their own way to install development tools. This user guide covers the -different steps to configure and build ACRN natively on your -distribution. +ACRN development is supported on popular Linux distributions, each with their +own way to install development tools. This user guide covers the steps to +configure and build ACRN natively on **Ubuntu 18.04 or newer**. -.. note:: - ACRN uses ``menuconfig``, a python3 text-based user interface (TUI) - for configuring hypervisor options and using Python's ``kconfiglib`` - library. - -Install the necessary tools for the following systems: - -* Ubuntu development system: +The following commands install the necessary tools for configuring and building +ACRN. .. 
code-block:: none @@ -73,6 +66,7 @@ Install the necessary tools for the following systems: libsystemd-dev \ libevent-dev \ libxml2-dev \ + libxml2-utils \ libusb-1.0-0-dev \ python3 \ python3-pip \ @@ -82,34 +76,30 @@ Install the necessary tools for the following systems: libnuma-dev \ liblz4-tool \ flex \ - bison + bison \ + xsltproc - $ sudo pip3 install kconfiglib - $ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz - $ tar zxvf acpica-unix-20191018.tar.gz - $ cd acpica-unix-20191018 + $ sudo pip3 install lxml xmlschema + $ wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz + $ tar zxvf acpica-unix-20210105.tar.gz + $ cd acpica-unix-20210105 $ make clean && make iasl $ sudo cp ./generate/unix/bin/iasl /usr/sbin/ - .. note:: - ACRN requires ``gcc`` version 7.3.* (or higher) and ``binutils`` version - 2.27 (or higher). Check your development environment to ensure you have - appropriate versions of these packages by using the commands: ``gcc -v`` - and ``ld -v``. - .. rst-class:: numbered-step -Get the ACRN hypervisor source code +Get the ACRN Hypervisor Source Code *********************************** -The `acrn-hypervisor `_ +The `ACRN hypervisor `_ repository contains four main components: -1. The ACRN hypervisor code, located in the ``hypervisor`` directory. -#. The ACRN device model code, located in the ``devicemodel`` directory. -#. The ACRN tools source code, located in the ``misc/tools`` directory. +1. The ACRN hypervisor code is in the ``hypervisor`` directory. +#. The ACRN device model code is in the ``devicemodel`` directory. +#. The ACRN debug tools source code is in the ``misc/debug_tools`` directory. +#. The ACRN online services source code is in the ``misc/services`` directory. -Enter the following to get the acrn-hypervisor source code: +Enter the following to get the ACRN hypervisor source code: .. code-block:: none @@ -120,7 +110,7 @@ Enter the following to get the acrn-hypervisor source code: .. 
rst-class:: numbered-step -Build with the ACRN scenario +Build With the ACRN Scenario **************************** Currently, the ACRN hypervisor defines these typical usage scenarios: @@ -134,10 +124,10 @@ LOGICAL_PARTITION: This scenario defines two pre-launched VMs. INDUSTRY: - This is a typical scenario for industrial usage with up to eight VMs: + This scenario is an example of industrial usage with up to eight VMs: one pre-launched Service VM, five post-launched Standard VMs (for Human interaction etc.), one post-launched RT VMs (for real-time control), - and one Kata container VM. + and one Kata Container VM. HYBRID: This scenario defines a hybrid use case with three VMs: one @@ -149,126 +139,126 @@ HYBRID_RT: pre-launched RTVM, one pre-launched Service VM, and one post-launched Standard VM. -Assuming that you are at the top level of the acrn-hypervisor directory, perform the following: +XML configuration files for these scenarios on supported boards are available +under the ``misc/config_tools/data`` directory. + +Assuming that you are at the top level of the ``acrn-hypervisor`` directory, perform +the following to build the hypervisor, device model, and tools: .. note:: - The release version is built by default, ``RELEASE=0`` builds the debug version. + The debug version is built by default. To build a release version, + build with ``RELEASE=y`` explicitly, regardless of whether a previous + build exists. -* Build the ``INDUSTRY`` scenario on the ``nuc7i7dnb``: +* Build the debug version of the ``INDUSTRY`` scenario on the ``nuc7i7dnb``: .. code-block:: none - $ make all BOARD=nuc7i7dnb SCENARIO=industry RELEASE=0 + $ make BOARD=nuc7i7dnb SCENARIO=industry -* Build the ``HYBRID`` scenario on the ``whl-ipc-i5``: +* Build the release version of the ``HYBRID`` scenario on the ``whl-ipc-i5``: ..
code-block:: none - $ make all BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=0 + $ make BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=y -* Build the ``HYBRID_RT`` scenario on the ``whl-ipc-i7``: +* Build the release version of the ``HYBRID_RT`` scenario on the ``whl-ipc-i7`` + (hypervisor only): .. code-block:: none - $ make all BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=0 + $ make BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=y hypervisor -* Build the ``SDC`` scenario on the ``nuc6cayh``: +* Build the release version of the device model and tools: .. code-block:: none - $ make all BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc6cayh.xml \ - SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc6cayh/sdc.xml + $ make RELEASE=y devicemodel tools +You can also build ACRN with your own customized scenario: -See the :ref:`hardware` document for information about platform needs -for each scenario. +* Build with your own scenario configuration on the ``nuc6cayh``, assuming the + scenario is defined in ``/path/to/scenario.xml``: + + .. code-block:: none + + $ make BOARD=nuc6cayh SCENARIO=/path/to/scenario.xml + +* Build with your own board and scenario configuration, assuming the board and + scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``: + + .. code-block:: none + + $ make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml + +.. note:: + ACRN uses XML files to summarize board characteristics and scenario + settings. The ``BOARD`` and ``SCENARIO`` variables accept board/scenario + names as well as paths to XML files. When board/scenario names are given, the + build system searches for XML files with the same names under + ``misc/config_tools/data/``. When paths (absolute or relative) to the XML + files are given, the build system uses the files they point to. If relative + paths are used, they are considered relative to the current working + directory. + +See the :ref:`hardware` document for information about platform needs for each +scenario.
For more instructions to customize scenarios, see +:ref:`getting-started-hypervisor-configuration` and +:ref:`acrn_configuration_tool`. + +The build results are found in the ``build`` directory. You can specify +a different build directory by setting the ``O`` ``make`` parameter, +for example: ``make O=build-nuc``. + +To query the board, scenario, and build type of an existing build, use the +``hvshowconfig`` target: + + .. code-block:: none + + $ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor + ... + $ make hvshowconfig + Build directory: /path/to/acrn-hypervisor/build/hypervisor + This build directory is configured with the settings below. + - BOARD = tgl-rvp + - SCENARIO = hybrid_rt + - RELEASE = n .. _getting-started-hypervisor-configuration: .. rst-class:: numbered-step -Build the hypervisor configuration -********************************** +Modify the Hypervisor Configuration +*********************************** -Modify the hypervisor configuration -=================================== +The ACRN hypervisor is built with a scenario encoded in an XML file (referred to +as the scenario XML hereinafter). The scenario XML of a build can be found at ``/hypervisor/.scenario.xml``, where ```` is the name of the build +directory. You can make further changes to this file to adjust to your specific +requirements. Another ``make`` will rebuild the hypervisor using the updated +scenario XML.
The configuration file, named -``.config``, can be found under the target folder of your build. +The following commands show how to manually customize the scenario XML based on the predefined ``INDUSTRY`` scenario for ``nuc7i7dnb`` and rebuild the hypervisor. The ``hvdefconfig`` target generates the configuration files without building the hypervisor, allowing users to tweak the configurations. .. code-block:: none - $ cd hypervisor - $ make defconfig BOARD=nuc7i7dnb SCENARIO=industry - -The BOARD specified is used to select a ``defconfig`` under -``misc/vm_configs/scenarios/``. The other command line-based options (e.g. -``RELEASE``) take no effect when generating a defconfig. - -To modify the hypervisor configurations, you can either edit ``.config`` -manually, or you can invoke a TUI-based menuconfig (powered by kconfiglib) by -executing ``make menuconfig``. As an example, the following commands -(assuming that you are at the top level of the acrn-hypervisor directory) -generate a default configuration file, allowing you to modify some -configurations and build the hypervisor using the updated ``.config``: - -.. code-block:: none - - # Modify the configurations per your needs - $ cd ../ # Enter top-level folder of acrn-hypervisor source - $ make menuconfig -C hypervisor - # modify your own "ACRN Scenario" and "Target board" that want to build - # in pop up menu - -Note that ``menuconfig`` is python3 only. - -Refer to the help on menuconfig for a detailed guide on the interface: - -.. code-block:: none - - $ pydoc3 menuconfig - -.. rst-class:: numbered-step - -Build the hypervisor, device model, and tools -********************************************* - -Now you can build all these components at once as follows: - -.. code-block:: none - - $ make # Build hypervisor with the new .config - -The build results are found in the ``build`` directory.
You can specify -a different Output folder by setting the ``O`` ``make`` parameter, -for example: ``make O=build-nuc``. - - -.. code-block:: none - - $ make all BOARD_FILE=$PWD/misc/vm_configs/xmls/board-xmls/nuc7i7dnb.xml \ - SCENARIO_FILE=$PWD/misc/vm_configs/xmls/config-xmls/nuc7i7dnb/industry.xml TARGET_DIR=xxx - -The build results are found in the ``build`` directory. You can specify -a different build folder by setting the ``O`` ``make`` parameter, -for example: ``make O=build-nuc``. - + $ make BOARD=nuc7i7dnb SCENARIO=industry hvdefconfig + $ vim build/hypervisor/.scenario.xml + (Modify the XML file per your needs) + $ make .. note:: - The ``BOARD`` and ``SCENARIO`` parameters are not needed because the - information is retrieved from the corresponding ``BOARD_FILE`` and - ``SCENARIO_FILE`` XML configuration files. The ``TARGET_DIR`` parameter - specifies what directory is used to store configuration files imported - from XML files. If the ``TARGET_DIR`` is not specified, the original - configuration files of acrn-hypervisor would be overridden. + A hypervisor build remembers the board and scenario previously + configured. Thus, there is no need to duplicate BOARD and SCENARIO in the + second ``make`` above. -Follow the same instructions to boot and test the images you created from your build. +While the scenario XML files can be changed manually, we recommend you use the +ACRN web-based configuration app that provides valid options and descriptions +of the configuration entries. Refer to :ref:`acrn_config_tool_ui` for more +instructions. + +Descriptions of each configuration entry in scenario XML files are also +available at :ref:`scenario-config-options`. 
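The ``BOARD``/``SCENARIO`` name-vs-path behavior described in the build documentation above can be sketched as a small shell function. This is illustrative only, not the actual ACRN Makefile logic: the ``.xml`` suffix test and the exact file layout under ``misc/config_tools/data/`` are assumptions made for the example.

```shell
# Illustrative sketch only -- not the actual ACRN Makefile logic.
# The build system accepts either a board/scenario name or a path to an
# XML file. This sketch distinguishes the two by the .xml suffix and
# assumes name lookups resolve directly under misc/config_tools/data/
# (the exact directory layout is an assumption).
resolve_config() {
    case "$1" in
        *.xml) printf '%s\n' "$1" ;;                            # a path: use as-is
        *)     printf 'misc/config_tools/data/%s.xml\n' "$1" ;; # a name: search the data dir
    esac
}

resolve_config industry            # -> misc/config_tools/data/industry.xml
resolve_config /path/to/board.xml  # -> /path/to/board.xml
```

Per the note in the build instructions, a relative path given this way would be interpreted relative to the current working directory.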
diff --git a/doc/getting-started/roscube/roscube-gsg.rst b/doc/getting-started/roscube/roscube-gsg.rst index 133125f42..6b2dde0ed 100644 --- a/doc/getting-started/roscube/roscube-gsg.rst +++ b/doc/getting-started/roscube/roscube-gsg.rst @@ -1,11 +1,13 @@ -Getting Started Guide for ACRN Industry Scenario with ROScube-I +.. _roscube-gsg: + +Getting Started Guide for ACRN Industry Scenario With ROScube-I ############################################################### .. contents:: :local: :depth: 1 -Verified version +Verified Version **************** - Ubuntu version: **18.04** @@ -68,10 +70,10 @@ Prerequisites .. rst-class:: numbered-step -Install ACRN hypervisor +Install ACRN Hypervisor *********************** -Set up Environment +Set Up Environment ================== #. Open ``/etc/default/grub/`` and add ``idle=nomwait intel_pstate=disable`` @@ -92,7 +94,7 @@ Set up Environment sudo apt update sudo apt install -y gcc git make gnu-efi libssl-dev libpciaccess-dev \ - uuid-dev libsystemd-dev libevent-dev libxml2-dev \ + uuid-dev libsystemd-dev libevent-dev libxml2-dev libxml2-utils \ libusb-1.0-0-dev python3 python3-pip libblkid-dev \ e2fslibs-dev pkg-config libnuma-dev liblz4-tool flex bison sudo pip3 install kconfiglib @@ -203,10 +205,10 @@ Configure Hypervisor .. rst-class:: numbered-step -Install Service VM kernel +Install Service VM Kernel ************************* -Build Service VM kernel +Build Service VM Kernel ======================= #. Get code from GitHub @@ -302,7 +304,7 @@ Update Grub Install User VM *************** -Before create User VM +Before Create User VM ===================== #. Download Ubuntu image (Here we use `Ubuntu 18.04 LTS @@ -316,7 +318,7 @@ Before create User VM bridge-utils virt-manager ovmf sudo reboot -Create User VM image +Create User VM Image ==================== .. note:: Reboot into the **native Linux kernel** (not the ACRN kernel) @@ -410,9 +412,9 @@ the User VM. .. 
code-block:: bash cd /tmp - wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz - tar zxvf acpica-unix-20191018.tar.gz - cd acpica-unix-20191018 + wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz + tar zxvf acpica-unix-20210105.tar.gz + cd acpica-unix-20210105 make clean && make iasl sudo cp ./generate/unix/bin/iasl /usr/sbin/ @@ -451,10 +453,10 @@ the User VM. .. rst-class:: numbered-step -Install real-time VM +Install Real-Time VM ******************** -Copy real-time VM image +Copy Real-Time VM Image ======================= .. note:: Reboot into the **native Linux kernel** (not the ACRN kernel) @@ -468,7 +470,7 @@ Copy real-time VM image .. figure:: images/rqi-acrn-rtos-ready.png -Set up real-time VM +Set Up Real-Time VM =================== .. note:: The section will show you how to install Xenomai on ROScube-I. @@ -548,7 +550,7 @@ Set up real-time VM sudo poweroff -Run real-time VM +Run Real-Time VM ================ Now back to the native machine and we'll set up the environment for @@ -582,7 +584,7 @@ launching the real-time VM. In ACRN design, rebooting the real-time VM will also reboot the whole system. -Customizing the launch file +Customizing the Launch File *************************** The launch file in this tutorial has the following hardware resource allocation. diff --git a/doc/getting-started/rt_industry_ubuntu.rst b/doc/getting-started/rt_industry_ubuntu.rst index be942ae57..3afb5ccc0 100644 --- a/doc/getting-started/rt_industry_ubuntu.rst +++ b/doc/getting-started/rt_industry_ubuntu.rst @@ -1,19 +1,19 @@ .. _rt_industry_ubuntu_setup: -Getting Started Guide for ACRN Industry Scenario with Ubuntu Service VM +Getting Started Guide for ACRN Industry Scenario With Ubuntu Service VM ####################################################################### .. 
contents:: :local: :depth: 1 -Verified version +Verified Version **************** - Ubuntu version: **18.04** - GCC version: **7.5** -- ACRN-hypervisor branch: **release_2.3 (v2.3)** -- ACRN-Kernel (Service VM kernel): **release_2.3 (v2.3)** +- ACRN-hypervisor branch: **release_2.4 (v2.4)** +- ACRN-Kernel (Service VM kernel): **release_2.4 (v2.4)** - RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)** - HW: Maxtang Intel WHL-U i7-8665U (`AX8665U-A2 `_) @@ -51,7 +51,7 @@ Connect the WHL Maxtang with the appropriate external devices. .. _install-ubuntu-rtvm-sata: -Install the Ubuntu User VM (RTVM) on the SATA disk +Install the Ubuntu User VM (RTVM) on the SATA Disk ************************************************** .. note:: The WHL Maxtang machine contains both an NVMe and SATA disk. @@ -84,7 +84,7 @@ to turn it into a real-time User VM (RTVM). .. _install-ubuntu-Service VM-NVMe: -Install the Ubuntu Service VM on the NVMe disk +Install the Ubuntu Service VM on the NVMe Disk ********************************************** .. note:: Before you install the Ubuntu Service VM on the NVMe disk, either @@ -161,9 +161,10 @@ Build the ACRN Hypervisor on Ubuntu libnuma-dev \ liblz4-tool \ flex \ - bison + bison \ + xsltproc - $ sudo pip3 install kconfiglib + $ sudo pip3 install lxml xmlschema #. Starting with the ACRN v2.2 release, we use the ``iasl`` tool to compile an offline ACPI binary for pre-launched VMs while building ACRN, @@ -176,16 +177,12 @@ Build the ACRN Hypervisor on Ubuntu .. code-block:: none $ cd /home/acrn/work - $ wget https://acpica.org/sites/acpica/files/acpica-unix-20191018.tar.gz - $ tar zxvf acpica-unix-20191018.tar.gz - $ cd acpica-unix-20191018 + $ wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz + $ tar zxvf acpica-unix-20210105.tar.gz + $ cd acpica-unix-20210105 $ make clean && make iasl $ sudo cp ./generate/unix/bin/iasl /usr/sbin/ - .. 
note:: While there are newer versions of software available from - the `ACPICA downloads site `_, this - 20191018 version has been verified to work. - #. Get the ACRN source code: .. code-block:: none @@ -194,22 +191,22 @@ Build the ACRN Hypervisor on Ubuntu $ git clone https://github.com/projectacrn/acrn-hypervisor $ cd acrn-hypervisor -#. Switch to the v2.3 version: +#. Switch to the v2.4 version: .. code-block:: none - $ git checkout v2.3 + $ git checkout v2.4 #. Build ACRN: .. code-block:: none - $ make all BOARD_FILE=misc/vm_configs/xmls/board-xmls/whl-ipc-i7.xml SCENARIO_FILE=misc/vm_configs/xmls/config-xmls/whl-ipc-i7/industry.xml RELEASE=0 + $ make BOARD=whl-ipc-i7 SCENARIO=industry $ sudo make install $ sudo mkdir -p /boot/acrn $ sudo cp build/hypervisor/acrn.bin /boot/acrn/ -Build and install the ACRN kernel +Build and Install the ACRN Kernel ================================= #. Build the Service VM kernel from the ACRN repo: @@ -224,12 +221,12 @@ Build and install the ACRN kernel .. code-block:: none - $ git checkout v2.3 + $ git checkout v2.4 $ cp kernel_config_uefi_sos .config $ make olddefconfig $ make all -Install the Service VM kernel and modules +Install the Service VM Kernel and Modules ========================================= .. code-block:: none @@ -289,7 +286,7 @@ Update Grub for the Ubuntu Service VM $ sudo update-grub -Enable network sharing for the User VM +Enable Network Sharing for the User VM ====================================== In the Ubuntu Service VM, enable network sharing for the User VM: @@ -300,7 +297,7 @@ In the Ubuntu Service VM, enable network sharing for the User VM: $ sudo systemctl start systemd-networkd -Reboot the system +Reboot the System ================= Reboot the system. 
You should see the Grub menu with the new **ACRN @@ -317,10 +314,10 @@ typical output of a successful installation resembles the following: [ 0.862942] ACRN HVLog: acrn_hvlog_init -Additional settings in the Service VM +Additional Settings in the Service VM ===================================== -BIOS settings of GVT-d for WaaG +BIOS Settings of GVT-d for WaaG ------------------------------- .. note:: @@ -333,11 +330,11 @@ Set **DVMT Pre-Allocated** to **64MB**: .. figure:: images/DVMT-reallocated-64mb.png -Set **PM Support** to **Enabled**: +Set **PM Support** to **Enabled**: .. figure:: images/PM-support-enabled.png -Use OVMF to launch the User VM +Use OVMF to Launch the User VM ------------------------------ The User VM will be launched by OVMF, so copy it to the specific folder: @@ -347,7 +344,7 @@ The User VM will be launched by OVMF, so copy it to the specific folder: $ sudo mkdir -p /usr/share/acrn/bios $ sudo cp /home/acrn/work/acrn-hypervisor/devicemodel/bios/OVMF.fd /usr/share/acrn/bios -Build and Install the RT kernel for the Ubuntu User VM +Build and Install the RT Kernel for the Ubuntu User VM ------------------------------------------------------ Follow these instructions to build the RT kernel. @@ -398,7 +395,7 @@ Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT kernel that was just built and installed on the rootfs. Follow these steps to perform this operation. -Update the Grub file +Update the Grub File ==================== #. Reboot into the Ubuntu User VM located on the SATA drive and log on. @@ -458,10 +455,13 @@ Launch the RTVM .. code-block:: none - $ sudo cp /home/acrn/work/acrn-hyperviso/misc/vm_configs/sample_launch_scripts/nuc/launch_hard_rt_vm.sh /usr/share/acrn/ + $ sudo cp /home/acrn/work/acrn-hypervisor/misc/config_tools/data/sample_launch_scripts/nuc/launch_hard_rt_vm.sh /usr/share/acrn/ $ sudo /usr/share/acrn/launch_hard_rt_vm.sh -Recommended BIOS settings for RTVM +.. 
note:: + If you are using a KBL NUC, the script must be adapted to match the BDF of the actual hardware platform. + +Recommended BIOS Settings for RTVM ---------------------------------- .. csv-table:: @@ -491,7 +491,7 @@ Recommended BIOS settings for RTVM .. note:: BIOS settings depend on the platform and BIOS version; some may not be applicable. -Recommended kernel cmdline for RTVM +Recommended Kernel Cmdline for RTVM ----------------------------------- .. code-block:: none @@ -513,7 +513,7 @@ automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration` for details on RDT configuration and :ref:`hv_rdt` for details on RDT high-level design. -Set up the core allocation for the RTVM +Set Up the Core Allocation for the RTVM --------------------------------------- In our recommended configuration, two cores are allocated to the RTVM: @@ -563,7 +563,7 @@ this, follow the below steps to allocate all housekeeping tasks to core 0: .. note:: Ignore the error messages that might appear while the script is running. -Run cyclictest +Run Cyclictest -------------- #. Refer to the :ref:`troubleshooting section ` @@ -621,7 +621,7 @@ Troubleshooting .. _enabling the network on the RTVM: -Enabling the network on the RTVM +Enabling the Network on the RTVM ================================ If you need to access the internet, you must add the following command line @@ -638,13 +638,12 @@ to the ``launch_hard_rt_vm.sh`` script before launching it: -s 2,passthru,02/0/0 \ -s 3,virtio-console,@stdio:stdio_port \ -s 8,virtio-net,tap0 \ - $pm_channel $pm_by_vuart \ --ovmf /usr/share/acrn/bios/OVMF.fd \ hard_rtvm .. _passthru to rtvm: -Passthrough a hard disk to RTVM +Passthrough a Hard Disk to RTVM =============================== #. 
Use the ``lspci`` command to ensure that the correct SATA device IDs will @@ -692,7 +691,6 @@ Passthrough a hard disk to RTVM -s 2,passthru,00/17/0 \ -s 3,virtio-console,@stdio:stdio_port \ -s 8,virtio-net,tap0 \ - $pm_channel $pm_by_vuart \ --ovmf /usr/share/acrn/bios/OVMF.fd \ hard_rtvm diff --git a/doc/index.rst b/doc/index.rst index ce99b5e4b..96fee9065 100644 --- a/doc/index.rst +++ b/doc/index.rst @@ -1,6 +1,6 @@ .. _acrn_home: -Project ACRN documentation +Project ACRN Documentation ########################## Welcome to the Project ACRN (version |version|) documentation. ACRN is diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst index 0878a19f7..8343b1f50 100644 --- a/doc/introduction/index.rst +++ b/doc/introduction/index.rst @@ -1,6 +1,6 @@ .. _introduction: -What is ACRN +What Is ACRN ############ Introduction to Project ACRN @@ -93,13 +93,11 @@ above, i.e. the *logical partitioning*, *sharing*, and *hybrid* modes. They further specify the number of VMs that can be run, their attributes and the resources they have access to, either shared with other VMs or exclusively. -The predefined scenarios are in the -:acrn_file:`misc/vm_configs/scenarios` folder -in the source code. XML examples for some platforms can also be found under -:acrn_file:`misc/vm_configs/xmls/config-xmls`. +The predefined scenarios are in the :acrn_file:`misc/config_tools/data` folder +in the source code. The :ref:`acrn_configuration_tool` tutorial explains how to use the ACRN -Configuration tool to create your own scenario or modify an existing one. +configuration toolset to create your own scenario or modify an existing one. Industrial Workload Consolidation ================================= @@ -281,7 +279,7 @@ application scenario needs. Here are block diagrams for each of these four scenarios. -SDC scenario +SDC Scenario ============ In this SDC scenario, an instrument cluster (IC) system runs with the @@ -295,7 +293,7 @@ VM. 
SDC scenario with two VMs -Industry scenario +Industry Scenario ================= In this Industry scenario, the Service VM provides device sharing capability for @@ -312,7 +310,7 @@ vision, etc. Industry scenario -Hybrid scenario +Hybrid Scenario =============== In this Hybrid scenario, a pre-launched Safety/RTVM is started by the @@ -326,7 +324,7 @@ non-real-time tasks. Hybrid scenario -Hybrid real-time (RT) scenario +Hybrid Real-Time (RT) Scenario ============================== In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the @@ -340,7 +338,7 @@ non-real-time tasks. Hybrid RT scenario -Logical Partition scenario +Logical Partition Scenario ========================== This scenario is a simplified VM configuration for VM logical @@ -423,7 +421,8 @@ The Boot process proceeds as follows: In this boot mode, the boot options of pre-launched VM and service VM are defined in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config`` -in the source code ``misc/vm_configs/$(SCENARIO)/vm_configurations.c`` by default. +in the source code ``configs/scenarios/$(SCENARIO)/vm_configurations.c`` (which +resides under the hypervisor build directory) by default. Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for details. The boot options of a post-launched VM are not covered by hypervisor source code or a GRUB menu; they are defined in a guest image file or specified by @@ -619,7 +618,7 @@ ACRN Device model incorporates these three aspects: .. _pass-through: -Device passthrough +Device Passthrough ****************** At the highest level, device passthrough is about providing isolation @@ -651,7 +650,7 @@ don't support passthrough for a legacy serial port, (for example 0x3f8). 
-Hardware support for device passthrough +Hardware Support for Device Passthrough ======================================= Intel's current processor architectures provides support for device @@ -673,7 +672,7 @@ fabrics to scale to many devices. MSI is ideal for I/O virtualization, as it allows isolation of interrupt sources (as opposed to physical pins that must be multiplexed or routed through software). -Hypervisor support for device passthrough +Hypervisor Support for Device Passthrough ========================================= By using the latest virtualization-enhanced processor architectures, @@ -688,7 +687,7 @@ assigned to the same guest OS. PCIe does not have this restriction. .. _ACRN-io-mediator: -ACRN I/O mediator +ACRN I/O Mediator ***************** :numref:`io-emulation-path` shows the flow of an example I/O emulation path. @@ -736,7 +735,7 @@ The MMIO path is very similar, except the VM exit reason is different. MMIO access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in the hypervisor. -Virtio framework architecture +Virtio Framework Architecture ***************************** .. _Virtio spec: diff --git a/doc/learn.rst b/doc/learn.rst index c7875c58a..3e2d71f85 100644 --- a/doc/learn.rst +++ b/doc/learn.rst @@ -2,7 +2,7 @@ .. _learn_acrn: -What is ACRN +What Is ACRN ############ ACRN is supported on Apollo Lake and Kaby Lake Intel platforms, diff --git a/doc/nocl.rst b/doc/nocl.rst index 7d7691931..a4f4bad2f 100644 --- a/doc/nocl.rst +++ b/doc/nocl.rst @@ -7,7 +7,7 @@ lingering references to these docs out in the wild and in the Google index. Give the reader a reference to the /2.1/ document instead. -This document was removed +This Document Was Removed ######################### .. 
raw:: html diff --git a/doc/reference/config-options.rst b/doc/reference/config-options.rst index 8a3433520..ba848e177 100644 --- a/doc/reference/config-options.rst +++ b/doc/reference/config-options.rst @@ -7,11 +7,13 @@ As explained in :ref:`acrn_configuration_tool`, ACRN scenarios define the hypervisor (hv) and VM settings for the execution environment of an ACRN-based application. This document describes these option settings. +.. rst-class:: rst-columns3 + .. contents:: :local: :depth: 2 -Common option value types +Common Option Value Types ************************* Within this option documentation, we refer to some common type diff --git a/doc/reference/hardware.rst b/doc/reference/hardware.rst index 232a58532..38eb00dba 100644 --- a/doc/reference/hardware.rst +++ b/doc/reference/hardware.rst @@ -166,7 +166,7 @@ Verified Hardware Specifications Detail | | | | - One SATA3 port for connection to 2.5" HDD or SSD | | | | | (up to 9.5 mm thickness) | | | +------------------------+-----------------------------------------------------------+ -| | | Serial Port | - Yes | +| | | Serial Port | - No | +--------------------------------+------------------------+------------------------+-----------------------------------------------------------+ | | **Kaby Lake** | | NUC7i7BNH | Processor | - Intel® Core™ i7-7567U CPU @ 3.50GHz (2C4T) | | | (Code name: Baby Canyon) | | (Board: NUC7i7BNB) | | | @@ -199,7 +199,7 @@ Verified Hardware Specifications Detail | | | | - One SATA3 port for connection to 2.5" HDD or SSD | | | | | (up to 9.5 mm thickness) (NUC7i5DNHE only) | | | +------------------------+-----------------------------------------------------------+ -| | | Serial Port | - No | +| | | Serial Port | - Yes | +--------------------------------+------------------------+------------------------+-----------------------------------------------------------+ | | **Whiskey Lake** | | WHL-IPC-I5 | Processor | - Intel® Core™ i5-8265U CPU @ 1.60GHz (4C8T) | | | | | (Board: 
WHL-IPC-I5) | | | diff --git a/doc/reference/kconfig/readme.txt b/doc/reference/kconfig/readme.txt deleted file mode 100644 index 8c74a1632..000000000 --- a/doc/reference/kconfig/readme.txt +++ /dev/null @@ -1 +0,0 @@ -This directory has auto-generated files, do not edit any of the files here. diff --git a/doc/release_notes/release_notes_0.1.rst b/doc/release_notes/release_notes_0.1.rst index 1ddc8a5df..a908239b0 100644 --- a/doc/release_notes/release_notes_0.1.rst +++ b/doc/release_notes/release_notes_0.1.rst @@ -14,7 +14,7 @@ The project ACRN reference code can be found on GitHub in https://github.com/projectacrn. It includes the ACRN hypervisor, the ACRN device model, and documentation. -Version 0.1 new features +Version 0.1 New Features ************************ Hardware Support @@ -35,7 +35,7 @@ Virtual Graphics support added: assigned to different display. The display ports supports eDP and HDMI. - See :ref:`APL_GVT-G-hld` documentation for more information. -Virtio standard is supported +Virtio Standard Is Supported ============================ Virtio is a virtualization standard for @@ -45,7 +45,7 @@ the hypervisor. The SOS and UOS can share physical LAN network and physical eMMC storage device. (See :ref:`virtio-hld` for more information.) -Device pass-through support +Device Pass-Through Support =========================== Device pass-through to UOS support for: @@ -54,13 +54,13 @@ Device pass-through to UOS support for: - SD card (mount, read, and write directly in the UOS) - Converged Security Engine (CSE) -Hypervisor configuration +Hypervisor Configuration ======================== Developers can configure hypervisor via Kconfig parameters. (See -:ref:`configuration` for configuration options.) +documentation for configuration options.) 
-New ACRN tools +New ACRN Tools ============== We've added a collection of support tools including acrnctl, acrntrace, diff --git a/doc/release_notes/release_notes_0.2.rst b/doc/release_notes/release_notes_0.2.rst index 1b8528d8e..5d159c672 100644 --- a/doc/release_notes/release_notes_0.2.rst +++ b/doc/release_notes/release_notes_0.2.rst @@ -31,7 +31,7 @@ https://projectacrn.github.io/0.2/. Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. -Version 0.2 new features +Version 0.2 New Features ************************ VT-x, VT-d @@ -86,7 +86,7 @@ hotspot for 3rd party devices, provides 3rd party device applications access to the vehicle, and provides access of 3rd party devices to the TCU provided connectivity. -IPU (MIPI-CS2, HDMI-in) +IPU (MIPI-CS2, HDMI-In) ======================== ACRN hypervisor supports passthrough IPU assignment to Service OS or guest OS, without sharing. @@ -104,7 +104,7 @@ This is done to ensure performance of the most critical workload can be achieved. Three different schedulers for the GPU are involved: i915 UOS scheduler, Mediator GVT scheduler, and i915 SOS scheduler. -GPU - display surface sharing via Hyper DMA +GPU - Display Surface Sharing via Hyper DMA ============================================ Surface sharing is one typical automotive use case which requires that the SOS accesses an individual surface or a set of surfaces diff --git a/doc/release_notes/release_notes_0.3.rst b/doc/release_notes/release_notes_0.3.rst index cb0df835c..637bccfd1 100644 --- a/doc/release_notes/release_notes_0.3.rst +++ b/doc/release_notes/release_notes_0.3.rst @@ -31,7 +31,7 @@ https://projectacrn.github.io/0.3/. Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. 
-Version 0.3 new features +Version 0.3 New Features ************************ diff --git a/doc/release_notes/release_notes_0.4.rst b/doc/release_notes/release_notes_0.4.rst index 79bae68ff..9eda6bd04 100644 --- a/doc/release_notes/release_notes_0.4.rst +++ b/doc/release_notes/release_notes_0.4.rst @@ -31,7 +31,7 @@ https://projectacrn.github.io/0.4/. Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. -Version 0.4 new features +Version 0.4 New Features ************************ - :acrn-issue:`1824` - implement "wbinvd" emulation diff --git a/doc/release_notes/release_notes_0.5.rst b/doc/release_notes/release_notes_0.5.rst index 4b8d0e8fb..d5b847c4d 100644 --- a/doc/release_notes/release_notes_0.5.rst +++ b/doc/release_notes/release_notes_0.5.rst @@ -31,7 +31,7 @@ https://projectacrn.github.io/0.5/. Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. -Version 0.5 new features +Version 0.5 New Features ************************ **OVMF support initial patches merged in ACRN**: diff --git a/doc/release_notes/release_notes_0.6.rst b/doc/release_notes/release_notes_0.6.rst index 331870427..694cd5f95 100644 --- a/doc/release_notes/release_notes_0.6.rst +++ b/doc/release_notes/release_notes_0.6.rst @@ -32,7 +32,7 @@ https://projectacrn.github.io/0.6/. Documentation for the latest ACRN v0.6 requires Clear Linux OS version 27600. -Version 0.6 new features +Version 0.6 New Features ************************ **Enable Privileged VM support for real-time UOS in ACRN**: diff --git a/doc/release_notes/release_notes_0.7.rst b/doc/release_notes/release_notes_0.7.rst index ed242d34f..f7dec94e3 100644 --- a/doc/release_notes/release_notes_0.7.rst +++ b/doc/release_notes/release_notes_0.7.rst @@ -32,10 +32,10 @@ https://projectacrn.github.io/0.7/. Documentation for the latest ACRN v0.7 requires Clear Linux OS version 28260. 
-Version 0.7 new features +Version 0.7 New Features ************************ -Enable cache QOS with CAT +Enable Cache QOS With CAT ========================= Cache Allocation Technology (CAT) is enabled on Apollo Lake (APL) @@ -46,12 +46,12 @@ build time. For debugging and performance tuning, the CAT can also be enabled and configured at runtime by writing proper values to certain MSRs using the ``wrmsr`` command on ACRN shell. -Support ACPI power key mediator +Support ACPI Power Key Mediator =============================== ACRN supports ACPI power/sleep key on the APL and KBL NUC platforms, triggering S3/S5 flow, following the ACPI spec. -Document updates +Document Updates ================ Several new documents have been added in this release, including: diff --git a/doc/release_notes/release_notes_0.8.rst b/doc/release_notes/release_notes_0.8.rst index 61945435d..15ad5f84a 100644 --- a/doc/release_notes/release_notes_0.8.rst +++ b/doc/release_notes/release_notes_0.8.rst @@ -32,10 +32,10 @@ https://projectacrn.github.io/0.8/. Documentation for the latest ACRN v0.8 requires Clear Linux OS version 28600. -Version 0.8 new features +Version 0.8 New Features ************************ -GPIO virtualization +GPIO Virtualization ========================= GPIO virtualization is supported as para-virtualization based on the @@ -45,19 +45,19 @@ configuration via one virtual GPIO controller. In the Back-end, the GPIO command line in the launch script can be modified to map native GPIO to UOS. -Enable QoS based on runC container +Enable QoS Based on runC Container ================================== ACRN supports Device-Model QoS based on runC container to control the SOS resources (CPU, Storage, MEM, NET) by modifying the runC configuration file. -S5 support for RTVM +S5 Support for RTVM =============================== ACRN supports a Real-time VM (RTVM) shutting itself down. 
A RTVM is a kind of VM that the SOS can't interfere at runtime, and as such, can only power itself off internally. All poweroff requests external to the RTVM will be rejected to avoid any interference. -Document updates +Document Updates ================ Several new documents have been added in this release, including: diff --git a/doc/release_notes/release_notes_1.0.1.rst b/doc/release_notes/release_notes_1.0.1.rst index c82e39b69..330fcca93 100644 --- a/doc/release_notes/release_notes_1.0.1.rst +++ b/doc/release_notes/release_notes_1.0.1.rst @@ -27,7 +27,7 @@ There were no documentation changes in this update, so you can still refer to the v1.0-specific documentation found at https://projectacrn.github.io/1.0/. -Change Log in version 1.0.1 since version 1.0 +Change Log in Version 1.0.1 Since Version 1.0 ********************************************* Primary changes are to fix several security and stability issues found diff --git a/doc/release_notes/release_notes_1.0.2.rst b/doc/release_notes/release_notes_1.0.2.rst index b6743a89d..47fd18881 100644 --- a/doc/release_notes/release_notes_1.0.2.rst +++ b/doc/release_notes/release_notes_1.0.2.rst @@ -27,7 +27,7 @@ There were no documentation changes in this update, so you can still refer to the v1.0-specific documentation found at https://projectacrn.github.io/1.0/. -Change Log in v1.0.2 since v1.0.1 +Change Log in v1.0.2 Since v1.0.1 ********************************* Primary changes are to fix several security and stability issues found diff --git a/doc/release_notes/release_notes_1.0.rst b/doc/release_notes/release_notes_1.0.rst index 20cdc24c2..6833935ac 100644 --- a/doc/release_notes/release_notes_1.0.rst +++ b/doc/release_notes/release_notes_1.0.rst @@ -33,7 +33,7 @@ with a specific release: generated v1.0 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.0 requires Clear Linux* OS version 29070. 
-Version 1.0 major features +Version 1.0 Major Features ************************** Hardware Support @@ -42,7 +42,7 @@ ACRN supports multiple x86 platforms and has been tested with Apollo Lake and Kaby Lake NUCs, and the UP Squared board. (See :ref:`hardware` for supported platform details.) -APL UP2 board with SBL firmware +APL UP2 Board With SBL Firmware =============================== ACRN supports APL UP2 board with Slim Bootloader (SBL) firmware. Slim Bootloader is a modern, flexible, light-weight, @@ -51,13 +51,13 @@ customizable, and secure. An end-to-end reference build has been verified on UP2/SBL board using ACRN hypervisor, Clear Linux OS as SOS, and Clear Linux OS as UOS. -Enable post-launched RTVM support for real-time UOS in ACRN +Enable Post-Launched RTVM Support for Real-Time UOS in ACRN =========================================================== This release provides initial patches enabling a User OS (UOS) running as a virtual machine (VM) with real-time characteristics, also called a "post-launched RTVM". More patches for ACRN real time support will continue. -Enable cache QOS with CAT +Enable Cache QOS With CAT ========================= Cache Allocation Technology (CAT) is available on Apollo Lake (APL) platforms, providing cache isolation between VMs mainly for real-time performance quality @@ -66,27 +66,27 @@ the VM configuration determined at build time. For debugging and performance tuning, the CAT can also be enabled and configured at runtime by writing proper values to certain MSRs using the ``wrmsr`` command on ACRN shell. -Enable QoS based on runC container +Enable QoS Based on runC Container ================================== ACRN supports Device-Model QoS based on runC container to control the SOS resources (CPU, Storage, MEM, NET) by modifying the runC configuration file, configuration guide will be published in next release. 
-S5 support for RTVM +S5 Support for RTVM =================== ACRN supports a Real-time VM (RTVM) shutting itself down. A RTVM is a kind of VM that the SOS can't interfere with at runtime, and as such, only the RTVM can power itself off internally. All power-off requests external to the RTVM will be rejected to avoid any interference. -OVMF support initial patches merged in ACRN +OVMF Support Initial Patches Merged in ACRN =========================================== To support booting Windows as a Guest OS, we are using Open source Virtual Machine Firmware (OVMF). Initial patches to support OVMF have been merged in ACRN hypervisor. More patches for ACRN and patches upstreaming to OVMF work will be continuing. -Support ACPI power key mediator +Support ACPI Power Key Mediator =============================== ACRN supports ACPI power/sleep key on the APL and KBL NUC platforms, triggering S3/S5 flow, following the ACPI spec. @@ -135,7 +135,7 @@ a Guest VM (UOS), enables control of the Wi-Fi as an in-vehicle hotspot for thir devices, provides third-party device applications access to the vehicle, and provides access of third-party devices to the TCU (if applicable) provided connectivity. -IPU (MIPI CSI-2, HDMI-in) +IPU (MIPI CSI-2, HDMI-In) ========================= ACRN hypervisor provide an IPU mediator to share with Guest OS. Alternatively, IPU can also be configured as pass-through to Guest OS without sharing. @@ -161,7 +161,7 @@ to ensure performance of the most critical workload can be achieved. Three different schedulers for the GPU are involved: i915 UOS scheduler, Mediator GVT scheduler, and i915 SOS scheduler. 
-GPU - display surface sharing via Hyper DMA +GPU - Display Surface Sharing via Hyper DMA =========================================== Surface sharing is one typical automotive use case which requires that the SOS accesses an individual surface or a set of surfaces from the UOS without @@ -169,7 +169,7 @@ having to access the entire frame buffer of the UOS. It leverages hyper_DMABUF, a Linux kernel driver running on multiple VMs and expands DMA-BUFFER sharing capability to inter-VM. -Virtio standard is supported +Virtio Standard Is Supported ============================ Virtio framework is widely used in ACRN, allowing devices beyond network and storage to be shared to UOS in a standard way. Many mediators in ACRN follow @@ -179,11 +179,11 @@ the guest's device driver "knows" it is running in a virtual environment, and cooperates with the hypervisor. The SOS and UOS can share physical LAN network and physical eMMC storage device. (See :ref:`virtio-hld` for more information.) -Device pass-through support +Device Pass-Through Support =========================== Device pass-through to UOS supported with help of VT-d. -GPIO virtualization +GPIO Virtualization =================== GPIO virtualization is supported as para-virtualization based on the Virtual I/O Device (VIRTIO) specification. The GPIO consumers of the Front-end are able @@ -191,12 +191,12 @@ to set or get GPIO values, directions, and configuration via one virtual GPIO controller. In the Back-end, the GPIO command line in the launch script can be modified to map native GPIO to UOS. (See :ref:`virtio-hld` for more information.) -New ACRN tools +New ACRN Tools ============== We've added a collection of support tools including ``acrnctl``, ``acrntrace``, ``acrnlog``, ``acrn-crashlog``, ``acrnprobe``. (See the `Tools` section under **User Guides** for details.) 
-Document updates +Document Updates ================ We have many reference documents `available `_, including: diff --git a/doc/release_notes/release_notes_1.1.rst b/doc/release_notes/release_notes_1.1.rst index 612a27841..f2dc82df7 100644 --- a/doc/release_notes/release_notes_1.1.rst +++ b/doc/release_notes/release_notes_1.1.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.1 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.1 requires Clear Linux* OS version 29970. -Version 1.1 major features +Version 1.1 Major Features ************************** Hybrid Mode Introduced @@ -33,14 +33,14 @@ In hybrid mode, a Zephyr OS is launched by the hypervisor even before the Servic launched (pre-launched), with dedicated resources to achieve highest level of isolation. This is designed to meet the needs of a FuSa certifiable safety OS. -Support for new guest Operating Systems +Support for New Guest Operating Systems ======================================= * The `Zephyr RTOS `_ can be a pre-launched Safety OS in hybrid mode. It can also be a post-launched (launched by Service OS, not the hypervisor) as a guest OS. * VxWorks as a post-launched RTOS for industrial usages. * Windows as a post-launched OS -Document updates +Document Updates ================ We have many `reference documents available `_, including: diff --git a/doc/release_notes/release_notes_1.2.rst b/doc/release_notes/release_notes_1.2.rst index 4fa5fb0bd..e62a55ee3 100644 --- a/doc/release_notes/release_notes_1.2.rst +++ b/doc/release_notes/release_notes_1.2.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.2 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.2 requires Clear Linux* OS version 30690. 
-Version 1.2 major features +Version 1.2 Major Features ************************** What's New in v1.2 @@ -36,7 +36,7 @@ What's New in v1.2 * Virtualization supports Always Running Timer (ART) * Various bug fixes and enhancements -Document updates +Document Updates ================ We have many `reference documents available `_, including: diff --git a/doc/release_notes/release_notes_1.3.rst b/doc/release_notes/release_notes_1.3.rst index 85b21e8c9..59493e726 100644 --- a/doc/release_notes/release_notes_1.3.rst +++ b/doc/release_notes/release_notes_1.3.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.3 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.3 requires Clear Linux* OS version 31080. -Version 1.3 major features +Version 1.3 Major Features ************************** What's New in v1.3 @@ -38,7 +38,7 @@ What's New in v1.3 * Ethernet mediator now supports prioritization per VM. * Features for real-time determinism, e.g. Cache Allocation Technology (CAT, only supported on Apollo Lake). -Document updates +Document Updates ================ We have many new `reference documents available `_, including: diff --git a/doc/release_notes/release_notes_1.4.rst b/doc/release_notes/release_notes_1.4.rst index cd2cfa847..3ee1b587f 100644 --- a/doc/release_notes/release_notes_1.4.rst +++ b/doc/release_notes/release_notes_1.4.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.4 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.4 requires Clear Linux* OS version 31670. -Version 1.4 major features +Version 1.4 Major Features ************************** What's New in v1.4 @@ -34,7 +34,7 @@ What's New in v1.4 * WaaG (Windows as a guest) stability and performance has been improved. * Realtime performance of the RTVM (preempt-RT kernel-based) has been improved. 
-Document updates +Document Updates ================ Many new `reference documents `_ are available, including: diff --git a/doc/release_notes/release_notes_1.5.rst b/doc/release_notes/release_notes_1.5.rst index 58ac3b296..5df2f2df3 100644 --- a/doc/release_notes/release_notes_1.5.rst +++ b/doc/release_notes/release_notes_1.5.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.5 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.5 requires Clear Linux* OS version 32030. -Version 1.5 major features +Version 1.5 Major Features ************************** What's New in v1.5 @@ -34,7 +34,7 @@ What's New in v1.5 * Overall stability and performance has been improved. * An offline configuration tool has been created to help developers port ACRN to different hardware boards. -Document updates +Document Updates ================ Many new `reference documents `_ are available, including: diff --git a/doc/release_notes/release_notes_1.6.1.rst b/doc/release_notes/release_notes_1.6.1.rst index cdd25b0f9..c7687ab0a 100644 --- a/doc/release_notes/release_notes_1.6.1.rst +++ b/doc/release_notes/release_notes_1.6.1.rst @@ -25,7 +25,7 @@ https://projectacrn.github.io/1.6.1/. Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.6.1 requires Clear Linux OS version 33050. 
-Version 1.6.1 major features +Version 1.6.1 Major Features **************************** What's New in v1.6.1 @@ -49,7 +49,7 @@ What's New in v1.6.1 * Supported VT-d Posted Interrupts -Document updates +Document Updates ================ Many new and updated `reference documents `_ are available, including: diff --git a/doc/release_notes/release_notes_1.6.rst b/doc/release_notes/release_notes_1.6.rst index bad722c77..9a413b092 100644 --- a/doc/release_notes/release_notes_1.6.rst +++ b/doc/release_notes/release_notes_1.6.rst @@ -24,7 +24,7 @@ with a specific release: generated v1.6 documents can be found at https://projec Documentation for the latest (master) branch is found at https://projectacrn.github.io/latest/. ACRN v1.6 requires Clear Linux OS version 32680. -Version 1.6 major features +Version 1.6 Major Features ************************** What's New in v1.6 @@ -51,7 +51,7 @@ What's New in v1.6 * PCI bridge emulation in hypervisor -Document updates +Document Updates ================ Many new and updated `reference documents `_ are available, including: diff --git a/doc/release_notes/release_notes_2.0.rst b/doc/release_notes/release_notes_2.0.rst index e4c7c4728..98d6173da 100644 --- a/doc/release_notes/release_notes_2.0.rst +++ b/doc/release_notes/release_notes_2.0.rst @@ -55,7 +55,7 @@ started with ACRN. We recommend that all developers upgrade to ACRN release v2.0. -Version 2.0 Key Features (comparing with v1.0) +Version 2.0 Key Features (Compared With v1.0) ********************************************** .. contents:: @@ -101,7 +101,7 @@ New Hardware Platform Support This release adds support for 8th Gen Intel® Core™ Processors (code name: Whiskey Lake). (See :ref:`hardware` for platform details.)
-Pre-launched Safety VM Support +Pre-Launched Safety VM Support ============================== ACRN supports a pre-launched partitioned safety VM, isolated from the @@ -111,21 +111,21 @@ For example, in the hybrid mode, a real-time Zephyr RTOS VM can be and with its own dedicated resources to achieve a high level of isolation. This is designed to meet the needs of a Functional Safety OS. -Post-launched VM support via OVMF +Post-Launched VM Support via OVMF ================================= ACRN supports Open Virtual Machine Firmware (OVMF) as a virtual boot loader for the Service VM to launch post-launched VMs such as Windows, Linux, VxWorks, or Zephyr RTOS. Secure boot is also supported. -Post-launched real-time VM Support +Post-Launched Real-Time VM Support ================================== ACRN supports a post-launched RTVM, which also uses partitioned hardware resources to ensure adequate real-time performance, as required for industrial use cases. -Real-time VM Performance Optimizations +Real-Time VM Performance Optimizations ====================================== ACRN 2.0 improves RTVM performance with these optimizations: @@ -161,7 +161,7 @@ scheduler in the hypervisor to make sure the physical CPU can be shared between VMs and support for yielding an idle vCPU when it's running a 'HLT' or 'PAUSE' instruction. -Large selection of OSs for User VMs +Large Selection of OSs for User VMs =================================== ACRN now supports Windows* 10, Android*, Ubuntu*, Xenomai, VxWorks*, @@ -170,7 +170,7 @@ to the Microsoft* Hypervisor Top-Level Functional Specification (TLFS). ACRN 2.0 also improves overall Windows as a Guest (WaaG) stability and performance. 
-GRUB bootloader +GRUB Bootloader =============== The ACRN hypervisor can boot from the popular GRUB bootloader using @@ -189,14 +189,14 @@ In this example, the ACRN Service VM supports a SR-IOV ethernet device through the Physical Function (PF) driver, and ensures that the SR-IOV Virtual Function (VF) device can passthrough to a post-launched VM. -Graphics passthrough support +Graphics Passthrough Support ============================ ACRN supports GPU passthrough to dedicated User VM based on Intel GVT-d technology used to virtualize the GPU for multiple guest VMs, effectively providing near-native graphics performance in the VM. -Shared memory based Inter-VM communication +Shared Memory Based Inter-VM Communication ========================================== ACRN supports Inter-VM communication based on shared memory for @@ -213,7 +213,7 @@ Kata Containers Support ACRN can launch a Kata container, a secure container runtime, as a User VM. -VM orchestration +VM Orchestration ================ Libvirt is an open-source API, daemon, and management tool as a layer to @@ -221,7 +221,7 @@ decouple orchestrators and hypervisors. By adding a "ACRN driver", ACRN supports libvirt-based tools and orchestrators to configure a User VM's CPU configuration during VM creation. -Document updates +Document Updates ================ Many new and updated `reference documents `_ are available, including: diff --git a/doc/release_notes/release_notes_2.1.rst b/doc/release_notes/release_notes_2.1.rst index e8cbdda46..70cf5540d 100644 --- a/doc/release_notes/release_notes_2.1.rst +++ b/doc/release_notes/release_notes_2.1.rst @@ -33,7 +33,7 @@ ACRN v2.1 requires Ubuntu 18.04. Follow the instructions in the :ref:`rt_industry_ubuntu_setup` to get started with ACRN. We recommend that all developers upgrade to ACRN release v2.1. -What's new in v2.1 +What's New in v2.1 ****************** * Preempt-RT Linux has been validated as a pre-launched realtime VM.
See diff --git a/doc/release_notes/release_notes_2.2.rst b/doc/release_notes/release_notes_2.2.rst index 7bd9dae40..3ef8bf639 100644 --- a/doc/release_notes/release_notes_2.2.rst +++ b/doc/release_notes/release_notes_2.2.rst @@ -73,7 +73,7 @@ Staged removal of deprivileged boot mode support. Clear Linux though, so we have chosen Ubuntu (and Yocto Project) as the preferred Service VM OSs moving forward. -Document updates +Document Updates **************** New and updated reference documents are available, including: diff --git a/doc/release_notes/release_notes_2.3.rst b/doc/release_notes/release_notes_2.3.rst index 3b140e337..adf988362 100644 --- a/doc/release_notes/release_notes_2.3.rst +++ b/doc/release_notes/release_notes_2.3.rst @@ -65,7 +65,7 @@ Removed deprivileged boot mode support Clear Linux so we have chosen Ubuntu (and Yocto Project) as the preferred Service VM OSs moving forward. -Document updates +Document Updates **************** New and updated reference documents are available, including: diff --git a/doc/release_notes/release_notes_2.4.rst b/doc/release_notes/release_notes_2.4.rst new file mode 100644 index 000000000..dc4b3483f --- /dev/null +++ b/doc/release_notes/release_notes_2.4.rst @@ -0,0 +1,296 @@ +.. _release_notes_2.4: + +ACRN v2.4 (Apr 2021) +#################### + +We are pleased to announce the release of the Project ACRN hypervisor +version 2.4. + +ACRN is a flexible, lightweight reference hypervisor that is built with +real-time and safety-criticality in mind. It is optimized to streamline +embedded development through an open-source platform. See the +:ref:`introduction` introduction for more information. All project ACRN +source code is maintained in the +https://github.com/projectacrn/acrn-hypervisor repository and includes +folders for the ACRN hypervisor, the ACRN device model, tools, and +documentation. 
You can either download this source code as a zip or +tar.gz file (see the `ACRN v2.4 GitHub release page +`_) or +use the Git ``clone`` and ``checkout`` commands:: + + git clone https://github.com/projectacrn/acrn-hypervisor + cd acrn-hypervisor + git checkout v2.4 + +The project's online technical documentation is also tagged to +correspond with a specific release: generated v2.4 documents can be +found at https://projectacrn.github.io/2.4/. Documentation for the +latest under-development branch is found at +https://projectacrn.github.io/latest/. + +ACRN v2.4 requires Ubuntu 18.04. Follow the instructions in the +:ref:`rt_industry_ubuntu_setup` to get started with ACRN. + + +What's New in v2.4 +****************** + +Extensive work was done to redesign how ACRN +configuration is handled, update the build process to use the new +configuration system, and update the corresponding documentation. This is a +significant change and improvement to how you configure ACRN but also impacts +existing projects, as explained in the next section. + +We've also validated the hybrid_rt scenario on the next generation of Intel® +Core™ processors (codenamed Elkhart Lake) and enabled software SRAM and cache +locking for real-time performance on Elkhart Lake. + +ACRN Configuration and Build +============================ + +The following major changes to the ACRN configuration and build process have been +integrated into v2.4: + + - Metadata of configuration entries, including documentation and attributes, + has been removed from ``scenario`` XMLs. + - The C sources generated from ``board`` and ``scenario`` XMLs are no longer + maintained in the repository. Instead they'll be generated as part of the + hypervisor build. Users can now find them under ``configs/`` of the build + directory. + - Kconfig is no longer used for configuration. Related build targets, such as + ``defconfig``, now apply to the configuration files in XML.
+ - The ``make`` command-line variables ``BOARD`` and ``BOARD_FILE`` have been +   unified. Users can now specify ``BOARD=xxx`` when invoking ``make`` with + ``xxx`` being either a board name or a (relative or absolute) path to a + board XML file. ``SCENARIO`` and ``SCENARIO_FILE`` have been unified in the same + way. + +For complete instructions to get started with the new build system, refer to +:ref:`getting-started-building`. For an introduction to the concepts and +workflow of the new configuration mechanism, refer to +:ref:`acrn_configuration_tool`. + +Upgrading to v2.4 From Previous Releases +**************************************** + +We highly recommend that you follow the instructions below to +upgrade to v2.4 from previous ACRN releases. + +Additional Dependencies +======================= + +Python version 3.6 or higher is required to build ACRN v2.4. You can check the version of +Python you are using with: + +.. code-block:: bash + + $ python3 --version + Python 3.5.2 + +Only when the reported version is less than 3.6 (as is the case in the example above) do +you need an upgrade. The first (and preferred) choice is to install the latest +Python 3 from the official package repository: + +.. code-block:: bash + + $ sudo apt install python3 + ... + $ python --version + Python 3.8.8 + +If this does not get you an appropriate version, you may use the deadsnakes PPA +(using the instructions below) or build from source yourself. + +.. code-block:: bash + + $ sudo add-apt-repository ppa:deadsnakes/ppa + $ sudo apt-get update + $ sudo apt install python3.9 + $ python --version + Python 3.9.2 + +In addition, the following new tools and packages are needed to build ACRN v2.4: + +.. code-block:: bash + + $ sudo apt install libxml2-utils xsltproc + $ sudo pip3 install lxml xmlschema + +.. note:: + This is not the complete list of tools required to build ACRN. Refer to + :ref:`getting-started-building` for a complete guide to get started from + scratch.
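As an illustrative sketch (not part of the ACRN tree), the 3.6 floor described above can be checked in a wrapper script; the ``version_ok`` helper name is our own:

```shell
# Succeed only when the given major.minor is at least 3.6,
# the minimum Python version ACRN v2.4 requires.
version_ok() {
    major=$1
    minor=$2
    [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 6 ]; }
}

# Query the installed interpreter and apply the check.
ver=$(python3 -c 'import sys; print("%d %d" % sys.version_info[:2])')
if version_ok ${ver% *} ${ver#* }; then
    echo "python3 is new enough"
else
    echo "python3 >= 3.6 required; see the upgrade options above" >&2
fi
```

A build wrapper could run this before invoking ``make`` so the dependency problem surfaces early instead of mid-build.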
+ +Configuration File Format +========================= + +Starting with release v2.4, Kconfig is no longer used, and the contents of scenario +XML files have been simplified. You need to upgrade your own Kconfig-format files +or scenario XML files if you maintain any. + +For a Kconfig-format file, you must translate your configuration to a scenario +XML file where all previous Kconfig configuration entries are also available. Refer +to :ref:`scenario-config-options` for the full list of settings available in +scenario XML files. + +For scenario XML files, you need to remove the obsolete metadata in those files. You can use +the following XML transformation (in XSLT) for this purpose: + +.. code-block:: xml + + + + + + + + + + + + + + + +After saving the snippet above to a file (e.g., ``remove_metadata.xsl``), you +can use ``xsltproc`` to clean and transform your own scenario XML file: + +.. code-block:: bash + + $ xsltproc -o remove_metadata.xsl + +New Configuration Options +========================= + +The following element is added to scenario XML files in v2.4: + + - :option:`hv.FEATURES.ENFORCE_TURNOFF_AC` + +To upgrade a v2.3-compliant scenario XML file, you can use the following XML +transformation. The indentation in this transformation is carefully tweaked for +the best indentation in converted XML files. + +.. code-block:: xml + + + + + + + + + + + y + + + + + + + + + + + +Build Commands +============== + +We recommend you update the usage of variables ``BOARD_FILE`` and +``SCENARIO_FILE``, which are being deprecated, and ``RELEASE``: + + - ``BOARD_FILE`` should be replaced with ``BOARD``. You should not specify + ``BOARD`` and ``BOARD_FILE`` at the same time. + - Similarly, ``SCENARIO_FILE`` should be replaced with ``SCENARIO``. + - The value of ``RELEASE`` should be either ``y`` (previously was ``1``) or + ``n`` (previously was ``0``).
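Putting the variable changes above together, a typical invocation now looks like the following (the board name, scenario name, and file paths here are placeholders; substitute your own):

```shell
# New-style build: BOARD/SCENARIO accept a name or an XML path; RELEASE is y/n.
make BOARD=my_board SCENARIO=industry RELEASE=n hypervisor

# Equivalent, pointing BOARD/SCENARIO at board and scenario XML files directly:
make BOARD=./my_board.xml SCENARIO=./my_scenario.xml RELEASE=y hypervisor
```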
+ +``BOARD_FILE`` and ``SCENARIO_FILE`` can still be used but will take effect +only if ``BOARD`` and ``SCENARIO`` are not defined. They will be deprecated in +a future release. + +Patches on Generated Sources +============================ + +The C files generated from board and scenario XML files have been removed from the +repository in v2.4. Instead they will be generated in the build output when building the +hypervisor. + +Typically you should be able to customize your scenario by modifying the +scenario XML file rather than the generated files directly. But if that is not +possible, you can still register one or more patches that will be applied to +the generated files by following the instructions in +:ref:`acrn_makefile_targets`. + +Modifying generated files is not a recommended practice. +If you find a configuration that is not flexible enough to meet your +needs, please let us know by sending mail to `the acrn-dev mailing +list `_ or submitting a +`GitHub issue `_. + +Document Updates +**************** + +With the changes to ACRN configuration noted above, we made substantial updates +to the ACRN documentation around configuration and options, as listed here: + +.. rst-class:: rst-columns2 + +* :ref:`hv-config` +* :ref:`scenario-config-options` +* :ref:`acrn_configuration_tool` +* :ref:`vuart_config` +* :ref:`getting-started-building` +* :ref:`acrn-dm_parameters` +* :ref:`kernel-parameters` + +Additional new or updated reference documents are also available, including: + +.. rst-class:: rst-columns2 + +* :ref:`rt_industry_ubuntu_setup` +* :ref:`setup_openstack_libvirt` +* :ref:`using_windows_as_uos` + +We've also made edits throughout the documentation to improve clarity, +formatting, and presentation.
+ +Deprivileged Boot Mode Support +============================== + +Because we dropped deprivileged boot mode support (in v2.3), we also +switched our Service VM of choice away from Clear Linux and have +removed Clear Linux-specific tutorials. Deleted documents are still +available in the `version-specific v2.1 documentation +`_. + + +Fixed Issues Details +******************** + +- :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected +- :acrn-issue:`5672` - [EHL][v2.4][config_tools] Pop error message while config multi_ivshmem_device. +- :acrn-issue:`5689` - [EHL][SBL] copy GPA error when booting zephyr as pre-launched VM +- :acrn-issue:`5712` - [CFL][EHL][Hybrid-rt][WAAG]Post Launch WAAG with USB_Mediator-USB3.0 flash disk/SSD with USB3.0 port .waag cannot access USB mass storage +- :acrn-issue:`5717` - [WaaG Ivshmem] windows ivshmem driver does not work with hv land ivshmem +- :acrn-issue:`5719` - [EHL][[Hybrid RT] it will pop some warning messages while launch vm +- :acrn-issue:`5736` - Launch script: Remove --pm_notify_channel uart parameter in launch script +- :acrn-issue:`5772` - The `RELEASE` variable is not correctly handled +- :acrn-issue:`5778` - [EHL][v2.4] Failed to build hv with hypervisor_tools_default_setting _for newboard +- :acrn-issue:`5798` - [EHL][V2.4][[Fusa Partition] cannot disable AC after modify AC configuration in Kconfig +- :acrn-issue:`5802` - [EHL][syzkaller]HV crash with info " rcu detected stall in corrupted" during fuzzing testing +- :acrn-issue:`5806` - [TGL][PTCM]Cache was not locked after post-RTVM power off and restart +- :acrn-issue:`5818` - [EHL][v2.4_rc1] Failed to boot up WAAG randomly +- :acrn-issue:`5863` - config-tools: loosen IVSHMEM_REGION restriction in schema + +Known Issues +************ + +- :acrn-issue:`5369` - [TGL][qemu] Cannot launch qemu on TGL +- :acrn-issue:`5705` - [WindowsGuest] Less memory in the virtual machine than the initialization +- :acrn-issue:`5879` - hybrid_rt scenario does not work 
with large initrd in pre-launched VM +- :acrn-issue:`5888` - Unable to launch vm at the second time with pty,/run/acrn/life_mngr_$vm_name parameter added in the launch script diff --git a/doc/scripts/configdoc.xsl b/doc/scripts/configdoc.xsl index 2bf7e654c..c98b70edc 100644 --- a/doc/scripts/configdoc.xsl +++ b/doc/scripts/configdoc.xsl @@ -4,10 +4,13 @@ version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:xs="http://www.w3.org/2001/XMLSchema"> - + - - + + + + + @@ -50,28 +53,29 @@ - - - + + + - + - + + - + - - + + @@ -81,66 +85,68 @@ - + - + - - + - + - - - + + - - - - + + + + + - - + + - - + + - - + - + - + 1 @@ -149,14 +155,14 @@ - + 1 The ** - + ** option is @@ -170,35 +176,57 @@ - + occurrence s - + to - + occurrences . - + - + + + + + + + *(Optional)* + + + + + + + + + + + + + + + + - - + + - + @@ -208,69 +236,70 @@ --> - - + + - + - - + + - - - + + + - + - + - + - + - + - + + .. option:: - - + + - - + + - - + + - + - + - - + + - + diff --git a/doc/scripts/genrest.py b/doc/scripts/genrest.py deleted file mode 100644 index 576ed1a54..000000000 --- a/doc/scripts/genrest.py +++ /dev/null @@ -1,443 +0,0 @@ -# Copyright (c) 2017, Intel Corporation -# SPDX-License-Identifier: Apache-2.0 -# Generates a Kconfig symbol reference in RST format, with a separate -# CONFIG_FOO.rst file for each symbol, and an alphabetical index with links in -# index.rst. - -import errno -import os -import sys -import textwrap - -import kconfiglib - - -def rst_link(sc): - # Returns an RST link (string) for the symbol/choice 'sc', or the normal - # Kconfig expression format (e.g. just the name) for 'sc' if it can't be - # turned into a link. 
- - if isinstance(sc, kconfiglib.Symbol): - # Skip constant and undefined symbols by checking if expr.nodes is - # empty - if sc.nodes: - # The "\ " avoids RST issues for !CONFIG_FOO -- see - # http://docutils.sourceforge.net/docs/ref/rst/restructuredtext.html#character-level-inline-markup - return r"\ :option:`{0} `".format(sc.name) - - elif isinstance(sc, kconfiglib.Choice): - # Choices appear as dependencies of choice symbols. - # - # Use a :ref: instead of an :option:. With an :option:, we'd have to have - # an '.. option::' in the choice reference page as well. That would make - # the internal choice ID show up in the documentation. - # - # Note that the first pair of <...> is non-syntactic here. We just display - # choices links within <> in the documentation. - return r"\ :ref:`<{}> <{}>`" \ - .format(choice_desc(sc), choice_id(sc)) - - # Can't turn 'sc' into a link. Use the standard Kconfig format. - return kconfiglib.standard_sc_expr_str(sc) - - -def expr_str(expr): - # Returns the Kconfig representation of 'expr', with symbols/choices turned - # into RST links - - return kconfiglib.expr_str(expr, rst_link) - - -INDEX_RST_HEADER = """.. _configuration: - -Kconfig Symbol Reference -######################## - -Introduction -************ - -Kconfig files describe the configuration symbols supported in the build -system, the logical organization and structure that group the symbols in menus -and sub-menus, and the relationships between the different configuration -symbols that govern the valid configuration combinations. - -The Kconfig files are distributed across the build directory tree. The files -are organized based on their common characteristics and on what new symbols -they add to the configuration menus. - -The configuration options' information below is extracted directly from -:program:`Kconfig`. Click on -the option name in the table below for detailed information about each option. - -Supported Options -***************** - -.. 
list-table:: Alphabetized Index of Configuration Options - :header-rows: 1 - - * - Kconfig Symbol - - Description -""" - -def write_kconfig_rst(): - # The "main" function. Writes index.rst and the symbol RST files. - - if len(sys.argv) != 3: - print("usage: {} ", file=sys.stderr) - sys.exit(1) - - kconf = kconfiglib.Kconfig(sys.argv[1]) - out_dir = sys.argv[2] - - # String with the RST for the index page - index_rst = INDEX_RST_HEADER - - # Sort the symbols by name so that they end up in sorted order in index.rst - for sym in sorted(kconf.unique_defined_syms, key=lambda sym: sym.name): - # Write an RST file for the symbol - write_sym_rst(sym, out_dir) - - # Add an index entry for the symbol that links to its RST file. Also - # list its prompt(s), if any. (A symbol can have multiple prompts if it - # has multiple definitions.) - index_rst += " * - :option:`CONFIG_{}`\n - {}\n".format( - sym.name, - " / ".join(node.prompt[0] - for node in sym.nodes if node.prompt)) - - for choice in kconf.unique_choices: - # Write an RST file for the choice - write_choice_rst(choice, out_dir) - - write_if_updated(os.path.join(out_dir, "index.rst"), index_rst) - - -def write_sym_rst(sym, out_dir): - # Writes documentation for 'sym' to /CONFIG_.rst - - write_if_updated(os.path.join(out_dir, "CONFIG_{}.rst".format(sym.name)), - sym_header_rst(sym) + - help_rst(sym) + - direct_deps_rst(sym) + - defaults_rst(sym) + - select_imply_rst(sym) + - selecting_implying_rst(sym) + - kconfig_definition_rst(sym)) - - -def write_choice_rst(choice, out_dir): - # Writes documentation for 'choice' to /choice_.rst, where - # is the index of the choice in kconf.choices (where choices appear in the - # same order as in the Kconfig files) - - write_if_updated(os.path.join(out_dir, choice_id(choice) + ".rst"), - choice_header_rst(choice) + - help_rst(choice) + - direct_deps_rst(choice) + - defaults_rst(choice) + - choice_syms_rst(choice) + - kconfig_definition_rst(choice)) - - -def sym_header_rst(sym): - # 
Returns RST that appears at the top of symbol reference pages - - # - :orphan: suppresses warnings for the symbol RST files not being - # included in any toctree - # - # - '.. title::' sets the title of the document (e.g. ). This seems - # to be poorly documented at the moment. - return ":orphan:\n\n" \ - ".. title:: {0}\n\n" \ - ".. option:: CONFIG_{0}\n\n" \ - "{1}\n\n" \ - "Type: ``{2}``\n\n" \ - .format(sym.name, prompt_rst(sym), - kconfiglib.TYPE_TO_STR[sym.type]) - - -def choice_header_rst(choice): - # Returns RST that appears at the top of choice reference pages - - return ":orphan:\n\n" \ - ".. title:: {0}\n\n" \ - ".. _{1}:\n\n" \ - ".. describe:: {0}\n\n" \ - "{2}\n\n" \ - "Type: ``{3}``\n\n" \ - .format(choice_desc(choice), choice_id(choice), - prompt_rst(choice), kconfiglib.TYPE_TO_STR[choice.type]) - - -def prompt_rst(sc): - # Returns RST that lists the prompts of 'sc' (symbol or choice) - - return "\n\n".join("*{}*".format(node.prompt[0]) - for node in sc.nodes if node.prompt) \ - or "*(No prompt -- not directly user assignable.)*" - - -def help_rst(sc): - # Returns RST that lists the help text(s) of 'sc' (symbol or choice). - # Symbols and choices with multiple definitions can have multiple help - # texts. - - rst = "" - - for node in sc.nodes: - if node.help is not None: - rst += "Help\n" \ - "====\n\n" \ - "{}\n\n" \ - .format(node.help) - - return rst - - -def direct_deps_rst(sc): - # Returns RST that lists the direct dependencies of 'sc' (symbol or choice) - - if sc.direct_dep is sc.kconfig.y: - return "" - - return "Direct dependencies\n" \ - "===================\n\n" \ - "{}\n\n" \ - "*(Includes any dependencies from if's and menus.)*\n\n" \ - .format(expr_str(sc.direct_dep)) - - -def defaults_rst(sc): - # Returns RST that lists the 'default' properties of 'sc' (symbol or - # choice) - - if isinstance(sc, kconfiglib.Symbol) and sc.choice: - # 'default's on choice symbols have no effect (and generate a warning). 
- # The implicit value hint below would be misleading as well. - return "" - - rst = "Defaults\n" \ - "========\n\n" - - if sc.defaults: - for value, cond in sc.defaults: - rst += "- " + expr_str(value) - if cond is not sc.kconfig.y: - rst += " if " + expr_str(cond) - rst += "\n" - - else: - rst += "No defaults. Implicitly defaults to " - - if isinstance(sc, kconfiglib.Choice): - rst += "the first (visible) choice option.\n" - elif sc.orig_type in (kconfiglib.BOOL, kconfiglib.TRISTATE): - rst += "``n``.\n" - else: - # This is accurate even for int/hex symbols, though an active - # 'range' might clamp the value (which is then treated as zero) - rst += "the empty string.\n" - - return rst + "\n" - - -def choice_syms_rst(choice): - # Returns RST that lists the symbols contained in the choice - - if not choice.syms: - return "" - - rst = "Choice options\n" \ - "==============\n\n" - - for sym in choice.syms: - # Generates a link - rst += "- {}\n".format(expr_str(sym)) - - return rst + "\n" - - -def select_imply_rst(sym): - # Returns RST that lists the symbols 'select'ed or 'imply'd by the symbol - - rst = "" - - def add_select_imply_rst(type_str, lst): - # Adds RST that lists the selects/implies from 'lst', which holds - # (<symbol>, <condition>) tuples, if any. Also adds a heading derived - # from 'type_str' if there any selects/implies. 
- - nonlocal rst - - if lst: - heading = "Symbols {} by this symbol".format(type_str) - rst += "{}\n{}\n\n".format(heading, len(heading)*"=") - - for select, cond in lst: - rst += "- " + rst_link(select) - if cond is not sym.kconfig.y: - rst += " if " + expr_str(cond) - rst += "\n" - - rst += "\n" - - add_select_imply_rst("selected", sym.selects) - add_select_imply_rst("implied", sym.implies) - - return rst - - -def selecting_implying_rst(sym): - # Returns RST that lists the symbols that are 'select'ing or 'imply'ing the - # symbol - - rst = "" - - def add_selecting_implying_rst(type_str, expr): - # Writes a link for each symbol that selects the symbol (if 'expr' is - # sym.rev_dep) or each symbol that imply's the symbol (if 'expr' is - # sym.weak_rev_dep). Also adds a heading at the top derived from - # type_str ("select"/"imply"), if there are any selecting/implying - # symbols. - - nonlocal rst - - if expr is not sym.kconfig.n: - heading = "Symbols that {} this symbol".format(type_str) - rst += "{}\n{}\n\n".format(heading, len(heading)*"=") - - # The reverse dependencies from each select/imply are ORed together - for select in kconfiglib.split_expr(expr, kconfiglib.OR): - # - 'select/imply A if B' turns into A && B - # - 'select/imply A' just turns into A - # - # In both cases, we can split on AND and pick the first - # operand. - - rst += "- {}\n".format(rst_link( - kconfiglib.split_expr(select, kconfiglib.AND)[0])) - - rst += "\n" - - add_selecting_implying_rst("select", sym.rev_dep) - add_selecting_implying_rst("imply", sym.weak_rev_dep) - - return rst - - -def kconfig_definition_rst(sc): - # Returns RST that lists the Kconfig definition location, include path, - # menu path, and Kconfig definition for each node (definition location) of - # 'sc' (symbol or choice) - - # Fancy Unicode arrow. Added in '93, so ought to be pretty safe. 
- arrow = " \N{RIGHTWARDS ARROW} " - - def include_path(node): - if not node.include_path: - # In the top-level Kconfig file - return "" - - return "Included via {}\n\n".format( - arrow.join("``{}:{}``".format(filename, linenr) - for filename, linenr in node.include_path)) - - def menu_path(node): - path = "" - - while True: - # This excludes indented submenus created in the menuconfig - # interface when items depend on the preceding symbol. - # is_menuconfig means anything that would be shown as a separate - # menu (not indented): proper 'menu's, menuconfig symbols, and - # choices. - node = node.parent - while not node.is_menuconfig: - node = node.parent - - if node is node.kconfig.top_node: - break - - path = arrow + node.prompt[0] + path - - return "(top menu)" + path - - heading = "Kconfig definition" - if len(sc.nodes) > 1: heading += "s" - rst = "{}\n{}\n\n".format(heading, len(heading)*"=") - - rst += ".. highlight:: kconfig" - - for node in sc.nodes: - rst += "\n\n" \ - "At ``{}:{}``\n\n" \ - "{}" \ - "Menu path: {}\n\n" \ - ".. parsed-literal::\n\n{}" \ - .format(node.filename, node.linenr, - include_path(node), menu_path(node), - textwrap.indent(node.custom_str(rst_link), 4*" ")) - - # Not the last node? - if node is not sc.nodes[-1]: - # Add a horizontal line between multiple definitions - rst += "\n\n----" - - rst += "\n\n*(Definitions include propagated dependencies, " \ - "including from if's and menus.)*" - - return rst - - -def choice_id(choice): - # Returns "choice_<n>", where <n> is the index of the choice in the Kconfig - # files. The choice that appears first has index 0, the next one index 1, - # etc. - # - # This gives each choice a unique ID, which is used to generate its RST - # filename and in cross-references. Choices (usually) don't have names, so - # we can't use that, and the prompt isn't guaranteed to be unique. 
- - # Pretty slow, but fast enough - return "choice_{}".format(choice.kconfig.choices.index(choice)) - - -def choice_desc(choice): - # Returns a description of the choice, used as the title of choice - # reference pages and in link texts. The format is - # "choice <name, if any>: <prompt text>" - - desc = "choice" - - if choice.name: - desc += " " + choice.name - - # The choice might be defined in multiple locations. Use the prompt from - # the first location that has a prompt. - for node in choice.nodes: - if node.prompt: - desc += ": " + node.prompt[0] - break - - return desc - - -def write_if_updated(filename, s): - # Writes 's' as the contents of 'filename', but only if it differs from the - # current contents of the file. This avoids unnecessary timestamp updates, - # which trigger documentation rebuilds. - - try: - with open(filename, 'r', encoding='utf-8') as f: - if s == f.read(): - return - except OSError as e: - if e.errno != errno.ENOENT: - raise - - with open(filename, "w", encoding='utf-8') as f: - f.write(s) - - -if __name__ == "__main__": - write_kconfig_rst() diff --git a/doc/scripts/requirements.txt b/doc/scripts/requirements.txt index f4fcef15c..686b3decf 100644 --- a/doc/scripts/requirements.txt +++ b/doc/scripts/requirements.txt @@ -2,5 +2,4 @@ breathe==4.23.0 sphinx==3.2.1 docutils==0.16 sphinx_rtd_theme==0.5.0 -kconfiglib==14.1.0 sphinx-tabs==1.3.0 diff --git a/doc/static/acrn-custom.css b/doc/static/acrn-custom.css index b142f8a33..4b244cba5 100644 --- a/doc/static/acrn-custom.css +++ b/doc/static/acrn-custom.css @@ -30,6 +30,11 @@ font-weight: bold; } +/* :option: link color */ +a code span.pre { + color: #2980b9; +} + /* tweak doc version selection */ .rst-versions { position: static; diff --git a/doc/static/downloads/build_acrn_ovmf.sh b/doc/static/downloads/build_acrn_ovmf.sh new file mode 100644 index 000000000..7377ee6ca --- /dev/null +++ b/doc/static/downloads/build_acrn_ovmf.sh @@ -0,0 +1,220 @@ +#!/bin/bash +# Copyright (C) 2021 
Intel Corporation.
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# PREREQUISITES:
+# 1) Get your specific "IntelGopDriver.efi" and "Vbt.bin"
+#    from your BIOS vendor
+# 2) Install Docker on your host machine and allow non-root users
+#    For Ubuntu: https://docs.docker.com/engine/install/ubuntu/
+#    To enable non-root users: https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user
+# 3) If you are working behind a proxy, create a file named
+#    "proxy.conf" in ${your_working_directory} with
+#    configurations like below:
+#      Acquire::http::Proxy "http://x.y.z:port1";
+#      Acquire::https::Proxy "https://x.y.z:port2";
+#      Acquire::ftp::Proxy "ftp://x.y.z:port3";
+#
+# HOWTO:
+# 1) mkdir ${your_working_directory}
+# 2) cd ${your_working_directory}
+# 3) mkdir gop
+# 4) cp /path/to/IntelGopDriver.efi /path/to/Vbt.bin gop
+# 5) cp /path/to/build_acrn_ovmf.sh ${your_working_directory}
+# 6) ./build_acrn_ovmf.sh
+#
+# OUTPUT: ${your_working_directory}/acrn-edk2/Build/OvmfX64/DEBUG_GCC5/FV/OVMF.fd
+#
+# For more information, run ./build_acrn_ovmf.sh -h
+#
+
+gop_bin_dir="./gop"
+docker_image_name="ubuntu:ovmf.16.04"
+proxy_conf="proxy.conf"
+acrn_ver="latest"
+
+if [ ! -x "$(command -v docker)" ]; then
+    echo "Install Docker first:"
+    echo "If you are using Ubuntu, you can refer to: https://docs.docker.com/engine/install/ubuntu/"
+    exit
+fi
+
+if [ ! -d "${gop_bin_dir}" ]; then
+    mkdir ${gop_bin_dir}
+    echo "Copy IntelGopDriver.efi and Vbt.bin to ${gop_bin_dir}"
+    exit
+fi
+
+if [ ! -f "${gop_bin_dir}/IntelGopDriver.efi" ]; then
+    echo "Copy IntelGopDriver.efi to ${gop_bin_dir}"
+    exit
+fi
+
+if [ ! -f "${gop_bin_dir}/Vbt.bin" ]; then
+    echo "Copy Vbt.bin to ${gop_bin_dir}"
+    exit
+fi
+
+if [ ! -f "${proxy_conf}" ]; then
+    touch "${proxy_conf}"
+fi
+
+usage()
+{
+    echo "$0 [-v ver] [-i] [-s] [-h]"
+    echo "  -v ver: The release version of ACRN, e.g. 
2.3" + echo " -i: Delete the existing docker image ${docker_image_name} and re-create it" + echo " -s: Delete the existing acrn-edk2 source code and re-download/re-patch it" + echo " -h: Show this help" + exit +} + +re_download=0 +re_create_image=0 + +while getopts "hisv:" opt +do + case "${opt}" in + h) + usage + ;; + i) + re_create_image=1 + ;; + s) + re_download=1 + ;; + v) + acrn_ver=${OPTARG} + ;; + ?) + echo "${OPTARG}" + ;; + esac +done +shift $((OPTIND-1)) + +if [[ "${re_create_image}" -eq 1 ]]; then + if [[ "$(docker images -q ${docker_image_name} 2> /dev/null)" != "" ]]; then + echo "====================================================================" + echo "Deleting the old Docker image ${docker_image_name} ..." + echo "====================================================================" + docker image rm -f "${docker_image_name}" + fi +fi + +if [[ "${re_download}" -eq 1 ]]; then + echo "====================================================================" + echo "Deleting the old acrn-edk2 source code ..." + echo "====================================================================" + sudo rm -rf acrn-edk2 +fi + +create_acrn_edk2_workspace() +{ + echo "====================================================================" + echo "Downloading & patching acrn_edk2 source code ..." + echo "====================================================================" + + [ -d acrn-edk2 ] && sudo rm -rf acrn-edk2 + + git clone https://github.com/projectacrn/acrn-edk2.git + if [ $? -ne 0 ]; then + echo "git clone acrn-edk2 failed" + return 1 + fi + + cd acrn-edk2 + git submodule update --init CryptoPkg/Library/OpensslLib/openssl + if [ $? -ne 0 ]; then + echo "git submodule acrn-edk2 failed" + return 1 + fi + + if [ "${acrn_ver}" != "latest" ]; then + git checkout --recurse-submodules -b "v${acrn_ver}" "ovmf-acrn-v${acrn_ver}" + if [ $? 
-ne 0 ]; then + echo "git checkout --recurse-submodules -b v${acrn_ver} ovmf-acrn-v${acrn_ver} failed" + return 1 + fi + fi + + wget -q https://projectacrn.github.io/${acrn_ver}/_static/downloads/Use-the-default-vbt-released-with-GOP-driver.patch + if [ $? -ne 0 ]; then + echo "Downloading Use-the-default-vbt-released-with-GOP-driver.patch failed" + return 1 + fi + + wget -q https://projectacrn.github.io/${acrn_ver}/_static/downloads/Integrate-IntelGopDriver-into-OVMF.patch + if [ $? -ne 0 ]; then + echo "Downloading Integrate-IntelGopDriver-into-OVMF.patch failed" + return 1 + fi + + git am --keep-cr Use-the-default-vbt-released-with-GOP-driver.patch + if [ $? -ne 0 ]; then + echo "Apply Use-the-default-vbt-released-with-GOP-driver.patch failed" + return 1 + fi + + git am --keep-cr Integrate-IntelGopDriver-into-OVMF.patch + if [ $? -ne 0 ]; then + echo "Apply Integrate-IntelGopDriver-into-OVMF.patch failed" + return 1 + fi + + return 0 +} + +create_docker_image() +{ + echo "====================================================================" + echo "Creating Docker image ..." + echo "====================================================================" + + cat > Dockerfile.ovmf <<EOF +FROM ubuntu:16.04 + +WORKDIR /root/acrn + +COPY ${proxy_conf} /etc/apt/apt.conf.d/proxy.conf +RUN apt-get update && apt-get install -y vim build-essential uuid-dev iasl git gcc-5 nasm python-dev +EOF + + docker build -t "${docker_image_name}" -f Dockerfile.ovmf . + rm Dockerfile.ovmf +} + +if [[ "$(docker images -q ${docker_image_name} 2> /dev/null)" == "" ]]; then + create_docker_image +fi + +if [ ! -d acrn-edk2 ]; then + create_acrn_edk2_workspace + if [ $? 
-ne 0 ]; then + echo "Download/patch acrn-edk2 failed" + exit + fi +else + cd acrn-edk2 +fi + +cp -f ../${gop_bin_dir}/IntelGopDriver.efi OvmfPkg/IntelGop/IntelGopDriver.efi +cp -f ../${gop_bin_dir}/Vbt.bin OvmfPkg/Vbt/Vbt.bin + +source edksetup.sh + +sed -i 's:^ACTIVE_PLATFORM\s*=\s*\w*/\w*\.dsc*:ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc:g' Conf/target.txt +sed -i 's:^TARGET_ARCH\s*=\s*\w*:TARGET_ARCH = X64:g' Conf/target.txt +sed -i 's:^TOOL_CHAIN_TAG\s*=\s*\w*:TOOL_CHAIN_TAG = GCC5:g' Conf/target.txt + +cd .. + +docker run \ + -ti \ + --rm \ + -w $PWD/acrn-edk2 \ + --privileged=true \ + -v $PWD:$PWD \ + ${docker_image_name} \ + /bin/bash -c "source edksetup.sh && make -C BaseTools && build -DFD_SIZE_2MB -DDEBUG_ON_SERIAL_PORT=TRUE" diff --git a/doc/substitutions.txt b/doc/substitutions.txt index 7540de0d4..d247d2d24 100644 --- a/doc/substitutions.txt +++ b/doc/substitutions.txt @@ -20,3 +20,4 @@ .. |check| unicode:: U+02714 .. HEAVY CHECK MARK :rtrim: .. |oplus| unicode:: U+02295 .. CIRCLED PLUS SIGN +.. |rarr| unicode:: U+02192 .. RIGHTWARDS ARROW diff --git a/doc/tutorials/acrn-config-details.txt b/doc/tutorials/acrn-config-details.txt index 4674bb57c..6eb5b94e1 100644 --- a/doc/tutorials/acrn-config-details.txt +++ b/doc/tutorials/acrn-config-details.txt @@ -95,6 +95,12 @@ ``LOW_RAM_SIZE`` (a child node of ``MEMORY``): Specify the size of the RAM region below address 0x10000, starting from address 0x0. +``SOS_RAM_SIZE`` (a child node of ``MEMORY``): + Specify the size of the Service OS VM RAM region. + +``UOS_RAM_SIZE`` (a child node of ``MEMORY``): + Specify the size of the User OS VM RAM region. + ``PLATFORM_RAM_SIZE`` (a child node of ``MEMORY``): Specify the size of the physical platform RAM region. diff --git a/doc/tutorials/acrn-secure-boot-with-grub.rst b/doc/tutorials/acrn-secure-boot-with-grub.rst index 737f89fc2..daf4b47ba 100644 --- a/doc/tutorials/acrn-secure-boot-with-grub.rst +++ b/doc/tutorials/acrn-secure-boot-with-grub.rst @@ -1,6 +1,6 @@ .. 
_how-to-enable-acrn-secure-boot-with-grub: -Enable ACRN Secure Boot with GRUB +Enable ACRN Secure Boot With GRUB ################################# This document shows how to enable ACRN secure boot with GRUB including: @@ -243,14 +243,14 @@ Creating UEFI Secure Boot Key The keys to be enrolled in UEFI firmware: :file:`PK.der`, :file:`KEK.der`, :file:`db.der`. The keys to sign bootloader image: :file:`grubx64.efi`, :file:`db.key` , :file:`db.crt`. -Sign GRUB Image With ``db`` Key -================================ +Sign GRUB Image With db Key +=========================== sbsign --key db.key --cert db.crt path/to/grubx64.efi :file:`grubx64.efi.signed` will be created, it will be your bootloader. -Enroll UEFI Keys To UEFI Firmware +Enroll UEFI Keys to UEFI Firmware ================================= Enroll ``PK`` (:file:`PK.der`), ``KEK`` (:file:`KEK.der`) and ``db`` diff --git a/doc/tutorials/acrn_configuration_tool.rst b/doc/tutorials/acrn_configuration_tool.rst index 050a61f35..bd42c9928 100644 --- a/doc/tutorials/acrn_configuration_tool.rst +++ b/doc/tutorials/acrn_configuration_tool.rst @@ -1,114 +1,287 @@ .. _acrn_configuration_tool: -ACRN Configuration Tool -####################### +Introduction to ACRN Configuration +################################## -The ACRN configuration tool is designed for System Integrators / Tier 1s to -customize ACRN to meet their own needs. It consists of two tools, the -``Kconfig`` tool and the ``acrn-config`` tool. The latter allows users to -provision VMs via a web interface and configure the hypervisor from XML -files at build time. +ACRN configuration is designed for System Integrators / Tier 1s to customize +ACRN to meet their own needs. It allows users to adapt ACRN to target boards as +well as configure hypervisor capabilities and provision VMs. -Introduction -************ +ACRN configuration consists of the following key components. -ACRN includes three types of configurations: Hypervisor, Board, and VM. 
Each -is discussed in the following sections. + - Configuration data saved as XML files. + - A configuration toolset that helps users to generate and edit configuration + data. The toolset includes: -Hypervisor configuration -======================== + - A **board inspector** that collects board-specific information on target + machines. + - A **configuration editor** that lets you edit configuration data via a web-based UI. -The hypervisor configuration defines a working scenario and target -board by configuring the hypervisor image features and capabilities such as -setting up the log and the serial port. +The following sections introduce the concepts and tools of ACRN configuration +from the aspects below. -The hypervisor configuration uses the ``Kconfig`` mechanism. The configuration -file is located at ``acrn-hypervisor/hypervisor/arch/x86/Kconfig``. + - :ref:`acrn_config_types` introduces the objectives and main contents of + different types of configuration data. + - :ref:`acrn_config_workflow` overviews the steps to customize ACRN + configuration using the configuration toolset. + - :ref:`acrn_config_data` explains the location and format of configuration + data saved as XML files. + - :ref:`acrn_config_tool_ui` gives detailed instructions on using the + configuration editor. -A board-specific ``defconfig`` file, for example -``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/$(BOARD).config`` -is loaded first; it is the default ``Kconfig`` for the specified board. +.. _acrn_config_types: -Board configuration -=================== - -The board configuration stores board-specific settings referenced by the -ACRN hypervisor. This includes **scenario-relevant** information such as -board settings, root device selection, and the kernel cmdline. It also includes -**scenario-irrelevant** hardware-specific information such as ACPI/PCI -and BDF information. 
The reference board configuration is organized as -``*.c/*.h`` files located in the -``misc/vm_configs/boards/$(BOARD)/`` folder. - -VM configuration -================= - -VM configuration includes **scenario-based** VM configuration -information that is used to describe the characteristics and attributes for -VMs on each user scenario. It also includes **launch script-based** VM -configuration information, where parameters are passed to the device model -to launch post-launched User VMs. - -Scenario based VM configurations are organized as ``*.c/*.h`` files. The -reference scenarios are located in the -``misc/vm_configs/scenarios/$(SCENARIO)/`` folder. -The board-specific configurations on this scenario are stored in the -``misc/vm_configs/scenarios/$(SCENARIO)/$(BOARD)/`` folder. - -User VM launch script samples are located in the -``misc/vm_configs/sample_launch_scripts/`` folder. - -ACRN configuration XMLs +Types of Configurations *********************** -The ACRN configuration includes three kinds of XML files for acrn-config -usage: ``board``, ``scenario``, and ``launch`` XML. All -scenario-irrelevant hardware-specific information for the board -configuration is stored in the ``board`` XML. The XML is generated by -``misc/acrn-config/target/board_parser.py``, which runs on the target -board. The scenario-relevant board and scenario-based VM configurations -are stored in the ``scenario`` XML. The launch script-based VM -configuration is stored in the ``launch`` XML. These two XMLs can be -customized by using the web interface tool at -``misc/acrn-config/config_app/app.py``. End users can load their own -configurations by importing customized XMLs or by saving the -configurations by exporting XMLs. +ACRN includes three types of configurations: board, scenario, and launch. The +following sections briefly describe the objectives and main contents of each +type. 
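All three configuration types are saved as XML files whose root element is ``acrn-config`` (its ``board`` and ``scenario`` attributes are described later in this document). As a minimal sketch of how such a file can be read programmatically, the following uses Python's standard ``xml.etree`` module; the ``board``/``scenario`` attribute values here are made up for illustration:

```python
import xml.etree.ElementTree as ET

# Hypothetical scenario XML content; only the root element and its
# "board"/"scenario" attributes follow the format documented here.
SCENARIO_XML = '<acrn-config board="whl-ipc-i5" scenario="industry"></acrn-config>'

root = ET.fromstring(SCENARIO_XML)
assert root.tag == "acrn-config"          # every configuration file uses this root
print(root.get("board"), root.get("scenario"))
```

In practice you would parse a file (``ET.parse("$(SCENARIO).xml")``) rather than an inline string; the configuration toolset itself validates these files against the XSD schemas under ``misc/config_tools/schema``.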
+Board Configuration
+===================
+
+The board configuration stores hardware-specific information extracted on the
+target platform. It describes the capacity of hardware resources (such as
+processors and memory), platform power states, available devices, and BIOS
+versions. This information is used by the ACRN configuration tool to check
+feature availability and allocate resources among VMs, as well as by the ACRN
+hypervisor to initialize and manage the platform at runtime.
+
+The board configuration is scenario-neutral by nature. Thus, multiple scenario
+configurations can be based on the same board configuration.
+
+Scenario Configuration
+======================
+
+The scenario configuration defines a working scenario by configuring hypervisor
+capabilities and defining VM attributes and resources. You can specify the
+following in the scenario configuration:
+
+ - Hypervisor capabilities
+
+   - Availability and settings of hypervisor features, such as debugging
+     facilities, scheduling algorithm, ivshmem, and security features.
+   - Hardware management capacity of the hypervisor, such as maximum PCI devices
+     and maximum interrupt lines supported.
+   - Memory consumption of the hypervisor, such as the entry point and stack
+     size.
+
+ - VM attributes and resources
+
+   - VM attributes, such as VM names.
+   - Maximum number of VMs supported.
+   - Resources allocated to each VM, such as number of vCPUs, amount of guest
+     memory, and pass-through devices.
+   - Guest OS settings, such as boot protocol and guest kernel parameters.
+   - Settings of virtual devices, such as virtual UARTs.
+
+For pre-launched VMs, the VM attributes and resources are exactly the resources
+allocated to them. For post-launched VMs, the number of vCPUs defines the upper
+limit the Service VM can allocate to them, and settings of virtual devices still
+apply. Other resources are under the control of the Service VM and can be
+dynamically allocated to post-launched VMs.
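To illustrate the kind of consistency checking described above (resources requested in a scenario must fit what the board configuration reports), here is a small sketch. The element names ``vm`` and ``vcpu_num`` are hypothetical stand-ins, not the real ACRN scenario schema; the XSD files under ``misc/config_tools/schema`` define the authoritative format.

```python
import xml.etree.ElementTree as ET

BOARD_CPUS = 4  # assumed processor count reported by the board inspector

# Illustrative scenario fragment with invented element names.
scenario = ET.fromstring(
    '<acrn-config board="sample" scenario="sample">'
    '<vm id="0"><vcpu_num>2</vcpu_num></vm>'
    '<vm id="1"><vcpu_num>2</vcpu_num></vm>'
    '</acrn-config>'
)

# Pre-launched VMs own their resources exclusively, so the
# requested vCPU totals must fit within the physical board.
total = sum(int(vm.findtext("vcpu_num")) for vm in scenario.findall("vm"))
assert total <= BOARD_CPUS, "scenario requests more vCPUs than the board has"
print("vCPU allocation OK:", total, "of", BOARD_CPUS)
```

For post-launched VMs the same number would instead act as an upper limit on what the Service VM may allocate at runtime, as noted above.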
+
+The scenario configuration is used by the ACRN configuration tool to reserve
+sufficient memory for the hypervisor to manage the VMs at build time, as well as
+by the ACRN hypervisor to initialize its capabilities and set up the VMs at
+runtime.
+
+Launch Configuration
+====================
+
+The launch configuration defines the attributes and resources of a
+post-launched VM. The main contents are similar to the VM attributes and
+resources in the scenario configuration. The launch configuration is used to
+generate shell scripts that invoke ``acrn-dm`` to create post-launched VMs.
+Unlike board and scenario configurations, which are used at build time or by the
+ACRN hypervisor, launch configurations are used dynamically in the Service VM.
+
+.. _acrn_config_workflow:
+
+Using ACRN Configuration Toolset
+********************************
+
+The ACRN configuration toolset is provided to create and edit configuration
+data. The toolset can be found in ``misc/config_tools``.
+
+Here is the workflow to customize ACRN configurations using the configuration
+toolset.
+
+#. Get the board info.
+
+   a. Set up a native Linux environment on the target board. Make sure the
+      following tools are installed and the kernel boots with the following
+      command line options.
+
+      | **Native Linux requirement:**
+      | **Release:** Ubuntu 18.04+
+      | **Tools:** cpuid, rdmsr, lspci, dmidecode (optional)
+      | **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable"
+
+   #. Copy the ``target`` directory into the target file system and then run the
+      ``sudo python3 board_parser.py $(BOARD)`` command.
+   #. A ``$(BOARD).xml`` that includes all needed hardware-specific information
+      is generated in the ``./out/`` directory. Here, ``$(BOARD)`` is the
+      specified board name.
+
+#. Customize the configuration to your needs.
+
+   a. Copy ``$(BOARD).xml`` to the host development machine.
+   #. 
Run the ACRN configuration editor (available at + ``misc/config_tools/config_app/app.py``) on the host machine and import + the ``$(BOARD).xml``. Select your working scenario under **Scenario Setting** + and input the desired scenario settings. The tool will do validation checks + on the input based on the ``$(BOARD).xml``. The customized settings can be + exported to your own ``$(SCENARIO).xml``. If you have a customized scenario + XML file, you can also import it to the editor for modification. + #. In ACRN configuration editor, input the launch script parameters for the + post-launched User VM under **Launch Setting**. The editor will validate + the input based on both the ``$(BOARD).xml`` and ``$(SCENARIO).xml`` and then + export settings to your ``$(LAUNCH).xml``. + + .. note:: Refer to :ref:`acrn_config_tool_ui` for more details on + the configuration editor. + +#. Build with your XML files. Refer to :ref:`getting-started-building` to build + the ACRN hypervisor with your XML files on the host machine. + +#. Deploy VMs and run ACRN hypervisor on the target board. + +.. figure:: images/offline_tools_workflow.png + :align: center + + Configuration Workflow + +.. _acrn_makefile_targets: + +Makefile Targets for Configuration +================================== + +In addition to the ``BOARD`` and ``SCENARIO`` variables, ACRN source also +includes the following makefile targets to aid customization. + +.. list-table:: + :widths: 20 50 + :header-rows: 1 + + * - Target + - Description + + * - ``hvdefconfig`` + - Generate configuration files (a bunch of C source files) in the + build directory without building the hypervisor. This target can be used + when you want to customize the configurations based on a predefined + scenario. + + * - ``hvshowconfig`` + - Print the target ``BOARD``, ``SCENARIO`` and build type (debug or + release) of a build. 
+
+   * - ``hvdiffconfig``
+     - After modifying the generated configuration files, you can use this
+       target to generate a patch that shows the differences made.
+
+   * - ``hvapplydiffconfig PATCH=/path/to/patch``
+     - Register a patch to be applied to the generated configuration files
+       every time they are regenerated. The ``PATCH`` variable specifies the
+       path (absolute or relative to the current working directory) of the
+       patch. Multiple patches can be registered by invoking this target
+       multiple times.
+
+The targets ``hvdiffconfig`` and ``hvapplydiffconfig`` are provided for users
+who already have offline patches to the generated configuration files. Prior to
+v2.4, the generated configuration files were also kept in the repository. Some
+users may already have chosen to modify these files directly to customize the
+configurations.
+
+.. note::
+   We highly recommend that new users save and maintain customized
+   configurations in XML, not in patches to generated configuration files.
+
+Here is an example of how to use ``hvdiffconfig`` to generate a patch and save
+it to ``config.patch``.
+
+.. code-block:: console
+
+   acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig
+   ...
+   acrn-hypervisor$ vim build/hypervisor/configs/scenarios/hybrid_rt/pci_dev.c
+   (edit the file manually)
+   acrn-hypervisor$ make hvdiffconfig
+   ...
+   Diff on generated configuration files is available at /path/to/acrn-hypervisor/build/hypervisor/config.patch.
+   To make a patch effective, use 'applydiffconfig PATCH=/path/to/patch' to register it to a build.
+   ...
+   acrn-hypervisor$ cp build/hypervisor/config.patch config.patch
+
+The example below shows how to use ``hvapplydiffconfig`` to apply
+``config.patch`` to a new build.
+
+.. code-block:: console
+
+   acrn-hypervisor$ make clean
+   acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig
+   ...
+   acrn-hypervisor$ make hvapplydiffconfig PATCH=config.patch
+   ...
+   /path/to/acrn-hypervisor/config.patch is registered for build directory /path/to/acrn-hypervisor/build/hypervisor.
+   Registered patches will be applied the next time 'make' is invoked.
+   To unregister a patch, remove it from /path/to/acrn-hypervisor/build/hypervisor/configs/.diffconfig.
+   ...
+   acrn-hypervisor$ make hypervisor
+   ...
+   Applying patch /path/to/acrn-hypervisor/config.patch:
+   patching file scenarios/hybrid_rt/pci_dev.c
+   ...
+
+.. _acrn_config_data:
+
+ACRN Configuration Data
+***********************
+
+ACRN configuration data are saved in three XML files: ``board``, ``scenario``,
+and ``launch`` XML. The ``board`` XML contains the board configuration and is
+generated by the board inspector on the target machine. The ``scenario`` and
+``launch`` XMLs, containing scenario and launch configurations respectively, can
+be customized by using the configuration editor. End users can import customized
+XMLs to load their own configurations and export XMLs to save them.
+
+The predefined XMLs provided by ACRN are located in the ``misc/config_tools/data/``
+directory of the ``acrn-hypervisor`` repo.
+
+Board XML Format
 ================
-The board XMLs are located in the
-``misc/vm_configs/xmls/board-xmls/`` folder.
 The board XML has an ``acrn-config`` root element and a ``board`` attribute:
 
 .. code-block:: xml
 
    <acrn-config board="BOARD">
 
-As an input for the ``acrn-config`` tool, end users do not need to care
-about the format of board XML and should not modify it.
+Board XML files are input to the configuration editor and the build system, and
+are not intended for end users to modify.
 
-Scenario XML format
+Scenario XML Format
 ===================
-The scenario XMLs are located in the
-``misc/vm_configs/xmls/config-xmls/`` folder. 
The -scenario XML has an ``acrn-config`` root element as well as ``board`` -and ``scenario`` attributes: + +The scenario XML has an ``acrn-config`` root element as well as ``board`` and +``scenario`` attributes: .. code-block:: xml <acrn-config board="BOARD" scenario="SCENARIO"> -See :ref:`scenario-config-options` for a full explanation of available scenario XML elements. +See :ref:`scenario-config-options` for a full explanation of available scenario +XML elements. Users are recommended to tweak the configuration data by using +ACRN configuration editor. -Launch XML format +Launch XML Format ================= -The launch XMLs are located in the -``misc/vm_configs/xmls/config-xmls/`` folder. -The launch XML has an ``acrn-config`` root element as well as -``board``, ``scenario`` and ``uos_launcher`` attributes: + +The launch XML has an ``acrn-config`` root element as well as ``board``, +``scenario`` and ``uos_launcher`` attributes: .. code-block:: xml @@ -188,133 +361,20 @@ current scenario has: interface. When ``configurable="0"``, the item does not appear on the interface. -Configuration tool workflow -*************************** - -Hypervisor configuration workflow -================================== - -The hypervisor configuration is based on the ``Kconfig`` -mechanism. Begin by creating a board-specific ``defconfig`` file to -set up the default ``Kconfig`` values for the specified board. -Next, configure the hypervisor build options using the ``make menuconfig`` -graphical interface or ``make defconfig`` to generate -a ``.config`` file. The resulting ``.config`` file is -used by the ACRN build process to create a configured scenario- and -board-specific hypervisor image. - -.. figure:: images/sample_of_defconfig.png - :align: center - - defconfig file sample - -.. figure:: images/GUI_of_menuconfig.png - :align: center - - ``menuconfig`` interface sample - -Refer to :ref:`getting-started-hypervisor-configuration` for detailed -configuration steps. - - -.. 
_vm_config_workflow: - -Board and VM configuration workflow -=================================== - -Python offline tools are provided to configure Board and VM configurations. -The tool source folder is ``misc/acrn-config/``. - -Here is the offline configuration tool workflow: - -#. Get the board info. - - a. Set up a native Linux environment on the target board. - #. Copy the ``target`` folder into the target file system and then run the - ``sudo python3 board_parser.py $(BOARD)`` command. - #. A $(BOARD).xml that includes all needed hardware-specific information - is generated in the ``./out/`` folder. Here, ``$(BOARD)`` is the - specified board name. - - | **Native Linux requirement:** - | **Release:** Ubuntu 18.04+ - | **Tools:** cpuid, rdmsr, lspci, dmidecode (optional) - | **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable" - -#. Customize your needs. - - a. Copy ``$(BOARD).xml`` to the host development machine. - #. Run the ``misc/acrn-config/config_app/app.py`` tool on the host - machine and import the $(BOARD).xml. Select your working scenario under - **Scenario Setting** and input the desired scenario settings. The tool - will do a sanity check on the input based on the $(BOARD).xml. The - customized settings can be exported to your own $(SCENARIO).xml. - #. In the configuration tool UI, input the launch script parameters - for the post-launched User VM under **Launch Setting**. The tool will - sanity check the input based on both the $(BOARD).xml and - $(SCENARIO).xml and then export settings to your $(LAUNCH).xml. - #. The user defined XMLs can be imported by acrn-config for modification. - - .. note:: Refer to :ref:`acrn_config_tool_ui` for more details on - the configuration tool UI. - -#. Auto generate the code. - - Python tools are used to generate configurations in patch format. - The patches are applied to your local ``acrn-hypervisor`` git tree - automatically. - - a. 
Generate a patch for the board-related configuration:: - - cd misc/acrn-config/board_config - python3 board_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml - - Note that this can also be done by clicking **Generate Board SRC** in the acrn-config UI. - - - #. Generate a patch for scenario-based VM configuration:: - - cd misc/acrn-config/scenario_config - python3 scenario_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml - - Note that this can also be done by clicking **Generate Scenario SRC** in the acrn-config UI. - - #. Generate the launch script for the specified - post-launched User VM:: - - cd misc/acrn-config/launch_config - python3 launch_cfg_gen.py --board $(BOARD).xml --scenario $(SCENARIO).xml --launch $(LAUNCH).xml --uosid xx - - Note that this can also be done by clicking **Generate Launch Script** in the acrn-config UI. - -#. Re-build the ACRN hypervisor. Refer to - :ref:`getting-started-building` to re-build the ACRN hypervisor on the host machine. - -#. Deploy VMs and run ACRN hypervisor on the target board. - -.. figure:: images/offline_tools_workflow.png - :align: center - - Offline tool workflow - - .. _acrn_config_tool_ui: -Use the ACRN configuration app -****************************** +Use the ACRN Configuration Editor +********************************* -The ACRN configuration app is a web user interface application that performs the following: +The ACRN configuration editor provides a web-based user interface for the following: - reads board info -- configures and validates scenario settings -- automatically generates source code for board-related configurations and - scenario-based VM configurations -- configures and validates launch settings +- configures and validates scenario and launch configurations - generates launch scripts for the specified post-launched User VMs. 
-- dynamically creates a new scenario setting and adds or deletes VM settings - in scenario settings -- dynamically creates a new launch setting and adds or deletes User VM - settings in launch settings +- dynamically creates a new scenario configuration and adds or deletes VM + settings in it +- dynamically creates a new launch configuration and adds or deletes User VM + settings in it Prerequisites ============= @@ -322,24 +382,24 @@ Prerequisites .. _get acrn repo guide: https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code -- Clone acrn-hypervisor: +- Clone the ACRN hypervisor repo: .. code-block:: none - $git clone https://github.com/projectacrn/acrn-hypervisor + $ git clone https://github.com/projectacrn/acrn-hypervisor -- Install ACRN configuration app dependencies: +- Install ACRN configuration editor dependencies: .. code-block:: none - $ cd ~/acrn-hypervisor/misc/acrn-config/config_app + $ cd ~/acrn-hypervisor/misc/config_tools/config_app $ sudo pip3 install -r requirements Instructions ============ -#. Launch the ACRN configuration app: +#. Launch the ACRN configuration editor: .. code-block:: none @@ -348,9 +408,9 @@ Instructions #. Open a browser and navigate to the website `<http://127.0.0.1:5001/>`_ automatically, or you may need to visit this website manually. Make sure you can connect to open network from browser - because the app needs to download some JavaScript files. + because the editor needs to download some JavaScript files. - .. note:: The ACRN configuration app is supported on Chrome, Firefox, + .. note:: The ACRN configuration editor is supported on Chrome, Firefox, and Microsoft Edge. Do not use Internet Explorer. The website is shown below: @@ -366,38 +426,37 @@ Instructions .. figure:: images/click_import_board_info_button.png :align: center - #. Upload the board info you have generated from the ACRN config tool. + #.
Upload the board XML you have generated from the ACRN board inspector. - #. After board info is uploaded, you will see the board name from the + #. After the board XML is uploaded, you will see the board name from the Board info list. Select the board name to be configured. .. figure:: images/select_board_info.png :align: center -#. Load or create the scenario setting by selecting among the following: +#. Load or create the scenario configuration by selecting among the following: - Choose a scenario from the **Scenario Setting** menu that lists all user-defined scenarios for the board you selected in the previous step. - - Click the **Create a new scenario** from the **Scenario Setting** - menu to dynamically create a new scenario setting for the current board. + - Click **Create a new scenario** from the **Scenario Setting** menu to + dynamically create a new scenario configuration for the current board. - - Click the **Load a default scenario** from the **Scenario Setting** - menu, and then select one default scenario setting to load a default - scenario setting for the current board. + - Click **Load a default scenario** from the **Scenario Setting** menu, + and then select one default scenario configuration to load a predefined + scenario XML for the current board. - The default scenario configuration XMLs are located at - ``misc/vm_configs/xmls/config-xmls/[board]/``. - We can edit the scenario name when creating or loading a scenario. If the - current scenario name is duplicated with an existing scenario setting - name, rename the current scenario name or overwrite the existing one - after the confirmation message. + The default scenario XMLs are located at + ``misc/config_tools/data/[board]/``. You can edit the scenario name when + creating or loading a scenario. If the current scenario name duplicates + an existing scenario name, rename the current scenario or + overwrite the existing one after the confirmation message. ..
figure:: images/choose_scenario.png :align: center Note that you can also use a customized scenario XML by clicking **Import - XML**. The configuration app automatically directs to the new scenario + XML**. The configuration editor automatically directs to the new scenario XML once the import is complete. #. The configurable items display after one scenario is created, loaded, @@ -421,8 +480,9 @@ Instructions - Click **Remove this VM** in one VM setting to remove the current VM for the scenario setting. - When one VM is added or removed in the scenario setting, the - configuration app reassigns the VM IDs for the remaining VMs by the order of Pre-launched VMs, Service VMs, and Post-launched VMs. + When one VM is added or removed in the scenario, the configuration editor + reassigns the VM IDs for the remaining VMs by the order of Pre-launched VMs, + Service VMs, and Post-launched VMs. .. figure:: images/configure_vm_add.png :align: center @@ -431,12 +491,12 @@ Instructions pop-up model. .. note:: - All customized scenario XMLs will be in user-defined groups, which are - located in ``misc/vm_configs/xmls/config-xmls/[board]/user_defined/``. + All customized scenario XMLs will be in user-defined groups + located in ``misc/config_tools/data/[board]/user_defined/``. - Before saving the scenario XML, the configuration app validates the - configurable items. If errors exist, the configuration app lists all - incorrect configurable items and shows the errors as below: + Before saving the scenario XML, the configuration editor validates the + configurable items. If errors exist, the configuration editor lists all + incorrectly configured items and shows the errors as below: .. figure:: images/err_acrn_configuration.png :align: center @@ -444,50 +504,38 @@ Instructions After the scenario is saved, the page automatically directs to the saved scenario XMLs. Delete the configured scenario by clicking **Export XML** -> **Remove**. -#. 
Click **Generate configuration files** to save the current scenario - setting and then generate files for the board-related configuration - source code and the scenario-based VM configuration source code. - - If **Source Path** in the pop-up model is edited, the source code is - generated into the edited Source Path relative to ``acrn-hypervisor``; - otherwise, source code is generated into default folders and - overwrites the old ones. The board-related configuration source - code is located at - ``misc/vm_configs/boards/[board]/`` and the - scenario-based VM configuration source code is located at - ``misc/vm_configs/scenarios/[scenario]/``. - The **Launch Setting** is quite similar to the **Scenario Setting**: -#. Upload board info or select one board as the current board. +#. Upload board XML or select one board as the current board. -#. Load or create one launch setting by selecting among the following: +#. Load or create one launch configuration by selecting among the following: - Click **Create a new launch script** from the **Launch Setting** menu. - Click **Load a default launch script** from the **Launch Setting** menu. - - Select one launch setting XML file from the menu. + - Select one launch XML from the menu. - - Import the local launch setting XML file by clicking **Import XML**. + - Import a local launch XML by clicking **Import XML**. -#. Select one scenario for the current launch setting from the **Select Scenario** drop-down box. +#. Select one scenario for the current launch configuration from the **Select + Scenario** drop-down box. -#. Configure the items for the current launch setting. +#. Configure the items for the current launch configuration. -#. To dynamically add or remove User VM (UOS) launch scripts: +#. Add or remove User VM (UOS) launch scripts: - Add a UOS launch script by clicking **Configure an UOS below** for the - current launch setting. + current launch configuration. 
- Remove a UOS launch script by clicking **Remove this VM** for the - current launch setting. + current launch configuration. -#. Save the current launch setting to the user-defined XML files by - clicking **Export XML**. The configuration app validates the current - configuration and lists all incorrect configurable items and shows errors. +#. Save the current launch configuration to the user-defined XML files by + clicking **Export XML**. The configuration editor validates the current + configuration and lists all incorrectly configured items. -#. Click **Generate Launch Script** to save the current launch setting and +#. Click **Generate Launch Script** to save the current launch configuration and then generate the launch script. .. figure:: images/generate_launch_script.png diff --git a/doc/tutorials/acrn_on_qemu.rst b/doc/tutorials/acrn_on_qemu.rst index caa5f90d7..654526b12 100644 --- a/doc/tutorials/acrn_on_qemu.rst +++ b/doc/tutorials/acrn_on_qemu.rst @@ -1,6 +1,6 @@ .. _acrn_on_qemu: -Enable ACRN over QEMU/KVM +Enable ACRN Over QEMU/KVM ######################### Goal of this document is to bring-up ACRN as a nested Hypervisor on top of QEMU/KVM @@ -195,7 +195,7 @@ Install ACRN Hypervisor $ virsh destroy ACRNSOS # where ACRNSOS is the virsh domain name. -Service VM Networking updates for User VM +Service VM Networking Updates for User VM ***************************************** Follow these steps to enable networking for the User VM (L2 guest): @@ -232,7 +232,7 @@ Follow these steps to enable networking for the User VM (L2 guest): 4. Restart ACRNSOS guest (L1 guest) to complete the setup and start with bring-up of User VM -Bring-up User VM (L2 Guest) +Bring-Up User VM (L2 Guest) *************************** 1. 
Build the device-model, using ``make devicemodel`` and copy acrn-dm to ACRNSOS guest (L1 guest) directory ``/usr/bin/acrn-dm`` diff --git a/doc/tutorials/cpu_sharing.rst b/doc/tutorials/cpu_sharing.rst index 78227ee9f..73b7932e0 100644 --- a/doc/tutorials/cpu_sharing.rst +++ b/doc/tutorials/cpu_sharing.rst @@ -37,7 +37,7 @@ Scheduling initialization is invoked in the hardware management layer. .. figure:: images/cpu_sharing_api.png :align: center -CPU affinity +CPU Affinity ************* Currently, we do not support vCPU migration; the assignment of vCPU mapping to @@ -64,7 +64,7 @@ Here is an example for affinity: .. figure:: images/cpu_sharing_affinity.png :align: center -Thread object state +Thread Object State ******************* The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED. @@ -128,7 +128,7 @@ and BVT (Borrowed Virtual Time) scheduler. -Scheduler configuration -*********************** +Scheduler Configuration +*********************** * The option in Kconfig decides the only scheduler used in runtime. ``hypervisor/arch/x86/Kconfig`` @@ -159,7 +158,7 @@ The default scheduler is **SCHED_BVT**. - With ``cpu_affinity`` option in acrn-dm. This launches the user VM on a subset of the configured cpu_affinity pCPUs. - + For example, assign physical CPUs 0 and 1 to this VM:: --cpu_affinity 0,1 diff --git a/doc/tutorials/debug.rst b/doc/tutorials/debug.rst index 4afd04ccd..13fa5f35f 100644 --- a/doc/tutorials/debug.rst +++ b/doc/tutorials/debug.rst @@ -15,7 +15,7 @@ full list of commands, or see a summary of available commands by using the ``help`` command within the ACRN shell. -An example +An Example ********** As an example, we'll show how to obtain the interrupts of a passthrough USB device. @@ -54,7 +54,7 @@ ACRN log provides a console log and a mem log for a user to analyze. We can use console log to debug directly, while mem log is a userland tool used to capture an ACRN hypervisor log.
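The ``--cpu_affinity`` option shown above takes a comma-separated list of pCPU IDs. As an illustration only (this helper is hypothetical and not part of acrn-dm), parsing such a list might look like:

```python
def parse_cpu_affinity(value: str) -> list[int]:
    """Parse a comma-separated pCPU list such as "0,1" into sorted unique IDs."""
    pcpus = sorted({int(tok) for tok in value.split(",") if tok.strip()})
    if any(p < 0 for p in pcpus):
        raise ValueError("pCPU IDs must be non-negative")
    return pcpus

# Example: the "--cpu_affinity 0,1" option from the text above
print(parse_cpu_affinity("0,1"))  # → [0, 1]
```

Duplicate IDs are collapsed and the list is sorted, which mirrors the idea that each pCPU in the subset is assigned once.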
-Turn on the logging info +Turn on the Logging Info ======================== ACRN enables a console log by default. @@ -65,7 +65,7 @@ To enable and start the mem log:: $ systemctl start acrnlog -Set and grab log +Set and Grab Log ================ We have six (1-6) log levels for console log and mem log. The following @@ -129,7 +129,7 @@ ACRN trace is a tool running on the Service VM to capture trace data. We can use the existing trace information to analyze, and we can add self-defined tracing to analyze code that we care about. -Using Existing trace event ID to analyze trace +Using Existing Trace Event ID to Analyze Trace ============================================== As an example, we can use the existing vm_exit trace to analyze the @@ -159,7 +159,7 @@ reason and times of each vm_exit after we have done some operations. vmexit summary information -Using Self-defined trace event ID to analyze trace +Using Self-Defined Trace Event ID to Analyze Trace ================================================== For some undefined trace event ID, we can define it by ourselves as diff --git a/doc/tutorials/docbuild.rst b/doc/tutorials/docbuild.rst index 84df0d576..cc34fe673 100644 --- a/doc/tutorials/docbuild.rst +++ b/doc/tutorials/docbuild.rst @@ -1,6 +1,6 @@ .. _acrn_doc: -ACRN documentation generation +ACRN Documentation Generation ############################# These instructions will walk you through generating the Project ACRN's @@ -8,7 +8,7 @@ documentation and publishing it to https://projectacrn.github.io. You can also use these instructions to generate the ACRN documentation on your local system. -Documentation overview +Documentation Overview ********************** Project ACRN content is written using the reStructuredText markup @@ -34,12 +34,15 @@ The project's documentation contains the following items: device-model folders, and from sources in the acrn-kernel repo (as explained later). +.. 
image:: images/doc-gen-flow.png + :align: center + The reStructuredText files are processed by the Sphinx documentation system and use the breathe extension for including the doxygen-generated API material. -Set up the documentation working folders +Set Up the Documentation Working Folders **************************************** You'll need ``git`` installed to get the working folders set up: @@ -118,8 +121,8 @@ repos (though ``https`` clones work too): git config --global user.name "David Developer" git config --global user.email "david.developer@company.com" -Installing the documentation tools -********************************** +Install the Documentation Tools +******************************* Our documentation processing has been tested to run with Python 3.6.3 and these other tools: @@ -170,7 +173,7 @@ And with that you're ready to generate the documentation. doc/scripts/show-versions.py -Documentation presentation theme +Documentation Presentation Theme ******************************** Sphinx supports easy customization of the generated documentation @@ -187,8 +190,8 @@ The ``read-the-docs`` theme is installed as part of the and JavaScript customization found in ``doc/static``, and theme template overrides found in ``doc/_templates``. -Running the documentation processors -************************************ +Run the Documentation Processors +******************************** The ``acrn-hypervisor/doc`` directory has all the ``.rst`` source files, extra tools, and ``Makefile`` for generating a local copy of the ACRN technical @@ -217,8 +220,8 @@ with the command: and use your web browser to open the URL: ``http://localhost:8000``. 
-Publishing content -****************** +Publish Content +*************** If you have merge rights to the projectacrn repo called ``projectacrn.github.io``, you can update the public project documentation @@ -239,12 +242,14 @@ good, you can push directly to the publishing site with: make publish -This will delete everything in the publishing repo's **latest** folder -(in case the new version has deleted files) and push a copy of the -newly-generated HTML content directly to the GitHub pages publishing -repo. The public site at https://projectacrn.github.io will be updated -typically within a few minutes, so it's best to verify the locally -generated HTML before publishing. +This uses git commands to synchronize the new content with what's +already published and will delete files in the publishing repo's +**latest** folder that are no longer needed. New or changed files from +the newly-generated HTML content are added to the GitHub pages +publishing repo. The public site at https://projectacrn.github.io will +be updated by the `GitHub pages system +<https://guides.github.com/features/pages/>`_, typically within a few +minutes. Document Versioning ******************* @@ -293,8 +298,8 @@ of the repo, and add some extra flags to the ``make`` commands: make DOC_TAG=release RELEASE=2.3 html make DOC_TAG=release RELEASE=2.3 publish -Filtering expected warnings -*************************** +Filter Expected Warnings +************************ Alas, there are some known issues with the doxygen/Sphinx/Breathe processing that generates warnings for some constructs, in particular diff --git a/doc/tutorials/enable_ivshmem.rst b/doc/tutorials/enable_ivshmem.rst index 025ddb742..3c2ca186d 100644 --- a/doc/tutorials/enable_ivshmem.rst +++ b/doc/tutorials/enable_ivshmem.rst @@ -1,7 +1,7 @@ .. 
_enable_ivshmem: -Enable Inter-VM Communication Based on ``ivshmem`` -################################################## +Enable Inter-VM Communication Based on Ivshmem +############################################## You can use inter-VM communication based on the ``ivshmem`` dm-land solution or hv-land solution, according to the usage scenario needs. @@ -9,7 +9,7 @@ solution or hv-land solution, according to the usage scenario needs. While both solutions can be used at the same time, VMs using different solutions cannot communicate with each other. -ivshmem dm-land usage +Ivshmem DM-Land Usage ********************* Add this line as an ``acrn-dm`` boot parameter:: @@ -33,15 +33,16 @@ where .. note:: This device can be used with real-time VM (RTVM) as well. -ivshmem hv-land usage +.. _ivshmem-hv: + +Ivshmem HV-Land Usage ********************* -The ``ivshmem`` hv-land solution is disabled by default in ACRN. You -enable it using the :ref:`acrn_configuration_tool` with these steps: +The ``ivshmem`` hv-land solution is disabled by default in ACRN. You can enable +it using the :ref:`ACRN configuration toolset <acrn_config_workflow>` with these +steps: -- Enable ``ivshmem`` hv-land in ACRN XML configuration file. For example, the - XML configuration file for the hybrid_rt scenario on a whl-ipc-i5 board is found in - ``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml`` +- Enable ``ivshmem`` hv-land in the ACRN XML configuration file. - Edit ``IVSHMEM_ENABLED`` to ``y`` in ACRN scenario XML configuration to enable ``ivshmem`` hv-land @@ -64,9 +65,9 @@ enable it using the :ref:`acrn_configuration_tool` with these steps: .. note:: You can define up to eight ``ivshmem`` hv-land shared regions. -- Build the XML configuration, refer to :ref:`getting-started-building` +- Build with the XML configuration; refer to :ref:`getting-started-building`.
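Each ``IVSHMEM_REGION`` entry uses the ``shm_name, shm_size, VM IDs`` format shown above. A minimal sketch of validating one entry follows; the helper is hypothetical (not an ACRN tool) and assumes the hv-land ``hv:/`` naming rule and a power-of-two size in MB:

```python
def parse_ivshmem_region(entry: str):
    """Split an IVSHMEM_REGION value such as "hv:/shm_region_0, 2, 0:2"."""
    name, size_mb, vm_ids = (part.strip() for part in entry.split(","))
    if not name.startswith("hv:/"):
        raise ValueError("hv-land region names must start with 'hv:/'")
    size = int(size_mb)
    if size <= 0 or size & (size - 1):
        raise ValueError("region size in MB should be a power of two")
    # VM IDs sharing the region are separated by colons, e.g. "0:2"
    return name, size, [int(v) for v in vm_ids.split(":")]

print(parse_ivshmem_region("hv:/shm_region_0, 2, 0:2"))
# → ('hv:/shm_region_0', 2, [0, 2])
```

The sample entry is the same one used in the hv-land example in this document.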
-ivshmem notification mechanism +Ivshmem Notification Mechanism ****************************** Notification (doorbell) of ivshmem device allows VMs with ivshmem @@ -95,7 +96,7 @@ to applications. Inter-VM Communication Examples ******************************* -dm-land example +DM-Land Example =============== This example uses dm-land inter-VM communication between two @@ -165,35 +166,29 @@ Linux-based post-launched VMs (VM1 and VM2). - For VM1 use ``ls -lh /sys/bus/pci/devices/0000:00:06.0/uio`` - For VM2 use ``ls -lh /sys/bus/pci/devices/0000:00:05.0/uio`` -hv-land example +HV-Land Example =============== This example uses hv-land inter-VM communication between two Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM). -1. Configure shared memory for the communication between VM0 and VM2 for hybrid_rt - scenario on whl-ipc-i5 board, the shared memory name is ``hv:/shm_region_0``, - and shared memory size is 2M bytes: +1. Make a copy of the predefined hybrid_rt scenario on whl-ipc-i5 (available at + ``acrn-hypervisor/misc/config_tools/data/whl-ipc-i5/hybrid_rt.xml``) and + configure shared memory for the communication between VM0 and VM2. The shared + memory name is ``hv:/shm_region_0``, and shared memory size is 2M bytes. The + resulting scenario XML should look like this: - - Edit XML configuration file for hybrid_rt scenario on whl-ipc-i5 board - ``acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml`` - to enable ``ivshmem`` and configure the shared memory region using the format - ``shm_name, shm_size, VM IDs`` (as described above in the ACRN dm boot parameters). - The region name must start with ``hv:/`` for an hv-land shared region, and we'll allocate 2MB - shared between VMs 0 and 2: + .. code-block:: none + :emphasize-lines: 2,3 - .. 
code-block:: none - :emphasize-lines: 2,3 - - <IVSHMEM desc="IVSHMEM configuration"> - <IVSHMEM_ENABLED>y</IVSHMEM_ENABLED> - <IVSHMEM_REGION>hv:/shm_region_0, 2, 0:2</IVSHMEM_REGION> - </IVSHMEM> 2. Build ACRN based on the XML configuration for hybrid_rt scenario on whl-ipc-i5 board:: - make BOARD_FILE=acrn-hypervisor/misc/vm_configs/xmls/board-xmls/whl-ipc-i5.xml \ - SCENARIO_FILE=acrn-hypervisor/misc/vm_configs/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml TARGET_DIR=xxx + make BOARD=whl-ipc-i5 SCENARIO=<path/to/edited/scenario.xml> TARGET_DIR=xxx 3. Add a new virtual PCI device for VM2 (post-launched VM): the device type is ``ivshmem``, shared memory name is ``hv:/shm_region_0``, and shared memory diff --git a/doc/tutorials/enable_s5.rst b/doc/tutorials/enable_s5.rst index 9af2c5544..c81186af8 100644 --- a/doc/tutorials/enable_s5.rst +++ b/doc/tutorials/enable_s5.rst @@ -48,6 +48,22 @@ The diagram below shows the overall architecture: .. graphviz:: images/s5-scenario-2.dot :name: s5-scenario-2 +Initiate a System S5 From Within a User VM (e.g. HMI) +===================================================== + +As shown in Figure 56, a request to the Service VM initiates the shutdown flow. +This request could come from a User VM, most likely the HMI (Windows or user-friendly Linux). +When a human operator clicks to initiate the flow, the lifecycle manager (lifecycle_mgr) in that VM sends +the request via vUART to the lifecycle manager in the Service VM, which in turn acknowledges +the request and triggers the following flow. + +.. note:: The User VM needs to be authorized to request a shutdown; this is achieved by adding + "``--pm_notify_channel uart``" in the launch script of that VM. + Only one VM in the system can be configured to request a shutdown.
If there is a second User + VM launched with "``--pm_notify_channel uart``", ACRN will stop launching it and report the following error messages: + ``initiate a connection on a socket error`` + ``create socket to connect life-cycle manager failed`` + Trigger the User VM's S5 ======================== @@ -57,7 +73,7 @@ to the User VM through a channel. If the User VM receives the command, it will s to the Device Model. It is the Service VM's responsibility to check if the User VMs shut down successfully or not, and decides when to power off itself. -User VM "lifecycle manager" +User VM "Lifecycle Manager" =========================== As part of the current S5 reference design, a lifecycle manager daemon (life_mngr) runs in the @@ -159,7 +175,7 @@ The procedure for enabling S5 is specific to the particular OS: .. note:: S5 state is not automatically triggered by a Service VM shutdown; this needs to be run before powering off the Service VM. -How to test +How to Test *********** As described in :ref:`vuart_config`, two vUARTs are defined in pre-defined ACRN scenarios: vUART0/ttyS0 for the console and diff --git a/doc/tutorials/gpu-passthru.rst b/doc/tutorials/gpu-passthru.rst index 5824092fd..c689e74b2 100644 --- a/doc/tutorials/gpu-passthru.rst +++ b/doc/tutorials/gpu-passthru.rst @@ -18,7 +18,7 @@ It allows for direct assignment of an entire GPU's prowess to a single user, passing the native driver capabilities through to the hypervisor without any limitations. -Verified version +Verified Version ***************** - ACRN-hypervisor tag: **acrn-2020w17.4-140000p** @@ -31,7 +31,7 @@ Prerequisites Follow :ref:`these instructions <rt_industry_ubuntu_setup>` to set up Ubuntu as the ACRN Service VM.
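Before passing the integrated GPU through, it can help to confirm which device sits at the usual iGPU address (``00:02.0``). The sketch below parses one line of ``lspci -nn`` output; the sample line and the helper itself are illustrative only, not tied to any specific platform:

```python
import re

def parse_lspci_line(line: str):
    """Extract the BDF and the [vendor:device] pair from one `lspci -nn` line."""
    match = re.match(r"(\S+) .*\[([0-9a-f]{4}):([0-9a-f]{4})\]", line)
    if not match:
        raise ValueError("unrecognized lspci -nn line")
    bdf, vendor, device = match.groups()
    return bdf, vendor, device

# Sample line for illustration (vendor 8086 is Intel)
sample = "00:02.0 VGA compatible controller [0300]: Intel Corporation Device [8086:3ea0]"
print(parse_lspci_line(sample))  # → ('00:02.0', '8086', '3ea0')
```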
-Supported hardware platform +Supported Hardware Platform *************************** Currently, ACRN has enabled GVT-d on the following platforms: @@ -40,31 +40,31 @@ Currently, ACRN has enabled GVT-d on the following platforms: * Whiskey Lake * Elkhart Lake -BIOS settings +BIOS Settings ************* -Kaby Lake platform +Kaby Lake Platform ================== -* Set **IGD Minimum Memory** to **64MB** in **Devices** → - **Video** → **IGD Minimum Memory**. +* Set **IGD Minimum Memory** to **64MB** in **Devices** |rarr| + **Video** |rarr| **IGD Minimum Memory**. -Whiskey Lake platform +Whiskey Lake Platform ===================== -* Set **PM Support** to **Enabled** in **Chipset** → **System - Agent (SA) Configuration** → **Graphics Configuration** → +* Set **PM Support** to **Enabled** in **Chipset** |rarr| **System + Agent (SA) Configuration** |rarr| **Graphics Configuration** |rarr| **PM support**. -* Set **DVMT Pre-Allocated** to **64MB** in **Chipset** → +* Set **DVMT Pre-Allocated** to **64MB** in **Chipset** |rarr| **System Agent (SA) Configuration** - → **Graphics Configuration** → **DVMT Pre-Allocated**. + |rarr| **Graphics Configuration** |rarr| **DVMT Pre-Allocated**. -Elkhart Lake platform +Elkhart Lake Platform ===================== * Set **DMVT Pre-Allocated** to **64MB** in **Intel Advanced Menu** - → **System Agent(SA) Configuration** → - **Graphics Configuration** → **DMVT Pre-Allocated**. + |rarr| **System Agent(SA) Configuration** |rarr| + **Graphics Configuration** |rarr| **DMVT Pre-Allocated**. Passthrough the GPU to Guest **************************** @@ -93,7 +93,7 @@ Passthrough the GPU to Guest 4. Run ``launch_win.sh``. -Enable the GVT-d GOP driver +Enable the GVT-d GOP Driver *************************** When enabling GVT-d, the Guest OS cannot light up the physical screen @@ -165,3 +165,11 @@ Keep in mind the following: ``Build/OvmfX64/DEBUG_GCC5/FV/OVMF.fd``. Transfer the binary to your target machine. 
- Modify the launch script to specify the OVMF you built just now. + +Script +====== + +Once you've installed the Docker environment, you can use this +`script <../_static/downloads/build_acrn_ovmf.sh>`_ to build ACRN OVMF +with the GOP driver enabled. For more details about the script usage, +run ``build_acrn_ovmf.sh -h``. diff --git a/doc/tutorials/images/GUI_of_menuconfig.png b/doc/tutorials/images/GUI_of_menuconfig.png deleted file mode 100644 index 330f963ec..000000000 Binary files a/doc/tutorials/images/GUI_of_menuconfig.png and /dev/null differ diff --git a/doc/tutorials/images/OpenStack-06a-create-image-browse.png b/doc/tutorials/images/OpenStack-06a-create-image-browse.png index b7a558ad4..f13b3b322 100644 Binary files a/doc/tutorials/images/OpenStack-06a-create-image-browse.png and b/doc/tutorials/images/OpenStack-06a-create-image-browse.png differ diff --git a/doc/tutorials/images/OpenStack-06b-create-image-select.png b/doc/tutorials/images/OpenStack-06b-create-image-select.png index ae1dc7fd3..fd29bf9cd 100644 Binary files a/doc/tutorials/images/OpenStack-06b-create-image-select.png and b/doc/tutorials/images/OpenStack-06b-create-image-select.png differ diff --git a/doc/tutorials/images/OpenStack-06e-create-image.png b/doc/tutorials/images/OpenStack-06e-create-image.png index 13f8fbf4a..e0406a0d9 100644 Binary files a/doc/tutorials/images/OpenStack-06e-create-image.png and b/doc/tutorials/images/OpenStack-06e-create-image.png differ diff --git a/doc/tutorials/images/OpenStack-07a-create-flavor.png b/doc/tutorials/images/OpenStack-07a-create-flavor.png index 70c068176..cb1d2ee5f 100644 Binary files a/doc/tutorials/images/OpenStack-07a-create-flavor.png and b/doc/tutorials/images/OpenStack-07a-create-flavor.png differ diff --git a/doc/tutorials/images/OpenStack-07b-flavor-created.png b/doc/tutorials/images/OpenStack-07b-flavor-created.png index 22f0236e5..49d59f53e 100644 Binary files a/doc/tutorials/images/OpenStack-07b-flavor-created.png and 
b/doc/tutorials/images/OpenStack-07b-flavor-created.png differ diff --git a/doc/tutorials/images/OpenStack-10a-launch-instance-name.png b/doc/tutorials/images/OpenStack-10a-launch-instance-name.png index 6a1fe2a2a..a06c153a4 100644 Binary files a/doc/tutorials/images/OpenStack-10a-launch-instance-name.png and b/doc/tutorials/images/OpenStack-10a-launch-instance-name.png differ diff --git a/doc/tutorials/images/OpenStack-10b-no-new-vol-select-allocated.png b/doc/tutorials/images/OpenStack-10b-no-new-vol-select-allocated.png index a8a6ea37a..d95dc417c 100644 Binary files a/doc/tutorials/images/OpenStack-10b-no-new-vol-select-allocated.png and b/doc/tutorials/images/OpenStack-10b-no-new-vol-select-allocated.png differ diff --git a/doc/tutorials/images/OpenStack-10c-select-flavor.png b/doc/tutorials/images/OpenStack-10c-select-flavor.png index 7d2f5b71a..b228a1a43 100644 Binary files a/doc/tutorials/images/OpenStack-10c-select-flavor.png and b/doc/tutorials/images/OpenStack-10c-select-flavor.png differ diff --git a/doc/tutorials/images/OpenStack-10d-flavor-selected.png b/doc/tutorials/images/OpenStack-10d-flavor-selected.png index 7ddb62b87..2ad6d7c4b 100644 Binary files a/doc/tutorials/images/OpenStack-10d-flavor-selected.png and b/doc/tutorials/images/OpenStack-10d-flavor-selected.png differ diff --git a/doc/tutorials/images/OpenStack-10f-select-security-group.png b/doc/tutorials/images/OpenStack-10f-select-security-group.png deleted file mode 100644 index eca93d78b..000000000 Binary files a/doc/tutorials/images/OpenStack-10f-select-security-group.png and /dev/null differ diff --git a/doc/tutorials/images/OpenStack-11-wait-for-running-create-snapshot.png b/doc/tutorials/images/OpenStack-11-wait-for-running-create-snapshot.png index 603248e5b..3e644adef 100644 Binary files a/doc/tutorials/images/OpenStack-11-wait-for-running-create-snapshot.png and b/doc/tutorials/images/OpenStack-11-wait-for-running-create-snapshot.png differ diff --git 
a/doc/tutorials/images/OpenStack-11a-manage-floating-ip.png b/doc/tutorials/images/OpenStack-11a-manage-floating-ip.png index ae8767e7e..a4ae9a91a 100644 Binary files a/doc/tutorials/images/OpenStack-11a-manage-floating-ip.png and b/doc/tutorials/images/OpenStack-11a-manage-floating-ip.png differ diff --git a/doc/tutorials/images/OpenStack-12b-running-topology-instance.png b/doc/tutorials/images/OpenStack-12b-running-topology-instance.png index 8886526fe..1fe52886a 100644 Binary files a/doc/tutorials/images/OpenStack-12b-running-topology-instance.png and b/doc/tutorials/images/OpenStack-12b-running-topology-instance.png differ diff --git a/doc/tutorials/images/OpenStack-12d-compute-hypervisor.png b/doc/tutorials/images/OpenStack-12d-compute-hypervisor.png index aacd5e02c..5988421ca 100644 Binary files a/doc/tutorials/images/OpenStack-12d-compute-hypervisor.png and b/doc/tutorials/images/OpenStack-12d-compute-hypervisor.png differ diff --git a/doc/tutorials/images/doc-gen-flow.dot b/doc/tutorials/images/doc-gen-flow.dot new file mode 100644 index 000000000..a19fbc4fa --- /dev/null +++ b/doc/tutorials/images/doc-gen-flow.dot @@ -0,0 +1,23 @@ +# Doc Generation flow +# dot -Tpng -odoc-gen-flow.png doc-gen-flow.dot + +digraph docgen { + node [ fontname="verdana"] + bgcolor=transparent; rankdir=LR; + images [shape="rectangle" label=".png, .jpg\nimages"] + rst [shape="rectangle" label="restructuredText\nfiles"] + conf [shape="rectangle" label="conf.py\nconfiguration"] + rtd [shape="rectangle" label="read-the-docs\ntheme"] + header [shape="rectangle" label="c header\ncomments"] + xml [shape="rectangle" label="XML"] + html [shape="rectangle" label="HTML\nweb site"] + sphinx[shape="ellipse" label="sphinx +\nbreathe,\ndocutils"] + images -> sphinx + rst -> sphinx + conf -> sphinx + header -> doxygen + doxygen -> xml + xml-> sphinx + rtd -> sphinx + sphinx -> html + } diff --git a/doc/tutorials/images/doc-gen-flow.png b/doc/tutorials/images/doc-gen-flow.png new file mode 
100644 index 000000000..837563a6f Binary files /dev/null and b/doc/tutorials/images/doc-gen-flow.png differ diff --git a/doc/tutorials/images/offline_tools_workflow.png b/doc/tutorials/images/offline_tools_workflow.png index bb487d4a3..4f2d1fa00 100644 Binary files a/doc/tutorials/images/offline_tools_workflow.png and b/doc/tutorials/images/offline_tools_workflow.png differ diff --git a/doc/tutorials/images/sample_of_defconfig.png b/doc/tutorials/images/sample_of_defconfig.png deleted file mode 100644 index 050131902..000000000 Binary files a/doc/tutorials/images/sample_of_defconfig.png and /dev/null differ diff --git a/doc/tutorials/images/waag_secure_boot_image1.png b/doc/tutorials/images/waag_secure_boot_image1.png deleted file mode 100644 index 240c88112..000000000 Binary files a/doc/tutorials/images/waag_secure_boot_image1.png and /dev/null differ diff --git a/doc/tutorials/images/waag_secure_boot_image2.png b/doc/tutorials/images/waag_secure_boot_image2.png deleted file mode 100644 index 932426e32..000000000 Binary files a/doc/tutorials/images/waag_secure_boot_image2.png and /dev/null differ diff --git a/doc/tutorials/images/waag_secure_boot_image3.png b/doc/tutorials/images/waag_secure_boot_image3.png deleted file mode 100644 index 7cf86bea7..000000000 Binary files a/doc/tutorials/images/waag_secure_boot_image3.png and /dev/null differ diff --git a/doc/tutorials/pre-launched-rt.rst b/doc/tutorials/pre-launched-rt.rst index a3a15d318..6e4cc6cd5 100644 --- a/doc/tutorials/pre-launched-rt.rst +++ b/doc/tutorials/pre-launched-rt.rst @@ -1,6 +1,6 @@ .. _pre_launched_rt: -Pre-Launched Preempt-RT Linux Mode in ACRN +Pre-Launched Preempt-RT Linux Mode in ACRN ########################################## The Pre-Launched Preempt-RT Linux Mode of ACRN, abbreviated as @@ -34,7 +34,7 @@ two Ethernet ports. We will passthrough the SATA and Ethernet 03:00.0 devices into the Pre-Launched RT VM, and give the rest of the devices to the Service VM.
-Install SOS with Grub on NVMe +Install SOS With Grub on NVMe ============================= As with the Hybrid and Logical Partition scenarios, the Pre-Launched RT @@ -64,7 +64,7 @@ the SATA to the NVMe drive: # mount /dev/sda1 /mnt # cp /mnt/bzImage /boot/EFI/BOOT/bzImage_RT -Build ACRN with Pre-Launched RT Mode +Build ACRN With Pre-Launched RT Mode ==================================== The ACRN VM configuration framework can easily configure resources for diff --git a/doc/tutorials/rdt_configuration.rst b/doc/tutorials/rdt_configuration.rst index 608d1aa1c..e8d1df0c2 100644 --- a/doc/tutorials/rdt_configuration.rst +++ b/doc/tutorials/rdt_configuration.rst @@ -35,7 +35,7 @@ Manual, (Section 17.19 Intel Resource Director Technology Allocation Features) .. _rdt_detection_capabilities: -RDT detection and resource capabilities +RDT Detection and Resource Capabilities *************************************** From the ACRN HV debug shell, use ``cpuid`` to detect and identify the resource capabilities. Use the platform's serial port for the HV shell. @@ -98,7 +98,7 @@ MBA bit encoding: resources by using a common subset CLOS. This is done in order to minimize misconfiguration errors. -Tuning RDT resources in HV debug shell +Tuning RDT Resources in HV Debug Shell ************************************** This section explains how to configure the RDT resources from the HV debug shell. @@ -141,7 +141,7 @@ shell. .. _rdt_vm_configuration: -Configure RDT for VM using VM Configuration +Configure RDT for VM Using VM Configuration ******************************************* #. RDT hardware feature is enabled by default on supported platforms. This @@ -166,11 +166,11 @@ Configure RDT for VM using VM Configuration </RDT> #. Once RDT is enabled in the scenario XML file, the next step is to program - the desired cache mask or/and the MBA delay value as needed in the - scenario file. Each cache mask or MBA delay configuration corresponds - to a CLOS ID. 
For example, if the maximum supported CLOS ID is 4, then 4 + the desired cache mask and/or the MBA delay value as needed in the + scenario file. Each cache mask or MBA delay configuration corresponds + to a CLOS ID. For example, if the maximum supported CLOS ID is 4, then 4 cache mask settings needs to be in place where each setting corresponds - to a CLOS ID starting from 0. To set the cache masks for 4 CLOS ID and + to a CLOS ID starting from 0. To set the cache masks for 4 CLOS ID and use default delay value for MBA, it can be done as shown in the example below. .. code-block:: none diff --git a/doc/tutorials/realtime_performance_tuning.rst b/doc/tutorials/realtime_performance_tuning.rst index 95e0708dd..e1ad3241a 100644 --- a/doc/tutorials/realtime_performance_tuning.rst +++ b/doc/tutorials/realtime_performance_tuning.rst @@ -1,6 +1,6 @@ .. _rt_performance_tuning: -ACRN Real-time (RT) Performance Analysis +ACRN Real-Time (RT) Performance Analysis ######################################## The document describes the methods to collect trace/data for ACRN real-time VM (RTVM) @@ -9,8 +9,8 @@ real-time performance analysis. Two parts are included: - Method to trace ``vmexit`` occurrences for analysis. - Method to collect Performance Monitoring Counters information for tuning based on Performance Monitoring Unit, or PMU. -``vmexit`` analysis for ACRN RT performance -******************************************* +vmexit Analysis for ACRN RT Performance +*************************************** ``vmexit`` are triggered in response to certain instructions and events and are a key source of performance degradation in virtual machines. During the runtime @@ -30,7 +30,7 @@ the duration of time where we do not want to see any ``vmexit`` occur. Different RT tasks use different critical sections. This document uses the cyclictest benchmark as an example of how to do ``vmexit`` analysis.
-The critical sections +The Critical Sections ===================== Here is example pseudocode of a cyclictest implementation. @@ -53,14 +53,14 @@ the cyclictest to be awakened and scheduled. Here we can get the latency by So, we define the starting point of the critical section as ``next`` and the ending point as ``now``. -Log and trace data collection +Log and Trace Data Collection ============================= #. Add time stamps (in TSC) at ``next`` and ``now``. #. Capture the log with the above time stamps in the RTVM. #. Capture the ``acrntrace`` log in the Service VM at the same time. -Offline analysis +Offline Analysis ================ #. Convert the raw trace data to human readable format. @@ -71,10 +71,10 @@ Offline analysis :align: center :name: vm_exits_log -Collecting Performance Monitoring Counters data +Collecting Performance Monitoring Counters Data *********************************************** -Enable Performance Monitoring Unit (PMU) support in VM +Enable Performance Monitoring Unit (PMU) Support in VM ====================================================== By default, the ACRN hypervisor doesn't expose the PMU-related CPUID and @@ -149,7 +149,7 @@ Note that Precise Event Based Sampling (PEBS) is not yet enabled in the VM. value64 = hva2hpa(vcpu->arch.msr_bitmap); exec_vmwrite64(VMX_MSR_BITMAP_FULL, value64); -Perf/PMU tools in performance analysis +Perf/PMU Tools in Performance Analysis ====================================== After exposing PMU-related CPUID/MSRs to the VM, performance analysis tools @@ -170,7 +170,7 @@ following links for perf usage: Refer to https://github.com/andikleen/pmu-tools for PMU usage. 
-Top-down Microarchitecture Analysis Method (TMAM) +Top-Down Microarchitecture Analysis Method (TMAM) ================================================== The top-down microarchitecture analysis method (TMAM), based on top-down diff --git a/doc/tutorials/rtvm_performance_tips.rst b/doc/tutorials/rtvm_performance_tips.rst index f9080af69..2f35cb1d4 100644 --- a/doc/tutorials/rtvm_performance_tips.rst +++ b/doc/tutorials/rtvm_performance_tips.rst @@ -1,6 +1,6 @@ .. _rt_perf_tips_rtvm: -ACRN Real-time VM Performance Tips +ACRN Real-Time VM Performance Tips ################################## Background @@ -34,7 +34,7 @@ RTVM performance: This document summarizes tips from issues encountered and resolved during real-time development and performance tuning. -Mandatory options for an RTVM +Mandatory Options for an RTVM ***************************** An RTVM is a post-launched VM with LAPIC passthrough. Pay attention to @@ -55,7 +55,7 @@ Tip: Use virtio polling mode and enables polling mode to avoid a VM-exit at the frontend. Enable virtio polling mode via the option ``--virtio_poll [polling interval]``. -Avoid VM-exit latency +Avoid VM-exit Latency ********************* VM-exit has a significant negative impact on virtualization performance. @@ -137,7 +137,7 @@ Tip: Create and initialize the RT tasks at the beginning to avoid runtime access to CR3 and CR8 does not cause a VM-exit. However, writes to CR0 and CR4 may cause a VM-exit, which would happen at the spawning and initialization of a new task. -Isolating the impact of neighbor VMs +Isolating the Impact of Neighbor VMs ************************************ ACRN makes use of several technologies and hardware features to avoid @@ -190,7 +190,7 @@ Tip: Disable the software workaround for Machine Check Error on Page Size Change Change is conditionally applied to the models that may be affected by the issue. However, the software workaround has a negative impact on performance. 
If all guest OS kernels are trusted, the - :option:`CONFIG_MCE_ON_PSC_WORKAROUND_DISABLED` option could be set for performance. + :option:`hv.FEATURES.MCE_ON_PSC_DISABLED` option could be set for performance. .. note:: The tips for preempt-RT Linux are mostly applicable to the Linux-based RTOS as well, such as Xenomai. diff --git a/doc/tutorials/rtvm_workload_design_guideline.rst b/doc/tutorials/rtvm_workload_design_guideline.rst index 5db2f9ad4..37654eef8 100644 --- a/doc/tutorials/rtvm_workload_design_guideline.rst +++ b/doc/tutorials/rtvm_workload_design_guideline.rst @@ -1,6 +1,6 @@ .. _rtvm_workload_guideline: -Real-time VM Application Design Guidelines +Real-Time VM Application Design Guidelines ########################################## An RTOS developer must be aware of the differences between running applications on a native @@ -11,7 +11,7 @@ incremental runtime overhead. This document provides some application design guidelines when using an RTVM within the ACRN hypervisor. -Run RTVM with dedicated resources/devices +Run RTVM With Dedicated Resources/Devices ***************************************** For best practice, ACRN allocates dedicated CPU, memory resources, and cache resources (using Intel @@ -22,14 +22,14 @@ of I/O devices, we recommend using dedicated (passthrough) PCIe devices to avoid The configuration space for passthrough PCI devices is still emulated and accessing it will trigger a VM-Exit. -RTVM with virtio PMD (Polling Mode Driver) for I/O sharing +RTVM With Virtio PMD (Polling Mode Driver) for I/O Sharing ********************************************************** If the RTVM must use shared devices, we recommend using PMD drivers that can eliminate the unpredictable latency caused by guest I/O trap-and-emulate access. The RTVM application must be aware that the packets in the PMD driver may arrive or be sent later than expected. 
-RTVM with HV Emulated Device +RTVM With HV Emulated Device **************************** ACRN uses hypervisor emulated virtual UART (vUART) devices for inter-VM synchronization such as @@ -39,7 +39,7 @@ behavior, the RT application using the vUART shall reserve a margin of CPU cycle for the additional latency introduced by the VM-Exit to the vUART I/O registers (~2000-3000 cycles per register access). -DM emulated device (Except PMD) +DM Emulated Device (Except PMD) ******************************* We recommend **not** using DM-emulated devices in an RTVM. diff --git a/doc/tutorials/run_kata_containers.rst b/doc/tutorials/run_kata_containers.rst index 0f7e57e8e..aac0450ba 100644 --- a/doc/tutorials/run_kata_containers.rst +++ b/doc/tutorials/run_kata_containers.rst @@ -177,7 +177,7 @@ outputs: Debug = false UseVSock = false -Run a Kata Container with ACRN +Run a Kata Container With ACRN ****************************** The system is now ready to run a Kata Container on ACRN. Note that a reboot diff --git a/doc/tutorials/running_deb_as_serv_vm.rst b/doc/tutorials/running_deb_as_serv_vm.rst index d61b9c21e..24244993c 100644 --- a/doc/tutorials/running_deb_as_serv_vm.rst +++ b/doc/tutorials/running_deb_as_serv_vm.rst @@ -21,6 +21,11 @@ Use the following instructions to install Debian. <https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/>`_. Select and download **debian-10.1.0-amd64-netinst.iso** (scroll down to the bottom of the page). + + .. note:: These instructions were validated with the + debian_10.1.0 ISO image. A newer Debian 10 version + should still work as expected. 
+ - Follow the `Debian installation guide <https://www.debian.org/releases/stable/amd64/index.en.html>`_ to install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE) @@ -141,7 +146,7 @@ Install ACRN on the Debian VM [ 0.982837] ACRN HVLog: Failed to init last hvlog devs, errno -19 [ 0.983023] ACRN HVLog: Initialized hvlog module with 4 cp -Enable the network sharing to give network access to User VM +Enable the Network Sharing to Give Network Access to User VM ************************************************************ .. code-block:: bash diff --git a/doc/tutorials/running_ubun_as_user_vm.rst b/doc/tutorials/running_ubun_as_user_vm.rst index d77a76a77..a29974b18 100644 --- a/doc/tutorials/running_ubun_as_user_vm.rst +++ b/doc/tutorials/running_ubun_as_user_vm.rst @@ -190,7 +190,7 @@ Modify the ``launch_win.sh`` script in order to launch Ubuntu as the User VM. The Ubuntu desktop on the secondary monitor -Enable the Ubuntu Console instead of the User Interface +Enable the Ubuntu Console Instead of the User Interface ******************************************************* After the Ubuntu VM reboots, follow the steps below to enable the Ubuntu diff --git a/doc/tutorials/setup_openstack_libvirt.rst b/doc/tutorials/setup_openstack_libvirt.rst index 5cc11fae4..d3b3725fc 100644 --- a/doc/tutorials/setup_openstack_libvirt.rst +++ b/doc/tutorials/setup_openstack_libvirt.rst @@ -1,6 +1,6 @@ .. _setup_openstack_libvirt: -Configure ACRN using OpenStack and libvirt +Configure ACRN Using OpenStack and Libvirt ########################################## Introduction @@ -11,22 +11,22 @@ ACRN. We use OpenStack to use libvirt and we'll install OpenStack in a container to avoid crashing your system and to take advantage of easy snapshots/restores so that you can quickly roll back your system in the event of setup failure. (You should only install OpenStack directly on Ubuntu if -you have a dedicated testing machine.) This setup utilizes LXC/LXD on -Ubuntu 18.04. 
+you have a dedicated testing machine.) This setup utilizes LXC/LXD on +Ubuntu 20.04. Install ACRN ************ -#. Install ACRN using Ubuntu 18.04 as its Service VM. Refer to +#. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`. #. Make the acrn-kernel using the `kernel_config_uefi_sos <https://raw.githubusercontent.com/projectacrn/acrn-kernel/master/kernel_config_uefi_sos>`_ configuration file (from the ``acrn-kernel`` repo). -#. Add the following kernel boot arg to give the Service VM more memory - and more loop devices. Refer to `Kernel Boot Parameters - <https://wiki.ubuntu.com/Kernel/KernelBootParameters>`_ documentation:: +#. Append the following kernel boot arguments to the ``multiboot2`` line in + :file:`/etc/grub.d/40_custom` and run ``sudo update-grub`` before rebooting the system. + It will give the Service VM more memory and more loop devices:: hugepagesz=1G hugepages=10 max_loop=16 @@ -41,37 +41,28 @@ Install ACRN create it using the instructions in :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`. -Set up and launch LXC/LXD +Set Up and Launch LXC/LXD ************************* -1. Set up the LXC/LXD Linux container engine using these `instructions - <https://ubuntu.com/tutorials/tutorial-setting-up-lxd-1604>`_ provided - by Ubuntu. +1. Set up the LXC/LXD Linux container engine:: - Refer to the following additional information for the setup - procedure: + $ sudo snap install lxd + $ lxd init --auto - - Disregard ZFS utils (we're not going to use the ZFS storage - backend). - - Answer ``dir`` (and not ``zfs``) when prompted for the name of the storage backend to use. - - Set up ``lxdbr0`` as instructed.
- - Before launching a container, install lxc-utils by ``apt-get install lxc-utils``, - make sure ``lxc-checkconfig | grep missing`` does not show any missing kernel features - except ``CONFIG_NF_NAT_IPV4`` and ``CONFIG_NF_NAT_IPV6``, which - were renamed in recent kernels. + Use all default values if running ``lxd init`` in interactive mode. 2. Create an Ubuntu 18.04 container named ``openstack``:: - $ lxc init ubuntu:18.04 openstack + $ lxc init ubuntu:18.04 openstack 3. Export the kernel interfaces necessary to launch a Service VM in the ``openstack`` container: a. Edit the ``openstack`` config file using the command:: - $ lxc config edit openstack + $ lxc config edit openstack - In the editor, add the following lines under **config**: + In the editor, add the following lines in the **config** section: .. code-block:: none @@ -82,11 +73,18 @@ Set up and launch LXC/LXD lxc.cgroup.devices.allow = c 243:0 rwm lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file 0 0 lxc.mount.auto=proc:rw sys:rw cgroup:rw + lxc.apparmor.profile=unconfined security.nesting: "true" security.privileged: "true" Save and exit the editor. + .. note:: + + Make sure to respect the indentation so as to keep these options within + the **config** section. It is a good idea after saving your changes + to check that they have been correctly recorded (``lxc config show openstack``). + b. Run the following commands to configure ``openstack``:: $ lxc config device add openstack eth1 nic name=eth1 nictype=bridged parent=acrn-br0 @@ -135,20 +133,22 @@ Set up and launch LXC/LXD no_proxy=xcompany.com,.xcompany.com,10.0.0.0/8,192.168.0.0/16,localhost,.local,127.0.0.0/8,134.134.0.0/16 -10. Add a new user named **stack** and set permissions:: +10. Add a new user named **stack** and set permissions - $ useradd -s /bin/bash -d /opt/stack -m stack - $ echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers + ..
code-block:: none + + # useradd -s /bin/bash -d /opt/stack -m stack + # echo "stack ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers 11. Log off and restart the ``openstack`` container:: - $ lxc restart openstack + $ lxc restart openstack The ``openstack`` container is now properly configured for OpenStack. Use the ``lxc list`` command to verify that both **eth0** and **eth1** appear in the container. -Set up ACRN prerequisites inside the container +Set Up ACRN Prerequisites Inside the Container ********************************************** 1. Log in to the ``openstack`` container as the **stack** user:: @@ -162,19 +162,22 @@ Set up ACRN prerequisites inside the container .. code-block:: none + $ cd ~ $ git clone https://github.com/projectacrn/acrn-hypervisor $ cd acrn-hypervisor - $ git checkout v2.3 + $ git checkout v2.4 $ make - $ cd misc/acrn-manager/; make + $ sudo make devicemodel-install + $ sudo cp build/misc/debug_tools/acrnd /usr/bin/ + $ sudo cp build/misc/debug_tools/acrnctl /usr/bin/ Install only the user-space components: ``acrn-dm``, ``acrnctl``, and - ``acrnd`` + ``acrnd`` as shown above. -3. Download, compile, and install ``iasl``. Refer to - :ref:`Build and Install ACRN on Ubuntu <build-and-install-acrn-on-ubuntu>`. + .. note:: Use the tag that matches the version of the ACRN hypervisor (``acrn.bin``) + that runs on your system. -Set up libvirt +Set Up Libvirt ************** 1. Install the required packages:: @@ -186,6 +189,7 @@ Set up libvirt 2. Download libvirt/ACRN:: + $ cd ~ $ git clone https://github.com/projectacrn/acrn-libvirt.git 3. Build and install libvirt:: @@ -200,6 +204,9 @@ Set up libvirt $ make $ sudo make install + .. note:: The ``dev-acrn-v6.1.0`` branch is used in this tutorial. It is + the default branch. + 4. 
Edit and enable these options in ``/etc/libvirt/libvirtd.conf``:: unix_sock_ro_perms = "0777" @@ -211,7 +218,7 @@ Set up libvirt $ sudo systemctl daemon-reload -Set up OpenStack +Set Up OpenStack **************** Use DevStack to install OpenStack. Refer to the `DevStack instructions <https://docs.openstack.org/devstack/>`_. @@ -219,20 +226,20 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https:// 1. Use the latest maintenance branch **stable/train** to ensure OpenStack stability:: - $ git clone https://opendev.org/openstack/devstack.git -b stable/train + $ cd ~ + $ git clone https://opendev.org/openstack/devstack.git -b stable/train -2. Go into the ``devstack`` directory, download an ACRN patch from - :acrn_raw:`doc/tutorials/0001-devstack-installation-for-acrn.patch`, - and apply it :: +2. Go into the ``devstack`` directory, and apply the + :file:`doc/tutorials/0001-devstack-installation-for-acrn.patch`:: $ cd devstack - $ git apply 0001-devstack-installation-for-acrn.patch + $ git apply ~/acrn-hypervisor/doc/tutorials/0001-devstack-installation-for-acrn.patch 3. Edit ``lib/nova_plugins/hypervisor-libvirt``: Change ``xen_hvmloader_path`` to the location of your OVMF image - file. A stock image is included in the ACRN source tree - (``devicemodel/bios/OVMF.fd``). + file: ``/usr/share/acrn/bios/OVMF.fd``. Or use the stock image that is included + in the ACRN source tree (``devicemodel/bios/OVMF.fd``). 4. Create a ``devstack/local.conf`` file as shown below (setting the passwords as appropriate): @@ -256,6 +263,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https:// USE_PYTHON3=True .. note:: + Now is a great time to take a snapshot of the container using ``lxc snapshot``. If the OpenStack installation fails, manually rolling back to the previous state can be difficult. Currently, no step exists to @@ -263,7 +271,7 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https:// 5. 
Install OpenStack:: - execute ./stack.sh in devstack/ + $ ./stack.sh The installation should take about 20-30 minutes. Upon successful installation, the installer reports the URL of OpenStack's management @@ -295,18 +303,14 @@ Use DevStack to install OpenStack. Refer to the `DevStack instructions <https:// $ sudo iptables -t nat -A POSTROUTING -s 172.24.4.1/24 -o br-ex -j SNAT --to-source 192.168.1.104 -Configure and create OpenStack Instance +Configure and Create OpenStack Instance *************************************** -We'll be using the Clear Linux Cloud Guest as the OS image (qcow2 -format). Download the Cloud Guest image from -https://clearlinux.org/downloads and uncompress it, for example:: +We'll be using the Ubuntu 20.04 (Focal) Cloud image as the OS image (qcow2 +format). Download the Cloud image from https://cloud-images.ubuntu.com/releases/focal, +for example:: - $ wget https://cdn.download.clearlinux.org/releases/33110/clear/clear-33110-cloudguest.img.xz - $ unxz clear-33110-cloudguest.img.xz - -This will leave you with the uncompressed OS image -``clear-33110-cloudguest.img`` we'll use later. + $ wget https://cloud-images.ubuntu.com/releases/focal/release-20210201/ubuntu-20.04-server-cloudimg-amd64.img Use the OpenStack management interface URL reported in a previous step to finish setting up the network and configure and create an OpenStack @@ -395,7 +399,7 @@ instance. :width: 1200px :name: os-06-create-image - Browse for and select the Clear Linux Cloud Guest image file we + Browse for and select the Ubuntu Cloud image file we downloaded earlier: .. figure:: images/OpenStack-06a-create-image-browse.png @@ -405,31 +409,30 @@ instance. .. figure:: images/OpenStack-06b-create-image-select.png :align: center - :width: 1200px :name: os-06b-create-image - Give the image a name (**acrnImage**), select the **QCOW2 - QEMU + Give the image a name (**Ubuntu20.04**), select the **QCOW2 - QEMU Emulator** format, and click on **Create Image**: .. 
figure:: images/OpenStack-06e-create-image.png :align: center - :width: 1200px + :width: 900px :name: os-063-create-image This will take a few minutes to complete. #. Next, click on the **Admin / Computer / Flavors** tabs and then the **+Create Flavor** button. This is where you'll define a machine flavor name - (**acrn4vcpu**), and specify its resource requirements: the number of vCPUs (**4**), RAM size - (**256MB**), and root disk size (**2GB**): + (**UbuntuCloud**), and specify its resource requirements: the number of vCPUs (**2**), RAM size + (**512MB**), and root disk size (**4GB**): .. figure:: images/OpenStack-07a-create-flavor.png :align: center - :width: 1200px + :width: 700px :name: os-07a-create-flavor Click on **Create Flavor** and you'll return to see a list of - available flavors plus the new one you created (**acrn4vcpu**): + available flavors plus the new one you created (**UbuntuCloud**): .. figure:: images/OpenStack-07b-flavor-created.png :align: center @@ -499,36 +502,36 @@ instance. #. Now we're ready to launch an instance. Go to **Project / Compute / Instance**, click on the **Launch Instance** button, give it a name - (**acrn4vcpuVM**) and click **Next**: + (**UbuntuOnACRN**) and click **Next**: .. figure:: images/OpenStack-10a-launch-instance-name.png :align: center - :width: 1200px + :width: 900px :name: os-10a-launch Select **No** for "Create New Volume", and click the up-arrow button - for uploaded (**acrnImage**) image as the "Available source" for this + for uploaded (**Ubuntu20.04**) image as the "Available source" for this instance: .. figure:: images/OpenStack-10b-no-new-vol-select-allocated.png :align: center - :width: 1200px + :width: 900px :name: os-10b-launch Click **Next**, and select the machine flavor you created earlier - (**acrn4vcpu**): + (**UbuntuCloud**): .. 
figure:: images/OpenStack-10c-select-flavor.png :align: center - :width: 1200px + :width: 900px :name: os-10c-launch - Click on **>** next to the Allocated **acrn4vcpu** flavor and see + Click on **>** next to the Allocated **UbuntuCloud** flavor and see details about your choice: .. figure:: images/OpenStack-10d-flavor-selected.png :align: center - :width: 1200px + :width: 900px :name: os-10d-launch Click on the **Networks** tab, and select the internal **shared** @@ -574,7 +577,7 @@ instance. .. figure:: images/OpenStack-11a-manage-floating-ip.png :align: center - :width: 1200px + :width: 700px :name: os-11a-running Select **public** pool, and click on **Allocate IP**: @@ -625,26 +628,5 @@ running: * Ping the instance inside the container using the instance's floating IP address. -* Clear Linux prohibits root SSH login by default. Use libvirt's ``virsh`` - console to configure the instance. Inside the container, using:: - - $ sudo virsh -c acrn:///system - list #you should see the instance listed as running - console <instance_name> - - Log in to the Clear Linux instance and set up the root SSH. Refer to - the Clear Linux instructions on `enabling root login - <https://docs.01.org/clearlinux/latest/guides/network/openssh-server.html#enable-root-login>`_. - - - If needed, set up the proxy inside the instance. - - Configure ``systemd-resolved`` to use the correct DNS server. - - Install ping: ``swupd bundle-add clr-network-troubleshooter``. - - The ACRN instance should now be able to ping ``acrn-br0`` and another - ACRN instance. It should also be accessible inside the container via SSH - and its floating IP address. - -The ACRN instance can be deleted via the OpenStack management interface. - For more advanced CLI usage, refer to this `OpenStack cheat sheet <https://docs.openstack.org/ocata/user-guide/cli-cheat-sheet.html>`_. 
diff --git a/doc/tutorials/sgx_virtualization.rst b/doc/tutorials/sgx_virtualization.rst index 92a795af0..b6169cb5f 100644 --- a/doc/tutorials/sgx_virtualization.rst +++ b/doc/tutorials/sgx_virtualization.rst @@ -30,7 +30,7 @@ The image below shows the high-level design of SGX virtualization in ACRN. SGX Virtualization in ACRN -Enable SGX support for Guest +Enable SGX Support for Guest **************************** Presumptions @@ -232,13 +232,13 @@ ENCLS[ECREATE] Other VMExit Control ******************** -RDRAND exiting +RDRAND Exiting ============== * ACRN allows Guest to use RDRAND/RDSEED instruction but does not set "RDRAND exiting" to 1. -PAUSE exiting +PAUSE Exiting ============= * ACRN does not set "PAUSE exiting" to 1. @@ -248,7 +248,7 @@ Future Development Following are some currently unplanned areas of interest for future ACRN development around SGX virtualization. -Launch Configuration support +Launch Configuration Support ============================ When the following two conditions are both satisfied: diff --git a/doc/tutorials/sriov_virtualization.rst b/doc/tutorials/sriov_virtualization.rst index d0086efd1..a107cfda9 100644 --- a/doc/tutorials/sriov_virtualization.rst +++ b/doc/tutorials/sriov_virtualization.rst @@ -128,7 +128,7 @@ SR-IOV Architecture in ACRN standard BAR registers. The MSI-X mapping base address is also from the PF's SR-IOV capabilities, not PCI standard BAR registers. -SR-IOV Passthrough VF Architecture In ACRN +SR-IOV Passthrough VF Architecture in ACRN ------------------------------------------ .. figure:: images/sriov-image4.png @@ -219,7 +219,7 @@ SR-IOV VF Assignment Policy a passthrough to high privilege VMs because the PF device may impact the assigned VFs' functionality and stability. -SR-IOV Usage Guide In ACRN +SR-IOV Usage Guide in ACRN -------------------------- We use the Intel 82576 NIC as an example in the following instructions. We @@ -280,7 +280,7 @@ only support LaaG (Linux as a Guest). c. 
Boot the User VM -SR-IOV Limitations In ACRN +SR-IOV Limitations in ACRN -------------------------- 1. The SR-IOV migration feature is not supported. diff --git a/doc/tutorials/using_grub.rst b/doc/tutorials/using_grub.rst index 084720746..046d492fb 100644 --- a/doc/tutorials/using_grub.rst +++ b/doc/tutorials/using_grub.rst @@ -1,6 +1,6 @@ .. _using_grub: -Using GRUB to boot ACRN +Using GRUB to Boot ACRN ####################### `GRUB <http://www.gnu.org/software/grub/>`_ is a multiboot bootloader @@ -19,8 +19,8 @@ Comparing with multiboot protocol, the multiboot2 protocol adds UEFI support. The multiboot protocol is supported by the ACRN hypervisor natively. -The multiboot2 protocol is supported when ``CONFIG_MULTIBOOT2`` is -enabled in Kconfig. The ``CONFIG_MULTIBOOT2`` is enabled by default. +The multiboot2 protocol is supported when :option:`hv.FEATURES.MULTIBOOT2` is +enabled in the configuration. :option:`hv.FEATURES.MULTIBOOT2` is enabled by default. Which boot protocol is used depends on the hypervisor is loaded by GRUB's ``multiboot`` command or ``multiboot2`` command. The guest kernel or ramdisk must be loaded by the GRUB ``module`` command or ``module2`` @@ -29,12 +29,13 @@ command accordingly when different boot protocol is used. The ACRN hypervisor binary is built with two formats: ``acrn.32.out`` in ELF format and ``acrn.bin`` in RAW format. The GRUB ``multiboot`` command support ELF format only and does not support binary relocation, -even if ``CONFIG_RELOC`` is set. The GRUB ``multiboot2`` command support -ELF format when ``CONFIG_RELOC`` is not set, or RAW format when -``CONFIG_RELOC`` is set. +even if :option:`hv.FEATURES.RELOC` is set. The GRUB ``multiboot2`` +command supports +ELF format when :option:`hv.FEATURES.RELOC` is not set, or RAW format when +:option:`hv.FEATURES.RELOC` is set. ..
note:: - * ``CONFIG_RELOC`` is set by default, so use ``acrn.32.out`` in multiboot + * :option:`hv.FEATURES.RELOC` is set by default, so use ``acrn.32.out`` in multiboot protocol and ``acrn.bin`` in multiboot2 protocol. * Per ACPI specification, the RSDP pointer is described in the EFI System @@ -44,7 +45,7 @@ ELF format when ``CONFIG_RELOC`` is not set, or RAW format when .. _pre-installed-grub: -Using pre-installed GRUB +Using Pre-Installed GRUB ************************ Most Linux distributions use GRUB version 2 by default. If its version @@ -136,7 +137,7 @@ pre-launched VMs (the SOS_VM is also a kind of pre-launched VM): start the VMs automatically. -Installing self-built GRUB +Installing Self-Built GRUB ************************** If the GRUB version on your platform is outdated or has issues booting diff --git a/doc/tutorials/using_hybrid_mode_on_nuc.rst b/doc/tutorials/using_hybrid_mode_on_nuc.rst index 365797342..5afd767ac 100644 --- a/doc/tutorials/using_hybrid_mode_on_nuc.rst +++ b/doc/tutorials/using_hybrid_mode_on_nuc.rst @@ -1,6 +1,6 @@ .. _using_hybrid_mode_on_nuc: -Getting Started Guide for ACRN hybrid mode +Getting Started Guide for ACRN Hybrid Mode ########################################## ACRN hypervisor supports a hybrid scenario where the User VM (such as Zephyr @@ -59,13 +59,12 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo .. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file. The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the - ``kernel_mod_tag`` of VM0, which is configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` + ``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml`` file. The multiboot module ``/boot/bzImage`` is the Service VM kernel file. 
The param ``yyyyyy`` is the bzImage tag and must exactly match the - ``kernel_mod_tag`` of VM1 in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` + ``kern_mod`` of VM1 in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml`` file. The kernel command-line arguments used to boot the Service VM are - located in the header file ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` - and are configured by the `SOS_VM_BOOTARGS` macro. + set in the ``bootargs`` of VM1 in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml`` file. The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr). The parameter ``ACPI_VM0`` is VM0's ACPI tag and should not be modified. diff --git a/doc/tutorials/using_partition_mode_on_nuc.rst b/doc/tutorials/using_partition_mode_on_nuc.rst index 2882ea9d1..2de64a402 100644 --- a/doc/tutorials/using_partition_mode_on_nuc.rst +++ b/doc/tutorials/using_partition_mode_on_nuc.rst @@ -1,6 +1,6 @@ .. _using_partition_mode_on_nuc: -Getting Started Guide for ACRN logical partition mode +Getting Started Guide for ACRN Logical Partition Mode ##################################################### The ACRN hypervisor supports a logical partition scenario in which the User @@ -18,8 +18,8 @@ Validated Versions ****************** - Ubuntu version: **18.04** -- ACRN hypervisor tag: **v2.3** -- ACRN kernel tag: **v2.3** +- ACRN hypervisor tag: **v2.4** +- ACRN kernel tag: **v2.4** Prerequisites ************* @@ -41,7 +41,7 @@ Prerequisites .. rst-class:: numbered-step -Update kernel image and modules of pre-launched VM +Update Kernel Image and Modules of Pre-Launched VM ************************************************** #. On your development workstation, clone the ACRN kernel source tree, and build the Linux kernel image that will be used to boot the pre-launched VMs: @@ -105,7 +105,7 @@ Update kernel image and modules of pre-launched VM ..
rst-class:: numbered-step -Update ACRN hypervisor image +Update ACRN Hypervisor Image **************************** #. Before building the ACRN hypervisor, find the I/O address of the serial @@ -128,36 +128,40 @@ Update ACRN hypervisor image .. code-block:: none $ sudo lspci -vv - 00:14.0 USB controller: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller (rev 21) (prog-if 30 [XHCI]) - Subsystem: Intel Corporation Sunrise Point-LP USB 3.0 xHCI Controller - 00:17.0 SATA controller: Intel Corporation Sunrise Point-LP SATA Controller [AHCI mode] (rev 21) (prog-if 01 [AHCI 1.0]) - Subsystem: Intel Corporation Sunrise Point-LP SATA Controller [AHCI mode] - 00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection I219-LM (rev 21) - Subsystem: Intel Corporation Ethernet Connection I219-LM - - .. note:: - Verify the PCI devices BDF defined in the - ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h`` - with the information reported by the ``lspci -vv`` command. + 00:14.0 USB controller: Intel Corporation Device 9ded (rev 30) (prog-if 30 [XHCI]) + Subsystem: Intel Corporation Device 7270 + 00:17.0 SATA controller: Intel Corporation Device 9dd3 (rev 30) (prog-if 01 [AHCI 1.0]) + Subsystem: Intel Corporation Device 7270 + 02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a8 (rev 03) (prog-if 02 [NVM Express]) + Subsystem: Intel Corporation Device 390d + 03:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03) + Subsystem: Intel Corporation I210 Gigabit Network Connection + 04:00.0 Ethernet controller: Intel Corporation I210 Gigabit Network Connection (rev 03) + Subsystem: Intel Corporation I210 Gigabit Network Connection #. Clone the ACRN source code and configure the build options. Refer to :ref:`getting-started-building` to set up the ACRN build environment on your development workstation. 
- Clone the ACRN source code and check out to the tag v2.3: + Clone the ACRN source code and check out the v2.4 tag: .. code-block:: none $ git clone https://github.com/projectacrn/acrn-hypervisor.git $ cd acrn-hypervisor - $ git checkout v2.3 + $ git checkout v2.4 - Build the ACRN hypervisor and ACPI binaries for pre-launched VMs with default xmls: +#. Check the ``pci_devs`` sections in ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` + for each pre-launched VM to ensure you are using the right PCI device BDF information (as + reported by ``lspci -vv``). If you need to make changes to this file, create a copy of it and + use it subsequently when building ACRN (``SCENARIO=/path/to/newfile.xml``). + +#. Build the ACRN hypervisor and ACPI binaries for pre-launched VMs with the default XMLs: .. code-block:: none - $ make hypervisor BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/logical_partition.xml RELEASE=0 + $ make hypervisor BOARD=whl-ipc-i7 SCENARIO=logical_partition RELEASE=0 .. note:: The ``acrn.bin`` will be generated to ``./build/hypervisor/acrn.bin``. @@ -175,10 +179,7 @@ Update ACRN hypervisor image The above command output should contain the ``GRUB`` keyword. -#. Check or update the BDF information of the PCI devices for each - pre-launched VM; check it in the ``hypervisor/arch/x86/configs/whl-ipc-i5/pci_devices.h``. - -#. Copy the artifact ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` to the ``/boot`` directory: +#. Copy the artifacts ``acrn.bin``, ``ACPI_VM0.bin``, and ``ACPI_VM1.bin`` to the ``/boot`` directory on the NVMe drive: #. Copy ``acrn.bin``, ``ACPI_VM1.bin`` and ``ACPI_VM0.bin`` to a removable disk. @@ -189,7 +190,7 @@ Update ACRN hypervisor image .. rst-class:: numbered-step -Update Ubuntu GRUB to boot hypervisor and load kernel image +Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image *********************************************************** #.
Append the following configuration to the ``/etc/grub.d/40_custom`` file: @@ -213,13 +214,12 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image } .. note:: - Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter) + Update the UUID (``--set``) and PARTUUID (``root=`` parameter) (or use the device node directly) of the root partition (e.g., ``/dev/nvme0n1p2``). Hint: use ``sudo blkid``. - The kernel command-line arguments used to boot the pre-launched VMs is - located in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.h`` header file - and is configured by ``VMx_CONFIG_OS_BOOTARG_*`` MACROs (where x is the VM ID number and ``*`` are arguments). - The multiboot2 module param ``XXXXXX`` is the bzImage tag and must exactly match the ``kernel_mod_tag`` - configured in the ``misc/vm_configs/scenarios/hybrid/vm_configurations.c`` file. + The kernel command-line arguments used to boot the pre-launched VMs are set in ``bootargs`` + in the ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` file. + The ``module2 /boot/bzImage`` param ``XXXXXX`` is the bzImage tag and must exactly match the ``kern_mod`` + in the ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` file. The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0, the parameter ``ACPI_VM0`` is VM0's ACPI tag and should not be modified. The module ``/boot/ACPI_VM1.bin`` is the binary of ACPI tables for pre-launched VM1, the parameter ``ACPI_VM1`` is @@ -231,6 +231,8 @@ Update Ubuntu GRUB to boot hypervisor and load kernel image .. code-block:: none GRUB_DEFAULT=ACRN_Logical_Partition + #GRUB_HIDDEN_TIMEOUT=0 + #GRUB_HIDDEN_TIMEOUT_QUIET=true GRUB_TIMEOUT=10 GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian` GRUB_CMDLINE_LINUX_DEFAULT="quiet splash" @@ -249,7 +251,7 @@ ..
rst-class:: numbered-step -Logical partition scenario startup check +Logical Partition Scenario Startup Check **************************************** #. Use these steps to verify that the hypervisor is properly running: diff --git a/doc/tutorials/using_windows_as_uos.rst b/doc/tutorials/using_windows_as_uos.rst index 9f1855778..19e353bda 100644 --- a/doc/tutorials/using_windows_as_uos.rst +++ b/doc/tutorials/using_windows_as_uos.rst @@ -21,7 +21,7 @@ In the following steps, you'll first create a Windows image in the Service VM, and then launch that image as a Guest VM. -Verified version +Verified Version ================ * Windows 10 Version: @@ -34,22 +34,22 @@ Verified version .. note:: - WHL needs following setting in BOIS: + WHL needs the following BIOS setting: set **DVMT Pre-Allocated** to **64MB** and set **PM Support** to **Enabled**. -Create a Windows 10 image in the Service VM +Create a Windows 10 Image in the Service VM =========================================== Create a Windows 10 image to install Windows 10 onto a virtual disk. -Download Win10 image and drivers +Download Win10 Image and Drivers -------------------------------- #. Download `MediaCreationTool20H2.exe <https://www.microsoft.com/software-download/windows10>`_. - - Run this file and select **Create installation media(USB flash drive,DVD, or ISO file) for another PC**; - Then click **ISO file** to create ``windows10.iso``. + - Run this file and select **Create installation media (USB flash drive, DVD, or ISO file) for another PC**; + Then click **ISO file** to create ``Windows10.iso``. #. Download the `Oracle Windows driver <https://edelivery.oracle.com/osdc/faces/SoftwareDelivery>`_. @@ -66,7 +66,7 @@ Download Win10 image and drivers - Click **Download**. When the download is complete, unzip the file. You will see an ISO named ``winvirtio.iso``.
-Create a raw disk +Create a Raw Disk ----------------- Run these commands on the Service VM:: @@ -76,7 +76,7 @@ Run these commands on the Service VM:: $ cd /home/acrn/work $ qemu-img create -f raw win10-ltsc.img 30G -Prepare the script to create an image +Prepare the Script to Create an Image ------------------------------------- #. Refer :ref:`gpu-passthrough` to enable GVT-d GOP feature; then copy above .iso files and the built OVMF.fd to /home/acrn/work @@ -107,7 +107,7 @@ Prepare the script to create an image -s 2,passthru,0/2/0,gpu \ -s 8,virtio-net,tap0 \ -s 4,virtio-blk,/home/acrn/work/win10-ltsc.img - -s 5,ahci,cd:/home/acrn/work/windows.iso \ + -s 5,ahci,cd:/home/acrn/work/Windows10.iso \ -s 6,ahci,cd:/home/acrn/work/winvirtio.iso \ -s 7,passthru,0/14/0,d3hot_reset \ --ovmf /home/acrn/work/OVMF.fd \ @@ -130,7 +130,7 @@ Prepare the script to create an image echo $idx > /sys/class/vhm/acrn_vhm/offline_cpu fi done - launch_win 1 "64 448 8" + launch_win 1 Install Windows 10 by GVT-d --------------------------- @@ -208,14 +208,14 @@ When you see the UEFI shell, input **exit**. :align: center #. Download the `Intel DCH Graphics Driver - <https://downloadcenter.intel.com/download/30066?v=t>`__.in + <https://downloadcenter.intel.com/download/30066?v=t>`__ in Windows and install in safe mode. - The latest version(27.20.100.9030) was verified on WHL.You’d better use the same version as the one in native Windows 10 on your board. + Version 27.20.100.9030 was verified on WHL. You should use the same version as the one in native Windows 10 on your board. -Boot Windows on ACRN with a default configuration +Boot Windows on ACRN With a Default Configuration ================================================= -#. Prepare WaaG lauch script +#. Prepare WaaG launch script cp /home/acrn/work/install_win.sh /home/acrn/work/launch_win.sh @@ -223,10 +223,10 @@ Boot Windows on ACRN with a default configuration .. 
code-block:: bash - -s 5,ahci,cd:./windows.iso \ - -s 6,ahci,cd:./winvirtio.iso \ + -s 5,ahci,cd:/home/acrn/work/Windows10.iso \ + -s 6,ahci,cd:/home/acrn/work/winvirtio.iso \ -#. Lauch WaaG +#. Launch WaaG .. code-block:: bash @@ -235,14 +235,13 @@ Boot Windows on ACRN with a default configuration The WaaG desktop displays on the monitor. -ACRN Windows verified feature list +ACRN Windows Verified Feature List ********************************** .. csv-table:: :header: "Items", "Details", "Status" "IO Devices", "Virtio block as the boot device", "Working" - , "AHCI as the boot device", "Working" , "AHCI CD-ROM", "Working" , "Virtio network", "Working" , "Virtio input - mouse", "Working" @@ -257,7 +256,7 @@ ACRN Windows verified feature list , "Microsoft Store", "OK" , "3D Viewer", "OK" -Explanation for acrn-dm popular command lines +Explanation for acrn-dm Popular Command Lines ********************************************* .. note:: Use these acrn-dm command line entries according to your @@ -267,14 +266,10 @@ Explanation for acrn-dm popular command lines This is GVT-d to passthrough the VGA controller to Windows. You may need to change 0/2/0 to match the bdf of the VGA controller on your platform. -* ``-s 4,virtio-net,tap0``: +* ``-s 8,virtio-net,tap0``: This is for the network virtualization. -* ``-s 5,fbuf,tcp=0.0.0.0:5900,w=800,h=600``: - This opens port 5900 on the Service VM, which can be connected to via - ``vncviewer``. - -* ``-s 6,virtio-input,/dev/input/event4``: +* ``-s 3,virtio-input,/dev/input/event4``: This is to passthrough the mouse/keyboard to Windows via virtio. Change ``event4`` accordingly. 
Use the following command to check the event node on your Service VM:: @@ -282,22 +277,22 @@ Explanation for acrn-dm popular command lines <To get the input event of mouse> # cat /proc/bus/input/devices | grep mouse -* ``-s 7,ahci,cd:/home/acrn/work/Windows10.iso``: +* ``-s 5,ahci,cd:/home/acrn/work/Windows10.iso``: This is the ISO image used to install Windows 10. It appears as a CD-ROM - device. Make sure that the slot ID **7** points to your win10 ISO path. + device. Make sure that it points to your win10 ISO path. -* ``-s 8,ahci,cd:/home/acrn/work/winvirtio.iso``: +* ``-s 6,ahci,cd:/home/acrn/work/winvirtio.iso``: This is the CD-ROM device to install the virtio Windows driver. Make sure it points to your VirtIO ISO path. -* ``-s 9,passthru,0/14/0``: - This is to passthrough the USB controller to Windows. +* ``-s 7,passthru,0/14/0,d3hot_reset``: + This is to passthrough the USB controller to Windows; ``d3hot_reset`` is needed for WaaG reboot when the USB controller is passed through to Windows. You may need to change ``0/14/0`` to match the BDF of the USB controller on your platform. * ``--ovmf /home/acrn/work/OVMF.fd``: Make sure it points to your OVMF binary path. -Secure boot enabling +Secure Boot Enabling ******************** Refer to the steps in :ref:`How-to-enable-secure-boot-for-windows` for secure boot enabling. diff --git a/doc/tutorials/using_xenomai_as_uos.rst b/doc/tutorials/using_xenomai_as_uos.rst index fb758b5ac..404129b25 100644 --- a/doc/tutorials/using_xenomai_as_uos.rst +++ b/doc/tutorials/using_xenomai_as_uos.rst @@ -1,6 +1,6 @@ .. _using_xenomai_as_uos: -Run Xenomai as the User VM OS (Real-time VM) +Run Xenomai as the User VM OS (Real-Time VM) ############################################ `Xenomai`_ is a versatile real-time framework that provides support to user space applications that are seamlessly integrated into Linux environments. @@ -9,7 +9,7 @@ This tutorial describes how to run Xenomai as the User VM OS (real-time VM) on t ..
_Xenomai: https://gitlab.denx.de/Xenomai/xenomai/-/wikis/home -Build the Xenomai kernel +Build the Xenomai Kernel ************************ Follow these instructions to build the Xenomai kernel: @@ -92,7 +92,7 @@ Launch the RTVM clr-c1ff5bba8c3145ac8478e8e1f96e1087 login: -Install the Xenomai libraries and tools +Install the Xenomai Libraries and Tools *************************************** To build and install Xenomai tools or its libraries in the RVTM, refer to the official diff --git a/doc/tutorials/using_yp.rst b/doc/tutorials/using_yp.rst index 1333ff2f0..1edba161a 100644 --- a/doc/tutorials/using_yp.rst +++ b/doc/tutorials/using_yp.rst @@ -1,6 +1,6 @@ .. _using_yp: -Using Yocto Project with ACRN +Using Yocto Project With ACRN ############################# The `Yocto Project <https://yoctoproject.org>`_ (YP) is an open source @@ -16,7 +16,7 @@ components, and software components. Layers are repositories containing related sets of instructions that tell the Yocto Project build system what to do. -The meta-acrn layer +The meta-acrn Layer ******************* The meta-acrn layer integrates the ACRN hypervisor with OpenEmbedded, diff --git a/doc/tutorials/vuart_configuration.rst b/doc/tutorials/vuart_configuration.rst index 1af51c725..0cd0c31fe 100644 --- a/doc/tutorials/vuart_configuration.rst +++ b/doc/tutorials/vuart_configuration.rst @@ -10,26 +10,11 @@ The virtual universal asynchronous receiver/transmitter (vUART) supports two functions: one is the console, the other is communication. vUART only works on a single function. -Currently, only two vUART configurations are added to the -``misc/vm_configs/scenarios/<xxx>/vm_configuration.c`` file, but you can -change the value in it. +Only two vUART configurations are added to the predefined scenarios, +but you can customize the scenarios to enable more using the :ref:`ACRN +configuration toolset <acrn_config_workflow>`. -.. 
code-block:: none - - .vuart[0] = { - .type = VUART_LEGACY_PIO, - .addr.port_base = INVALID_COM_BASE, - }, - .vuart[1] = { - .type = VUART_LEGACY_PIO, - .addr.port_base = INVALID_COM_BASE, - } - -``vuart[0]`` is initiated as the **console** port. - -``vuart[1]`` is initiated as a **communication** port. - -Console enable list +Console Enable List =================== +-----------------+-----------------------+--------------------+----------------+----------------+ @@ -50,7 +35,7 @@ Console enable list .. _how-to-configure-a-console-port: -How to configure a console port +How to Configure a Console Port =============================== To enable the console port for a VM, change only the ``port_base`` and @@ -75,7 +60,7 @@ Example: .. _how-to-configure-a-communication-port: -How to configure a communication port +How to Configure a Communication Port ===================================== To enable the communication port, configure ``vuart[1]`` in the two VMs that want to communicate. @@ -84,7 +69,7 @@ The port_base and IRQ should differ from the ``vuart[0]`` in the same VM. ``t_vuart.vm_id`` is the target VM's vm_id, start from 0. (0 means VM0) -``t_vuart.vuart_id`` is the target vuart index in the target VM. Start +``t_vuart.vuart_id`` is the target vUART index in the target VM. Start from ``1``. 
(``1`` means ``vuart[1]``) Example: @@ -111,7 +96,7 @@ Example: .t_vuart.vuart_id = 1U, }, -Communication vUART enable list +Communication vUART Enable List =============================== +-----------------+-----------------------+--------------------+---------------------+----------------+ @@ -128,7 +113,7 @@ Communication vUART enable list | Logic_partition | Pre-launched | Pre-launched RTVM | | | +-----------------+-----------------------+--------------------+---------------------+----------------+ -Launch script +Launch Script ============= - ``-s 1:0,lpc -l com1,stdio`` @@ -139,7 +124,7 @@ Launch script - ``-B " ....,console=ttyS0, ..."`` Add this to the kernel-based system. -Test the communication port +Test the Communication Port =========================== After you have configured the communication port in hypervisor, you can @@ -172,7 +157,7 @@ access the corresponding port. For example, in Linux OS: - This cannot be used to transfer files because flow control is not supported so data may be lost. -vUART design +vUART Design ============ **Console vUART** @@ -187,7 +172,7 @@ vUART design :align: center :name: communication-vuart -COM port configurations for Post-Launched VMs +COM Port Configurations for Post-Launched VMs ============================================= For a post-launched VM, the ``acrn-dm`` cmdline also provides a COM port configuration: @@ -215,7 +200,7 @@ started, as shown in the diagram below: .. note:: For operating systems such as VxWorks and Windows that depend on the ACPI table to probe the UART driver, adding the vUART configuration in - the hypervisor is not sufficient. Currently, we recommend that you use + the hypervisor is not sufficient. We recommend that you use the configuration in the figure 3 data flow. This may be refined in the future. @@ -244,7 +229,7 @@ to 8 vUART for each VM, from ``vuart_idx=0`` to ``vuart_idx=7``. Suppose we use vUART0 for a port with ``vuart_idx=0``, vUART1 for ``vuart_idx=1``, and so on. 
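The communication-port test described above can be sketched as two console sessions. This is a hedged example, assuming VM0 and VM1 each have a connected ``vuart[1]`` that appears as ``/dev/ttyS1``; your device node and VM numbering may differ depending on your configuration:

```console
# In VM0: listen on the communication vUART
cat /dev/ttyS1

# In VM1: send a message to VM0 through its connected vUART
echo "hello from VM1" > /dev/ttyS1
```

Because flow control is not supported, treat this only as a connectivity check, not a file-transfer channel.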
-Please pay attention to these points: +Pay attention to these points: * vUART0 is the console port, vUART1-vUART7 are inter-VM communication ports. * Each communication port must set the connection to another communication vUART port of another VM. @@ -266,8 +251,8 @@ vUART settings. Configuration tools will override your settings in and :ref:`How to Configure a Communication Port <how-to-configure-a-communication-port>`. -You can configure both Legacy vUART and PCI-vUART in -``./misc/vm_configs/xmls/config-xmls/<board>/<scenario>.xml``. For +You can configure both Legacy vUART and PCI-vUART in :ref:`scenario +configurations <acrn_config_types>`. For example, if VM0 has a legacy vUART0 and a PCI-vUART1, and VM1 has no legacy vUART but has a PCI-vUART0 and a PCI-vUART1, VM0's PCI-vUART1 and VM1's PCI-vUART1 are connected to each other. You should configure them like this: @@ -319,7 +304,7 @@ The ACRN vUART related XML fields: legacy ``vuart[0]`` configuration, ``id=1`` is for ``vuart[1]``. - ``type`` in ``<legacy_vuart>``, type is always ``VUART_LEGACY_PIO`` for legacy vUART. - - ``base`` in ``<legacy_vuart>``, if use the legacy vUART port, set + - ``base`` in ``<legacy_vuart>``, if using the legacy vUART port, set ``COM1_BASE`` for ``vuart[0]``, set ``COM2_BASE`` for ``vuart[1]``. ``INVALID_COM_BASE`` means do not use the legacy vUART port. - ``irq`` in ``<legacy_vuart>``, if you use the legacy vUART port, set @@ -334,13 +319,13 @@ The ACRN vUART related XML fields: Run the command to build ACRN with this XML configuration file:: - make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/<board>.xml \ - SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/<board>/<scenario>.xml + make BOARD=<board> SCENARIO=<scenario> The configuration tools will test your settings, and check :ref:`vUART Rules <index-of-vuart>` for compilation issues.
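The legacy vUART fields described above can be sketched as a scenario XML fragment. This is illustrative, not a drop-in configuration; the element names follow the ``<legacy_vuart>`` schema discussed here, while the target VM and COM constants are assumptions for a VM0-to-VM1 connection:

```xml
<legacy_vuart id="0">
    <type>VUART_LEGACY_PIO</type>
    <base>COM1_BASE</base>              <!-- console port; INVALID_COM_BASE disables it -->
    <irq>COM1_IRQ</irq>
</legacy_vuart>
<legacy_vuart id="1">
    <type>VUART_LEGACY_PIO</type>
    <base>COM2_BASE</base>              <!-- communication port -->
    <irq>COM2_IRQ</irq>
    <target_vm_id>1</target_vm_id>      <!-- connect to VM1... -->
    <target_uart_id>1</target_uart_id>  <!-- ...at its vuart[1] -->
</legacy_vuart>
```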
After compiling, you can find -``./misc/vm_configs/scenarios/<scenario>/<board>/pci_dev.c`` has been -changed by the configuration tools based on the XML settings, something like: +the generated sources under +``build/hypervisor/configs/scenarios/<scenario>/pci_dev.c``, +based on the XML settings, something like: .. code-block:: none @@ -357,7 +342,7 @@ changed by the configuration tools based on the XML settings, something like: }, } -This struct shows a PCI-vUART with ``vuart_idx=1``, ``BDF 00:05.0``, its +This struct shows a PCI-vUART with ``vuart_idx=1``, ``BDF 00:05.0``, it's a PCI-vUART1 of VM0, and it is connected to VM1's vUART1 port. When VM0 wants to communicate with VM1, it can use ``/dev/ttyS*``, the character device file of @@ -365,7 +350,7 @@ VM0's PCI-vUART1. Usually, legacy ``vuart[0]`` is ``ttyS0`` in VM, and ``vuart[1]`` is ``ttyS1``. So we hope PCI-vUART0 is ``ttyS0``, PCI-VUART1 is ``ttyS1`` and so on through PCI-vUART7 is ``ttyS7``, but that is not true. We can use BDF to identify -PCI-vUART in VM. +PCI-vUART in VM. If you run ``dmesg | grep tty`` at a VM shell, you may see: @@ -398,11 +383,11 @@ symbols set: CONFIG_SERIAL_8250_EXTENDED=y CONFIG_SERIAL_8250_DETECT_IRQ=y -Kernel Cmdline for PCI-vUART console +Kernel Cmdline for PCI-vUART Console ==================================== When an ACRN VM does not have a legacy ``vuart[0]`` but has a PCI-vUART0, you can use PCI-vUART0 for VM serial input/output. Check -which tty has the BDF of PCI-vUART0; usually it is not ``/dev/ttyS0``. +which TTY has the BDF of PCI-vUART0; usually it is not ``/dev/ttyS0``. For example, if ``/dev/ttyS4`` is PCI-vUART0, you must set ``console=/dev/ttyS4`` in the kernel cmdline. diff --git a/doc/tutorials/waag-secure-boot.rst b/doc/tutorials/waag-secure-boot.rst index cdfe181f8..e59b9d31d 100644 --- a/doc/tutorials/waag-secure-boot.rst +++ b/doc/tutorials/waag-secure-boot.rst @@ -22,7 +22,7 @@ the OEM can generate their own PK. 
Here we show two ways to generate a PK: ``openssl`` and Microsoft tools. -Generate PK Using openssl +Generate PK Using Openssl ========================= - Generate a Self-Signed Certificate as PK from a new key using the @@ -128,7 +128,7 @@ Generate PK Using openssl openssl x509 -in PK.crt -outform der -out PK.der -Using Microsoft tools +Using Microsoft Tools ===================== Microsoft documents explain `how to use Microsoft tools to generate a secure boot key @@ -198,22 +198,24 @@ which we'll summarize below. HashAlgorithm = SHA256 KeyAlgorithm = RSA KeyLength = 2048 - KeyContainer = "{EA75381E-6D9B-4BDC-B6C7-5144C96507DD}" ProviderName = "Microsoft Strong Cryptographic Provider" KeyUsage = 0xf0 - Generate the Platform Key using ``certreq.exe``:: - C:\\PKtest> certreq.exe -new request.inf PKtest.cer - Installed Certificate: - Serial Number: 3f675d4b64156f9c48ccf30793121147 - Subject: CN=Intel Platform Key, O=Intel, L=Shanghai, S=Shanghai, C=CN - NotBefore: 6/26/2019 10:40 AM - NotAfter: 6/26/2025 10:50 AM - Thumbprint: ff2771bd5bd1f7086ab96fb9532b594ed8619c3b - Microsoft Strong Cryptographic Provider - 3d40ebea7d109ee93b238b96721f0e6d_4be58f30-7127-42f5-9b76-f47187495247 - CertReq: Certificate Created and Installed + C:\WINDOWS\system32>certreq.exe -v -new -binary request.inf PKtestDER.cer + Cert: 4 -> 4 + Years: 6 -> 6 + Installed Certificate: + Serial Number: 285c6f1ec39cc186495f8e55fa053593 + Subject: CN=Intel Platform Key, O=Intel, L=Shanghai, S=Shanghai, C=CN + NotBefore: 3/30/2021 10:30 55.000s + NotAfter: 3/30/2027 10:40 55.000s + Thumbprint: 8d79139f90b9fa47200eedbc8c29039869cc4adc + Microsoft Strong Cryptographic Provider + c387aac7266d5db5d81da8a6aa21c703_163d773d-a567-4430-aabf-893dc207fa3d + + CertReq: Certificate Created and Installed - Validate the Platform Key certificate has been generated correctly:: @@ -385,36 +387,7 @@ which we'll summarize below. Signature test passed CertUtil: -store command completed successfully. 
-- Convert ``PKtest.cer`` from Base-64 to DER format. - - OVMF secure boot key only supports DER encoded certificate. - - 1) open certificate by double clicking ``PKtest.cer`` and click "Copy to - File..." - - .. image:: images/waag_secure_boot_image1.png - :align: center - :width: 600px - - 2) Follow the certificate export wizard and select the format as - "DER encoded binary X.509 (.CER)" - - .. image:: images/waag_secure_boot_image2.png - :align: center - :width: 600px - - 3) Follow the wizard to save file and finish export - - .. image:: images/waag_secure_boot_image3.png - :align: center - :width: 600px - - You can rename ``PKtestDER.cer`` extension to ``PKtestDER.crt``. - A ``.cer`` file is an alternate form of ``.crt`` by Microsoft - Conventions. CRT and CER file extensions can be interchanged as - the encoding type is identical. - -Download KEK and DB from Microsoft +Download KEK and DB From Microsoft ********************************** KEK (Key Exchange Key): @@ -431,10 +404,9 @@ DB (Allowed Signature database): <https://go.microsoft.com/fwlink/p/?LinkID=321194>`_: Microsoft signer for third party UEFI binaries via DevCenter program. -Compile OVMF with secure boot support +Compile OVMF With Secure Boot Support ************************************* :: git clone https://github.com/projectacrn/acrn-edk2.git @@ -475,7 +447,7 @@ Notes: .. _qemu_inject_boot_keys: -Use QEMU to inject secure boot keys into OVMF +Use QEMU to Inject Secure Boot Keys Into OVMF ********************************************* We follow the `openSUSE: UEFI Secure boot using qemu-kvm document diff --git a/doc/user-guides/acrn-dm-parameters.rst b/doc/user-guides/acrn-dm-parameters.rst index 9e197fab2..be9f24ba4 100644 --- a/doc/user-guides/acrn-dm-parameters.rst +++ b/doc/user-guides/acrn-dm-parameters.rst @@ -10,392 +10,457 @@ emulation based on command line configurations, as introduced in Here are descriptions for each of these ``acrn-dm`` command line parameters: -..
list-table:: - :widths: 22 78 - :header-rows: 0 +``-A``, ``--acpi`` + Create ACPI tables. With this option, DM will build an ACPI table into its + VM's F-Segment (0xf2400). This ACPI table includes full tables for RSDP, + RSDT, XSDT, MADT, FADT, HPET, MCFG, FACS, and DSDT. All these items are + programmed according to acrn-dm command line configuration and derived from + their default values. - * - :kbd:`-A, --acpi` - - Create ACPI tables. - With this option, DM will build an ACPI table into its VMs F-Segment - (0xf2400). This ACPI table includes full tables for RSDP, RSDT, XSDT, - MADT, FADT, HPET, MCFG, FACS, and DSDT. All these items are programmed - according to acrn-dm command line configuration and derived from their - default value. +---- - * - :kbd:`-B, --bootargs <bootargs>` - - Set the User VM kernel command-line arguments. - The maximum length is 1023. - The bootargs string will be passed to the kernel as its cmdline. +``-B``, ``--bootargs <bootargs>`` + Set the User VM kernel command-line arguments. The maximum length is 1023. + The bootargs string will be passed to the kernel as its cmdline. - Example:: + Example:: - -B "loglevel=7" + -B "loglevel=7" - specifies the kernel log level at 7 + specifies the kernel log level at 7 - * - :kbd:`--debugexit` - - Enable guest to write io port 0xf4 to exit guest. It's mainly used by - guest unit test. +---- - * - :kbd:`-E, --elf_file <elf image path>` - - This option is to define a static elf binary which could be loaded by - DM. DM will run elf as guest of ACRN. +``--debugexit`` + Enable the guest to exit by writing to I/O port 0xf4. It's mainly used by guest + unit tests. - * - :kbd:`--enable_trusty` - - Enable trusty for guest. - For Android guest OS, ACRN provides a VM environment with two worlds: - normal world and trusty world. The Android OS runs in the the normal - world. The trusty OS and security sensitive applications runs in the - trusty world.
The trusty world can see the memory of normal world but - not vice versa. See :ref:`trusty_tee` for more information. +---- - By default, the trusty world is disabled. Use this option to enable it. +``-E``, ``--elf_file <elf image path>`` + This option defines a static ELF binary that can be loaded by the DM. + The DM will run the ELF binary as an ACRN guest. - * - :kbd:`-G, --gvtargs <GVT_args>` - - ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This - option allows you to set some of its parameters. +---- - GVT_args format: ``low_gm_sz high_gm_sz fence_sz`` +``--enable_trusty`` + Enable trusty for the guest. For Android guest OS, ACRN provides a VM + environment with two worlds: normal world and trusty world. The Android + OS runs in the normal world. The trusty OS and security sensitive + applications run in the trusty world. The trusty world can see the memory + of the normal world but not vice versa. See :ref:`trusty_tee` for more + information. - Where: + By default, the trusty world is disabled. Use this option to enable it. - - ``low_gm_sz``: GVT-g aperture size, unit is MB - - ``high_gm_sz``: GVT-g hidden gfx memory size, unit is MB - - ``fence_sz``: the number of fence registers +---- - Example:: +``-G``, ``--gvtargs <GVT_args>`` + ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This + option allows you to set some of its parameters. - -G "10 128 6" + GVT_args format: ``low_gm_sz high_gm_sz fence_sz`` - sets up 10Mb for GVT-g aperture, 128M for GVT-g hidden - memory, and 6 fence registers. + Where: - * - :kbd:`-h, --help` - - Show a summary of commands. + - ``low_gm_sz``: GVT-g aperture size, unit is MB + - ``high_gm_sz``: GVT-g hidden gfx memory size, unit is MB + - ``fence_sz``: the number of fence registers - * - :kbd:`-i, --ioc_node <ioc_mediator_parameters>` - - IOC (IO Controller) is a bridge of an SoC to communicate with Vehicle Bus.
- It routes Vehicle Bus signals, for example extracted from CAN messages, - from IOC to the SoC and back, as well as controlling the onboard - peripherals from SoC. (The ``-i`` and ``-l`` parameters are only - available on a platform with IOC.) + Example:: - IOC DM opens ``/dev/ptmx`` device to create a peer PTY devices, IOC DM uses - these to communicate with UART DM since UART DM needs a TTY capable - device as its backend. + -G "10 128 6" - The device model configuration command syntax for IOC mediator is:: + sets up 10 MB for the GVT-g aperture, 128 MB for GVT-g hidden memory, and 6 + fence registers. - -i,[ioc_channel_path],[wakeup_reason] - -l,[lpc_port],[ioc_channel_path] +---- - - ``ioc_channel_path`` is an absolute path for communication between IOC - mediator and UART DM. - - ``lpc_port`` is com1 or com2. IOC mediator needs one unassigned lpc - port for data transfer between User OS and Service OS. - - ``wakeup_reason`` is IOC mediator boot reason, where each bit represents - one wakeup reason. +``-h``, ``--help`` + Show a summary of commands. - Currently the wakeup reason bits supported by IOC firmware are: +---- - - ``CBC_WK_RSN_BTN`` (bit 5): ignition button. - - ``CBC_WK_RSN_RTC`` (bit 9): RTC timer. - - ``CBC_WK_RSN_DOR`` (bit 11): Car door. - - ``CBC_WK_RSN_SOC`` (bit 23): SoC active/inactive. +``-i``, ``--ioc_node <ioc_mediator_parameters>`` + IOC (IO Controller) is a bridge that lets an SoC communicate with the Vehicle Bus. + It routes Vehicle Bus signals, for example extracted from CAN messages, + from IOC to the SoC and back, as well as controlling the onboard + peripherals from SoC. (The ``-i`` and ``-l`` parameters are only available + on a platform with IOC.)
- As an example, the following commands are used to enable IOC feature, the - initial wakeup reason is ignition button, and cbc_attach uses ttyS1 for - TTY line discipline in User VM:: + The IOC DM opens the ``/dev/ptmx`` device to create peer PTY devices; it + uses these to communicate with the UART DM, since the UART DM needs a + TTY-capable device as its backend. - -i /run/acrn/ioc_$vm_name,0x20 - -l com2,/run/acrn/ioc_$vm_name + The device model configuration command syntax for IOC mediator is:: - * - :kbd:`--intr_monitor <intr_monitor_params>` - - Enable interrupt storm monitor for User VM. Use this option to prevent an interrupt - storm from the User VM. + -i,[ioc_channel_path],[wakeup_reason] + -l,[lpc_port],[ioc_channel_path] - usage: ``--intr_monitor threshold/s probe-period(s) delay_time(ms) delay_duration(ms)`` - - ``ioc_channel_path`` is an absolute path for communication between IOC + mediator and UART DM. + - ``lpc_port`` is com1 or com2. The IOC mediator needs one unassigned LPC + port for data transfer between the User OS and the Service OS. + - ``wakeup_reason`` is the IOC mediator boot reason, where each bit represents + one wakeup reason. - Example:: + Currently the wakeup reason bits supported by IOC firmware are: - --intr_monitor 10000,10,1,100 + - ``CBC_WK_RSN_BTN`` (bit 5): ignition button. + - ``CBC_WK_RSN_RTC`` (bit 9): RTC timer. + - ``CBC_WK_RSN_DOR`` (bit 11): Car door. + - ``CBC_WK_RSN_SOC`` (bit 23): SoC active/inactive. - - ``10000``: interrupt rate larger than 10000/s will be treated as interrupt - storm - - ``10``: use the last 10s of interrupt data to detect an interrupt storm - - ``1``: when interrupts are identified as a storm, the next interrupt will - be delayed 1ms before being injected to the guest - - ``100``: after 100ms, we will cancel the interrupt injection delay and restore - to normal.
+ As an example, the following commands enable the IOC feature with the + ignition button as the initial wakeup reason; cbc_attach uses ttyS1 for the + TTY line discipline in the User VM:: + -i /run/acrn/ioc_$vm_name,0x20 + -l com2,/run/acrn/ioc_$vm_name - * - :kbd:`-k, --kernel <kernel_image_path>` - - Set the kernel (full path) for the User VM kernel. The maximum path length is - 1023 characters. The DM handles bzImage image format. +---- - usage: ``-k /path/to/your/kernel_image`` +``--intr_monitor <intr_monitor_params>`` + Enable interrupt storm monitor for User VM. Use this option to prevent an + interrupt storm from the User VM. - * - :kbd:`-l, --lpc <lpc_device_configuration>` - - (See :kbd:`-i, --ioc_node`) + usage: ``--intr_monitor threshold/s probe-period(s) delay_time(ms) delay_duration(ms)`` - * - :kbd:`-m, --memsize <memory_size>` - - Setup total memory size for User VM. + Example:: - memory_size format is: "<size>{K/k, B/b, M/m, G/g}", and size is an - integer. + --intr_monitor 10000,10,1,100 - usage: ``-m 4g``: set User VM memory to 4 gigabytes. + - ``10000``: interrupt rate larger than 10000/s will be treated as an + interrupt storm - * - :kbd:`--mac_seed <seed_string>` - - Set a platform unique string as a seed to generate the mac address. - Each VM should have a different "seed_string". The "seed_string" can - be generated by the following method where $(vm_name) contains the - name of the VM you are going to launch. + - ``10``: use the last 10s of interrupt data to detect an interrupt storm + - ``1``: when interrupts are identified as a storm, the next interrupt + will be delayed 1ms before being injected into the guest + - ``100``: after 100ms, we will cancel the interrupt injection delay and + restore normal operation. - ``mac=$(cat /sys/class/net/e*/address)`` +---- - ``seed_string=${mac:9:8}-${vm_name}`` +``-k``, ``--kernel <kernel_image_path>`` + Set the kernel (full path) for the User VM kernel. The maximum path length + is 1023 characters.
The DM handles bzImage image format. - * - :kbd:`--part_info <part_info_name>` - - Set guest partition info path. + usage: ``-k /path/to/your/kernel_image`` - * - :kbd:`-r, --ramdisk <ramdisk_image_path>` - - Set the ramdisk (full path) for the User VM. The maximum length is 1023. - The supported ramdisk format depends on your User VM kernel configuration. +---- - usage: ``-r /path/to/your/ramdisk_image`` +``-l``, ``--lpc <lpc_device_configuration>`` + (See ``-i``, ``--ioc_node``) - * - :kbd:`-s, --pci_slot <slot_config>` - - Setup PCI device configuration. +---- - slot_config format is:: +``-m``, ``--memsize <memory_size>`` + Set up the total memory size for the User VM. - <bus>:<slot>:<func>,<emul>[,<config>] - <slot>[:<func>],<emul>[,<config>] + memory_size format is: "<size>{K/k, B/b, M/m, G/g}", and size is an + integer. - Where: + usage: ``-m 4g``: sets User VM memory to 4 gigabytes. - - ``slot`` is 0..31 - - ``func`` is 0..7 - - ``emul`` is a string describing the type of PCI device e.g. virtio-net - - ``config`` is an optional device-dependent string, used for - configuration. +---- - Examples:: +``--mac_seed <seed_string>`` + Set a platform-unique string as a seed to generate the MAC address. Each + VM should have a different "seed_string". The "seed_string" can be + generated by the following method, where ``$vm_name`` contains the name of + the VM you are going to launch. - -s 7,xhci,1-2,2-2 + .. code-block:: shell - This configuration means the virtual xHCI will appear in PCI slot 7 - in User VM. Any physical USB device attached on 1-2 (bus 1, port 2) or - 2-2 (bus 2, port 2) will be detected by User VM and be used as expected. To - determine which bus and port a USB device is attached, you could run - ``lsusb -t`` in Service VM. + mac=$(cat /sys/class/net/e*/address) + seed_string=${mac:9:8}-${vm_name} - :: +---- - -s 9,virtio-blk,/root/test.img +``--part_info <part_info_name>`` + Set guest partition info path.
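The ``--mac_seed`` derivation above can be checked as a small standalone script. This is an illustrative sketch only: the MAC address and VM name below are hypothetical, and on a real Service VM ``mac`` would be read from ``/sys/class/net/e*/address``:

```shell
#!/bin/bash
# Hypothetical values for illustration; on a real Service VM use:
#   mac=$(cat /sys/class/net/e*/address)
mac="00:1b:21:aa:bb:cc"
vm_name="vm1"

# ${mac:9:8} takes 8 characters starting at offset 9, i.e. the last
# three octets of the MAC, so the seed is unique per platform;
# appending the VM name makes it unique per VM as well.
seed_string="${mac:9:8}-${vm_name}"
echo "${seed_string}"
```

For the values above this prints ``aa:bb:cc-vm1``, which would then be passed to the DM as ``--mac_seed aa:bb:cc-vm1``.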
- This add virtual block in PCI slot 9 and use ``/root/test.img`` as the - disk image +---- - * - :kbd:`-U, --uuid <uuid>` - - Set UUID for a VM. - Every VM is identified by a UUID. You can define that UUID with this - option. If you don't use this option, a default one - ("d2795438-25d6-11e8-864e-cb7a18b34643") will be used. +``-r``, ``--ramdisk <ramdisk_image_path>`` + Set the ramdisk (full path) for the User VM. The maximum length is 1023 + characters. The supported ramdisk format depends on your User VM kernel + configuration. - usage:: + usage: ``-r /path/to/your/ramdisk_image`` - -u "42795636-1d31-6512-7432-087d33b34756" +---- - set the newly created VM's UUID to ``42795636-1d31-6512-7432-087d33b34756`` +``-s``, ``--pci_slot <slot_config>`` + Set up the PCI device configuration. - * - :kbd:`-v, --version` - - Show Device Model version + slot_config format is:: - * - :kbd:`--vsbl <vsbl_file_path>` - - Virtual Slim bootloader (vSBL) is the virtual bootloader supporting - booting of the User VM on the ACRN hypervisor platform. The vSBL design is - derived from Slim Bootloader, which follows a staged design approach - that provides hardware initialization and launching a payload that - provides the boot logic. + <bus>:<slot>:<func>,<emul>[,<config>] + <slot>[:<func>],<emul>[,<config>] - The vSBL image is installed on the Service OS root filesystem by the - service-os bundle, in ``/usr/share/acrn/bios/``. In the current design, - the vSBL supports booting Android guest OS or Linux guest OS using the - same vSBL image. For Android VM, the vSBL will load and verify trusty OS - first, and trusty OS will then load and verify Android OS according to - Android OS verification mechanism. + Where: - .. note:: - vSBL is currently only supported on Apollo Lake processors. + - ``slot`` is 0..31 + - ``func`` is 0..7 + - ``emul`` is a string describing the type of PCI device, e.g. + virtio-net + - ``config`` is an optional device-dependent string, used for + configuration.
- usage:: + Examples:: - --vsbl /usr/share/acrn/bios/VSBL.bin + -s 7,xhci,1-2,2-2 - uses ``/usr/share/acrn/bios/VSBL.bin`` as the vSBL image + This configuration means the virtual xHCI will appear in PCI slot 7 + in the User VM. Any physical USB device attached on 1-2 (bus 1, port 2) or + 2-2 (bus 2, port 2) will be detected by the User VM and be used as + expected. To determine which bus and port a USB device is attached to, you + can run ``lsusb -t`` in the Service VM. - * - :kbd:`--ovmf [w,]<ovmf_file_path>` - :kbd:`--ovmf [w,]code=<ovmf_code_file>,vars=<ovmf_vars_file>` - - Open Virtual Machine Firmware (OVMF) is an EDK II based project to enable - UEFI support for Virtual Machines. + :: - ACRN does not support off-the-shelf OVMF builds targeted for QEMU and - KVM. Compatible OVMF images are included in the source tree, under - ``devicemodel/bios/``. + -s 9,virtio-blk,/root/test.img - usage:: + This adds a virtual block device in PCI slot 9 and uses ``/root/test.img`` + as the disk image. - --ovmf /usr/share/acrn/bios/OVMF.fd +---- - uses ``/usr/share/acrn/bios/OVMF.fd`` as the OVMF image +``-U``, ``--uuid <uuid>`` + Set UUID for a VM. Every VM is identified by a UUID. You can define that + UUID with this option. If you don't use this option, a default one + ("d2795438-25d6-11e8-864e-cb7a18b34643") will be used. - ACRN also supports using OVMF split images; ``OVMF_CODE.fd`` that contains - the OVMF firmware executable and ``OVMF_VARS.fd`` that contains the NV - data store. + usage:: - usage:: + -U "42795636-1d31-6512-7432-087d33b34756" - --ovmf code=/usr/share/acrn/bios/OVMF_CODE.fd,vars=/usr/share/acrn/bios/OVMF_VARS.fd - set the newly created VM's UUID to ``42795636-1d31-6512-7432-087d33b34756`` + sets the newly created VM's UUID to ``42795636-1d31-6512-7432-087d33b34756`` - ACRN supports the option "w" for OVMF. To preserve all changes in OVMF's - NV data store section, use this option to enable writeback mode.
+---- - Writeback mode is only enabled for the ``OVMF_VARS.fd`` file in case of - OVMF split images, the firmware executable (``OVMF_CODE.fd``) remains - read-only. +``-v``, ``--version`` + Show Device Model version. - usage:: +---- - --ovmf w,/usr/share/acrn/bios/OVMF.fd +``--vsbl <vsbl_file_path>`` + Virtual Slim bootloader (vSBL) is the virtual bootloader supporting booting + of the User VM on the ACRN hypervisor platform. The vSBL design is derived + from Slim Bootloader, which follows a staged design approach that provides + hardware initialization and launching a payload that provides the boot + logic. - * - :kbd:`--cpu_affinity <list of pCPUs>` - - list of pCPUs assigned to this VM. + The vSBL image is installed on the Service OS root filesystem by the + service-os bundle, in ``/usr/share/acrn/bios/``. In the current design, + the vSBL supports booting Android guest OS or Linux guest OS using the same + vSBL image. For an Android VM, the vSBL will load and verify trusty OS + first, and trusty OS will then load and verify Android OS according to the + Android OS verification mechanism. - Example:: + .. note:: + vSBL is currently only supported on Apollo Lake processors. - --cpu_affinity 1,3 + usage:: - to assign physical CPUs (pCPUs) 1 and 3 to this VM. + --vsbl /usr/share/acrn/bios/VSBL.bin - * - :kbd:`--virtio_poll <poll_interval>` - - Enable virtio poll mode with poll interval xxx ns. + uses ``/usr/share/acrn/bios/VSBL.bin`` as the vSBL image. - Example:: +---- - --virtio_poll 1000000 +``--ovmf [w,]<ovmf_file_path>`` ``--ovmf [w,]code=<ovmf_code_file>,vars=<ovmf_vars_file>`` + Open Virtual Machine Firmware (OVMF) is an EDK II based project to enable + UEFI support for Virtual Machines. - enable virtio poll mode with poll interval 1ms. + ACRN does not support off-the-shelf OVMF builds targeted for QEMU and KVM. + Compatible OVMF images are included in the source tree, under + ``devicemodel/bios/``.
- * - :kbd:`--acpidev_pt <HID>` - - This option is to enable ACPI device passthrough support. The ``HID`` is a - mandatory parameter for this option which is the Hardware ID of the ACPI - device. + usage:: - Example:: + --ovmf /usr/share/acrn/bios/OVMF.fd - --acpidev_pt MSFT0101 + uses ``/usr/share/acrn/bios/OVMF.fd`` as the OVMF image. - To pass through a TPM (which HID is MSFT0101) ACPI device to a User VM. + ACRN also supports using OVMF split images: ``OVMF_CODE.fd``, which + contains the OVMF firmware executable, and ``OVMF_VARS.fd``, which contains + the NV data store. - * - :kbd:`--mmiodev_pt <MMIO_Region>` - - This option is to enable MMIO device passthrough support. The ``MMIO_Region`` - is a mandatory parameter for this option which is the MMIO resource of the - MMIO device. The ``MMIO_Region`` needs to be the base address followed by - the length of the region, both separated by a comma. + usage:: - Example:: + --ovmf code=/usr/share/acrn/bios/OVMF_CODE.fd,vars=/usr/share/acrn/bios/OVMF_VARS.fd - --mmiodev_pt 0xFED40000,0x00005000 + ACRN supports the option "w" for OVMF. To preserve all changes in OVMF's + NV data store section, use this option to enable writeback mode. - To pass through a MMIO device to a User VM. The MMIO device has a MMIO region. - The base address of this region is 0xFED40000 and the size of the region - is 0x00005000. + Writeback mode is only enabled for the ``OVMF_VARS.fd`` file in the case of + OVMF split images; the firmware executable (``OVMF_CODE.fd``) remains + read-only. - * - :kbd:`--vtpm2 <sock_path>` - - This option is to enable virtual TPM support. The sock_path is a mandatory - parameter for this option which is the path of swtpm socket fd. + usage:: - * - :kbd:`-W, --virtio_msix` - - This option forces virtio to use single-vector MSI. - By default, any virtio-based devices will use MSI-X as its interrupt - method. If you want to use single-vector MSI interrupt, you can do so - using this option.
+ --ovmf w,/usr/share/acrn/bios/OVMF.fd - * - :kbd:`-Y, --mptgen` - - Disable MPtable generation. - The MultiProcessor Specification (MPS) for the x86 architecture is an - open standard describing enhancements to both operating systems and - firmware that allows them to work with x86-compatible processors in a - multi-processor configuration. MPS covers Advanced Programmable - Interrupt Controller (APIC) architectures. +---- - By default, DM will create the MPtable for you. Use this option to - disable it. +``--cpu_affinity <list of pCPUs>`` + list of pCPUs assigned to this VM. - * - :kbd:`--lapic_pt` - - This option is to create a VM with the local APIC (LAPIC) passed-through. - With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and - ``IO_COMPLETION_POLLING`` mode. This option is typically used for hard - real-time scenarios. + Example:: - By default, this option is not enabled. + --cpu_affinity 1,3 - * - :kbd:`--rtvm` - - This option is used to create a VM with real-time attributes. - With this option, a VM is created with ``GUEST_FLAG_RT`` and - ``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is - generally used for soft real-time scenarios (without ``--lapic_pt``) or - hard real-time scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``, - the Service VM cannot interfere with this kind of VM when it is - running. It can only be powered off from inside the VM itself. + to assign physical CPUs (pCPUs) 1 and 3 to this VM. - By default, this option is not enabled. +---- - * - :kbd:`--logger_setting <console,level=4;disk,level=4;kmsg,level=3>` - - This option sets the level of logging that is used for each log channel. - The general format of this option is ``<log channel>,level=<log level>``. - Different log channels are separated by a semi-colon (``;``). The various - log channels available are: ``console``, ``disk`` and ``kmsg``. The log - level ranges from 1 (``error``) up to 5 (``debug``). 
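Several of the acrn-dm options documented in this section (``-m``, ``--cpu_affinity``, ``-s``, ``-k``) are normally combined into a single ``acrn-dm`` invocation. The following is a hedged sketch that only assembles and prints such a command line for inspection; the kernel path, VM name, and slot assignments are hypothetical, not taken from this document:

```shell
#!/bin/bash
# Assemble a hypothetical acrn-dm command line from options documented
# above; this script only prints it and does not launch a VM.
args=(
  -m 4g                           # total User VM memory
  --cpu_affinity 1,3              # pin the VM's vCPUs to pCPUs 1 and 3
  -s 7,xhci,1-2,2-2               # virtual xHCI in PCI slot 7
  -s 9,virtio-blk,/root/test.img  # virtio block device in PCI slot 9
  -k /boot/bzImage                # User VM kernel (hypothetical path)
)
echo "acrn-dm ${args[*]} vm1"
```

On a real Service VM you would run ``acrn-dm`` directly (as root) with these arguments instead of echoing them.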
+``--virtio_poll <poll_interval>`` + Enable virtio poll mode with the given poll interval in nanoseconds. - By default, the log severity level is set to 4 (``info``). + Example:: - * - :kbd:`--pm_notify_channel <channel>` - - This option is used to define which channel could be used DM to - communicate with VM about power management event. + --virtio_poll 1000000 - ACRN supports three channels: ``ioc``, ``power button`` and ``uart``. + enables virtio poll mode with a 1 ms poll interval. - usage:: +---- - --pm_notify_channel ioc +``--acpidev_pt <HID>`` + This option enables ACPI device passthrough support. The ``HID`` is a + mandatory parameter for this option; it is the Hardware ID of the ACPI + device. - Use ioc as power management event motify channel. + Example:: - * - :kbd:`--pm_by_vuart [pty|tty],<node_path>` - - This option is used to set a user OS power management by virtual UART. - With acrn-dm UART emulation and hypervisor UART emulation and configure, - service OS can communicate with user OS through virtual UART. By this - option, service OS can notify user OS to shutdown itself by vUART. + --acpidev_pt MSFT0101 - It need work with `--pm_notify_channel` and PCI UART setting (lpc and -l). + To pass through a TPM (whose HID is MSFT0101) ACPI device to a User VM. - Example:: +---- - for general User VM, like LaaG or WaaG, it need set: - --pm_notify_channel uart --pm_by_vuart pty,/run/acrn/life_mngr_vm1 - -l com2,/run/acrn/life_mngr_vm1 - for RTVM, like RT-Linux: - --pm_notify_channel uart --pm_by_vuart tty,/dev/ttyS1 +``--mmiodev_pt <MMIO_Region>`` + This option enables MMIO device passthrough support. The + ``MMIO_Region`` is a mandatory parameter for this option; it is the MMIO + resource of the MMIO device. The ``MMIO_Region`` needs to be the base + address followed by the length of the region, both separated by a comma. - For different User VM, it can be configured as needed. + Example:: - * - :kbd:`--windows` - - This option is used to run Windows User VMs.
It supports Oracle - ``virtio-blk``, ``virtio-net`` and ``virtio-input`` devices for Windows - guests with secure boot. + --mmiodev_pt 0xFED40000,0x00005000 - usage:: + To pass through an MMIO device to a User VM. The MMIO device has an MMIO + region. The base address of this region is 0xFED40000 and the size of the + region is 0x00005000. - --windows +---- + +``--vtpm2 <sock_path>`` + This option enables virtual TPM support. The ``sock_path`` is a mandatory + parameter for this option; it is the path of the swtpm socket fd. + +---- + +``-W, --virtio_msix`` + This option forces virtio to use single-vector MSI. By default, any + virtio-based device will use MSI-X as its interrupt method. If you want + to use single-vector MSI interrupts, you can do so using this option. + +---- + +``-Y, --mptgen`` + Disable MPtable generation. The MultiProcessor Specification (MPS) for the + x86 architecture is an open standard describing enhancements to both + operating systems and firmware that allows them to work with x86-compatible + processors in a multi-processor configuration. MPS covers Advanced + Programmable Interrupt Controller (APIC) architectures. + + By default, DM will create the MPtable for you. Use this option to disable + it. + +---- + +``--lapic_pt`` + This option creates a VM with the local APIC (LAPIC) passed-through. + With this option, a VM is created with ``LAPIC_PASSTHROUGH`` and + ``IO_COMPLETION_POLLING`` mode. This option is typically used for hard + real-time scenarios. + + By default, this option is not enabled. + +---- + +``--rtvm`` + This option is used to create a VM with real-time attributes. With this + option, a VM is created with ``GUEST_FLAG_RT`` and + ``GUEST_FLAG_IO_COMPLETION_POLLING`` mode. This kind of VM is generally + used for soft real-time scenarios (without ``--lapic_pt``) or hard + real-time scenarios (with ``--lapic_pt``). With ``GUEST_FLAG_RT``, the + Service VM cannot interfere with this kind of VM when it is running.
It + can only be powered off from inside the VM itself. + + By default, this option is not enabled. + +---- + +``--logger_setting <console,level=4;disk,level=4;kmsg,level=3>`` + This option sets the level of logging that is used for each log channel. + The general format of this option is ``<log channel>,level=<log level>``. + Different log channels are separated by a semicolon (``;``). The various + log channels available are: ``console``, ``disk`` and ``kmsg``. The log + level ranges from 1 (``error``) up to 5 (``debug``). + + By default, the log severity level is set to 4 (``info``). + +---- + +``--pm_notify_channel <channel>`` + This option defines which channel the DM uses to communicate with the VM + about power management events. + + ACRN supports three channels: ``ioc``, ``power button`` and ``uart``. + + usage:: + + --pm_notify_channel ioc + + Use ioc as the power management event notify channel. + +---- + +``--pm_by_vuart [pty|tty],<node_path>`` + This option sets up User OS power management by virtual UART. + With acrn-dm UART emulation and hypervisor UART emulation configured, the + Service OS can communicate with the User OS through a virtual UART. With + this option, the Service OS can notify the User OS to shut itself down via + the vUART. + + It must be used together with ``--pm_notify_channel`` and the PCI UART + settings (``lpc`` and ``-l``). + + Example:: + + for a general User VM, such as LaaG or WaaG, set: + --pm_notify_channel uart --pm_by_vuart pty,/run/acrn/life_mngr_vm1 + -l com2,/run/acrn/life_mngr_vm1 + for an RTVM, such as RT-Linux: + --pm_notify_channel uart --pm_by_vuart tty,/dev/ttyS1 + + For a different User VM, it can be configured as needed. + +---- + +``--windows`` + This option is used to run Windows User VMs. It supports Oracle + ``virtio-blk``, ``virtio-net`` and ``virtio-input`` devices for Windows + guests with secure boot. + + usage:: + + --windows + +---- + +``--psram`` + This option enables Pseudo (Software) SRAM passthrough to the VM.
+ + usage:: + + --psram diff --git a/doc/user-guides/acrn-shell.rst b/doc/user-guides/acrn-shell.rst index e1e089330..f9c432fb8 100644 --- a/doc/user-guides/acrn-shell.rst +++ b/doc/user-guides/acrn-shell.rst @@ -54,7 +54,7 @@ The ACRN hypervisor shell supports the following commands: - Write ``value`` (in hexadecimal) to the Model-Specific Register (MSR) at index ``msr_index`` (in hexadecimal) for CPU ID ``pcpu_id`` -Command examples +Command Examples **************** The following sections provide further details and examples for some of these commands. diff --git a/doc/user-guides/hv-parameters.rst b/doc/user-guides/hv-parameters.rst index 7e97e1d87..d803f993f 100644 --- a/doc/user-guides/hv-parameters.rst +++ b/doc/user-guides/hv-parameters.rst @@ -3,7 +3,7 @@ ACRN Hypervisor Parameters ########################## -Generic hypervisor parameters +Generic Hypervisor Parameters ***************************** The ACRN hypervisor supports the following parameter: @@ -13,9 +13,15 @@ The ACRN hypervisor supports the following parameter: +=================+=============================+========================================================================================+ | | disabled | This disables the serial port completely. | | +-----------------------------+----------------------------------------------------------------------------------------+ -| ``uart=`` | bdf@<BDF value> | This sets the PCI serial port based on its BDF. e.g. ``bdf@0:18.1`` | +| ``uart=`` | bdf@<BDF value> | This sets the serial port PCI BDF (in HEX), e.g. ``bdf@0xc1`` | +| | | | +| | | BDF: Bus, Device, and Function (in HEX) of the serial PCI device. The BDF is packed | +| | | into a 16-bit WORD with format (B:8, D:5, F:3). For example, PCI device ``0:18.1`` | +| | | becomes ``0xc1`` | | +-----------------------------+----------------------------------------------------------------------------------------+ -| | port@<port address> | This sets the serial port address. 
| +| | port@<port address> | This sets the serial port PIO address, e.g. ``uart=port@0x3F8`` | +| +-----------------------------+----------------------------------------------------------------------------------------+ +| | mmio@<MMIO address> | This sets the serial port MMIO address, e.g. ``uart=mmio@0xfe040000`` | +-----------------+-----------------------------+----------------------------------------------------------------------------------------+ The Generic hypervisor parameters are specified in the GRUB multiboot/multiboot2 command. @@ -28,7 +34,7 @@ For example: insmod part_gpt insmod ext2 echo 'Loading ACRN hypervisor ...' - multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0:18.1 + multiboot --quirk-modules-after-kernel /boot/acrn.32.out uart=bdf@0xc1 module /boot/bzImage Linux_bzImage module /boot/bzImage2 Linux_bzImage2 } diff --git a/doc/user-guides/images/i915-image1.png b/doc/user-guides/images/i915-image1.png deleted file mode 100644 index f7bdfbae7..000000000 Binary files a/doc/user-guides/images/i915-image1.png and /dev/null differ diff --git a/doc/user-guides/images/i915-image2.png b/doc/user-guides/images/i915-image2.png deleted file mode 100644 index 3479b7c8b..000000000 Binary files a/doc/user-guides/images/i915-image2.png and /dev/null differ diff --git a/doc/user-guides/images/i915-image3.png b/doc/user-guides/images/i915-image3.png deleted file mode 100644 index ea1054707..000000000 Binary files a/doc/user-guides/images/i915-image3.png and /dev/null differ diff --git a/doc/user-guides/images/i915-image4.png b/doc/user-guides/images/i915-image4.png deleted file mode 100644 index 1d9e2b77a..000000000 Binary files a/doc/user-guides/images/i915-image4.png and /dev/null differ diff --git a/doc/user-guides/images/i915-image5.png b/doc/user-guides/images/i915-image5.png deleted file mode 100644 index a9aab67a2..000000000 Binary files a/doc/user-guides/images/i915-image5.png and /dev/null differ diff --git 
a/doc/user-guides/kernel-parameters.rst b/doc/user-guides/kernel-parameters.rst index 5ba7e3d29..7b428f4bb 100644 --- a/doc/user-guides/kernel-parameters.rst +++ b/doc/user-guides/kernel-parameters.rst @@ -3,7 +3,7 @@ ACRN Kernel Parameters ###################### -Generic kernel parameters +Generic Kernel Parameters ************************* A number of kernel parameters control the behavior of ACRN-based systems. Some @@ -207,7 +207,7 @@ relevant for configuring or debugging ACRN-based systems. from the guest VM. If hypervisor relocation is disabled, verify that - :option:`CONFIG_HV_RAM_START` and :option:`CONFIG_HV_RAM_SIZE` + :option:`hv.MEMORY.HV_RAM_START` and :option:`hv.MEMORY.HV_RAM_SIZE` does not overlap with the hypervisor's reserved buffer space allocated in the Service VM. Service VM GPA and HPA are a 1:1 mapping. @@ -342,22 +342,6 @@ section below has more details on a few select parameters. i915.enable_gvt=1 - * - i915.gvt_workload_priority - - Service VM - - Define the priority level of User VM graphics workloads - - :: - - i915.gvt_workload_priority=1 - - * - i915.enable_initial_modeset - - Service VM - - On MRB, value must be ``1``. On Intel NUC or UP2 boards, value must be - ``0``. See :ref:`i915-enable-initial-modeset`. - - :: - - i915.enable_initial_modeset=1 - i915.enable_initial_modeset=0 - * - i915.nuclear_pageflip - Service VM,User VM - Force enable atomic functionality on platforms that don't have full support yet. @@ -365,13 +349,6 @@ section below has more details on a few select parameters. i915.nuclear_pageflip=1 - * - i915.domain_scaler_owner - - Service VM - - See `i915.domain_scaler_owner`_ - - :: - - i915.domain_scaler_owner=0x021100 - * - i915.enable_guc - Service VM - Enable GuC load for HuC load. @@ -402,7 +379,7 @@ section below has more details on a few select parameters. .. 
_GVT-g-kernel-options: -GVT-g (AcrnGT) Kernel Options details +GVT-g (AcrnGT) Kernel Options Details ===================================== This section provides additional information and details on the kernel command @@ -416,76 +393,6 @@ support in the host. By default, it's not enabled, so we need to add ``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service OS only parameter, and cannot be enabled in the User VM. -i915.gvt_workload_priority --------------------------- - -AcrnGT supports **Prioritized Rendering** as described in the -:ref:`GVT-g-prioritized-rendering` high-level design. This -configuration option controls the priority level of GVT-g guests. -Priority levels range from -1023 to 1023. - -The default priority is zero, the same priority as the Service VM. If -the level is less than zero, the guest's priority will be lower than the -Service VM, so graphics preemption will work and the prioritized -rendering feature will be enabled. If the level is greater than zero, -User VM graphics workloads will preempt most of the Service VM graphics workloads, -except for display updating related workloads that use a default highest -priority (1023). - -Currently, all User VMs share the same priority. -This is a Service VM only parameters, and does -not work in the User VM. - -.. _i915-enable-initial-modeset: - -i915.enable_initial_modeset ---------------------------- - -At time, kernel graphics must be initialized with a valid display -configuration with full display pipeline programming in place before the -user space is initialized and without a fbdev & fb console. - -When ``i915.enable_initial_modeset=1``, the FBDEV of i915 will not be -initialized, so users would not be able to see the fb console on screen. -If there is no graphics UI running by default, users will see black -screens displayed. 
- -When ``i915.enable_initial_modeset=0`` in Service VM, the plane restriction -(also known as plane-based domain ownership) feature will be disabled. -(See the next section and :ref:`plane_restriction` in the ACRN GVT-g -High Level Design for more information about this feature.) - -In the current configuration, we will set -``i915.enable_initial_modeset=1`` in Service VM and -``i915.enable_initial_modeset=0`` in User VM. - -i915.domain_scaler_owner -======================== - -On each Intel GPU display pipeline, there are several plane scalers -to zoom in/out the planes. For example, if a 720p video is played -full-screen on a 1080p display monitor, the kernel driver will use a -scaler to zoom in the video plane to a 1080p image and present it onto a -display pipeline. (Refer to "Intel Open Source Graphics PRM Vol 7: -display" for the details.) - -On Broxton platforms, Pipe A and Pipe B each -have two plane scalers, and Pipe C has one plane scaler. To support the -plane scaling in AcrnGT guest OS, we introduced the parameter -``i915.domain_scaler_owner``, to assign a specific scaler to the target -guest OS. - -As with the parameter ``i915.domain_plane_owners``, each nibble of -``i915.domain_scaler_owner`` represents the domain id that owns the scaler; -every nibble (4 bits) represents a scaler and every group of 2 nibbles -represents a pipe. This is a Service VM only configuration and cannot be -modified at runtime. Domain ID 0x0 is for the Service VM, the User VM -use domain IDs from 0x1 to 0xF. - -For example, if we set ``i915.domain_scaler_owner=0x021100``, the Service VM -owns scaler 1A, 2A; User VM #1 owns scaler 1B, 2B; and User VM #2 owns scaler -1C. 
- i915.enable_hangcheck ===================== diff --git a/misc/README.rst b/misc/README.rst index fec19ddc6..803d26445 100644 --- a/misc/README.rst +++ b/misc/README.rst @@ -1,4 +1,4 @@ -ACRN tools +ACRN Tools ########## The open source `Project ACRN`_ defines a device hypervisor reference stack and diff --git a/misc/config_tools/schema/VMtypes.xsd b/misc/config_tools/schema/VMtypes.xsd index f1e031972..94f905d3b 100644 --- a/misc/config_tools/schema/VMtypes.xsd +++ b/misc/config_tools/schema/VMtypes.xsd @@ -29,14 +29,14 @@ <xs:simpleType name="GuestFlagsOptionsType"> <xs:annotation> <xs:documentation> -- 0, 0UL and empty string means there is no guest flag is enabled. -- ``GUEST_FLAG_SECURE_WORLD_ENABLED`` specify whether the secure world is +- ``0``, ``0UL``, or an empty string means no guest flags are enabled. +- ``GUEST_FLAG_SECURE_WORLD_ENABLED`` specifies that the secure world is enabled -- ``GUEST_FLAG_LAPIC_PASSTHROUGH`` specify whether LAPIC is passed through -- ``GUEST_FLAG_IO_COMPLETION_POLLING`` specify whether the hypervisor needs +- ``GUEST_FLAG_LAPIC_PASSTHROUGH`` specifies that LAPIC is passed through +- ``GUEST_FLAG_IO_COMPLETION_POLLING`` specifies that the hypervisor needs IO polling to completion -- ``GUEST_FLAG_HIDE_MTRR`` specify whether to hide MTRR from the VM -- ``GUEST_FLAG_RT`` specify whether the VM is RT-VM (real-time)</xs:documentation> +- ``GUEST_FLAG_HIDE_MTRR`` specifies that MTRR is hidden from the VM +- ``GUEST_FLAG_RT`` specifies that the VM is an RT-VM (real-time)</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="" /> @@ -64,7 +64,8 @@ <xs:sequence> <xs:element name="pcpu_id" type="xs:integer" default="2" maxOccurs="unbounded"> <xs:annotation> - <xs:documentation>A pCPU that this VM's vCPU is allowed to pin to.</xs:documentation> + <xs:documentation>A pCPU that this VM's vCPU is allowed to pin +to.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -76,7 +77,8 @@ <xs:annotation>
<xs:documentation>Configure each CPU in VMs to a desired CLOS ID in the ``VM`` section of the scenario file. Follow :ref:`rdt_detection_capabilities` -to identify the maximum supported CLOS ID that can be used.</xs:documentation> +to identify the maximum supported CLOS ID that can be used. Default +value ``0``.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -94,7 +96,8 @@ to identify the maximum supported CLOS ID that can be used.</xs:documentation> </xs:element> <xs:element name="size" type="HexFormat" default="0"> <xs:annotation> - <xs:documentation>SGX EPC section size in Bytes, must be page aligned.</xs:documentation> + <xs:documentation>SGX EPC section size in Bytes, must be page +aligned.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -104,22 +107,26 @@ to identify the maximum supported CLOS ID that can be used.</xs:documentation> <xs:sequence> <xs:element name="start_hpa" type="HexFormat" default="0x100000000"> <xs:annotation> - <xs:documentation>The start physical address in host for the VM.</xs:documentation> + <xs:documentation>The starting physical address in host for the +VM.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="size" type="MemorySizeType" default="0x20000000"> <xs:annotation> - <xs:documentation>The memory size in bytes for the VM.</xs:documentation> + <xs:documentation>The memory size in bytes for the VM. 
Default +value is ``0x20000000``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="start_hpa2" type="HexFormat" default="0x0" minOccurs="0"> <xs:annotation acrn:configurable="n"> - <xs:documentation>Start of second HPA for non-contiguous allocations in host for the VM.</xs:documentation> + <xs:documentation>Start of second HPA for non-contiguous +allocations in host for the VM.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="size_hpa2" type="MemorySizeType" default="0x0" minOccurs="0"> <xs:annotation acrn:configurable="n"> - <xs:documentation>Memory size of second HPA for non-contiguous allocations in Bytes for the VM.</xs:documentation> + <xs:documentation>Memory size of second HPA for non-contiguous +allocations in Bytes for the VM.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -128,10 +135,13 @@ to identify the maximum supported CLOS ID that can be used.</xs:documentation> <xs:complexType name="OSConfigurations"> <xs:sequence> <xs:element name="name"> - <xs:simpleType> <xs:annotation> - <xs:documentation>Specify the OS name of VM; currently, it is not referenced by the hypervisor code.String from 1 to 32 -characters long.</xs:documentation> + <xs:documentation>Specify the OS name of the VM. +It is not referenced by the hypervisor code.</xs:documentation> + </xs:annotation> + <xs:simpleType> + <xs:annotation> + <xs:documentation>A string with 1 to 32 characters.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:minLength value="1" /> @@ -141,8 +151,7 @@ characters long.</xs:documentation> </xs:element> <xs:element name="kern_type" type="VMKernelType"> <xs:annotation> - <xs:documentation>Specify the kernel image type so that the hypervisor can load it correctly.
-Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.</xs:documentation> + <xs:documentation>Specify the kernel image type so the hypervisor can load it correctly.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="kern_mod" type="xs:string"> @@ -177,8 +186,8 @@ must exactly match the module tag in the GRUB multiboot cmdline.</xs:documentati <xs:simpleType name="VMKernelType"> <xs:annotation> - <xs:documentation>Specify the kernel image type so that the hypervisor can load it correctly. -Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.</xs:documentation> + <xs:documentation>A string with either ``KERNEL_BZIMAGE`` or +``KERNEL_ZEPHYR``.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="KERNEL_BZIMAGE" /> @@ -198,7 +207,9 @@ Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.</xs:documentation> <xs:simpleType name="LegacyVuartBase"> <xs:annotation> - <xs:documentation>vUART (A.K.A COM) enabling switch. 
Enable by exposing its base address, disable by returning INVALID_COM_BASE.</xs:documentation> + <xs:documentation>A string with one of ``SOS_COM1_BASE``, +``SOS_COM2_BASE``, ``COM1_BASE``, ``COM2_BASE``, ``COM3_BASE``, +``COM4_BASE``, or ``INVALID_COM_BASE`` to indicate it is disabled.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="SOS_COM1_BASE" /> @@ -213,7 +224,9 @@ Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.</xs:documentation> <xs:simpleType name="LegacyVuartIrq"> <xs:annotation acrn:configurable="n"> - <xs:documentation>vCOM irq</xs:documentation> + <xs:documentation>A string with one of ``SOS_COM1_IRQ``, +``SOS_COM2_IRQ``, ``COM1_IRQ``, ``COM2_IRQ``, ``COM3_IRQ``, +``COM4_IRQ``, ``CONFIG_COM_IRQ``, ``3``, ``4``, ``6``, or ``7``.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="SOS_COM1_IRQ" /> @@ -230,17 +243,19 @@ Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``.</xs:documentation> </xs:restriction> </xs:simpleType> -<xs:complexType name="LegancyVuartConfiguration"> +<xs:complexType name="LegacyVuartConfiguration"> <xs:sequence> <xs:element name="type" type="LegacyVuartType" default="VUART_LEGACY_PIO"> <xs:annotation> - <xs:documentation>vUART (aka COM) type; currently only supports the legacy PIO mode.</xs:documentation> + <xs:documentation>vUART (COM) type; only legacy PIO mode is +supported.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="base" type="LegacyVuartBase"> <xs:annotation> - <xs:documentation>vUART (A.K.A COM) enabling switch. Enable by exposing its COM_BASE -(SOS_COM_BASE for Service VM); disable by returning INVALID_COM_BASE.</xs:documentation> + <xs:documentation>vUART (COM) enabling switch.
Enable by exposing its COM_BASE +(e.g., ``SOS_COM1_BASE`` for the Service VM); disable by returning +``INVALID_COM_BASE``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="irq" type="LegacyVuartIrq"> @@ -265,8 +280,8 @@ target VM the current VM connects to.</xs:documentation> <xs:simpleType name="PCIVuartBase"> <xs:annotation> - <xs:documentation>PCI based vUART enabling switch. Enable by specifying PCI_VUART; -disable by returning INVALID_PCI_BASE.</xs:documentation> + <xs:documentation>A string with ``PCI_VUART``, or +``INVALID_PCI_BASE`` to indicate it is disabled.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="PCI_VUART" /> @@ -279,7 +294,8 @@ disable by returning INVALID_PCI_BASE.</xs:documentation> <xs:element name="base" type="PCIVuartBase" default="INVALID_PCI_BASE"> <xs:annotation> <xs:documentation>Console vUART (A.K.A PCI based vUART) enabling switch. - Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.</xs:documentation> +Enable by specifying PCI_VUART; disable by specifying +``INVALID_PCI_BASE``.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -291,7 +307,8 @@ disable by returning INVALID_PCI_BASE.</xs:documentation> <xs:element name="base" type="PCIVuartBase" default="INVALID_PCI_BASE"> <xs:annotation> <xs:documentation>Communication vUART (A.K.A PCI based vUART) enabling switch.
-Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.</xs:documentation> +Enable by specifying PCI_VUART; disable by specifying +``INVALID_PCI_BASE``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="target_vm_id" type="xs:integer"> @@ -312,12 +329,13 @@ Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.</xs:docum <xs:sequence> <xs:element name="TPM2" type="Boolean" default="n" minOccurs="0"> <xs:annotation> - <xs:documentation>TPM2 device to passthrough.</xs:documentation> + <xs:documentation>Specify whether to pass through the TPM2 device.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="p2sb" type="Boolean" default="n" minOccurs="0"> <xs:annotation> - <xs:documentation>Exposing the P2SB (Primary-to-Sideband) bridge to the pre-launched VM.</xs:documentation> + <xs:documentation>Expose the P2SB (Primary-to-Sideband) bridge +to the pre-launched VM.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -327,7 +345,7 @@ Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.</xs:docum <xs:sequence> <xs:element name="pci_dev" type="xs:string" maxOccurs="unbounded"> <xs:annotation> - <xs:documentation>A passthrough pci device.</xs:documentation> + <xs:documentation>A passthrough PCI device.</xs:documentation> </xs:annotation> </xs:element> </xs:sequence> @@ -337,7 +355,7 @@ Enable by specifying PCI_VUART; disable by returning INVALID_PCI_BASE.</xs:docum <xs:sequence> <xs:element name="rootfs" type="xs:string" default="/dev/nvme0n1p3"> <xs:annotation> - <xs:documentation>rootfs for the Linux kernel.</xs:documentation> + <xs:documentation>Rootfs for the Linux kernel.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="bootargs" type="xs:string"> diff --git a/misc/config_tools/schema/config.xsd b/misc/config_tools/schema/config.xsd index ad6bcf10d..251500c1f 100644 --- a/misc/config_tools/schema/config.xsd +++ b/misc/config_tools/schema/config.xsd @@ -16,54 +16,55
@@ <xs:element name="RELEASE" type="Boolean" default="n"> <xs:annotation> <xs:documentation>Build an image for release (``y``) or debug (``n``). -In a **release** image, assertions are not enforced and these features -are **not** available: - -- logs -- serial console -- hypervisor shell</xs:documentation> +In a **release** image, assertions are not enforced and debugging +features are disabled, including logs, serial console, and the +hypervisor shell.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="SERIAL_CONSOLE" type="SerialConsoleOptions" default="/dev/ttyS0"> <xs:annotation> <xs:documentation>Specify the host serial device used for hypervisor debugging. -This option is only valid if the Service VM 'legacy_vuart` is -enabled. Leave this filed empty if the Service VM's 'console_vuart` is enabled. Uses -`bootargs` for `console_vuart` configuration.</xs:documentation> +This option is only valid if the Service VM :ref:`vm.legacy_vuart` is +enabled. Leave this field empty if the Service VM's :ref:`vm.console_vuart` is enabled. Uses +:option:`vm.os_config.bootargs` for :ref:`vm.console_vuart` +configuration.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MEM_LOGLEVEL" type="LogLevelType" default="5"> <xs:annotation> - <xs:documentation>Default loglevel for log messages stored in memory. -Messages with a lower severity (higher value) are discarded.</xs:documentation> + <xs:documentation>Default loglevel for log messages stored in +memory. Value can be changed at runtime.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="NPK_LOGLEVEL" type="LogLevelType" default="5"> <xs:annotation> - <xs:documentation>Default loglevel for the hypervisor NPK log.</xs:documentation> + <xs:documentation>Default loglevel for the hypervisor North Peak +(NPK) log. 
Value can be changed at runtime.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="CONSOLE_LOGLEVEL" type="LogLevelType" default="3"> <xs:annotation> <xs:documentation>Default loglevel for log messages -written to the serial console. Messages with lower severity (higher -value) are not displayed.</xs:documentation> +written to the serial console.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="LOG_DESTINATION" default="7"> - <xs:simpleType> <xs:annotation> <xs:documentation>Bitmap indicating the destination of log messages. -Currently there are three log destinations available: +There are three log destinations available: -- bit 0 for the serial console (``0x1``), -- bit 1 for the Service VM log (``0x2``), and -- bit 2 for the NPK log (``0x4``). +- Bit 0 enables the serial console (``0x1``), +- Bit 1 enables the Service VM log (``0x2``), and +- Bit 2 enables the NPK log (``0x4``). For example, a value of ``3`` enables only the serial console and Service VM logs. Effective only in debug builds (when :option:`hv.DEBUG_OPTIONS.RELEASE` is ``n``).</xs:documentation> </xs:annotation> + <xs:simpleType> + <xs:annotation> + <xs:documentation>Integer value from 0 to 7.</xs:documentation> + </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="0" /> <xs:maxInclusive value="7" /> @@ -73,7 +74,7 @@ serial console and Service VM logs. 
Effective only in debug builds (when <xs:element name="LOG_BUF_SIZE" type="HexFormat" default="0x40000"> <xs:annotation> <xs:documentation>Capacity (in bytes) of logbuf for each -physical cpu, for example, ``0x40000``.</xs:documentation> +physical CPU, for example, ``0x40000``.</xs:documentation> </xs:annotation> </xs:element> </xs:all> @@ -81,13 +82,14 @@ physical cpu, for example, ``0x40000``.</xs:documentation> <xs:complexType name="FeatureOptionsType"> <xs:annotation> - <xs:documentation>Options for hypervisor feature enablement.</xs:documentation> + <xs:documentation>Options for enabling hypervisor features.</xs:documentation> </xs:annotation> <xs:all> <xs:element name="RELOC" type="Boolean" default="y"> <xs:annotation> - <xs:documentation>Specify if hypervisor relocation is enabled on booting.</xs:documentation> + <xs:documentation>Specify if hypervisor relocation is enabled on +booting.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="SCHEDULER" type="SchedulerType" default="SCHED_BVT"> @@ -104,9 +106,9 @@ boot option.</xs:documentation> </xs:element> <xs:element name="ENFORCE_TURNOFF_AC" type="Boolean" default="y"> <xs:annotation> - <xs:documentation>Force to disable #AC for Split-locked Access.If CPU has #AC for -split-locked access, HV enable it and VMs can't disable. Set this to enforce turn off that -#AC, for community developer only.</xs:documentation> + <xs:documentation>Force to disable #AC for Split-locked Access. If CPU has #AC for +split-locked access, HV enables it and VMs can't disable it.
Set this to enforce turning off that +#AC, for debugging purposes only.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="RDT" type="RDTType"> @@ -134,7 +136,8 @@ of DMA operations.</xs:documentation> </xs:element> <xs:element name="L1D_VMENTRY_ENABLED" type="Boolean" default="n"> <xs:annotation> - <xs:documentation>Enable L1 cache flush before VM entry.</xs:documentation> + <xs:documentation>Enable L1 cache flush before VM entry. Default +value ``n``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MCE_ON_PSC_DISABLED" type="Boolean" default="n"> @@ -180,22 +183,25 @@ the RAM region used by the hypervisor.</xs:documentation> <xs:element name="LOW_RAM_SIZE" type="HexFormat" default="0x00010000"> <xs:annotation> <xs:documentation>Size of the low RAM region below address -``0x10000``, starting from address ``0x0``..</xs:documentation> +``0x10000``, starting from address ``0x0``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="UOS_RAM_SIZE" type="HexFormat" default="0x200000000"> <xs:annotation> - <xs:documentation>Size of the User VM OS RAM region.</xs:documentation> + <xs:documentation>Size of the User VM OS RAM region. Default +value ``0x200000000``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="SOS_RAM_SIZE" type="HexFormat" default="0x400000000"> <xs:annotation> - <xs:documentation>Size of the Service VM OS RAM region.</xs:documentation> + <xs:documentation>Size of the Service VM OS RAM region. Default +value ``0x400000000``.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="PLATFORM_RAM_SIZE" type="HexFormat" default="0x400000000"> <xs:annotation> - <xs:documentation>Size of the physical platform RAM.</xs:documentation> + <xs:documentation>Size of the physical platform RAM. 
Default +value ``0x400000000``.</xs:documentation> </xs:annotation> </xs:element> </xs:all> @@ -209,7 +215,8 @@ maximum supported resource.</xs:documentation> <xs:all> <xs:element name="IOMMU_BUS_NUM" type="HexFormat" default="0x100"> <xs:annotation> - <xs:documentation>Highest PCI bus ID used during IOMMU initialization.</xs:documentation> + <xs:documentation>Highest PCI bus ID used during IOMMU +initialization.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MAX_IR_ENTRIES" type="xs:integer" default="256"> @@ -218,10 +225,13 @@ maximum supported resource.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MAX_IOAPIC_NUM" default="1"> + <xs:annotation> + <xs:documentation>Maximum number of IOAPICs.</xs:documentation> + </xs:annotation> <xs:simpleType> - <xs:annotation> - <xs:documentation>Maximum number of IO-APICs. Integer from 1 to 10.</xs:documentation> - </xs:annotation> + <xs:annotation> + <xs:documentation>Integer from 1 to 10.</xs:documentation> + </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="1" /> <xs:maxInclusive value="10" /> @@ -230,14 +240,17 @@ maximum supported resource.</xs:documentation> </xs:element> <xs:element name="MAX_KATA_VM_NUM" type="xs:integer" minOccurs="0" default="0"> <xs:annotation> - <xs:documentation>>Maximum number of KATA VM.</xs:documentation> + <xs:documentation>Maximum number of KATA VMs.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MAX_PCI_DEV_NUM" default="96"> + <xs:annotation> + <xs:documentation>Maximum number of PCI devices.</xs:documentation> + </xs:annotation> <xs:simpleType> - <xs:annotation> - <xs:documentation>Maximum number of PCI devices.Integer from 1 to 1024.</xs:documentation> - </xs:annotation> + <xs:annotation> + <xs:documentation>Integer from 1 to 1024.</xs:documentation> + </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="1" /> <xs:maxInclusive value="1024" /> @@ -245,10 +258,13 @@ maximum
supported resource.</xs:documentation> </xs:simpleType> </xs:element> <xs:element name="MAX_IOAPIC_LINES" default="120"> + <xs:annotation> + <xs:documentation>Maximum number of interrupt lines per IOAPIC.</xs:documentation> + </xs:annotation> <xs:simpleType> - <xs:annotation> - <xs:documentation>Maximum number of interrupt lines per IOAPIC.Integer from 1 to 120.</xs:documentation> - </xs:annotation> + <xs:annotation> + <xs:documentation>Integer from 1 to 120.</xs:documentation> + </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="1" /> <xs:maxInclusive value="120" /> @@ -257,14 +273,17 @@ maximum supported resource.</xs:documentation> </xs:element> <xs:element name="MAX_PT_IRQ_ENTRIES" type="xs:integer" default="256"> <xs:annotation> - <xs:documentation>Maximum number of interrupt source for PT devices.</xs:documentation> + <xs:documentation>Maximum number of interrupt sources for PT +devices.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="MAX_MSIX_TABLE_NUM" default="64"> + <xs:annotation> + <xs:documentation>Maximum number of MSI-X tables per device.</xs:documentation> + </xs:annotation> <xs:simpleType> <xs:annotation> - <xs:documentation>Maximum number of MSI-X tables per device.
-Leave blank if not sure.Integer from 1 to 2048.</xs:documentation> + <xs:documentation>Integer value from 1 to 2048.</xs:documentation> </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="1" /> @@ -273,9 +292,12 @@ Leave blank if not sure.Integer from 1 to 2048.</xs:documentation> </xs:simpleType> </xs:element> <xs:element name="MAX_EMULATED_MMIO" default="16"> + <xs:annotation> + <xs:documentation>Maximum number of emulated MMIO regions.</xs:documentation> + </xs:annotation> <xs:simpleType> <xs:annotation> - <xs:documentation>Maximum number of emulated MMIO regions.Integer from 1 to 128.</xs:documentation> + <xs:documentation>Integer value from 1 to 128.</xs:documentation> </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="1" /> @@ -319,11 +341,13 @@ Leave blank if not sure.Integer from 1 to 2048.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="name" minOccurs="0"> - <xs:simpleType> <xs:annotation> <xs:documentation>Specify the VM name shown in the - hypervisor console ``vm_lists`` command. String from 1 to 32 - characters long.</xs:documentation> +hypervisor console ``vm_list`` command.</xs:documentation> + </xs:annotation> + <xs:simpleType> + <xs:annotation> + <xs:documentation>A string from 1 to 32 characters long.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:minLength value="1" /> @@ -354,35 +378,16 @@ Refer SDM 17.19.2 for details, and use with caution.</xs:documentation> </xs:element> <xs:element name="memory" type="MemoryInfo" minOccurs="0"> <xs:annotation> - <xs:documentation>Specify memory information for hypervisor, Service OS and User OS: - -- ``STACK_SIZE``: Capacity of one stack, in bytes. -- ``HV_RAM_SIZE``: Size of the RAM region used by the hypervisor. -- ``HV_RAM_STAR``: 2M-aligned Start physical address of the RAM region used by the hypervisor. -- ``LOW_RAM_SIZE``: Size of the low RAM region. -- ``SOS_RAM_SIZE``: Size of the Service OS (SOS) RAM.
-- ``PLATFORM_RAM_SIZE``: Size of the physical platform RAM.</xs:documentation> + <xs:documentation>Specify memory information for Service and User VMs.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="os_config" type="OSConfigurations" minOccurs="0"> <xs:annotation> - <xs:documentation>General information for host kernel, boot argument and memory, -the following elements are configured in this section: - -- ``name``: Specify the OS name of VM; currently, it is not referenced by the hypervisor code. -- ``kern_type``: Specify the kernel image type so that the hypervisor can load it correctly. - Currently supports ``KERNEL_BZIMAGE`` and ``KERNEL_ZEPHYR``. -- ``kern_mod``: The tag for the kernel image that acts as a multiboot module; it must - exactly match the module tag in the GRUB multiboot cmdline. -- ``ramdisk_mod``: The tag for the ramdisk image, which acts as a multiboot module; it - must exactly match the module tag in the GRUB multiboot cmdline. -- ``bootargs``: For internal use only and is not configurable. Specify the kernel boot arguments - in ``bootargs`` under the parent of ``board_private``. -- ``kern_load_addr``: The loading address in host memory for the VM kernel. -- ``kern_entry_addr``: The entry address in host memory for the VM kernel.</xs:documentation> + <xs:documentation>General information for host kernel, boot +argument and memory.</xs:documentation> </xs:annotation> </xs:element> - <xs:element name="legacy_vuart" type="LegancyVuartConfiguration" minOccurs="2" maxOccurs="2"> + <xs:element name="legacy_vuart" type="LegacyVuartConfiguration" minOccurs="2" maxOccurs="2"> <xs:annotation> <xs:documentation>Specify the vUART (aka COM) with the vUART ID by its ``id`` attribute. 
Refer to :ref:`vuart_config` for detailed vUART settings.</xs:documentation> @@ -397,7 +402,7 @@ its ``id`` attribute.</xs:documentation> <xs:element name="communication_vuart" type="CommunicationVuartConfiguration" maxOccurs="unbounded"> <xs:annotation> <xs:documentation>Specify the communication vUART (aka PCI based vUART) with the vUART ID by -its ``id`` attribute. When it is enabled, specify which target VM's vuart the current VM connects to.</xs:documentation> +its ``id`` attribute. When it is enabled, specify which target VM's vUART the current VM connects to.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="mmio_resources" type="MMIOResourcesConfiguration" minOccurs="0"> @@ -417,7 +422,7 @@ its ``id`` attribute. When it is enabled, specify which target VM's vuart the cu </xs:element> <xs:element name="pci_devs" type="PCIDevsConfiguration" minOccurs="0"> <xs:annotation> - <xs:documentation>pci devices list.</xs:documentation> + <xs:documentation>PCI devices list.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="board_private" type="BoardPrivateConfiguration" minOccurs="0" /> diff --git a/misc/config_tools/schema/types.xsd b/misc/config_tools/schema/types.xsd index 9bb2e0aec..96885fd68 100644 --- a/misc/config_tools/schema/types.xsd +++ b/misc/config_tools/schema/types.xsd @@ -3,7 +3,7 @@ <xs:simpleType name="Boolean"> <xs:annotation> - <xs:documentation>A boolean value, written as ``y`` or ``n``.</xs:documentation> + <xs:documentation>A Boolean value, written as ``y`` or ``n``.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:enumeration value="y" /> @@ -13,7 +13,7 @@ <xs:simpleType name="HexFormat"> <xs:annotation> - <xs:documentation>An integer value in hexadecimal format.</xs:documentation> + <xs:documentation>An Integer value in hexadecimal format.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:pattern value="0[Xx][0-9A-Fa-f]+|0" /> @@ -28,14 +28,14 @@ <xs:simpleType 
name="HVRamSizeType"> <xs:annotation> - <xs:documentation>Either empty, or a hexadecimal value.</xs:documentation> + <xs:documentation>Either empty, or an Integer value in hexadecimal format.</xs:documentation> </xs:annotation> <xs:union memberTypes="None HexFormat" /> </xs:simpleType> <xs:simpleType name="HVRamStartType"> <xs:annotation> - <xs:documentation>Either empty, or a hexadecimal value.</xs:documentation> + <xs:documentation>Either empty, or an Integer value in hexadecimal format.</xs:documentation> </xs:annotation> <xs:union memberTypes="None HexFormat" /> </xs:simpleType> @@ -52,7 +52,7 @@ <xs:simpleType name="MaxMsixTableSizeType"> <xs:annotation> - <xs:documentation>Either empty, or an integer value between 1 and 2048.</xs:documentation> + <xs:documentation>Either empty, or an Integer value between 1 and 2048.</xs:documentation> </xs:annotation> <xs:union memberTypes="None MaxMsixTableNumType" /> </xs:simpleType> @@ -65,27 +65,25 @@ <xs:simpleType name="MemorySizeType"> <xs:annotation> - <xs:documentation>Either a hexadecimal value or the string -``CONFIG_SOS_RAM_SIZE``.</xs:documentation> + <xs:documentation>An Integer value in hexadecimal format.</xs:documentation> </xs:annotation> <xs:union memberTypes="SOSRamSize HexFormat" /> </xs:simpleType> <xs:simpleType name="LogLevelType"> <xs:annotation> - <xs:documentation>An integer from 0 to 7 representing log message + <xs:documentation>An Integer from 0 to 7 representing log message severity and intent: - 1 (LOG_FATAL) system is unusable -- 2 (LOG_ACRN) +- 2 (LOG_ACRN) hypervisor failure - 3 (LOG_ERROR) error conditions - 4 (LOG_WARNING) warning conditions - 5 (LOG_INFO) informational - 6 (LOG_DEBUG) debug-level messages -Note that lower values have a higher severity. Only log messages with a -severity level higher (lower value) than a specified value will be -recorded.</xs:documentation> +A lower value has a higher severity. 
Log messages with a +higher value (lower severity) are discarded.</xs:documentation> </xs:annotation> <xs:restriction base="xs:integer"> <xs:minInclusive value="0" /> @@ -125,14 +123,16 @@ Read more about the available scheduling options in :ref:`cpu_sharing`.</xs:docu <xs:simpleType name="SerialConsoleOptions"> <xs:annotation> - <xs:documentation>Either empty or a string.</xs:documentation> + <xs:documentation>Either empty or a string, such as ``/dev/ttyS0``.</xs:documentation> </xs:annotation> <xs:union memberTypes="None SerialConsoleType" /> </xs:simpleType> <xs:simpleType name="IVSHMEMRegionType"> <xs:annotation> - <xs:documentation>Either empty or a string.</xs:documentation> + <xs:documentation>Either empty or a string naming the shared region, +its size, and the VM IDs that can access it, such as ``hv:/shm_region_0, 2, 0:2``. +See :ref:`ivshmem-hv` for more information.</xs:documentation> </xs:annotation> <xs:restriction base="xs:string"> <xs:pattern value="|hv:/\w+,\s?\d+\s?,\s?\d\s?(:\s?\d\s?)+" /> @@ -143,7 +143,8 @@ Read more about the available scheduling options in :ref:`cpu_sharing`.</xs:docu <xs:sequence> <xs:element name="IVSHMEM_ENABLED" type="Boolean" default="n"> <xs:annotation> - <xs:documentation>Enable inter-VM shared memory (IVSHMEM) feature.</xs:documentation> + <xs:documentation>Enable inter-VM shared memory (IVSHMEM) +feature.</xs:documentation> </xs:annotation> </xs:element> <xs:element name="IVSHMEM_REGION" type="IVSHMEMRegionType" maxOccurs="unbounded"> @@ -176,7 +177,8 @@ RDT, setting this option to ``y`` is ignored.</xs:documentation> <xs:annotation> <xs:documentation>Specify whether to enable Code and Data Prioritization (CDP). CDP is an extension of CAT. Set to 'y' to enable the feature or 'n' to disable it. -The 'y' will be ignored when hardware does not support CDP.</xs:documentation> +The 'y' will be ignored when hardware does not support CDP. 
Default
+value ``n``.</xs:documentation>
           </xs:annotation>
         </xs:element>
         <xs:element name="CLOS_MASK" type="xs:string" minOccurs="0" maxOccurs="unbounded">
@@ -197,7 +199,8 @@ are allowed. The value will be ignored when hardware does not support RDT.</xs:d
       <xs:sequence>
         <xs:element name="PSRAM_ENABLED" type="Boolean" default="n">
           <xs:annotation>
-            <xs:documentation>Enable PTCM (Platform Tuning Configuration Manager).</xs:documentation>
+            <xs:documentation>Enable PTCM (Platform Tuning Configuration
+Manager).</xs:documentation>
           </xs:annotation>
         </xs:element>
       </xs:sequence>
diff --git a/misc/debug_tools/acrn_crashlog/README.rst b/misc/debug_tools/acrn_crashlog/README.rst
index 998ba318a..50f618e8b 100644
--- a/misc/debug_tools/acrn_crashlog/README.rst
+++ b/misc/debug_tools/acrn_crashlog/README.rst
@@ -15,7 +15,7 @@ of interest, by using an XML configuration file.
 Building
 ********

-Build dependencies
+Build Dependencies
 ==================

 The ``ACRN-Crashlog`` tool depends on the following libraries
@@ -171,7 +171,7 @@ The source code structure:
 - ``usercrash``: to implement the tool which get the crash information for the
   crashing process in userspace.

-acrnprobe
+Acrnprobe
 =========

 The ``acrnprobe`` detects all critical events on the platform and collects
@@ -180,7 +180,7 @@ logs, and the log path would be delivered to telemetrics-client as a record if
 the telemetrics-client existed on the system. For more detail on acrnprobe,
 please refer :ref:`acrnprobe_doc`.

-usercrash
+Usercrash
 =========

 The ``usercrash`` is a tool to get the crash info of the crashing process in
diff --git a/misc/debug_tools/acrn_crashlog/acrnprobe/README.rst b/misc/debug_tools/acrn_crashlog/acrnprobe/README.rst
index f199c8bb3..767e182e9 100644
--- a/misc/debug_tools/acrn_crashlog/acrnprobe/README.rst
+++ b/misc/debug_tools/acrn_crashlog/acrnprobe/README.rst
@@ -1,6 +1,6 @@
 .. _acrnprobe_doc:

-acrnprobe
+Acrnprobe
 #########

 Description
@@ -143,7 +143,7 @@ Diagram

 +-----------------------------------------------------------------------------+

-Source files
+Source Files
 ************

 - main.c
@@ -176,7 +176,7 @@ Source files
 - loop.c
   This file provides interfaces to read from image.

-Configuration files
+Configuration Files
 *******************

 * ``/usr/share/defaults/telemetrics/acrnprobe.xml``
diff --git a/misc/debug_tools/acrn_crashlog/acrnprobe/conf.rst b/misc/debug_tools/acrn_crashlog/acrnprobe/conf.rst
index cc89a4110..393797618 100644
--- a/misc/debug_tools/acrn_crashlog/acrnprobe/conf.rst
+++ b/misc/debug_tools/acrn_crashlog/acrnprobe/conf.rst
@@ -1,6 +1,6 @@
 .. _acrnprobe-conf:

-acrnprobe Configuration
+Acrnprobe Configuration
 #######################

 Description
@@ -62,13 +62,13 @@ Layout
 As for the definition of ``sender``, ``trigger``, ``crash`` and ``info``
 please refer to :ref:`acrnprobe_doc`.

-Properties of group members
+Properties of Group Members
 ***************************

 ``acrnprobe`` defined different groups in configuration file, which are
 ``senders``, ``triggers``, ``crashes`` and ``infos``.

-Common properties
+Common Properties
 =================

 - ``id``:
@@ -76,7 +76,7 @@ Common properties
 - ``enable``:
   This group member will be ignored if the value is NOT ``true``.

-Other properties
+Other Properties
 ================

 - ``inherit``:
@@ -87,14 +87,14 @@ Other properties
 - ``expression``:
   See `Crash`_.

-Crash tree in acrnprobe
+Crash Tree in Acrnprobe
 ***********************

 There could be a parent/child relationship between crashes. Refer to the
 diagrams below, crash B and D are the children of crash A, because crash B
 and D inherit from crash A, and crash C is the child of crash B.

-Build crash tree in configuration
+Build Crash Tree in Configuration
 =================================

 .. graphviz:: images/crash-config.dot
@@ -102,7 +102,7 @@ Build crash tree in configuration
    :align: center
    :caption: Build crash tree in configuration

-Match crash at runtime
+Match Crash at Runtime
 ======================

 In order to find a more specific type, if one crash type matches
diff --git a/misc/debug_tools/acrn_crashlog/usercrash/README.rst b/misc/debug_tools/acrn_crashlog/usercrash/README.rst
index d1a5d09df..b42fdbc92 100644
--- a/misc/debug_tools/acrn_crashlog/usercrash/README.rst
+++ b/misc/debug_tools/acrn_crashlog/usercrash/README.rst
@@ -1,6 +1,6 @@
 .. _usercrash_doc:

-usercrash
+Usercrash
 #########

 Description
diff --git a/misc/debug_tools/acrn_log/README.rst b/misc/debug_tools/acrn_log/README.rst
index e19e281a3..75e090771 100644
--- a/misc/debug_tools/acrn_log/README.rst
+++ b/misc/debug_tools/acrn_log/README.rst
@@ -1,6 +1,6 @@
 .. _acrnlog:

-acrnlog
+Acrnlog
 #######

 Description
@@ -32,7 +32,7 @@ Options:
 -s limit the size of each log file, in KB. 0 means no limitation.
 -n specify the number of log files to keep, old files would be deleted.

-Temporary log file changes
+Temporary Log File Changes
 ==========================

 You can temporarily change the log file setting by following these
@@ -68,7 +68,7 @@ can use these commands:

   console_loglevel: 2, mem_loglevel: 5, npk_loglevel: 5

-Permanent log file changes
+Permanent Log File Changes
 ==========================

 You can also permanently change the log file settings by
diff --git a/misc/debug_tools/acrn_trace/README.rst b/misc/debug_tools/acrn_trace/README.rst
index a278f07dc..a82223639 100644
--- a/misc/debug_tools/acrn_trace/README.rst
+++ b/misc/debug_tools/acrn_trace/README.rst
@@ -1,6 +1,6 @@
 .. _acrntrace:

-acrntrace
+Acrntrace
 #########

 Description
@@ -12,7 +12,7 @@ A ``scripts`` directory includes scripts to analyze the trace data.
 Usage
 *****

-acrntrace
+Acrntrace
 =========

 The ``acrntrace`` tool runs on the Service OS (SOS) to capture trace data and
@@ -98,7 +98,7 @@ Options:
   doesn't support for an invariant TSC. The results may therefore not
   be completely accurate in that regard.

-Typical use example
+Typical Use Example
 ===================

 Here's a typical use of ``acrntrace`` to capture trace data from the SOS,
diff --git a/misc/packaging/README.rst b/misc/packaging/README.rst
index e40edc775..4b1e1df9b 100644
--- a/misc/packaging/README.rst
+++ b/misc/packaging/README.rst
@@ -14,7 +14,7 @@ acrn-kernel, install them on your target system, and boot running ACRN.

 .. rst-class:: numbered-step

-Set up prerequisites
+Set Up Prerequisites
 ********************

 Your development system should be running Ubuntu
@@ -41,7 +41,7 @@ in the ``misc/packaging`` folder, so let's go there::

 .. rst-class:: numbered-step

-Configure Debian packaging details
+Configure Debian Packaging Details
 **********************************

 The build and packaging script ``install_uSoS.py`` does all the work to
@@ -63,7 +63,7 @@ Here's the default ``release.json`` configuration:

 .. rst-class:: numbered-step

-Run the package-building script
+Run the Package-Building Script
 *******************************

 The ``install_uSoS.py`` Python script does all the work to install
@@ -94,7 +94,7 @@ the network or simply by using a USB drive.

 .. rst-class:: numbered-step

-Prepare your target system with Ubuntu 18.04
+Prepare Your Target System With Ubuntu 18.04
 ********************************************

 Your target system must be one of the choices listed in the ``release.json``
@@ -108,7 +108,7 @@ Reboot your system to complete the installation.

 .. rst-class:: numbered-step

-Install Debian packages on your target system
+Install Debian Packages on Your Target System
 *********************************************

 Copy the Debian packages you created on your development system, for
@@ -156,7 +156,7 @@ installation on the NVMe drive (input is highlighted):
    Added boot menu entry for EFI firmware configuration
    done

-
+
 Then install the ACRN-patched kernel package::

@@ -167,7 +167,7 @@ After that, you're ready to reboot.

 .. rst-class:: numbered-step

-Boot ACRN using the multiboot2 grub choice
+Boot ACRN Using the Multiboot2 Grub Choice
 ******************************************

 This time when you boot your target system you'll see some new options:
@@ -206,7 +206,7 @@ If your target system has a serial port active, you can simply hit

 .. rst-class:: numbered-step

-Verify ACRN is running
+Verify ACRN Is Running
 **********************

 After the system boots, you can verify ACRN was detected and is running
diff --git a/misc/services/acrn_manager/README.rst b/misc/services/acrn_manager/README.rst
index 41d303f89..9797a603b 100644
--- a/misc/services/acrn_manager/README.rst
+++ b/misc/services/acrn_manager/README.rst
@@ -1,6 +1,6 @@
 .. _acrnctl:

-acrnctl and acrnd
+Acrnctl and Acrnd
 #################


@@ -56,7 +56,7 @@ container::

 .. note:: You can download an :acrn_raw:`example launch_uos.sh script
    <devicemodel/samples/nuc/launch_uos.sh>`
-   that supports the ``-C`` (``run_container`` function) option. 
+   that supports the ``-C`` (``run_container`` function) option.

 Note that the launch script must only launch one User VM instance. The VM
 name is important. ``acrnctl`` searches VMs by their
@@ -136,7 +136,7 @@ update the backend file.

 .. _acrnd:

-acrnd
+Acrnd
 *****

 The ``acrnd`` daemon process provides a way for launching or resuming a User VM