diff --git a/doc/asa.rst b/doc/asa.rst
index 63c0bd65f..5cfc7ae4f 100644
--- a/doc/asa.rst
+++ b/doc/asa.rst
@@ -3,6 +3,21 @@
 Security Advisory
 #################
 
+Addressed in ACRN v2.6
+************************
+
+We recommend that all developers upgrade to this v2.6 release (or later), which
+addresses the following security issue discovered in previous releases:
+
+-----
+
+- Memory leakage vulnerability in ``devicemodel/hw/pci/xhci.c``
+  De-initializing emulated USB devices results in a memory leak because some
+  resources allocated for transfer are not properly released.
+
+  **Affected Release:** v2.5 and earlier.
+
+
 Addressed in ACRN v2.5
 ************************
 
diff --git a/doc/conf.py b/doc/conf.py
index c0963ea6f..a00b39df0 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -381,8 +381,10 @@ html_redirect_pages = [
     ('developer-guides/index', 'contribute'),
     ('getting-started/index', 'try'),
     ('user-guides/index', 'develop'),
+    ('tutorials/index', 'develop'),
     ('hardware', 'reference/hardware'),
     ('release_notes', 'release_notes/index'),
     ('getting-started/rt_industry', 'getting-started/getting-started'),
     ('getting-started/rt_industry_ubuntu', 'getting-started/getting-started'),
+    ('getting-started/building-from-source', 'getting-started/getting-started'),
 ]
diff --git a/doc/develop.rst b/doc/develop.rst
index 757390954..2c83fc04d 100644
--- a/doc/develop.rst
+++ b/doc/develop.rst
@@ -3,26 +3,16 @@
 Advanced Guides
 ###############
 
-
-Configuration and Tools
-***********************
+Advanced Scenario Tutorials
+*********************************
 
 .. rst-class:: rst-columns2
 
 .. toctree::
-   :glob:
    :maxdepth: 1
 
-   tutorials/acrn_configuration_tool
-   reference/config-options
-   user-guides/hv-parameters
-   user-guides/kernel-parameters
-   user-guides/acrn-shell
-   user-guides/acrn-dm-parameters
-   misc/debug_tools/acrn_crashlog/README
-   misc/packaging/README
-   misc/debug_tools/**
-   misc/services/acrn_manager/**
+   tutorials/using_hybrid_mode_on_nuc
+   tutorials/using_partition_mode_on_nuc
 
 Service VM Tutorials
 ********************
@@ -35,6 +25,8 @@ Service VM Tutorials
    tutorials/running_deb_as_serv_vm
    tutorials/using_yp
 
+.. _develop_acrn_user_vm:
+
 User VM Tutorials
 *****************
 
@@ -50,7 +42,27 @@ User VM Tutorials
    tutorials/using_vxworks_as_uos
    tutorials/using_zephyr_as_uos
 
-Enable ACRN Features
+Configuration Tutorials
+***********************
+
+.. rst-class:: rst-columns2
+
+.. toctree::
+   :glob:
+   :maxdepth: 1
+
+   tutorials/acrn_configuration_tool
+   tutorials/board_inspector_tool
+   tutorials/acrn_configurator_tool
+   reference/config-options
+   reference/config-options-launch
+   reference/hv-make-options
+   user-guides/hv-parameters
+   user-guides/kernel-parameters
+   user-guides/acrn-dm-parameters
+   misc/packaging/README
+
+Advanced Features
 ********************
 
 .. rst-class:: rst-columns2
@@ -77,7 +89,6 @@
    tutorials/acrn-secure-boot-with-efi-stub
   tutorials/pre-launched-rt
   tutorials/enable_ivshmem
-   tutorials/enable_ptm
 
 Debug
 *****
@@ -85,9 +96,14 @@
 .. rst-class:: rst-columns2
 
 .. 
toctree:: + :glob: :maxdepth: 1 tutorials/using_serial_port tutorials/debug tutorials/realtime_performance_tuning tutorials/rtvm_performance_tips + user-guides/acrn-shell + misc/debug_tools/acrn_crashlog/README + misc/debug_tools/** + misc/services/acrn_manager/** diff --git a/doc/developer-guides/VBSK-analysis.rst b/doc/developer-guides/VBSK-analysis.rst index 4caec3812..f01bc2d6e 100644 --- a/doc/developer-guides/VBSK-analysis.rst +++ b/doc/developer-guides/VBSK-analysis.rst @@ -80,14 +80,14 @@ two parts: kick overhead and notify overhead. - **Kick Overhead**: The User VM gets trapped when it executes sensitive instructions that notify the hypervisor first. The notification is assembled into an IOREQ, saved in a shared IO page, and then - forwarded to the VHM module by the hypervisor. The VHM notifies its + forwarded to the HSM module by the hypervisor. The HSM notifies its client for this IOREQ, in this case, the client is the vbs-echo backend driver. Kick overhead is defined as the interval from the beginning of User VM trap to a specific VBS-K driver, e.g. when virtio-echo gets notified. - **Notify Overhead**: After the data in virtqueue being processed by the - backend driver, vbs-echo calls the VHM module to inject an interrupt - into the frontend. The VHM then uses the hypercall provided by the + backend driver, vbs-echo calls the HSM module to inject an interrupt + into the frontend. The HSM then uses the hypercall provided by the hypervisor, which causes a User VM VMEXIT. The hypervisor finally injects an interrupt into the vLAPIC of the User VM and resumes it. The User VM therefore receives the interrupt notification. Notify overhead is diff --git a/doc/developer-guides/c_coding_guidelines.rst b/doc/developer-guides/c_coding_guidelines.rst index 0410887fe..46183e4ad 100644 --- a/doc/developer-guides/c_coding_guidelines.rst +++ b/doc/developer-guides/c_coding_guidelines.rst @@ -3283,10 +3283,10 @@ each function: 1) The comments block shall start with ``/**`` (slash-asterisk-asterisk) in a single line. -2) The comments block shall end with :literal:`\ */` (space-asterisk-slash) in +2) The comments block shall end with :literal:`\ */` (space-asterisk-slash) in a single line. 3) Other than the first line and the last line, every line inside the comments - block shall start with :literal:`\ *` (space-asterisk). It also applies to + block shall start with :literal:`\ *` (space-asterisk). It also applies to the line which is used to separate different paragraphs. We'll call it a blank line for simplicity. 4) For each function, following information shall be documented: diff --git a/doc/developer-guides/contribute_guidelines.rst b/doc/developer-guides/contribute_guidelines.rst index 7f2428bbb..b0d2ec077 100644 --- a/doc/developer-guides/contribute_guidelines.rst +++ b/doc/developer-guides/contribute_guidelines.rst @@ -135,6 +135,9 @@ Submitting Issues .. _ACRN-dev mailing list: https://lists.projectacrn.org/g/acrn-dev +.. _ACRN-users mailing list: + https://lists.projectacrn.org/g/acrn-users + .. _ACRN hypervisor issues: https://github.com/projectacrn/acrn-hypervisor/issues @@ -143,7 +146,8 @@ GitHub issues in the `ACRN hypervisor issues`_ list. Before submitting a bug or enhancement request, first check to see what's already been reported, and add to that discussion if you have additional information. (Be sure to check both the "open" and "closed" issues.) 
-You should also read through discussions in the `ACRN-dev mailing list`_ +You should also read through discussions in the `ACRN-users mailing list`_ +(and the `ACRN-dev mailing list`_) to see what's been reported on or discussed. You may find others that have encountered the issue you're finding, or that have similar ideas for changes or additions. @@ -463,13 +467,13 @@ All changes and topics sent to GitHub must be well-formed, as described above. Commit Message Body =================== -When editing the commit message, please briefly explain what your change +When editing the commit message, briefly explain what your change does and why it's needed. A change summary of ``"Fixes stuff"`` will be rejected. .. warning:: An empty change summary body is not permitted. Even for trivial changes, - please include a summary body in the commit message. + include a summary body in the commit message. The description body of the commit message must include: diff --git a/doc/developer-guides/doc_guidelines.rst b/doc/developer-guides/doc_guidelines.rst index 839e80318..4bd420d61 100644 --- a/doc/developer-guides/doc_guidelines.rst +++ b/doc/developer-guides/doc_guidelines.rst @@ -311,9 +311,9 @@ creates a hyperlink to that file in the current branch. For example, a GitHub link to the reST file used to create this document can be generated using ``:acrn_file:`doc/developer-guides/doc_guidelines.rst```, which will appear as :acrn_file:`doc/developer-guides/doc_guidelines.rst`, a link to -the “blob” file in the GitHub repo as displayed by GitHub. There’s also an +the "blob" file in the GitHub repo as displayed by GitHub. There's also an ``:acrn_raw:`doc/developer-guides/doc_guidelines.rst``` role that will link -to the “raw” uninterpreted file, +to the "raw" uninterpreted file, :acrn_raw:`doc/developer-guides/doc_guidelines.rst` file. (Click these links to see the difference.) diff --git a/doc/developer-guides/graphviz.rst b/doc/developer-guides/graphviz.rst index 74fc4d9dc..1fe90b59d 100644 --- a/doc/developer-guides/graphviz.rst +++ b/doc/developer-guides/graphviz.rst @@ -5,13 +5,13 @@ Drawings Using Graphviz We support using the Sphinx `graphviz extension`_ for creating simple graphs and line drawings using the dot language. The advantage of using -graphviz for drawings is that the source for a drawing is a text file that +Graphviz for drawings is that the source for a drawing is a text file that can be edited and maintained in the repo along with the documentation. .. _graphviz extension: http://graphviz.gitlab.io These source ``.dot`` files are generally kept separate from the document -itself, and included by using a graphviz directive: +itself, and included by using a Graphviz directive: .. code-block:: none @@ -38,7 +38,7 @@ the dot language and drawing options. Simple Directed Graph ********************* -For simple drawings with shapes and lines, you can put the graphviz commands +For simple drawings with shapes and lines, you can put the Graphviz commands in the content block for the directive. For example, for a simple directed graph (digraph) with two nodes connected by an arrow, you can write: @@ -108,7 +108,7 @@ are centered, left-justified, and right-justified, respectively. Finite-State Machine ******************** -Here's an example of using graphviz for defining a finite-state machine +Here's an example of using Graphviz for defining a finite-state machine for pumping gas: .. 
literalinclude:: images/gaspump.dot diff --git a/doc/developer-guides/hld/hld-APL_GVT-g.rst b/doc/developer-guides/hld/hld-APL_GVT-g.rst deleted file mode 100644 index 7218e3947..000000000 --- a/doc/developer-guides/hld/hld-APL_GVT-g.rst +++ /dev/null @@ -1,954 +0,0 @@ -.. _APL_GVT-g-hld: - -GVT-g High-Level Design -####################### - -Introduction -************ - -Purpose of This Document -======================== - -This high-level design (HLD) document describes the usage requirements -and high-level design for Intel |reg| Graphics Virtualization Technology for -shared virtual :term:`GPU` technology (:term:`GVT-g`) on Apollo Lake-I -SoCs. - -This document describes: - -- The different GPU virtualization techniques -- GVT-g mediated passthrough -- High-level design -- Key components -- GVT-g new architecture differentiation - -Audience -======== - -This document is for developers, validation teams, architects, and -maintainers of Intel |reg| GVT-g for the Apollo Lake SoCs. - -The reader should have some familiarity with the basic concepts of -system virtualization and Intel processor graphics. - -Reference Documents -=================== - -The following documents were used as references for this specification: - -- Paper in USENIX ATC '14 - *Full GPU Virtualization Solution with - Mediated Pass-Through* - https://www.usenix.org/node/183932 - -- Hardware Specification - PRMs - - https://01.org/linuxgraphics/documentation/hardware-specification-prms - -Background -********** - -Intel GVT-g is an enabling technology in emerging graphics -virtualization scenarios. It adopts a full GPU virtualization approach -based on mediated passthrough technology to achieve good performance, -scalability, and secure isolation among Virtual Machines (VMs). A virtual -GPU (vGPU), with full GPU features, is presented to each VM so that a -native graphics driver can run directly inside a VM. - -Intel GVT-g technology for Apollo Lake (APL) has been implemented in -open-source hypervisors or Virtual Machine Monitors (VMMs): - -- Intel GVT-g for ACRN, also known as, "AcrnGT" -- Intel GVT-g for KVM, also known as, "KVMGT" -- Intel GVT-g for Xen, also known as, "XenGT" - -The core vGPU device model is released under the BSD/MIT dual license, so it -can be reused in other proprietary hypervisors. - -Intel has a portfolio of graphics virtualization technologies -(:term:`GVT-g`, :term:`GVT-d`, and :term:`GVT-s`). GVT-d and GVT-s are -outside the scope of this document. - -This HLD applies to the Apollo Lake platform only. Support of other -hardware is outside the scope of this HLD. - -Targeted Usages -=============== - -The main targeted usage of GVT-g is in automotive applications, such as: - -- An Instrument cluster running in one domain -- An In Vehicle Infotainment (IVI) solution running in another domain -- Additional domains for specific purposes, such as Rear Seat - Entertainment or video camera capturing. - -.. figure:: images/APL_GVT-g-ive-use-case.png - :width: 900px - :align: center - :name: ive-use-case - - IVE Use Case - -Existing Techniques -=================== - -A graphics device is no different from any other I/O device with -respect to how the device I/O interface is virtualized. Therefore, -existing I/O virtualization techniques can be applied to graphics -virtualization. However, none of the existing techniques can meet the -general requirements of performance, scalability, and secure isolation -simultaneously. 
In this section, we review the pros and cons of each -technique in detail, enabling the audience to understand the rationale -behind the entire GVT-g effort. - -Emulation ---------- - -A device can be emulated fully in software, including its I/O registers -and internal functional blocks. Because there is no dependency on the -underlying hardware capability, compatibility can be achieved -across platforms. However, due to the CPU emulation cost, this technique -is usually used only for legacy devices such as a keyboard, mouse, and VGA -card. Fully emulating a modern accelerator such as a GPU would involve great -complexity and extremely low performance. It may be acceptable -for use in a simulation environment, but it is definitely not suitable -for production usage. - -API Forwarding --------------- - -API forwarding, or a split driver model, is another widely-used I/O -virtualization technology. It has been used in commercial virtualization -productions such as VMware*, PCoIP*, and Microsoft* RemoteFx*. -It is a natural path when researchers study a new type of -I/O virtualization usage—for example, when GPGPU computing in a VM was -initially proposed. Intel GVT-s is based on this approach. - -The architecture of API forwarding is shown in :numref:`api-forwarding`: - -.. figure:: images/APL_GVT-g-api-forwarding.png - :width: 400px - :align: center - :name: api-forwarding - - API Forwarding - -A frontend driver is employed to forward high-level API calls (OpenGL, -DirectX, and so on) inside a VM to a backend driver in the Hypervisor -for acceleration. The backend may be using a different graphics stack, -so API translation between different graphics protocols may be required. -The backend driver allocates a physical GPU resource for each VM, -behaving like a normal graphics application in a Hypervisor. Shared -memory may be used to reduce memory copying between the host and guest -graphic stacks. - -API forwarding can bring hardware acceleration capability into a VM, -with other merits such as vendor independence and high density. However, it -also suffers from the following intrinsic limitations: - -- Lagging features - Every new API version must be specifically - handled, which means slow time-to-market (TTM) to support new standards. - For example, - only DirectX9 is supported while DirectX11 is already in the market. - Also, there is a big gap in supporting media and compute usages. - -- Compatibility issues - A GPU is very complex, and consequently so are - high-level graphics APIs. Different protocols are not 100% compatible - on every subtly different API, so the customer can observe feature/quality - loss for specific applications. - -- Maintenance burden - Occurs when supported protocols and specific - versions are incremented. - -- Performance overhead - Different API forwarding implementations - exhibit quite different performance, which gives rise to a need for a - fine-grained graphics tuning effort. - -Direct Passthrough -------------------- - -"Direct passthrough" dedicates the GPU to a single VM, providing full -features and good performance at the cost of device sharing -capability among VMs. Only one VM at a time can use the hardware -acceleration capability of the GPU, which is a major limitation of this -technique. However, it is still a good approach for enabling graphics -virtualization usages on Intel server platforms, as an intermediate -solution. Intel GVT-d uses this mechanism. - -.. 
figure:: images/APL_GVT-g-pass-through.png - :width: 400px - :align: center - :name: gvt-pass-through - - Passthrough - -SR-IOV ------- - -Single Root IO Virtualization (SR-IOV) implements I/O virtualization -directly on a device. Multiple Virtual Functions (VFs) are implemented, -with each VF directly assignable to a VM. - -.. _Graphic_mediation: - -Mediated Passthrough -********************* - -Intel GVT-g achieves full GPU virtualization using a "mediated -passthrough" technique. - -Concept -======= - -Mediated passthrough enables a VM to access performance-critical I/O -resources (usually partitioned) directly, without intervention from the -hypervisor in most cases. Privileged operations from this VM are -trapped-and-emulated to provide secure isolation among VMs. - -.. figure:: images/APL_GVT-g-mediated-pass-through.png - :width: 400px - :align: center - :name: mediated-pass-through - - Mediated Passthrough - -The Hypervisor must ensure that no vulnerability is exposed when -assigning performance-critical resource to each VM. When a -performance-critical resource cannot be partitioned, a scheduler must be -implemented (either in software or hardware) to enable time-based sharing -among multiple VMs. In this case, the device must allow the hypervisor -to save and restore the hardware state associated with the shared resource, -either through direct I/O register reads and writes (when there is no software -invisible state) or through a device-specific context save and restore -mechanism (where there is a software invisible state). - -Examples of performance-critical I/O resources include the following: - -.. figure:: images/APL_GVT-g-perf-critical.png - :width: 800px - :align: center - :name: perf-critical - - Performance-Critical I/O Resources - - -The key to implementing mediated passthrough for a specific device is -to define the right policy for various I/O resources. - -Virtualization Policies for GPU Resources -========================================= - -:numref:`graphics-arch` shows how Intel Processor Graphics works at a high level. -Software drivers write commands into a command buffer through the CPU. -The Render Engine in the GPU fetches these commands and executes them. -The Display Engine fetches pixel data from the Frame Buffer and sends -them to the external monitors for display. - -.. figure:: images/APL_GVT-g-graphics-arch.png - :width: 400px - :align: center - :name: graphics-arch - - Architecture of Intel Processor Graphics - -This architecture abstraction applies to most modern GPUs, but may -differ in how graphics memory is implemented. Intel Processor Graphics -uses system memory as graphics memory. System memory can be mapped into -multiple virtual address spaces by GPU page tables. A 4 GB global -virtual address space called "global graphics memory", accessible from -both the GPU and CPU, is mapped through a global page table. Local -graphics memory spaces are supported in the form of multiple 4 GB local -virtual address spaces but are limited to access by the Render -Engine through local page tables. Global graphics memory is mostly used -for the Frame Buffer and also serves as the Command Buffer. Massive data -accesses are made to local graphics memory when hardware acceleration is -in progress. Other GPUs have similar page table mechanism accompanying -the on-die memory. - -The CPU programs the GPU through GPU-specific commands, shown in -:numref:`graphics-arch`, using a producer-consumer model. 
The graphics -driver programs GPU commands into the Command Buffer, including primary -buffer and batch buffer, according to the high-level programming APIs -such as OpenGL* and DirectX*. Then, the GPU fetches and executes the -commands. The primary buffer (called a ring buffer) may chain other -batch buffers together. The primary buffer and ring buffer are used -interchangeably thereafter. The batch buffer is used to convey the -majority of the commands (up to ~98% of them) per programming model. A -register tuple (head, tail) is used to control the ring buffer. The CPU -submits the commands to the GPU by updating the tail, while the GPU -fetches commands from the head and then notifies the CPU by updating -the head after the commands have finished execution. Therefore, when -the GPU has executed all commands from the ring buffer, the head and -tail pointers are the same. - -Having introduced the GPU architecture abstraction, it is important for -us to understand how real-world graphics applications use the GPU -hardware so that we can virtualize it in VMs efficiently. To do so, we -characterized the usages of the four critical interfaces for some -representative GPU-intensive 3D workloads (the Phoronix Test Suite): - -1) the Frame Buffer, -2) the Command Buffer, -3) the GPU Page Table Entries (PTEs), which carry the GPU page tables, and -4) the I/O registers, including Memory-Mapped I/O (MMIO) registers, - Port I/O (PIO) registers, and PCI configuration space registers - for internal state. - -:numref:`access-patterns` shows the average access frequency of running -Phoronix 3D workloads on the four interfaces. - -The Frame Buffer and Command Buffer exhibit the most -performance-critical resources, as shown in :numref:`access-patterns`. -When the applications are being loaded, lots of source vertices and -pixels are written by the CPU, so the Frame Buffer accesses occur in the -range of hundreds of thousands per second. Then at run-time, the CPU -programs the GPU through the commands to render the Frame Buffer, so -the Command Buffer accesses become the largest group (also in the -hundreds of thousands per second). PTE and I/O accesses are minor in both -load and run-time phases ranging in tens of thousands per second. - -.. figure:: images/APL_GVT-g-access-patterns.png - :width: 400px - :align: center - :name: access-patterns - - Access Patterns of Running 3D Workloads - -High-Level Architecture -*********************** - -:numref:`gvt-arch` shows the overall architecture of GVT-g, based on the -ACRN hypervisor, with Service VM as the privileged VM, and multiple user -guests. A GVT-g device model working with the ACRN hypervisor -implements the policies of trap and passthrough. Each guest runs the -native graphics driver and can directly access performance-critical -resources: the Frame Buffer and Command Buffer, with resource -partitioning (as presented later). To protect privileged resources—that -is, the I/O registers and PTEs—corresponding accesses from the graphics -driver in user VMs are trapped and forwarded to the GVT device model in the -Service VM for emulation. The device model leverages i915 interfaces to access -the physical GPU. - -In addition, the device model implements a GPU scheduler that runs -concurrently with the CPU scheduler in ACRN to share the physical GPU -timeslot among the VMs. 
GVT-g uses the physical GPU to directly execute -all the commands submitted from a VM, so it avoids the complexity of -emulating the Render Engine, which is the most complex part of the GPU. -In the meantime, the resource passthrough of both the Frame Buffer and -Command Buffer minimizes the hypervisor's intervention of CPU accesses, -while the GPU scheduler guarantees every VM a quantum time-slice for -direct GPU execution. With that, GVT-g can achieve near-native -performance for a VM workload. - -In :numref:`gvt-arch`, the yellow GVT device model works as a client on -top of an i915 driver in the Service VM. It has a generic Mediated Passthrough -(MPT) interface, compatible with all types of hypervisors. For ACRN, -some extra development work is needed for such MPT interfaces. For -example, we need some changes in ACRN-DM to make ACRN compatible with -the MPT framework. The vGPU lifecycle is the same as the lifecycle of -the guest VM creation through ACRN-DM. They interact through sysfs, -exposed by the GVT device model. - -.. figure:: images/APL_GVT-g-arch.png - :width: 600px - :align: center - :name: gvt-arch - - AcrnGT High-level Architecture - -Key Techniques -************** - -vGPU Device Model -================= - -The vGPU Device model is the main component because it constructs the -vGPU instance for each guest to satisfy every GPU request from the guest -and gives the corresponding result back to the guest. - -The vGPU Device Model provides the basic framework to do -trap-and-emulation, including MMIO virtualization, interrupt -virtualization, and display virtualization. It also handles and -processes all the requests internally (such as command scan and shadow), -schedules them in the proper manner, and finally submits to -the Service VM i915 driver. - -.. figure:: images/APL_GVT-g-DM.png - :width: 800px - :align: center - :name: GVT-DM - - GVT-g Device Model - -MMIO Virtualization -------------------- - -Intel Processor Graphics implements two PCI MMIO BARs: - -- **GTTMMADR BAR**: Combines both :term:`GGTT` modification range and Memory - Mapped IO range. It is 16 MB on :term:`BDW`, with 2 MB used by MMIO, 6 MB - reserved, and 8 MB allocated to GGTT. GGTT starts from - :term:`GTTMMADR` + 8 MB. In this section, we focus on virtualization of - the MMIO range, leaving discussion of GGTT virtualization for later. - -- **GMADR BAR**: As the PCI aperture is used by the CPU to access tiled - graphics memory, GVT-g partitions this aperture range among VMs for - performance reasons. - -A 2 MB virtual MMIO structure is allocated per vGPU instance. - -All the virtual MMIO registers are emulated as simple in-memory -read-write; that is, the guest driver will read back the same value that was -programmed earlier. A common emulation handler (for example, -intel_gvt_emulate_read/write) is enough to handle such general -emulation requirements. However, some registers must be emulated with -specific logic—for example, affected by change of other states or -additional audit or translation when updating the virtual register. -Therefore, a specific emulation handler must be installed for those -special registers. - -The graphics driver may have assumptions about the initial device state, -which stays with the point when the BIOS transitions to the OS. To meet -the driver expectation, we need to provide an initial state of vGPU that -a driver may observe on a pGPU. 
So the host graphics driver is expected -to generate a snapshot of physical GPU state, which it does before the guest -driver's initialization. This snapshot is used as the initial vGPU state -by the device model. - -PCI Configuration Space Virtualization --------------------------------------- - -The PCI configuration space also must be virtualized in the device -model. Different implementations may choose to implement the logic -within the vGPU device model or in the default system device model (for -example, ACRN-DM). GVT-g emulates the logic in the device model. - -Some information is vital for the vGPU device model, including -Guest PCI BAR, Guest PCI MSI, and Base of ACPI OpRegion. - -Legacy VGA Port I/O Virtualization ----------------------------------- - -Legacy VGA is not supported in the vGPU device model. We rely on the -default device model (for example, :term:`QEMU`) to provide legacy VGA -emulation, which means either ISA VGA emulation or -PCI VGA emulation. - -Interrupt Virtualization ------------------------- - -The GVT device model does not touch the hardware interrupt in the new -architecture, since it is hard to combine the interrupt controlling -logic between the virtual device model and the host driver. To prevent -architectural changes in the host driver, the host GPU interrupt does -not go to the virtual device model and the virtual device model has to -handle the GPU interrupt virtualization by itself. Virtual GPU -interrupts are categorized into three types: - -- Periodic GPU interrupts are emulated by timers. However, a notable - exception to this is the VBlank interrupt. Due to the demands of user space - compositors such as Wayland, which requires a flip done event to be - synchronized with a VBlank, this interrupt is forwarded from the Service VM - to the User VM when the Service VM receives it from the hardware. - -- Event-based GPU interrupts are emulated by the emulation logic (for - example, AUX Channel Interrupt). - -- GPU command interrupts are emulated by a command parser and workload - dispatcher. The command parser marks out which GPU command interrupts - are generated during the command execution, and the workload - dispatcher injects those interrupts into the VM after the workload is - finished. - -.. figure:: images/APL_GVT-g-interrupt-virt.png - :width: 400px - :align: center - :name: interrupt-virt - - Interrupt Virtualization - -Workload Scheduler ------------------- - -The scheduling policy and workload scheduler are decoupled for -scalability reasons. For example, a future QoS enhancement will impact -only the scheduling policy, and any i915 interface change or hardware submission -interface change (from execlist to :term:`GuC`) will need only workload -scheduler updates. - -The scheduling policy framework is the core of the vGPU workload -scheduling system. It controls all of the scheduling actions and -provides the developer with a generic framework for easy development of -scheduling policies. The scheduling policy framework controls the work -scheduling process without regard for how the workload is dispatched -or completed. All the detailed workload dispatching is hidden in the -workload scheduler, which is the actual executer of a vGPU workload. - -The workload scheduler handles everything about one vGPU workload. Each -hardware ring is backed by one workload scheduler kernel thread. 
The -workload scheduler picks the workload from current vGPU workload queue -and communicates with the virtual hardware submission interface to emulate the -"schedule-in" status for the vGPU. It performs context shadow, Command -Buffer scan and shadow, and PPGTT page table pin/unpin/out-of-sync before -submitting this workload to the host i915 driver. When the vGPU workload -is completed, the workload scheduler asks the virtual hardware submission -interface to emulate the "schedule-out" status for the vGPU. The VM -graphics driver then knows that a GPU workload is finished. - -.. figure:: images/APL_GVT-g-scheduling.png - :width: 500px - :align: center - :name: scheduling - - GVT-g Scheduling Framework - -Workload Submission Path ------------------------- - -Software submits the workload using the legacy ring buffer mode on Intel -Processor Graphics before Broadwell, which is no longer supported by the -GVT-g virtual device model. A new hardware submission interface named -"Execlist" is introduced since Broadwell. With the new hardware submission -interface, software can achieve better programmability and easier -context management. In Intel GVT-g, the vGPU submits the workload -through the virtual hardware submission interface. Each workload in submission -will be represented as an ``intel_vgpu_workload`` data structure, a vGPU -workload, which will be put on a per-vGPU and per-engine workload queue -later after performing a few basic checks and verifications. - -.. figure:: images/APL_GVT-g-workload.png - :width: 800px - :align: center - :name: workload - - GVT-g Workload Submission - - -Display Virtualization ----------------------- - -GVT-g reuses the i915 graphics driver in the Service VM to initialize the Display -Engine, and then manages the Display Engine to show different VM frame -buffers. When two vGPUs have the same resolution, only the frame buffer -locations are switched. - -.. figure:: images/APL_GVT-g-display-virt.png - :width: 800px - :align: center - :name: display-virt - - Display Virtualization - -Direct Display Model --------------------- - -.. figure:: images/APL_GVT-g-direct-display.png - :width: 600px - :align: center - :name: direct-display - - Direct Display Model - -In a typical automotive use case, there are two displays in the car -and each one must show one domain's content, with the two domains -being the Instrument cluster and the In Vehicle Infotainment (IVI). As -shown in :numref:`direct-display`, this can be accomplished through the direct -display model of GVT-g, where the Service VM and User VM are each assigned all hardware -planes of two different pipes. GVT-g has a concept of display owner on a -per hardware plane basis. If it determines that a particular domain is the -owner of a hardware plane, then it allows the domain's MMIO register write to -flip a frame buffer to that plane to go through to the hardware. Otherwise, -such writes are blocked by the GVT-g. - -Indirect Display Model ----------------------- - -.. figure:: images/APL_GVT-g-indirect-display.png - :width: 600px - :align: center - :name: indirect-display - - Indirect Display Model - -For security or fastboot reasons, it may be determined that the User VM is -either not allowed to display its content directly on the hardware or it may -be too late before it boots up and displays its content. In such a -scenario, the responsibility of displaying content on all displays lies -with the Service VM. 
One of the use cases that can be realized is to display the -entire frame buffer of the User VM on a secondary display. GVT-g allows for this -model by first trapping all MMIO writes by the User VM to the hardware. A proxy -application can then capture the address in GGTT where the User VM has written -its frame buffer and using the help of the Hypervisor and the Service VM's i915 -driver, can convert the Guest Physical Addresses (GPAs) into Host -Physical Addresses (HPAs) before making a texture source or EGL image -out of the frame buffer and then either post processing it further or -simply displaying it on a hardware plane of the secondary display. - -GGTT-Based Surface Sharing --------------------------- - -One of the major automotive use cases is called "surface sharing". This -use case requires that the Service VM accesses an individual surface or a set of -surfaces from the User VM without having to access the entire frame buffer of -the User VM. Unlike the previous two models, where the User VM did not have to do -anything to show its content and therefore a completely unmodified User VM -could continue to run, this model requires changes to the User VM. - -This model can be considered an extension of the indirect display model. -Under the indirect display model, the User VM's frame buffer was temporarily -pinned by it in the video memory access through the Global graphics -translation table. This GGTT-based surface sharing model takes this a -step further by having a compositor of the User VM to temporarily pin all -application buffers into GGTT. It then also requires the compositor to -create a metadata table with relevant surface information such as width, -height, and GGTT offset, and flip that in lieu of the frame buffer. -In the Service VM, the proxy application knows that the GGTT offset has been -flipped, maps it, and through it can access the GGTT offset of an -application that it wants to access. It is worth mentioning that in this -model, User VM applications did not require any changes, and only the -compositor, Mesa, and i915 driver had to be modified. - -This model has a major benefit and a major limitation. The -benefit is that since it builds on top of the indirect display model, -there are no special drivers necessary for it on either Service VM or User VM. -Therefore, any Real Time Operating System (RTOS) that uses -this model can simply do so without having to implement a driver, the -infrastructure for which may not be present in their operating system. -The limitation of this model is that video memory dedicated for a User VM is -generally limited to a couple of hundred MBs. This can easily be -exhausted by a few application buffers so the number and size of buffers -is limited. Since it is not a highly-scalable model in general, Intel -recommends the Hyper DMA buffer sharing model, described next. - -Hyper DMA Buffer Sharing ------------------------- - -.. figure:: images/APL_GVT-g-hyper-dma.png - :width: 800px - :align: center - :name: hyper-dma - - Hyper DMA Buffer Design - -Another approach to surface sharing is Hyper DMA Buffer sharing. This -model extends the Linux DMA buffer sharing mechanism in which one driver is -able to share its pages with another driver within one domain. - -Applications buffers are backed by i915 Graphics Execution Manager -Buffer Objects (GEM BOs). As in GGTT surface -sharing, this model also requires compositor changes. 
The compositor of the -User VM requests i915 to export these application GEM BOs and then passes -them on to a special driver called the Hyper DMA Buf exporter whose job -is to create a scatter gather list of pages mapped by PDEs and PTEs and -export a Hyper DMA Buf ID back to the compositor. - -The compositor then shares this Hyper DMA Buf ID with the Service VM's Hyper DMA -Buf importer driver which then maps the memory represented by this ID in -the Service VM. A proxy application in the Service VM can then provide the ID of this driver -to the Service VM i915, which can create its own GEM BO. Finally, the application -can use it as an EGL image and do any post-processing required before -either providing it to the Service VM compositor or directly flipping it on a -hardware plane in the compositor's absence. - -This model is highly scalable and can be used to share up to 4 GB worth -of pages. It is also not limited to sharing graphics buffers. Other -buffers for the IPU and others can also be shared with it. However, it -does require that the Service VM port the Hyper DMA Buffer importer driver. Also, -the Service VM must comprehend and implement the DMA buffer sharing model. - -For detailed information about this model, please refer to the `Linux -HYPER_DMABUF Driver High Level Design -`_. - -.. _plane_restriction: - -Plane-Based Domain Ownership ----------------------------- - -.. figure:: images/APL_GVT-g-plane-based.png - :width: 600px - :align: center - :name: plane-based - - Plane-Based Domain Ownership - -Yet another mechanism for showing content of both the Service VM and User VM on the -same physical display is called plane-based domain ownership. Under this -model, both the Service VM and User VM are provided a set of hardware planes that they can -flip their contents onto. Since each domain provides its content, there -is no need for any extra composition to be done through the Service VM. The display -controller handles alpha blending contents of different domains on a -single pipe. This saves on any complexity on either the Service VM or the User VM -SW stack. - -It is important to provide only specific planes and have them statically -assigned to different Domains. To achieve this, the i915 driver of both -domains is provided a command-line parameter that specifies the exact -planes that this domain has access to. The i915 driver then enumerates -only those hardware planes and exposes them to its compositor. It is then left -to the compositor configuration to use these planes appropriately and -show the correct content on them. No other changes are necessary. - -While the biggest benefit of this model is that is extremely simple and -quick to implement, it also has some drawbacks. First, since each domain -is responsible for showing the content on the screen, there is no -control of the User VM by the Service VM. If the User VM is untrusted, this could -potentially cause some unwanted content to be displayed. Also, there is -no post-processing capability, except that provided by the display -controller (for example, scaling, rotation, and so on). So each domain -must provide finished buffers with the expectation that alpha blending -with another domain will not cause any corruption or unwanted artifacts. - -Graphics Memory Virtualization -============================== - -To achieve near-to-native graphics performance, GVT-g passes through the -performance-critical operations, such as Frame Buffer and Command Buffer -from the VM. 
For the global graphics memory space, GVT-g uses graphics -memory resource partitioning and an address space ballooning mechanism. -For local graphics memory spaces, GVT-g implements per-VM local graphics -memory through a render context switch because local graphics memory is -accessible only by the GPU. - -Global Graphics Memory ----------------------- - -Graphics Memory Resource Partitioning -%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% - -GVT-g partitions the global graphics memory among VMs. Splitting the -CPU/GPU scheduling mechanism requires that the global graphics memory of -different VMs can be accessed by the CPU and the GPU simultaneously. -Consequently, GVT-g must, at any time, present each VM with its own -resource, leading to the resource partitioning approach, for global -graphics memory, as shown in :numref:`mem-part`. - -.. figure:: images/APL_GVT-g-mem-part.png - :width: 800px - :align: center - :name: mem-part - - Memory Partition and Ballooning - -The performance impact of reduced global graphics memory resources -due to memory partitioning is very limited according to various test -results. - -Address Space Ballooning -%%%%%%%%%%%%%%%%%%%%%%%% - -The address space ballooning technique is introduced to eliminate the -address translation overhead, shown in :numref:`mem-part`. GVT-g exposes the -partitioning information to the VM graphics driver through the PVINFO -MMIO window. The graphics driver marks the other VMs' regions as -'ballooned', and reserves them as not being used from its graphics -memory allocator. Under this design, the guest view of global graphics -memory space is exactly the same as the host view, and the driver -programmed addresses, using guest physical address, can be directly used -by the hardware. Address space ballooning is different from traditional -memory ballooning techniques. Memory ballooning is for memory usage -control concerning the number of ballooned pages, while address space -ballooning is to balloon special memory address ranges. - -Another benefit of address space ballooning is that there is no address -translation overhead as we use the guest Command Buffer for direct GPU -execution. - -Per-VM Local Graphics Memory ----------------------------- - -GVT-g allows each VM to use the full local graphics memory spaces of its -own, similar to the virtual address spaces on the CPU. The local -graphics memory spaces are visible only to the Render Engine in the GPU. -Therefore, any valid local graphics memory address, programmed by a VM, -can be used directly by the GPU. The GVT-g device model switches the -local graphics memory spaces, between VMs, when switching render -ownership. - -GPU Page Table Virtualization -============================= - -Shared Shadow GGTT ------------------- - -To achieve resource partitioning and address space ballooning, GVT-g -implements a shared shadow global page table for all VMs. Each VM has -its own guest global page table to translate the graphics memory page -number to the Guest memory Page Number (GPN). The shadow global page -table is then translated from the graphics memory page number to the -Host memory Page Number (HPN). - -The shared shadow global page table maintains the translations for all -VMs to support concurrent accesses from the CPU and GPU concurrently. -Therefore, GVT-g implements a single, shared shadow global page table by -trapping guest PTE updates, as shown in :numref:`shared-shadow`. 
The -global page table, in MMIO space, has 1024K PTE entries, each pointing -to a 4 KB system memory page, so the global page table overall creates a -4 GB global graphics memory space. GVT-g audits the guest PTE values -according to the address space ballooning information before updating -the shadow PTE entries. - -.. figure:: images/APL_GVT-g-shared-shadow.png - :width: 600px - :align: center - :name: shared-shadow - - Shared Shadow Global Page Table - -Per-VM Shadow PPGTT -------------------- - -To support local graphics memory access passthrough, GVT-g implements -per-VM shadow local page tables. The local graphics memory is accessible -only from the Render Engine. The local page tables have two-level -paging structures, as shown in :numref:`per-vm-shadow`. - -The first level, Page Directory Entries (PDEs), located in the global -page table, points to the second level, Page Table Entries (PTEs) in -system memory, so guest accesses to the PDE are trapped and emulated -through the implementation of shared shadow global page table. - -GVT-g also write-protects a list of guest PTE pages for each VM. The -GVT-g device model synchronizes the shadow page with the guest page, at -the time of write-protection page fault, and switches the shadow local -page tables at render context switches. - -.. figure:: images/APL_GVT-g-per-vm-shadow.png - :width: 800px - :align: center - :name: per-vm-shadow - - Per-VM Shadow PPGTT - -.. _GVT-g-prioritized-rendering: - -Prioritized Rendering and Preemption -==================================== - -Different Schedulers and Their Roles ------------------------------------- - -.. figure:: images/APL_GVT-g-scheduling-policy.png - :width: 800px - :align: center - :name: scheduling-policy - - Scheduling Policy - -In the system, there are three different schedulers for the GPU: - -- i915 User VM scheduler -- Mediator GVT scheduler -- i915 Service VM scheduler - -Because the User VM always uses the host-based command submission (ELSP) model -and it never accesses the GPU or the Graphic Micro Controller (:term:`GuC`) -directly, its scheduler cannot do any preemption by itself. -The i915 scheduler does ensure that batch buffers are -submitted in dependency order—that is, if a compositor has to wait for -an application buffer to finish before its workload can be submitted to -the GPU, then the i915 scheduler of the User VM ensures that this happens. - -The User VM assumes that by submitting its batch buffers to the Execlist -Submission Port (ELSP), the GPU will start working on them. However, -the MMIO write to the ELSP is captured by the Hypervisor, which forwards -these requests to the GVT module. GVT then creates a shadow context -based on this batch buffer and submits the shadow context to the Service VM -i915 driver. - -However, it is dependent on a second scheduler called the GVT -scheduler. This scheduler is time based and uses a round robin algorithm -to provide a specific time for each User VM to submit its workload when it -is considered as a "render owner". The workload of the User VMs that are not -render owners during a specific time period end up waiting in the -virtual GPU context until the GVT scheduler makes them render owners. -The GVT shadow context submits only one workload at -a time, and once the workload is finished by the GPU, it copies any -context state back to DomU and sends the appropriate interrupts before -picking up any other workloads from either this User VM or another one. 
This -also implies that this scheduler does not do any preemption of -workloads. - -Finally, there is the i915 scheduler in the Service VM. This scheduler uses the -:term:`GuC` or ELSP to do command submission of Service VM local content as well as any -content that GVT is submitting to it on behalf of the User VMs. This -scheduler uses :term:`GuC` or ELSP to preempt workloads. :term:`GuC` has four different -priority queues, but the Service VM i915 driver uses only two of them. One of -them is considered high priority and the other is normal priority with a -:term:`GuC` rule being that any command submitted on the high priority queue -would immediately try to preempt any workload submitted on the normal -priority queue. For ELSP submission, the i915 will submit a preempt -context to preempt the current running context and then wait for the GPU -engine to be idle. - -While the identification of workloads to be preempted is decided by -customizable scheduling policies, the i915 scheduler simply submits a -preemption request to the :term:`GuC` high-priority queue once a candidate for -preemption is identified. Based on the hardware's ability to preempt (on an -Apollo Lake SoC, 3D workload is preemptible on a 3D primitive level with -some exceptions), the currently executing workload is saved and -preempted. The :term:`GuC` informs the driver using an interrupt of a preemption -event occurring. After handling the interrupt, the driver submits the -high-priority workload through the normal priority :term:`GuC` queue. As such, -the normal priority :term:`GuC` queue is used for actual execbuf submission most -of the time with the high-priority :term:`GuC` queue being used only for the -preemption of lower-priority workload. - -Scheduling policies are customizable and left to customers to change if -they are not satisfied with the built-in i915 driver policy, where all -workloads of the Service VM are considered higher priority than those of the -User VM. This policy can be enforced through a Service VM i915 kernel command-line -parameter and can replace the default in-order command submission (no -preemption) policy. - -AcrnGT -******* - -ACRN is a flexible, lightweight reference hypervisor, built with -real-time and safety-criticality in mind, optimized to streamline -embedded development through an open-source platform. - -AcrnGT is the GVT-g implementation on the ACRN hypervisor. It adapts -the MPT interface of GVT-g onto ACRN by using the kernel APIs provided -by ACRN. - -:numref:`full-pic` shows the full architecture of AcrnGT with a Linux Guest -OS and an Android Guest OS. - -.. figure:: images/APL_GVT-g-full-pic.png - :width: 800px - :align: center - :name: full-pic - - Full picture of the AcrnGT - -AcrnGT in Kernel -================= - -The AcrnGT module in the Service VM kernel acts as an adaption layer to connect -between GVT-g in the i915, the VHM module, and the ACRN-DM user space -application: - -- AcrnGT module implements the MPT interface of GVT-g to provide - services to it, including set and unset trap areas, set and unset - write-protection pages, etc. - -- It calls the VHM APIs provided by the ACRN VHM module in the Service VM - kernel, to eventually call into the routines provided by ACRN - hypervisor through hyper-calls. - -- It provides user space interfaces through ``sysfs`` to the user space - ACRN-DM so that DM can manage the lifecycle of the virtual GPUs. - -AcrnGT in DM -============= - -To emulate a PCI device to a Guest, we need an AcrnGT sub-module in the -ACRN-DM. 
This sub-module is responsible for: - -- registering the virtual GPU device to the PCI device tree presented to - guest; - -- registerng the MMIO resources to ACRN-DM so that it can reserve - resources in ACPI table; - -- managing the lifecycle of the virtual GPU device, such as creation, - destruction, and resetting according to the state of the virtual - machine. diff --git a/doc/developer-guides/hld/hld-devicemodel.rst b/doc/developer-guides/hld/hld-devicemodel.rst index a0621ca84..b36a85add 100644 --- a/doc/developer-guides/hld/hld-devicemodel.rst +++ b/doc/developer-guides/hld/hld-devicemodel.rst @@ -216,14 +216,14 @@ DM Initialization * map[0]:0~ctx->lowmem_limit & map[2]:4G~ctx->highmem for RAM * ctx->highmem = request_memory_size - ctx->lowmem_limit * - * Begin End Type Length - * 0: 0 - 0xef000 RAM 0xEF000 - * 1 0xef000 - 0x100000 (reserved) 0x11000 - * 2 0x100000 - lowmem RAM lowmem - 0x100000 - * 3: lowmem - bff_fffff (reserved) 0xc00_00000-lowmem - * 4: 0xc00_00000 - dff_fffff PCI hole 512MB - * 5: 0xe00_00000 - fff_fffff (reserved) 512MB - * 6: 1_000_00000 - highmem RAM highmem-4G + * Begin Limit Type Length + * 0: 0 - 0xA0000 RAM 0xA0000 + * 1 0x100000 - lowmem part1 RAM 0x0 + * 2: SW SRAM_bot - SW SRAM_top (reserved) SOFTWARE_SRAM_MAX_SIZE + * 3: gpu_rsvd_bot - gpu_rsvd_top (reserved) 0x4004000 + * 4: lowmem part2 - 0x80000000 (reserved) 0x0 + * 5: 0xE0000000 - 0x100000000 MCFG, MMIO 512MB + * 6: HIGHRAM_START_ADDR - mmio64 start RAM ctx->highmem */ - **VM Loop Thread**: DM kicks this VM loop thread to create I/O diff --git a/doc/developer-guides/hld/hld-emulated-devices.rst b/doc/developer-guides/hld/hld-emulated-devices.rst index b89d50d65..64e62c0e9 100644 --- a/doc/developer-guides/hld/hld-emulated-devices.rst +++ b/doc/developer-guides/hld/hld-emulated-devices.rst @@ -15,7 +15,6 @@ documented in this section. UART virtualization Watchdog virtualization AHCI virtualization - GVT-g GPU Virtualization System timer virtualization UART emulation in hypervisor RTC emulation in hypervisor diff --git a/doc/developer-guides/hld/hld-overview.rst b/doc/developer-guides/hld/hld-overview.rst index be2a1ad8d..dd4134e73 100644 --- a/doc/developer-guides/hld/hld-overview.rst +++ b/doc/developer-guides/hld/hld-overview.rst @@ -175,6 +175,9 @@ ACRN adopts various approaches for emulating devices for the User VM: resources (mostly data-plane related) are passed-through to the User VMs and others (mostly control-plane related) are emulated. + +.. _ACRN-io-mediator: + I/O Emulation ------------- @@ -193,6 +196,7 @@ I/O read from the User VM. I/O (PIO/MMIO) Emulation Path :numref:`overview-io-emu-path` shows an example I/O emulation flow path. + When a guest executes an I/O instruction (port I/O or MMIO), a VM exit happens. The HV takes control and executes the request based on the VM exit reason ``VMX_EXIT_REASON_IO_INSTRUCTION`` for port I/O access, for @@ -224,8 +228,9 @@ HSM/hypercall. The HV then stores the result to the guest register context, advances the guest IP to indicate the completion of instruction execution, and resumes the guest. -MMIO access path is similar except for a VM exit reason of *EPT -violation*. +MMIO access path is similar except for a VM exit reason of *EPT violation*. +MMIO access is usually trapped through a ``VMX_EXIT_REASON_EPT_VIOLATION`` in +the hypervisor. DMA Emulation ------------- @@ -328,7 +333,7 @@ power operations. 
VM Manager creates the User VM based on DM application, and does User VM state management by interacting with lifecycle service in ACRN service. -Please refer to VM management chapter for more details. +Refer to VM management chapter for more details. ACRN Service ============ diff --git a/doc/developer-guides/hld/hld-security.rst b/doc/developer-guides/hld/hld-security.rst index 481345a41..707c34629 100644 --- a/doc/developer-guides/hld/hld-security.rst +++ b/doc/developer-guides/hld/hld-security.rst @@ -1034,7 +1034,7 @@ Note that there are some security considerations in this design: other User VM. Keeping the Service VM system as secure as possible is a very important goal in -the system security design, please follow the recommendations in +the system security design. Follow the recommendations in :ref:`sos_hardening`. SEED Derivation @@ -1058,7 +1058,7 @@ the non-secure OS issues this power event) is about to enter S3. While the restore state hypercall is called only by vBIOS when User VM is ready to resume from suspend state. -For security design consideration of handling secure world S3, please +For security design consideration of handling secure world S3, read the previous section: :ref:`uos_suspend_resume`. Platform Security Feature Virtualization and Enablement diff --git a/doc/developer-guides/hld/hld-vsbl.rst b/doc/developer-guides/hld/hld-vsbl.rst deleted file mode 100644 index d48fc9c88..000000000 --- a/doc/developer-guides/hld/hld-vsbl.rst +++ /dev/null @@ -1,4 +0,0 @@ -.. _hld-vsbl: - -Virtual Slim-Bootloader High-Level Design -######################################### diff --git a/doc/developer-guides/hld/hv-cpu-virt.rst b/doc/developer-guides/hld/hv-cpu-virt.rst index 2f304b1ae..4fd8e3ca0 100644 --- a/doc/developer-guides/hld/hv-cpu-virt.rst +++ b/doc/developer-guides/hld/hv-cpu-virt.rst @@ -116,7 +116,7 @@ any pCPU that is not included in it. CPU Assignment Management in HV =============================== -The physical CPU assignment is pre-defined by ``cpu_affinity`` in +The physical CPU assignment is predefined by ``cpu_affinity`` in ``vm config``, while post-launched VMs could be launched on pCPUs that are a subset of it. @@ -1084,7 +1084,7 @@ ACRN always enables I/O bitmap in *VMX_PROC_VM_EXEC_CONTROLS* and EPT in *VMX_PROC_VM_EXEC_CONTROLS2*. Based on them, *pio_instr_vmexit_handler* and *ept_violation_vmexit_handler* are used for IO/MMIO emulation for a emulated device. The emulated device -could locate in hypervisor or DM in the Service VM. Please refer to the "I/O +could locate in hypervisor or DM in the Service VM. Refer to the "I/O Emulation" section for more details. For an emulated device done in the hypervisor, ACRN provide some basic diff --git a/doc/developer-guides/hld/hv-dev-passthrough.rst b/doc/developer-guides/hld/hv-dev-passthrough.rst index c59f68ac5..dbe735050 100644 --- a/doc/developer-guides/hld/hv-dev-passthrough.rst +++ b/doc/developer-guides/hld/hv-dev-passthrough.rst @@ -83,7 +83,7 @@ one the following 4 cases: debug purpose, so the UART device is owned by hypervisor and is not visible to any VM. For now, UART is the only pci device could be owned by hypervisor. - **Pre-launched VM**: The passthrough devices will be used in a pre-launched VM is - pre-defined in VM configuration. These passthrough devices are owned by the + predefined in VM configuration. These passthrough devices are owned by the pre-launched VM after the VM is created. These devices will not be removed from the pre-launched VM. 
There could be pre-launched VM(s) in logical partition mode and hybrid mode. @@ -381,7 +381,7 @@ GSI Sharing Violation Check All the PCI devices that are sharing the same GSI should be assigned to the same VM to avoid physical GSI sharing between multiple VMs. In logical partition mode or hybrid mode, the PCI devices assigned to -pre-launched VM is statically pre-defined. Developers should take care not to +pre-launched VM is statically predefined. Developers should take care not to violate the rule. For post-launched VM, devices that don't support MSI, ACRN DM puts the devices sharing the same GSI pin to a GSI @@ -404,7 +404,7 @@ multiple PCI components with independent local time clocks within the same system. Intel supports PTM on several of its systems and devices, such as PTM root capabilities support on Whiskey Lake and Tiger Lake PCIe root ports, and PTM device support on an Intel I225-V/I225-LM family Ethernet controller. For -further details on PTM, please refer to the `PCIe specification +further details on PTM, refer to the `PCIe specification `_. ACRN adds PCIe root port emulation in the hypervisor to support the PTM feature @@ -473,7 +473,7 @@ hypervisor startup. The Device Model (DM) then checks whether the pass-through d supports PTM requestor capabilities and whether the corresponding root port supports PTM root capabilities, as well as some other sanity checks. If an error is detected during these checks, the error will be reported and ACRN will -not enable PTM in the Guest VM. This doesn’t prevent the user from launching the Guest +not enable PTM in the Guest VM. This doesn't prevent the user from launching the Guest VM and passing through the device to the Guest VM. If no error is detected, the device model will use ``add_vdev`` hypercall to add a virtual root port (VRP), acting as the PTM root, to the Guest VM before passing through the device to the Guest VM. diff --git a/doc/developer-guides/hld/hv-interrupt.rst b/doc/developer-guides/hld/hv-interrupt.rst index 86f64b343..d928df190 100644 --- a/doc/developer-guides/hld/hv-interrupt.rst +++ b/doc/developer-guides/hld/hv-interrupt.rst @@ -28,7 +28,7 @@ In the software modules view shown in :numref:`interrupt-sw-modules`, the ACRN hypervisor sets up the physical interrupt in its basic interrupt modules (e.g., IOAPIC/LAPIC/IDT). It dispatches the interrupt in the hypervisor interrupt flow control layer to the corresponding -handlers; this could be pre-defined IPI notification, timer, or runtime +handlers; this could be predefined IPI notification, timer, or runtime registered passthrough devices. The ACRN hypervisor then uses its VM interfaces based on vPIC, vIOAPIC, and vMSI modules, to inject the necessary virtual interrupt into the specific VM, or directly deliver @@ -246,9 +246,6 @@ ACRN hypervisor maintains a global IRQ Descriptor Table shared among the physical CPUs, so the same vector will link to the same IRQ number for all CPUs. -.. note:: need to reference API doc for irq_desc - - The *irq_desc[]* array's index represents IRQ number. A *handle_irq* will be called from *interrupt_dispatch* to commonly handle edge/level triggered IRQ and call the registered *action_fn*. 
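+The following is an illustrative sketch only (the struct fields, array sizes, and
+helper names here are simplified assumptions, not ACRN's actual definitions). It
+shows the relationship described above: ``irq_desc[]`` is indexed by IRQ number,
+and *interrupt_dispatch* resolves the triggering vector back to an IRQ before
+*handle_irq* invokes the registered *action_fn*:
+
+.. code-block:: c
+
+   #include <stdint.h>
+   #include <stddef.h>
+
+   #define NR_IRQS 256U
+
+   /* Simplified stand-in for the per-IRQ descriptor; the array index is the IRQ number. */
+   struct irq_desc {
+           uint32_t irq;                                   /* IRQ number */
+           uint32_t vector;                                /* vector assigned to this IRQ */
+           void (*action_fn)(uint32_t irq, void *data);    /* registered handler */
+           void *action_data;
+   };
+
+   static struct irq_desc irq_desc[NR_IRQS];   /* one global table shared by all pCPUs */
+   static uint32_t vector_to_irq[NR_IRQS];     /* the same vector maps to the same IRQ on every CPU */
+
+   /* Common edge/level handling path: look up the descriptor, run the registered action. */
+   static void handle_irq(struct irq_desc *desc)
+   {
+           if (desc->action_fn != NULL) {
+                   desc->action_fn(desc->irq, desc->action_data);
+           }
+   }
+
+   /* Called on the interrupted pCPU; mirrors the interrupt_dispatch -> handle_irq flow. */
+   static void interrupt_dispatch(uint32_t vector)
+   {
+           handle_irq(&irq_desc[vector_to_irq[vector]]);
+   }
+
+Because the descriptor table is global rather than per-CPU, the vector-to-IRQ
+mapping stays consistent no matter which physical CPU takes the interrupt.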
diff --git a/doc/developer-guides/hld/hv-ioc-virt.rst b/doc/developer-guides/hld/hv-ioc-virt.rst index 6d301028c..abbef577a 100644 --- a/doc/developer-guides/hld/hv-ioc-virt.rst +++ b/doc/developer-guides/hld/hv-ioc-virt.rst @@ -613,7 +613,4 @@ for TTY line discipline in User VM:: -l com2,/run/acrn/ioc_$vm_name -Porting and Adaptation to Different Platforms -********************************************* -TBD diff --git a/doc/developer-guides/hld/hv-rdt.rst b/doc/developer-guides/hld/hv-rdt.rst index 6fd42fbc8..539103e00 100644 --- a/doc/developer-guides/hld/hv-rdt.rst +++ b/doc/developer-guides/hld/hv-rdt.rst @@ -46,19 +46,19 @@ to enforce the settings. .. code-block:: none :emphasize-lines: 2,4 - - y - n - 0xF + + y + + 0xF Once the cache mask is set of each individual CPU, the respective CLOS ID needs to be set in the scenario XML file under ``VM`` section. If user desires -to use CDP feature, CDP_ENABLED should be set to ``y``. +to use CDP feature, ``CDP_ENABLED`` should be set to ``y``. .. code-block:: none :emphasize-lines: 2 - + 0 .. note:: @@ -113,11 +113,11 @@ for non-root and root modes to enforce the settings. .. code-block:: none :emphasize-lines: 2,5 - - y - n - - 0 + + y + n + + 0 Once the cache mask is set of each individual CPU, the respective CLOS ID needs to be set in the scenario XML file under ``VM`` section. @@ -125,7 +125,7 @@ needs to be set in the scenario XML file under ``VM`` section. .. code-block:: none :emphasize-lines: 2 - + 0 .. note:: diff --git a/doc/developer-guides/hld/hv-startup.rst b/doc/developer-guides/hld/hv-startup.rst index 808dc112f..546a40fd7 100644 --- a/doc/developer-guides/hld/hv-startup.rst +++ b/doc/developer-guides/hld/hv-startup.rst @@ -113,8 +113,8 @@ initial states, including IDT and physical PICs. After the BSP detects that all APs are up, it will continue to enter guest mode; similar, after one AP complete its initialization, it will start entering guest mode as well. -When BSP & APs enter guest mode, they will try to launch pre-defined VMs whose vBSP associated with -this physical core; these pre-defined VMs are static configured in ``vm config`` and they could be +When BSP & APs enter guest mode, they will try to launch predefined VMs whose vBSP associated with +this physical core; these predefined VMs are static configured in ``vm config`` and they could be pre-launched Safety VM or Service VM; the VM startup will be explained in next section. .. _vm-startup: diff --git a/doc/developer-guides/hld/hv-vm-management.rst b/doc/developer-guides/hld/hv-vm-management.rst index 93ed4827f..3f3388927 100644 --- a/doc/developer-guides/hld/hv-vm-management.rst +++ b/doc/developer-guides/hld/hv-vm-management.rst @@ -32,8 +32,8 @@ VM powers off, the VM returns to a 'powered off' state again. A VM can be paused to wait for some operation when it is running, so there is also a 'paused' state. -:numref:`hvvm-state` illustrates the state-machine of a VM state transition, -please refer to :ref:`hv-cpu-virt` for related VCPU state. +:numref:`hvvm-state` illustrates the state-machine of a VM state transition. +Refer to :ref:`hv-cpu-virt` for related vCPU state. .. figure:: images/hld-image108.png :align: center @@ -49,7 +49,7 @@ Pre-Launched and Service VM The hypervisor is the owner to control pre-launched and Service VM's state by calling VM APIs directly, following the design of system power -management. Please refer to ACRN power management design for more details. +management. Refer to ACRN power management design for more details. 
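+As a minimal sketch of the VM lifecycle state machine described above (the
+enumerators and the transition edges below are illustrative assumptions, not
+ACRN's actual definitions), the states the VM APIs move a VM through can be
+modeled with a small enum and a transition legality check:
+
+.. code-block:: c
+
+   #include <stdbool.h>
+
+   /* Simplified VM lifecycle states. */
+   enum vm_state {
+           VM_POWERED_OFF,   /* initial state; also reached again when the VM powers off */
+           VM_CREATED,       /* created but not yet started */
+           VM_STARTED,       /* running */
+           VM_PAUSED,        /* paused while waiting for some operation */
+   };
+
+   /* Return true only for transitions along the state machine's edges. */
+   static bool vm_state_transition_ok(enum vm_state from, enum vm_state to)
+   {
+           switch (from) {
+           case VM_POWERED_OFF:
+                   return to == VM_CREATED;
+           case VM_CREATED:
+                   return (to == VM_STARTED) || (to == VM_POWERED_OFF);
+           case VM_STARTED:
+                   return (to == VM_PAUSED) || (to == VM_POWERED_OFF);
+           case VM_PAUSED:
+                   return (to == VM_STARTED) || (to == VM_POWERED_OFF);
+           default:
+                   return false;
+           }
+   }
+
+The real implementation also tracks per-vCPU state alongside the VM state; see
+:ref:`hv-cpu-virt` for that level of detail.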
Post-Launched User VMs @@ -59,5 +59,5 @@ DM takes control of post-launched User VMs' state transition after the Service V boots, by calling VM APIs through hypercalls. Service VM user level service such as Life-Cycle-Service and tools such -as Acrnd may work together with DM to launch or stop a User VM. Please -refer to ACRN tool introduction for more details. +as ``acrnd`` may work together with DM to launch or stop a User VM. +Refer to :ref:`acrnctl` documentation for more details. diff --git a/doc/developer-guides/hld/hv-vt-d.rst b/doc/developer-guides/hld/hv-vt-d.rst index 1798c6754..f953fae4d 100644 --- a/doc/developer-guides/hld/hv-vt-d.rst +++ b/doc/developer-guides/hld/hv-vt-d.rst @@ -49,16 +49,6 @@ Pre-Parsed DMAR Information For specific platforms, the ACRN hypervisor uses pre-parsed DMA remapping reporting information directly to save hypervisor bootup time. -DMA Remapping Unit for Integrated Graphics Device -================================================= - -Generally, there is a dedicated remapping hardware unit for the Intel -integrated graphics device. ACRN implements GVT-g for graphics, but -GVT-g is not compatible with VT-d. The remapping hardware unit for the -graphics device is disabled on ACRN if GVT-g is enabled. If the graphics -device needs to passthrough to a VM, then the remapping hardware unit -must be enabled. - DMA Remapping ************* diff --git a/doc/developer-guides/hld/images/APL_GVT-g-DM.png b/doc/developer-guides/hld/images/APL_GVT-g-DM.png deleted file mode 100644 index 3417cd845..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-DM.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-access-patterns.png b/doc/developer-guides/hld/images/APL_GVT-g-access-patterns.png deleted file mode 100644 index fe53aea35..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-access-patterns.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-api-forwarding.png b/doc/developer-guides/hld/images/APL_GVT-g-api-forwarding.png deleted file mode 100644 index 2b75b70b9..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-api-forwarding.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-arch.png b/doc/developer-guides/hld/images/APL_GVT-g-arch.png deleted file mode 100644 index a42b6849b..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-arch.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-direct-display.png b/doc/developer-guides/hld/images/APL_GVT-g-direct-display.png deleted file mode 100644 index c52056100..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-direct-display.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-display-virt.png b/doc/developer-guides/hld/images/APL_GVT-g-display-virt.png deleted file mode 100644 index ef733f1e7..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-display-virt.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-full-pic.png b/doc/developer-guides/hld/images/APL_GVT-g-full-pic.png deleted file mode 100644 index ef68aaae3..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-full-pic.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-graphics-arch.png b/doc/developer-guides/hld/images/APL_GVT-g-graphics-arch.png deleted file mode 100644 index 6e1e7f083..000000000 Binary files 
a/doc/developer-guides/hld/images/APL_GVT-g-graphics-arch.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-hyper-dma.png b/doc/developer-guides/hld/images/APL_GVT-g-hyper-dma.png deleted file mode 100644 index b62f34939..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-hyper-dma.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-indirect-display.png b/doc/developer-guides/hld/images/APL_GVT-g-indirect-display.png deleted file mode 100644 index 071ee4252..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-indirect-display.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-interrupt-virt.png b/doc/developer-guides/hld/images/APL_GVT-g-interrupt-virt.png deleted file mode 100644 index 86f4c5463..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-interrupt-virt.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-ive-use-case.png b/doc/developer-guides/hld/images/APL_GVT-g-ive-use-case.png deleted file mode 100644 index 0f8ee8f93..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-ive-use-case.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-mediated-pass-through.png b/doc/developer-guides/hld/images/APL_GVT-g-mediated-pass-through.png deleted file mode 100644 index 83f6c1fbb..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-mediated-pass-through.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-mem-part.png b/doc/developer-guides/hld/images/APL_GVT-g-mem-part.png deleted file mode 100644 index 20254778d..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-mem-part.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-pass-through.png b/doc/developer-guides/hld/images/APL_GVT-g-pass-through.png deleted file mode 100644 index 5710fb6f4..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-pass-through.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-per-vm-shadow.png b/doc/developer-guides/hld/images/APL_GVT-g-per-vm-shadow.png deleted file mode 100644 index 4df83cbc9..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-per-vm-shadow.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-perf-critical.png b/doc/developer-guides/hld/images/APL_GVT-g-perf-critical.png deleted file mode 100644 index 3f102a563..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-perf-critical.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-plane-based.png b/doc/developer-guides/hld/images/APL_GVT-g-plane-based.png deleted file mode 100644 index 71fc951c6..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-plane-based.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-scheduling-policy.png b/doc/developer-guides/hld/images/APL_GVT-g-scheduling-policy.png deleted file mode 100644 index 44e81760a..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-scheduling-policy.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-scheduling.png b/doc/developer-guides/hld/images/APL_GVT-g-scheduling.png deleted file mode 100644 index a819a532d..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-scheduling.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-shared-shadow.png 
b/doc/developer-guides/hld/images/APL_GVT-g-shared-shadow.png deleted file mode 100644 index 1cbfb1ed0..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-shared-shadow.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/APL_GVT-g-workload.png b/doc/developer-guides/hld/images/APL_GVT-g-workload.png deleted file mode 100644 index e88ecd6ba..000000000 Binary files a/doc/developer-guides/hld/images/APL_GVT-g-workload.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/interrupt-image39.png b/doc/developer-guides/hld/images/interrupt-image39.png deleted file mode 100644 index 513ccf895..000000000 Binary files a/doc/developer-guides/hld/images/interrupt-image39.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/ioc-image49.png b/doc/developer-guides/hld/images/ioc-image49.png deleted file mode 100644 index 3671654de..000000000 Binary files a/doc/developer-guides/hld/images/ioc-image49.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/ioc-image65.png b/doc/developer-guides/hld/images/ioc-image65.png deleted file mode 100644 index 52a80e30c..000000000 Binary files a/doc/developer-guides/hld/images/ioc-image65.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/mem-image2.png b/doc/developer-guides/hld/images/mem-image2.png deleted file mode 100644 index b34ea6aeb..000000000 Binary files a/doc/developer-guides/hld/images/mem-image2.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/mem-image5.png b/doc/developer-guides/hld/images/mem-image5.png deleted file mode 100644 index 696cd5737..000000000 Binary files a/doc/developer-guides/hld/images/mem-image5.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/mem-image6.png b/doc/developer-guides/hld/images/mem-image6.png deleted file mode 100644 index 44ac7c3bb..000000000 Binary files a/doc/developer-guides/hld/images/mem-image6.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/mem-image7.png b/doc/developer-guides/hld/images/mem-image7.png deleted file mode 100644 index 39eb74b18..000000000 Binary files a/doc/developer-guides/hld/images/mem-image7.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/virtio-hld-image6.png b/doc/developer-guides/hld/images/virtio-hld-image6.png deleted file mode 100644 index fe0f7de58..000000000 Binary files a/doc/developer-guides/hld/images/virtio-hld-image6.png and /dev/null differ diff --git a/doc/developer-guides/hld/images/virtio-hld-image7.png b/doc/developer-guides/hld/images/virtio-hld-image7.png deleted file mode 100644 index 6b8888e31..000000000 Binary files a/doc/developer-guides/hld/images/virtio-hld-image7.png and /dev/null differ diff --git a/doc/developer-guides/hld/index.rst b/doc/developer-guides/hld/index.rst index 95549c92f..973e01cd8 100644 --- a/doc/developer-guides/hld/index.rst +++ b/doc/developer-guides/hld/index.rst @@ -25,5 +25,4 @@ system. 
Virtio Devices Power Management Tracing and Logging - Virtual Bootloader Security diff --git a/doc/developer-guides/hld/virtio-console.rst b/doc/developer-guides/hld/virtio-console.rst index 2d374a2e1..971ff9378 100644 --- a/doc/developer-guides/hld/virtio-console.rst +++ b/doc/developer-guides/hld/virtio-console.rst @@ -82,12 +82,12 @@ The device model configuration command syntax for virtio-console is:: - The ``stdio/tty/pty`` is TTY capable, which means :kbd:`TAB` and :kbd:`BACKSPACE` are supported, as on a regular terminal -- When TTY is used, please make sure the redirected TTY is sleeping, +- When TTY is used, make sure the redirected TTY is sleeping, (e.g., by ``sleep 2d`` command), and will not read input from stdin before it is used by virtio-console to redirect guest output. -- When virtio-console socket_type is appointed to client, please make sure - server VM(socket_type is appointed to server) has started. +- When virtio-console socket_type is appointed to client, make sure + server VM (socket_type is appointed to server) has started. - Claiming multiple virtio-serial ports as consoles is supported, however the guest Linux OS will only use one of them, through the @@ -222,7 +222,7 @@ SOCKET The virtio-console socket-type can be set as socket server or client. Device model will create a Unix domain socket if appointed the socket_type as server, then server VM or another user VM can bind and listen for communication requirement. If appointed to -client, please make sure the socket server is ready prior to launch device model. +client, make sure the socket server is ready prior to launch device model. 1. Add a PCI slot to the device model (``acrn-dm``) command line, adjusting the ```` to your use case in the VM1 configuration:: diff --git a/doc/developer-guides/hld/virtio-net.rst b/doc/developer-guides/hld/virtio-net.rst index 910dc1bc9..c4d3428b7 100644 --- a/doc/developer-guides/hld/virtio-net.rst +++ b/doc/developer-guides/hld/virtio-net.rst @@ -193,7 +193,7 @@ example, showing the flow through each layer: .. code-block:: c - vhm_intr_handler --> // HSM interrupt handler + hsm_intr_handler --> // HSM interrupt handler tasklet_schedule --> io_req_tasklet --> acrn_ioreq_distribute_request --> // ioreq can't be processed in HSM, forward it to device DM @@ -348,7 +348,7 @@ cases.) .. code-block:: c - vhm_dev_ioctl --> // process the IOCTL and call hypercall to inject interrupt + hsm_dev_ioctl --> // process the IOCTL and call hypercall to inject interrupt hcall_inject_msi --> **ACRN Hypervisor** @@ -426,11 +426,10 @@ our case, we use systemd to automatically create the network by default. 
You can check the files with prefix 50- in the Service VM ``/usr/lib/systemd/network/``: -- :acrn_raw:`50-acrn.netdev ` -- :acrn_raw:`50-acrn.netdev ` -- :acrn_raw:`50-acrn.network ` -- :acrn_raw:`50-tap0.netdev ` -- :acrn_raw:`50-eth.network ` +- :acrn_raw:`50-acrn.netdev ` +- :acrn_raw:`50-acrn.network ` +- :acrn_raw:`50-tap0.netdev ` +- :acrn_raw:`50-eth.network ` When the Service VM is started, run ``ifconfig`` to show the devices created by this systemd configuration: diff --git a/doc/developer-guides/images/GVT-g-porting-image1.png b/doc/developer-guides/images/GVT-g-porting-image1.png deleted file mode 100644 index 50aa2216d..000000000 Binary files a/doc/developer-guides/images/GVT-g-porting-image1.png and /dev/null differ diff --git a/doc/developer-guides/trusty.rst b/doc/developer-guides/trusty.rst index 0b710bbb9..8e5a9491f 100644 --- a/doc/developer-guides/trusty.rst +++ b/doc/developer-guides/trusty.rst @@ -9,7 +9,7 @@ Introduction `Trusty`_ is a set of software components supporting a Trusted Execution Environment (TEE). TEE is commonly known as an isolated processing environment in which applications can be securely executed irrespective of the rest of the -system. For more information about TEE, please visit the +system. For more information about TEE, visit the `Trusted Execution Environment wiki page `_. Trusty consists of: diff --git a/doc/getting-started/building-from-source.rst b/doc/getting-started/building-from-source.rst deleted file mode 100644 index fbe84247e..000000000 --- a/doc/getting-started/building-from-source.rst +++ /dev/null @@ -1,266 +0,0 @@ -.. _getting-started-building: - -Build ACRN From Source -###################### - -Following a general embedded-system programming model, the ACRN -hypervisor is designed to be customized at build time per hardware -platform and per usage scenario, rather than one binary for all -scenarios. - -The hypervisor binary is generated based on configuration settings in XML -files. Instructions about customizing these settings can be found in -:ref:`getting-started-hypervisor-configuration`. - -One binary for all platforms and all usage scenarios is not -supported. Dynamic configuration parsing is not used in -the ACRN hypervisor for these reasons: - -- **Maintain functional safety requirements.** Implementing dynamic parsing - introduces dynamic objects, which violate functional safety requirements. - -- **Reduce complexity.** ACRN is a lightweight reference hypervisor, built for - embedded IoT. As new platforms for embedded systems are rapidly introduced, - support for one binary could require more and more complexity in the - hypervisor, which is something we strive to avoid. - -- **Maintain small footprint.** Implementing dynamic parsing introduces - hundreds or thousands of lines of code. Avoiding dynamic parsing - helps keep the hypervisor's Lines of Code (LOC) in a desirable range (less - than 40K). - -- **Improve boot time.** Dynamic parsing at runtime increases the boot - time. Using a build-time configuration and not dynamic parsing - helps improve the boot time of the hypervisor. - - -Build the ACRN hypervisor, device model, and tools from source by following -these steps. - -.. contents:: - :local: - :depth: 1 - -.. _install-build-tools-dependencies: - -.. rst-class:: numbered-step - -Install Build Tools and Dependencies -************************************ - -ACRN development is supported on popular Linux distributions, each with their -own way to install development tools. 
This user guide covers the steps to -configure and build ACRN natively on **Ubuntu 18.04 or newer**. - -The following commands install the necessary tools for configuring and building -ACRN. - - .. code-block:: none - - sudo apt install gcc \ - git \ - make \ - libssl-dev \ - libpciaccess-dev \ - uuid-dev \ - libsystemd-dev \ - libevent-dev \ - libxml2-dev \ - libxml2-utils \ - libusb-1.0-0-dev \ - python3 \ - python3-pip \ - libblkid-dev \ - e2fslibs-dev \ - pkg-config \ - libnuma-dev \ - liblz4-tool \ - flex \ - bison \ - xsltproc \ - clang-format - - sudo pip3 install lxml xmlschema defusedxml - - wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz - tar zxvf acpica-unix-20210105.tar.gz - cd acpica-unix-20210105 - make clean && make iasl - sudo cp ./generate/unix/bin/iasl /usr/sbin/ - -.. rst-class:: numbered-step - -Get the ACRN Hypervisor Source Code -*********************************** - -The `ACRN hypervisor `_ -repository contains four main components: - -1. The ACRN hypervisor code is in the ``hypervisor`` directory. -#. The ACRN device model code is in the ``devicemodel`` directory. -#. The ACRN debug tools source code is in the ``misc/debug_tools`` directory. -#. The ACRN online services source code is in the ``misc/services`` directory. - -Enter the following to get the ACRN hypervisor source code: - -.. code-block:: none - - git clone https://github.com/projectacrn/acrn-hypervisor - - -.. _build-with-acrn-scenario: - -.. rst-class:: numbered-step - -Build With the ACRN Scenario -**************************** - -Currently, the ACRN hypervisor defines these typical usage scenarios: - -SDC: - The SDC (Software Defined Cockpit) scenario defines a simple - automotive use case that includes one pre-launched Service VM and one - post-launched User VM. - -LOGICAL_PARTITION: - This scenario defines two pre-launched VMs. - -INDUSTRY: - This scenario is an example for industrial usage with up to eight VMs: - one pre-launched Service VM, five post-launched Standard VMs (for Human - interaction etc.), one post-launched RT VMs (for real-time control), - and one Kata Container VM. - -HYBRID: - This scenario defines a hybrid use case with three VMs: one - pre-launched Safety VM, one pre-launched Service VM, and one post-launched - Standard VM. - -HYBRID_RT: - This scenario defines a hybrid use case with three VMs: one - pre-launched RTVM, one pre-launched Service VM, and one post-launched - Standard VM. - -XML configuration files for these scenarios on supported boards are available -under the ``misc/config_tools/data`` directory. - -Assuming that you are at the top level of the ``acrn-hypervisor`` directory, perform -the following to build the hypervisor, device model, and tools: - -.. note:: - The debug version is built by default. To build a release version, - build with ``RELEASE=y`` explicitly, regardless of whether a previous - build exists. - -* Build the debug version of ``INDUSTRY`` scenario on the ``nuc7i7dnb``: - - .. code-block:: none - - make BOARD=nuc7i7dnb SCENARIO=industry - -* Build the release version of ``HYBRID`` scenario on the ``whl-ipc-i5``: - - .. code-block:: none - - make BOARD=whl-ipc-i5 SCENARIO=hybrid RELEASE=y - -* Build the release version of ``HYBRID_RT`` scenario on the ``whl-ipc-i7`` - (hypervisor only): - - .. code-block:: none - - make BOARD=whl-ipc-i7 SCENARIO=hybrid_rt RELEASE=y hypervisor - -* Build the release version of the device model and tools: - - .. 
code-block:: none - - make RELEASE=y devicemodel tools - -You can also build ACRN with your customized scenario: - -* Build with your own scenario configuration on the ``nuc11tnbi5``, assuming the - scenario is defined in ``/path/to/scenario.xml``: - - .. code-block:: none - - make BOARD=nuc11tnbi5 SCENARIO=/path/to/scenario.xml - -* Build with your own board and scenario configuration, assuming the board and - scenario XML files are ``/path/to/board.xml`` and ``/path/to/scenario.xml``: - - .. code-block:: none - - make BOARD=/path/to/board.xml SCENARIO=/path/to/scenario.xml - -.. note:: - ACRN uses XML files to summarize board characteristics and scenario - settings. The ``BOARD`` and ``SCENARIO`` variables accept board/scenario - names as well as paths to XML files. When board/scenario names are given, the - build system searches for XML files with the same names under - ``misc/config_tools/data/``. When paths (absolute or relative) to the XML - files are given, the build system uses the files pointed at. If relative - paths are used, they are considered relative to the current working - directory. - -See the :ref:`hardware` document for information about platform needs for each -scenario. For more instructions to customize scenarios, see -:ref:`getting-started-hypervisor-configuration` and -:ref:`acrn_configuration_tool`. - -The build results are found in the ``build`` directory. You can specify -a different build directory by setting the ``O`` ``make`` parameter, -for example: ``make O=build-nuc``. - -To query the board, scenario, and build type of an existing build, the -``hvshowconfig`` target will help. - - .. code-block:: none - - $ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor - ... - $ make hvshowconfig - Build directory: /path/to/acrn-hypervisor/build/hypervisor - This build directory is configured with the settings below. - - BOARD = tgl-rvp - - SCENARIO = hybrid_rt - - RELEASE = n - -.. _getting-started-hypervisor-configuration: - -.. rst-class:: numbered-step - -Modify the Hypervisor Configuration -*********************************** - -The ACRN hypervisor is built with scenario encoded in an XML file (referred to -as the scenario XML hereinafter). The scenario XML of a build can be found at -``/hypervisor/.scenario.xml``, where ```` is the name of the build -directory. You can make further changes to this file to adjust to your specific -requirements. Another ``make`` will rebuild the hypervisor using the updated -scenario XML. - -The following commands show how to customize manually the scenario XML based on -the predefined ``INDUSTRY`` scenario for ``nuc7i7dnb`` and rebuild the -hypervisor. The ``hvdefconfig`` target generates the configuration files without -building the hypervisor, allowing users to tweak the configurations. - -.. code-block:: none - - make BOARD=nuc7i7dnb SCENARIO=industry hvdefconfig - vim build/hypervisor/.scenario.xml - #(Modify the XML file per your needs) - make - -.. note:: - A hypervisor build remembers the board and scenario previously - configured. Thus, there is no need to duplicate BOARD and SCENARIO in the - second ``make`` above. - -While the scenario XML files can be changed manually, we recommend you use the -ACRN web-based configuration app that provides valid options and descriptions -of the configuration entries. Refer to :ref:`acrn_config_tool_ui` for more -instructions. - -Descriptions of each configuration entry in scenario XML files are also -available at :ref:`scenario-config-options`. 
diff --git a/doc/getting-started/getting-started.rst b/doc/getting-started/getting-started.rst index 4c7e9c8eb..d9f63ceee 100644 --- a/doc/getting-started/getting-started.rst +++ b/doc/getting-started/getting-started.rst @@ -1,648 +1,831 @@ .. _gsg: .. _rt_industry_ubuntu_setup: +.. _getting-started-building: Getting Started Guide ##################### -.. contents:: - :local: - :depth: 1 +This guide will help you get started with ACRN. We'll show how to prepare a +build environment on your development computer. Then we'll walk through the +steps to set up a simple ACRN configuration on a target system. The +configuration is based on the ACRN predefined **industry** scenario and consists +of an ACRN hypervisor, Service VM, and one User VM, as illustrated in this +figure: -Introduction -************ +.. image:: ./images/gsg_scenario.png + :scale: 80% -This document describes the various steps to set up a system based on the following components: - -- ACRN: Industry scenario -- Service VM OS: Ubuntu (running off the NVMe storage device) -- Real-Time VM (RTVM) OS: Ubuntu modified to use a PREEMPT-RT kernel (running off the - SATA storage device) -- Post-launched User VM OS: Windows - -Verified Version -**************** - -- Ubuntu version: **18.04** -- GCC version: **7.5** -- ACRN-hypervisor branch: **release_2.5 (v2.5)** -- ACRN-Kernel (Service VM kernel): **release_2.5 (v2.5)** -- RT kernel for Ubuntu User OS: **4.19/preempt-rt (4.19.72-rt25)** -- HW: Intel NUC 11 Pro Kit NUC11TNHi5 (`NUC11TNHi5 - `_) - -.. note:: This NUC is based on the - `NUC11TNBi5 board `_. - The ``BOARD`` parameter that is used to build ACRN for this NUC is therefore ``nuc11tnbi5``. +Throughout this guide, you will be exposed to some of the tools, processes, and +components of the ACRN project. Let's get started. Prerequisites -************* +************** -- VMX/VT-D are enabled and secure boot is disabled in the BIOS -- Ubuntu 18.04 boot-able USB disk -- Monitors with HDMI interface (DP interface is optional) -- USB keyboard and mouse -- Ethernet cables +You will need two machines: a development computer and a target system. The +development computer is where you configure and build ACRN and your application. +The target system is where you deploy and run ACRN and your application. + +.. image:: ./images/gsg_host_target.png + :scale: 60% + +Before you begin, make sure your machines have the following prerequisites: + +**Development computer**: + +* Hardware specifications + + - A PC with Internet access (A fast system with multiple cores and 16MB + memory or more will make the builds go faster.) + +* Software specifications + + - Ubuntu Desktop 18.04 or newer + (ACRN development is not supported on Windows.) + +**Target system**: + +* Hardware specifications + + - Target board (see :ref:`hardware_tested`) + - Ubuntu 18.04 Desktop bootable USB disk: download the `Ubuntu 18.04.05 Desktop + ISO image `_ and follow the `Ubuntu documentation + `__ + for instructions for creating the USB disk. + - USB keyboard and mouse + - Monitor + - Ethernet cable and Internet access + - Serial-to-USB cable to view the ACRN and VM console (optional) + - A second USB disk with minimum 1GB capacity to copy files between the + development computer and target system + - Local storage device (NVMe or SATA drive, for example) .. rst-class:: numbered-step -Hardware Connection +Set Up the Hardware ******************* -Connect the NUC11TNHi5 with the appropriate external devices. +To set up the hardware environment: -#. 
Connect the NUC11TNHi5 NUC to a monitor via an HDMI cable. -#. Connect the mouse, keyboard, Ethernet cable, and power supply cable to - the NUC11TNHi5 board. -#. Insert the Ubuntu 18.04 USB boot disk into the USB port. +#. Connect the mouse, keyboard, monitor, and power supply cable to the target + system. - .. figure:: images/rt-ind-ubun-hw-1.png - :scale: 15 +#. Connect the target system to the LAN with the Ethernet cable. - .. figure:: images/rt-ind-ubun-hw-2.png - :scale: 15 +#. (Optional) Connect the serial cable between the target and development + computer to view the ACRN and VM console (for an example, see :ref:`connect_serial_port`). + +Example of a target system with cables connected: + +.. image:: ./images/gsg_nuc.png + :scale: 25% .. rst-class:: numbered-step - -.. _install-ubuntu-rtvm-sata: - -Install the Ubuntu User VM (RTVM) on the SATA Disk -************************************************** - -.. note:: The NUC11TNHi5 NUC contains both an NVMe and SATA disk. - Before you install the Ubuntu User VM on the SATA disk, either - remove the NVMe disk or delete its blocks. - -#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine. -#. Power on the machine, then press F10 to select the USB disk as the boot - device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the - label depends on the brand/make of the USB drive. -#. Install the Ubuntu OS. -#. Select **Something else** to create the partition. - - .. figure:: images/native-ubuntu-on-SATA-1.png - -#. Configure the ``/dev/sda`` partition. Refer to the diagram below: - - .. figure:: images/native-ubuntu-on-SATA-3.png - - a. Select the ``/dev/sda`` partition, not ``/dev/nvme0p1``. - b. Select ``/dev/sda`` **ATA KINGSTON SA400S3** as the device for the - bootloader installation. Note that the label depends on the SATA disk used. - -#. Complete the Ubuntu installation on ``/dev/sda``. - -This Ubuntu installation will be modified later (see `Build and Install the RT kernel for the Ubuntu User VM`_) -to turn it into a real-time User VM (RTVM). - -.. rst-class:: numbered-step - -.. _install-ubuntu-Service VM-NVMe: - -Install the Ubuntu Service VM on the NVMe Disk -********************************************** - -.. note:: Before you install the Ubuntu Service VM on the NVMe disk, please - remove the SATA disk. - -#. Insert the Ubuntu USB boot disk into the NUC11TNHi5 machine. -#. Power on the machine, then press F10 to select the USB disk as the boot - device. Select **UEFI: SanDisk** to boot using **UEFI**. Note that the - label depends on the brand/make of the USB drive. -#. Install the Ubuntu OS. -#. Select **Something else** to create the partition. - - .. figure:: images/native-ubuntu-on-NVME-1.png - -#. Configure the ``/dev/nvme0n1`` partition. Refer to the diagram below: - - .. figure:: images/native-ubuntu-on-NVME-3.png - - a. Select the ``/dev/nvme0n1`` partition, not ``/dev/sda``. - b. Select ``/dev/nvme0n1`` **Lenovo SL700 PCI-E M.2 256G** as the device for the - bootloader installation. Note that the label depends on the NVMe disk used. - -#. Complete the Ubuntu installation and reboot the system. - - .. note:: Set ``acrn`` as the username for the Ubuntu Service VM. - - -.. rst-class:: numbered-step - -.. _build-and-install-acrn-on-ubuntu: - -Build and Install ACRN on Ubuntu +Prepare the Development Computer ******************************** -Pre-Steps -========= +To set up the ACRN build environment on the development computer: -#. Set the network configuration, proxy, etc. -#. Update Ubuntu: +#. 
On the development computer, run the following command to confirm that Ubuntu + Desktop 18.04 or newer is running: - .. code-block:: none + .. code-block:: bash - $ sudo -E apt update + cat /etc/os-release -#. Create a work folder: + If you have an older version, see `Ubuntu documentation + `__ to + install a new OS on the development computer. - .. code-block:: none +#. Update Ubuntu with any outstanding patches: - $ mkdir /home/acrn/work + .. code-block:: bash -Build the ACRN Hypervisor on Ubuntu -=================================== + sudo apt update -#. Install the necessary libraries: + Followed by: - .. code-block:: none + .. code-block:: bash - $ sudo apt install gcc \ - git \ - make \ - libssl-dev \ - libpciaccess-dev \ - uuid-dev \ - libsystemd-dev \ - libevent-dev \ - libxml2-dev \ - libxml2-utils \ - libusb-1.0-0-dev \ - python3 \ - python3-pip \ - libblkid-dev \ - e2fslibs-dev \ - pkg-config \ - libnuma-dev \ - liblz4-tool \ - flex \ - bison \ - xsltproc \ - clang-format + sudo apt upgrade -y - $ sudo pip3 install lxml xmlschema defusedxml +#. Install the necessary ACRN build tools: -#. Starting with the ACRN v2.2 release, we use the ``iasl`` tool to - compile an offline ACPI binary for pre-launched VMs while building ACRN, - so we need to install the ``iasl`` tool in the ACRN build environment. + .. code-block:: bash - Follow these steps to install ``iasl`` (and its dependencies) and - then update the ``iasl`` binary with a newer version not available - in Ubuntu 18.04: + sudo apt install gcc \ + git \ + make \ + vim \ + libssl-dev \ + libpciaccess-dev \ + uuid-dev \ + libsystemd-dev \ + libevent-dev \ + libxml2-dev \ + libxml2-utils \ + libusb-1.0-0-dev \ + python3 \ + python3-pip \ + libblkid-dev \ + e2fslibs-dev \ + pkg-config \ + libnuma-dev \ + liblz4-tool \ + flex \ + bison \ + xsltproc \ + clang-format \ + bc - .. code-block:: none +#. Install Python package dependencies: - $ cd /home/acrn/work - $ wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz - $ tar zxvf acpica-unix-20210105.tar.gz - $ cd acpica-unix-20210105 - $ make clean && make iasl - $ sudo cp ./generate/unix/bin/iasl /usr/sbin/ + .. code-block:: bash -#. Get the ACRN source code: + sudo pip3 install lxml xmlschema defusedxml - .. code-block:: none +#. Install the iASL compiler/disassembler used for advanced power management, + device discovery, and configuration (ACPI) within the host OS: - $ cd /home/acrn/work - $ git clone https://github.com/projectacrn/acrn-hypervisor - $ cd acrn-hypervisor + .. code-block:: bash -#. Switch to the v2.5 version: + mkdir ~/acrn-work + cd ~/acrn-work + wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz + tar zxvf acpica-unix-20210105.tar.gz + cd acpica-unix-20210105 + make clean && make iasl + sudo cp ./generate/unix/bin/iasl /usr/sbin - .. code-block:: none +#. Get the ACRN hypervisor and kernel source code. (Note these checkout tags + will be updated when we launch the v2.6 release): - $ git checkout v2.5 + .. code-block:: bash -#. Build ACRN: + cd ~/acrn-work + git clone https://github.com/projectacrn/acrn-hypervisor + cd acrn-hypervisor + git checkout v2.6 - .. code-block:: none + cd .. + git clone https://github.com/projectacrn/acrn-kernel + cd acrn-kernel + git checkout release_2.6 - $ make BOARD=nuc11tnbi5 SCENARIO=industry - $ sudo make install - $ sudo mkdir -p /boot/acrn - $ sudo cp build/hypervisor/acrn.bin /boot/acrn/ - -.. 
_build-and-install-ACRN-kernel: - -Build and Install the ACRN Kernel -================================= - -#. Build the Service VM kernel from the ACRN repo: - - .. code-block:: none - - $ cd /home/acrn/work/ - $ git clone https://github.com/projectacrn/acrn-kernel - $ cd acrn-kernel - -#. Switch to the 5.4 kernel: - - .. code-block:: none - - $ git checkout v2.5 - $ cp kernel_config_uefi_sos .config - $ make olddefconfig - $ make all - -Install the Service VM Kernel and Modules -========================================= - -.. code-block:: none - - $ sudo make modules_install - $ sudo cp arch/x86/boot/bzImage /boot/bzImage - -.. _gsg_update_grub: - -Update Grub for the Ubuntu Service VM -===================================== - -#. Update the ``/etc/grub.d/40_custom`` file as shown below. - - .. note:: - Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as - a single line and not as multiple lines. Otherwise, the kernel will - fail to boot. - - .. code-block:: none - - menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm { - load_video - insmod gzio - insmod part_gpt - insmod ext2 - - search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1 - echo 'loading ACRN...' - multiboot2 /boot/acrn/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83" - module2 /boot/bzImage Linux_bzImage - } - - .. note:: - Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter) - (or use the device node directly) of the root partition (e.g. - ``/dev/nvme0n1p2``). Hint: use ``sudo blkid ``. - - Update the kernel name if you used a different name as the source - for your Service VM kernel. - - Add the ``menuentry`` at the bottom of :file:`40_custom`, keep the - ``exec tail`` line at the top intact. - -#. Modify the ``/etc/default/grub`` file to make the Grub menu visible when - booting and make it load the Service VM kernel by default. Modify the - lines shown below: - - .. code-block:: none - - GRUB_DEFAULT=ubuntu-service-vm - #GRUB_TIMEOUT_STYLE=hidden - GRUB_TIMEOUT=5 - GRUB_CMDLINE_LINUX="text" - -#. Update Grub on your system: - - .. code-block:: none - - $ sudo update-grub - -Enable Network Sharing for the User VM -====================================== - -In the Ubuntu Service VM, enable network sharing for the User VM: - -.. code-block:: none - - $ sudo systemctl enable systemd-networkd - $ sudo systemctl start systemd-networkd - - -Reboot the System -================= - -Reboot the system. You should see the Grub menu with the new **ACRN -ubuntu-service-vm** entry. Select it and proceed to booting the platform. The -system will start Ubuntu and you can now log in (as before). - -To verify that the hypervisor is effectively running, check ``dmesg``. The -typical output of a successful installation resembles the following: - -.. code-block:: none - - $ dmesg | grep ACRN - [ 0.000000] Hypervisor detected: ACRN - [ 0.862942] ACRN HVLog: acrn_hvlog_init - - -Additional Settings in the Service VM -===================================== - -Build and Install the RT Kernel for the Ubuntu User VM ------------------------------------------------------- - -Follow these instructions to build the RT kernel. - -#. Clone the RT kernel source code: - - .. note:: - This guide assumes you are doing this within the Service VM. This - **acrn-kernel** repository was already cloned under ``/home/acrn/work`` - earlier on so you can just ``cd`` into it and perform the ``git checkout`` - directly. - - .. 
code-block:: none - - $ git clone https://github.com/projectacrn/acrn-kernel - $ cd acrn-kernel - $ git checkout origin/4.19/preempt-rt - $ make mrproper - - .. note:: - The ``make mrproper`` is to make sure there is no ``.config`` file - left from any previous build (e.g. the one for the Service VM kernel). - -#. Build the kernel: - - .. code-block:: none - - $ cp x86-64_defconfig .config - $ make olddefconfig - $ make targz-pkg - -#. Copy the kernel and modules: - - .. code-block:: none - - $ sudo mount /dev/sda2 /mnt - $ sudo cp arch/x86/boot/bzImage /mnt/boot/ - $ sudo tar -zxvf linux-4.19.72-rt25-x86.tar.gz -C /mnt/ - $ sudo cd ~ && sudo umount /mnt && sync +.. _gsg-board-setup: .. rst-class:: numbered-step -Launch the RTVM -*************** +Prepare the Target and Generate a Board Configuration File +*************************************************************** -Grub in the Ubuntu User VM (RTVM) needs to be configured to use the new RT -kernel that was just built and installed on the rootfs. Follow these steps to -perform this operation. +A **board configuration file** is an XML file that stores hardware-specific information extracted from the target system. The file is used to configure +the ACRN hypervisor, because each hypervisor instance is specific to your +target hardware. -Update the Grub File -==================== +You use the **board inspector tool** to generate the board +configuration file. -#. Reboot into the Ubuntu User VM located on the SATA drive and log on. +.. important:: -#. Update the ``/etc/grub.d/40_custom`` file as shown below. + Whenever you change the configuration of the board, such as BIOS settings, + additional memory, or PCI devices, you must + generate a new board configuration file. - .. note:: - Enter the command line for the kernel in ``/etc/grub.d/40_custom`` as - a single line and not as multiple lines. Otherwise, the kernel will - fail to boot. +Install OS on the Target +============================ - .. code-block:: none +The target system needs Ubuntu 18.04 to run the board inspector tool. - menuentry "ACRN Ubuntu User VM" --id ubuntu-user-vm { - load_video - insmod gzio - insmod part_gpt - insmod ext2 +To install Ubuntu 18.04: - search --no-floppy --fs-uuid --set b2ae4879-c0b6-4144-9d28-d916b578f2eb - echo 'loading ACRN...' +#. Insert the Ubuntu bootable USB disk into the target system. - linux /boot/bzImage root=PARTUUID= rw rootwait nohpet console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll irqaffinity=0 - } +#. Power on the target system, and select the USB disk as the boot device + in the UEFI + menu. Note that the USB disk label presented in the boot options depends on + the brand/make of the USB drive. (You will need to configure the BIOS to boot + off the USB device first, if that option isn't available.) - .. note:: - Update this to use the UUID (``--set``) and PARTUUID (``root=`` parameter) - (or use the device node directly) of the root partition (e.g. ``/dev/sda2). - Hint: use ``sudo blkid /dev/sda*``. +#. After selecting the language and keyboard layout, select the **Normal + installation** and **Download updates while installing Ubuntu** (downloading + updates requires the target to have an Internet connection). 
- Update the kernel name if you used a different name as the source - for your Service VM kernel. + .. image:: ./images/gsg_ubuntu_install_01.png - Add the ``menuentry`` at the bottom of :file:`40_custom`, keep the - ``exec tail`` line at the top intact. +#. Use the checkboxes to choose whether you'd like to install Ubuntu alongside + another operating system, or delete your existing operating system and + replace it with Ubuntu: -#. Modify the ``/etc/default/grub`` file to make the grub menu visible when - booting and make it load the RT kernel by default. Modify the - lines shown below: + .. image:: ./images/gsg_ubuntu_install_02.jpg + :scale: 85% - .. code-block:: none +#. Complete the Ubuntu installation and create a new user account ``acrn`` and + set a password. - GRUB_DEFAULT=ubuntu-user-vm - #GRUB_TIMEOUT_STYLE=hidden - GRUB_TIMEOUT=5 +#. The next section shows how to configure BIOS settings. -#. Update Grub on your system: - - .. code-block:: none - - $ sudo update-grub - -#. Reboot into the Ubuntu Service VM - -Launch the RTVM -=============== - - .. code-block:: none - - $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh - -.. note:: - If using a KBL NUC, the script must be adapted to match the BDF on the actual HW platform - -Recommended Kernel Cmdline for RTVM ------------------------------------ - -.. code-block:: none - - root=PARTUUID= rw rootwait nohpet console=hvc0 console=ttyS0 \ - no_timer_check ignore_loglevel log_buf_len=16M consoleblank=0 \ - clocksource=tsc tsc=reliable x2apic_phys processor.max_cstate=0 \ - intel_idle.max_cstate=0 intel_pstate=disable mce=ignore_ce audit=0 \ - isolcpus=nohz,domain,1 nohz_full=1 rcu_nocbs=1 nosoftlockup idle=poll \ - irqaffinity=0 - - -Configure RDT -------------- - -In addition to setting the CAT configuration via HV commands, we allow -developers to add CAT configurations to the VM config and configure -automatically at the time of RTVM creation. Refer to :ref:`rdt_configuration` -for details on RDT configuration and :ref:`hv_rdt` for details on RDT -high-level design. - -Set Up the Core Allocation for the RTVM ---------------------------------------- - -In our recommended configuration, two cores are allocated to the RTVM: -core 0 for housekeeping and core 1 for RT tasks. In order to achieve -this, follow the below steps to allocate all housekeeping tasks to core 0: - -#. Prepare the RTVM launch script - - Follow the `Passthrough a hard disk to RTVM`_ section to make adjustments to - the ``/usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh`` launch script. - -#. Launch the RTVM: - - .. code-block:: none - - $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh - -#. Log in to the RTVM as root and run the script as below: - - .. code-block:: none - - #!/bin/bash - # Copyright (C) 2019 Intel Corporation. - # SPDX-License-Identifier: BSD-3-Clause - # Move all IRQs to core 0. - for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//' `; - do - echo setting $i to affine for core zero - echo 1 > /proc/irq/$i/smp_affinity - done - - # Move all rcu tasks to core 0. 
- for i in `pgrep rcu`; do taskset -pc 0 $i; done - - # Change real-time attribute of all rcu tasks to SCHED_OTHER and priority 0 - for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done - - # Change real-time attribute of all tasks on core 1 to SCHED_OTHER and priority 0 - for i in `pgrep /1`; do chrt -v -o -p 0 $i; done - - # Change real-time attribute of all tasks to SCHED_OTHER and priority 0 - for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done - - echo disabling timer migration - echo 0 > /proc/sys/kernel/timer_migration - - .. note:: Ignore the error messages that might appear while the script is - running. - -Run Cyclictest --------------- - -#. Refer to the :ref:`troubleshooting section ` - below that discusses how to enable the network connection for RTVM. - -#. Launch the RTVM and log in as root. - -#. Install the ``rt-tests`` tool: - - .. code-block:: none - - sudo apt install rt-tests - -#. Use the following command to start cyclictest: - - .. code-block:: none - - sudo cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log - - - Parameter descriptions: - - :-a 1: to bind the RT task to core 1 - :-p 80: to set the priority of the highest prio thread - :-m: lock current and future memory allocations - :-N: print results in ns instead of us (default us) - :-D 1h: to run for 1 hour, you can change it to other values - :-q: quiet mode; print a summary only on exit - :-H 30000 --histfile=test.log: dump the latency histogram to a local file - -.. rst-class:: numbered-step - -Launch the Windows VM -********************* - -Follow this :ref:`guide ` to prepare the Windows -image file and then reboot. - -Troubleshooting -*************** - -.. _enabling the network on the RTVM: - -Enabling the Network on the RTVM -================================ - -If you need to access the internet, you must add the following command line -to the ``launch_hard_rt_vm.sh`` script before launching it: - -.. code-block:: none - :emphasize-lines: 8 - - acrn-dm -A -m $mem_size -s 0:0,hostbridge \ - --lapic_pt \ - --rtvm \ - --virtio_poll 1000000 \ - -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \ - -s 2,passthru,00/17/0 \ - -s 3,virtio-console,@stdio:stdio_port \ - -s 8,virtio-net,tap0 \ - --ovmf /usr/share/acrn/bios/OVMF.fd \ - hard_rtvm - -.. _passthru to rtvm: - -Passthrough a Hard Disk to RTVM +Configure Target BIOS Settings =============================== -#. Use the ``lspci`` command to ensure that the correct SATA device IDs will - be used for the passthrough before launching the script: +#. Boot your target and enter the BIOS configuration editor. - .. code-block:: none + Tip: When you are booting your target, you'll see an option (quickly) to + enter the BIOS configuration editor, typically by pressing :kbd:`F2` during + the boot and before the GRUB menu (or Ubuntu login screen) appears. - # lspci -nn | grep -i sata - 00:17.0 SATA controller [0106]: Intel Corporation Device [8086:a0d3] (rev 20) +#. Configure these BIOS settings: -#. Modify the script to use the correct SATA device IDs and bus number: + * Enable **VMX** (Virtual Machine Extensions, which provide hardware + assist for CPU virtualization). + * Enable **VT-d** (Intel Virtualization Technology for Directed I/O, which + provides additional support for managing I/O virtualization). + * Disable **Secure Boot**. This simplifies the steps for this example. - .. code-block:: none + The names and locations of the BIOS settings differ depending on the target + hardware and BIOS version. 
You can search for the items in the BIOS + configuration editor. - # vim /usr/share/acrn/launch_hard_rt_vm.sh + For example, on a Tiger Lake NUC, quickly press :kbd:`F2` while the system + is booting. (If the GRUB menu or Ubuntu login screen + appears, press :kbd:`CTRL` + :kbd:`ALT` + :kbd:`DEL` to reboot again and + press :kbd:`F2` sooner.) The settings are in the following paths: - passthru_vpid=( - ["eth"]="8086 15f2" - ["sata"]="8086 a0d3" - ["nvme"]="126f 2263" - ) - passthru_bdf=( - ["eth"]="0000:58:00.0" - ["sata"]="0000:00:17.0" - ["nvme"]="0000:01:00.0" - ) + * **System Agent (SA) Configuration** > **VT-d** > **Enabled** + * **CPU Configuration** > **VMX** > **Enabled** + * **Boot** > **Secure Boot** > **Secure Boot** > **Disabled** - # SATA pass-through - echo ${passthru_vpid["sata"]} > /sys/bus/pci/drivers/pci-stub/new_id - echo ${passthru_bdf["sata"]} > /sys/bus/pci/devices/${passthru_bdf["sata"]}/driver/unbind - echo ${passthru_bdf["sata"]} > /sys/bus/pci/drivers/pci-stub/bind +#. Set other BIOS settings, such as Hyper-Threading, depending on the needs + of your application. - # NVME pass-through - #echo ${passthru_vpid["nvme"]} > /sys/bus/pci/drivers/pci-stub/new_id - #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/devices/${passthru_bdf["nvme"]}/driver/unbind - #echo ${passthru_bdf["nvme"]} > /sys/bus/pci/drivers/pci-stub/bind +Generate a Board Configuration File +========================================= - .. code-block:: none - :emphasize-lines: 5 +#. On the target system, install the board inspector dependencies: - --lapic_pt \ - --rtvm \ - --virtio_poll 1000000 \ - -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \ - -s 2,passthru,00/17/0 \ - -s 3,virtio-console,@stdio:stdio_port \ - -s 8,virtio-net,tap0 \ + .. code-block:: bash + + sudo apt install cpuid msr-tools pciutils dmidecode python3 python3-pip + +#. Install the Python package dependencies: + + .. code-block:: bash + + sudo pip3 install lxml + +#. Configure the GRUB kernel command line as follows: + + a. Edit the ``grub`` file. The following command uses ``vi``, but you + can use any text editor. + + .. code-block:: bash + + sudo vi /etc/default/grub + + #. Find the line starting with ``GRUB_CMDLINE_LINUX_DEFAULT`` and append: + + .. code-block:: bash + + idle=nomwait iomem=relaxed intel_idle.max_cstate=0 intel_pstate=disable + + Example: + + .. code-block:: bash + + GRUB_CMDLINE_LINUX_DEFAULT="quiet splash idle=nomwait iomem=relaxed intel_idle.max_cstate=0 intel_pstate=disable" + + These settings allow the board inspector tool to + gather important information about the board. + + #. Save and close the file. + + #. Update GRUB and reboot the system: + + .. code-block:: bash + + sudo update-grub + reboot + +#. Copy the board inspector tool folder from the development computer to the + target via USB disk as follows: + + a. Move to the development computer. + + #. On the development computer, insert the USB disk that you intend to + use to copy files. + + #. Ensure that there is only one USB disk inserted by running the + following command: + + .. code-block:: bash + + ls /media/$USER + + Confirm that one disk name appears. You'll use that disk name in + the following steps. + + #. Copy the board inspector tool folder from the acrn-hypervisor source code to the USB disk: + + .. code-block:: bash + + cd ~/acrn-work/ + disk="/media/$USER/"$(ls /media/$USER) + cp -r acrn-hypervisor/misc/config_tools/board_inspector/ $disk/ + sync && sudo umount $disk + + #. Insert the USB disk into the target system. + + #. 
Copy the board inspector tool from the USB disk to the target: + + .. code-block:: bash + + mkdir -p ~/acrn-work + disk="/media/$USER/"$(ls /media/$USER) + cp -r $disk/board_inspector ~/acrn-work + +#. On the target, load the ``msr`` driver, used by the board inspector: + + .. code-block:: bash + + sudo modprobe msr + +#. Run the board inspector tool ( ``board_inspector.py``) + to generate the board configuration file. This + example uses the parameter ``my_board`` as the file name. + + .. code-block:: bash + + cd ~/acrn-work/board_inspector/ + sudo python3 board_inspector.py my_board + +#. Confirm that the board configuration file ``my_board.xml`` was generated + in the current directory. + +#. Copy ``my_board.xml`` from the target to the development computer + via USB disk as follows: + + a. Make sure the USB disk is connected to the target. + + #. Copy ``my_board.xml`` to the USB disk: + + .. code-block:: bash + + disk="/media/$USER/"$(ls /media/$USER) + cp ~/acrn-work/board_inspector/my_board.xml $disk/ + sync && sudo umount $disk + + #. Insert the USB disk into the development computer. + + #. Copy ``my_board.xml`` from the USB disk to the development computer: + + .. code-block:: bash + + disk="/media/$USER/"$(ls /media/$USER) + cp $disk/my_board.xml ~/acrn-work + sudo umount $disk + +.. _gsg-dev-setup: + +.. rst-class:: numbered-step + +Generate a Scenario Configuration File and Launch Script +********************************************************* + +You use the **ACRN configurator** to generate scenario configuration files and launch scripts. + +A **scenario configuration file** is an XML file that holds the parameters of +a specific ACRN configuration, such as the number of VMs that can be run, +their attributes, and the resources they have access to. + +A **launch script** is a shell script that is used to create a User VM. + +To generate a scenario configuration file and launch script: + +#. On the development computer, install ACRN configurator dependencies: + + .. code-block:: bash + + cd ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app + sudo pip3 install -r requirements + +#. Launch the ACRN configurator: + + .. code-block:: bash + + python3 acrn_configurator.py + +#. Your web browser should open the website ``__ + automatically, or you may need to visit this website manually. + The ACRN configurator is supported on Chrome and Firefox. + +#. Click the **Import Board info** button and browse to the board configuration + file ``my_board.xml`` previously generated. When it is successfully + imported, the board information appears. + Example: + + .. image:: ./images/gsg_config_board.png + :class: drop-shadow + +#. Generate the scenario configuration file: + + a. Click the **Scenario Setting** menu on the top banner of the UI and select + **Load a default scenario**. Example: + + .. image:: ./images/gsg_config_scenario_default.png + :class: drop-shadow + + #. In the dialog box, select **industry** as the default scenario setting and click **OK**. + + .. image:: ./images/gsg_config_scenario_load.png + :class: drop-shadow + + #. The scenario's configurable items appear. Feel free to look through all + the available configuration settings used in this sample scenario. This + is where you can change the sample scenario to meet your application's + particular needs. But for now, leave them as they're set in the + sample. + + #. Click the **Export XML** button to save the scenario configuration file + that will be + used in the build process. + + #. 
In the dialog box, keep the default name as is. Type + ``/home//acrn-work`` in the Scenario XML Path field. In the + following example, acrn is the username. Click **Submit** to save the + file. + + .. image:: ./images/gsg_config_scenario_save.png + :class: drop-shadow + + #. Confirm that ``industry.xml`` appears in the directory ``/home//acrn-work``. + +#. Generate the launch script: + + a. Click the **Launch Setting** menu on the top banner of the UI and select + **Load a default launch script**. + + .. image:: ./images/gsg_config_launch_default.png + :class: drop-shadow + + #. In the dialog box, select **industry_launch_6uos** as the default launch + setting and click **OK**. + + .. image:: ./images/gsg_config_launch_load.png + :class: drop-shadow + + #. Click the **Generate Launch Script** button. + + .. image:: ./images/gsg_config_launch_generate.png + :class: drop-shadow + + #. In the dialog box, type ``/home//acrn-work/`` in the Source Path + field. In the following example, ``acrn`` is the username. Click **Submit** + to save the script. + + .. image:: ./images/gsg_config_launch_save.png + :class: drop-shadow + + #. Confirm that ``launch_uos_id3.sh`` appears in the directory + ``/home//acrn-work/my_board/output/``. + +#. Close the browser and press :kbd:`CTRL` + :kbd:`C` to terminate the + ``acrn_configurator.py`` program running in the terminal window. + +.. rst-class:: numbered-step + +Build ACRN +*************** + +#. On the development computer, build the ACRN hypervisor: + + .. code-block:: bash + + cd ~/acrn-work/acrn-hypervisor + make -j $(nproc) BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml + make targz-pkg + + The build typically takes a few minutes. By default, the build results are + found in the build directory. For convenience, we also built a compressed tar + file to ease copying files to the target. + +#. Build the ACRN kernel for the Service VM: + + .. code-block:: bash + + cd ~/acrn-work/acrn-kernel + cp kernel_config_uefi_sos .config + make olddefconfig + make -j $(nproc) targz-pkg + + The kernel build can take 15 minutes or less on a fast computer, but could + take 1-3 hours depending on the performance of your development computer. + +#. Copy all the necessary files generated on the development computer to the + target system by USB disk as follows: + + a. Insert the USB disk into the development computer and run these commands: + + .. code-block:: bash + + disk="/media/$USER/"$(ls /media/$USER) + cp linux-5.10.52-acrn-sos-x86.tar.gz $disk/ + cp ~/acrn-work/acrn-hypervisor/build/hypervisor/acrn.bin $disk/ + cp ~/acrn-work/my_board/output/launch_uos_id3.sh $disk/ + cp ~/acrn-work/acpica-unix-20210105/generate/unix/bin/iasl $disk/ + cp ~/acrn-work/acrn-hypervisor/build/acrn-2.6-unstable.tar.gz $disk/ + sync && sudo umount $disk/ + + #. Insert the USB disk you just used into the target system and run these + commands to copy the tar files locally: + + .. code-block:: bash + + disk="/media/$USER/"$(ls /media/$USER) + cp $disk/linux-5.10.52-acrn-sos-x86.tar.gz ~/acrn-work + cp $disk/acrn-2.6-unstable.tar.gz ~/acrn-work + + #. Extract the Service VM files onto the target system: + + .. code-block:: bash + + cd ~/acrn-work + sudo tar -zxvf linux-5.10.52-acrn-sos-x86.tar.gz -C / --keep-directory-symlink + + #. Extract the ACRN tools and images: + + .. code-block:: bash + + sudo tar -zxvf acrn-2.6-unstable.tar.gz -C / --keep-directory-symlink + + #. Copy a few additional ACRN files to the expected locations: + + .. 
code-block:: bash + + sudo mkdir -p /boot/acrn/ + sudo cp $disk/acrn.bin /boot/acrn + sudo cp $disk/iasl /usr/sbin/ + cp $disk/launch_uos_id3.sh ~/acrn-work + sudo umount $disk/ + +.. rst-class:: numbered-step + +Install ACRN +************ + +In the following steps, you will configure GRUB on the target system. + +#. On the target, find the root file system (rootfs) device name by using the ``lsblk`` command: + + .. code-block:: console + :emphasize-lines: 24 + + ~$ lsblk + NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT + loop0 7:0 0 255.6M 1 loop /snap/gnome-3-34-1804/36 + loop1 7:1 0 62.1M 1 loop /snap/gtk-common-themes/1506 + loop2 7:2 0 2.5M 1 loop /snap/gnome-calculator/884 + loop3 7:3 0 241.4M 1 loop /snap/gnome-3-38-2004/70 + loop4 7:4 0 61.8M 1 loop /snap/core20/1081 + loop5 7:5 0 956K 1 loop /snap/gnome-logs/100 + loop6 7:6 0 2.2M 1 loop /snap/gnome-system-monitor/148 + loop7 7:7 0 2.4M 1 loop /snap/gnome-calculator/748 + loop8 7:8 0 29.9M 1 loop /snap/snapd/8542 + loop9 7:9 0 32.3M 1 loop /snap/snapd/12704 + loop10 7:10 0 65.1M 1 loop /snap/gtk-common-themes/1515 + loop11 7:11 0 219M 1 loop /snap/gnome-3-34-1804/72 + loop12 7:12 0 55.4M 1 loop /snap/core18/2128 + loop13 7:13 0 55.5M 1 loop /snap/core18/2074 + loop14 7:14 0 2.5M 1 loop /snap/gnome-system-monitor/163 + loop15 7:15 0 704K 1 loop /snap/gnome-characters/726 + loop16 7:16 0 276K 1 loop /snap/gnome-characters/550 + loop17 7:17 0 548K 1 loop /snap/gnome-logs/106 + loop18 7:18 0 243.9M 1 loop /snap/gnome-3-38-2004/39 + nvme0n1 259:0 0 119.2G 0 disk + ├─nvme0n1p1 259:1 0 512M 0 part /boot/efi + └─nvme0n1p2 259:2 0 118.8G 0 part / + + As highlighted, you're looking for the device name associated with the + partition named ``/``, in this case ``nvme0n1p2``. + +#. Run the ``blkid`` command to get the UUID and PARTUUID for the rootfs device + (replace the ``nvme0n1p2`` name with the name shown for the rootfs on your system): + + .. code-block:: bash + + sudo blkid /dev/nvme0n1p2 + + In the output, look for the UUID and PARTUUID (example below). You will need + them in the next step. + + .. code-block:: console + + /dev/nvme0n1p2: UUID="3cac5675-e329-4cal-b346-0a3e65f99016" TYPE="ext4" PARTUUID="03db7f45-8a6c-454b-adf7-30343d82c4f4" + +#. Add the ACRN Service VM to the GRUB boot menu: + + a. Edit the GRUB 40_custom file. The following command uses ``vi``, but + you can use any text editor. + + .. code-block:: bash + + sudo vi /etc/grub.d/40_custom + + #. Add the following text at the end of the file. Replace ```` and + ```` with the output from the previous step. + + .. code-block:: bash + :emphasize-lines: 6,8 + + menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm { + load_video + insmod gzio + insmod part_gpt + insmod ext2 + search --no-floppy --fs-uuid --set "UUID" + echo 'loading ACRN...' + multiboot2 /boot/acrn/acrn.bin root=PARTUUID="PARTUUID" + module2 /boot/vmlinuz-5.10.52-acrn-sos Linux_bzImage + } + + #. Save and close the file. + + #. Correct example image + + .. code-block:: console + + menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm { + load_video + insmod gzio + insmod part_gpt + insmod ext2 + search --no-floppy --fs-uuid --set "3cac5675-e329-4cal-b346-0a3e65f99016" + echo 'loading ACRN...' + multiboot2 /boot/acrn/acrn.bin root=PARTUUID="03db7f45-8a6c-454b-adf7-30343d82c4f4" + module2 /boot/vmlinuz-5.10.52-acrn-sos Linux_bzImage + } + +#. Make the GRUB menu visible when + booting and make it load the Service VM kernel by default: + + a. Edit the ``grub`` file: + + .. 
code-block:: bash + + sudo vi /etc/default/grub + + #. Edit lines with these settings (comment out the ``GRUB_TIMEOUT_STYLE`` line). + Leave other lines as they are: + + .. code-block:: bash + + GRUB_DEFAULT=ubuntu-service-vm + #GRUB_TIMEOUT_STYLE=hidden + GRUB_TIMEOUT=5 + + #. Save and close the file. + +#. Update GRUB and reboot the system: + + .. code-block:: bash + + sudo update-grub + reboot + +#. Confirm that you see the GRUB menu with the "ACRN Multiboot Ubuntu Service + VM" entry. Select it and proceed to booting ACRN. (It may be autoselected, in + which case it will boot with this option automatically in 5 seconds.) + + .. code-block:: console + :emphasize-lines: 8 + + GNU GRUB version 2.04 + ──────────────────────────────────────────────────────────────────────────────── + Ubuntu + Advanced options for Ubuntu + Ubuntu 18.04.05 LTS (18.04) (on /dev/nvme0n1p2) + Advanced options for Ubuntu 18.04.05 LTS (18.04) (on /dev/nvme0n1p2) + System setup + *ACRN Multiboot Ubuntu Service VM + +.. rst-class:: numbered-step + +Run ACRN and the Service VM +****************************** + +When the ACRN hypervisor starts to boot, the ACRN console log will be displayed +to the serial port (optional). The ACRN hypervisor boots the Ubuntu Service VM +automatically. + +#. On the target, log in to the Service VM. (It will look like a normal Ubuntu + session.) + +#. Verify that the hypervisor is running by checking ``dmesg`` in + the Service VM: + + .. code-block:: bash + + dmesg | grep ACRN + + You should see "Hypervisor detected: ACRN" in the output. Example output of a + successful installation (yours may look slightly different): + + .. code-block:: console + + [ 0.000000] Hypervisor detected: ACRN + [ 3.875620] ACRNTrace: Initialized acrn trace module with 4 cpu + +.. rst-class:: numbered-step + +Launch the User VM +******************* + +#. A User VM image is required on the target system before launching it. The + following steps use Ubuntu: + + a. Go to the `official Ubuntu website + `__ to get an ISO format of the Ubuntu + 18.04 desktop image. + + #. Put the ISO file in the path ``~/acrn-work/`` on the target system. + +#. Open the launch script in a text editor. The following command uses vi, but + you can use any text editor. + + .. code-block:: bash + + vi ~/acrn-work/launch_uos_id3.sh + +#. Look for the line that contains the term ``virtio-blk`` and replace + the existing image file path with your ISO image file path. + In the following example, the + ISO image file path is ``/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso``. + + .. code-block:: bash + :emphasize-lines: 4 + + acrn-dm -A -m $mem_size -s 0:0,hostbridge -U 615db82a-e189-4b4f-8dbb-d321343e4ab3 \ + --mac_seed $mac_seed \ + $logger_setting \ + -s 7,virtio-blk,/home/acrn/acrn-work/ubuntu-18.04.5-desktop-amd64.iso \ + -s 8,virtio-net,tap_YaaG3 \ + -s 6,virtio-console,@stdio:stdio_port \ --ovmf /usr/share/acrn/bios/OVMF.fd \ - hard_rtvm + -s 1:0,lpc \ + $vm_name -#. Upon deployment completion, launch the RTVM directly onto your NUC11TNHi5: +#. Save and close the file. - .. code-block:: none +#. Launch the User VM: - $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh + .. code-block:: bash + + sudo chmod +x ~/acrn-work/launch_uos_id3.sh + sudo chmod +x /usr/bin/acrn-dm + sudo chmod +x /usr/sbin/iasl + sudo ~/acrn-work/launch_uos_id3.sh + +#. Confirm that you see the console of the User VM on the Service VM's terminal + (on the monitor connected to the target system). Example: + + .. 
code-block:: console + + Ubuntu 18.04.5 LTS ubuntu hvc0 + + ubuntu login: + +#. Log in to the User VM. For the Ubuntu 18.04 ISO, the user is ``ubuntu``, and + there's no password. + +#. Confirm that you see output similar to this example: + + .. code-block:: console + + Welcome to Ubuntu 18.04.5 LTS (GNU/Linux 5.4.0-42-generic x86_64) + + * Documentation: https://help.ubuntu.com + * Management: https://landscape.canonical.com + * Support: https://ubuntu.com/advantage + + 0 packages can be updated. + 0 updates are security updates. + + Your Hardware Enablement Stack (HWE) is supported until April 2023. + + The programs included with the Ubuntu system are free software; + the exact distribution terms for each program are described in the + individual files in /usr/share/doc/*/copyright. + + Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by + applicable law. + + To run a command as administrator (user "root"), use "sudo ". + See "man sudo_root" for details. + + ubuntu@ubuntu:~$ + +The User VM has launched successfully. You have completed this ACRN setup. + +Next Steps +************** + +:ref:`overview_dev` describes the ACRN configuration process, with links to additional details. diff --git a/doc/getting-started/images/DVMT-reallocated-64mb.png b/doc/getting-started/images/DVMT-reallocated-64mb.png deleted file mode 100644 index 87ffd7787..000000000 Binary files a/doc/getting-started/images/DVMT-reallocated-64mb.png and /dev/null differ diff --git a/doc/getting-started/images/PM-support-enabled.png b/doc/getting-started/images/PM-support-enabled.png deleted file mode 100644 index eee1e1fdc..000000000 Binary files a/doc/getting-started/images/PM-support-enabled.png and /dev/null differ diff --git a/doc/getting-started/images/acrn_terms.png b/doc/getting-started/images/acrn_terms.png new file mode 100644 index 000000000..6a6936b5f Binary files /dev/null and b/doc/getting-started/images/acrn_terms.png differ diff --git a/doc/getting-started/images/gsg_config_board.png b/doc/getting-started/images/gsg_config_board.png new file mode 100644 index 000000000..fd31b5aaa Binary files /dev/null and b/doc/getting-started/images/gsg_config_board.png differ diff --git a/doc/getting-started/images/gsg_config_board.psd b/doc/getting-started/images/gsg_config_board.psd new file mode 100644 index 000000000..a013cf1a5 Binary files /dev/null and b/doc/getting-started/images/gsg_config_board.psd differ diff --git a/doc/getting-started/images/gsg_config_launch_default.png b/doc/getting-started/images/gsg_config_launch_default.png new file mode 100644 index 000000000..b778f31f4 Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_default.png differ diff --git a/doc/getting-started/images/gsg_config_launch_default.psd b/doc/getting-started/images/gsg_config_launch_default.psd new file mode 100644 index 000000000..916c89bad Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_default.psd differ diff --git a/doc/getting-started/images/gsg_config_launch_generate.png b/doc/getting-started/images/gsg_config_launch_generate.png new file mode 100644 index 000000000..f49cd8dd3 Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_generate.png differ diff --git a/doc/getting-started/images/gsg_config_launch_generate.psd b/doc/getting-started/images/gsg_config_launch_generate.psd new file mode 100644 index 000000000..c3480fe93 Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_generate.psd differ diff --git 
a/doc/getting-started/images/gsg_config_launch_load.png b/doc/getting-started/images/gsg_config_launch_load.png new file mode 100644 index 000000000..a1a33aefe Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_load.png differ diff --git a/doc/getting-started/images/gsg_config_launch_load.psd b/doc/getting-started/images/gsg_config_launch_load.psd new file mode 100644 index 000000000..bfed04e8a Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_load.psd differ diff --git a/doc/getting-started/images/gsg_config_launch_save.png b/doc/getting-started/images/gsg_config_launch_save.png new file mode 100644 index 000000000..c09badc88 Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_save.png differ diff --git a/doc/getting-started/images/gsg_config_launch_save.psd b/doc/getting-started/images/gsg_config_launch_save.psd new file mode 100644 index 000000000..951e1bc3e Binary files /dev/null and b/doc/getting-started/images/gsg_config_launch_save.psd differ diff --git a/doc/getting-started/images/gsg_config_scenario_default.png b/doc/getting-started/images/gsg_config_scenario_default.png new file mode 100644 index 000000000..2cf12f7b4 Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_default.png differ diff --git a/doc/getting-started/images/gsg_config_scenario_default.psd b/doc/getting-started/images/gsg_config_scenario_default.psd new file mode 100644 index 000000000..acc32d2b6 Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_default.psd differ diff --git a/doc/getting-started/images/gsg_config_scenario_load.png b/doc/getting-started/images/gsg_config_scenario_load.png new file mode 100644 index 000000000..ffda6c601 Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_load.png differ diff --git a/doc/getting-started/images/gsg_config_scenario_load.psd b/doc/getting-started/images/gsg_config_scenario_load.psd new file mode 100644 index 000000000..310cebec6 Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_load.psd differ diff --git a/doc/getting-started/images/gsg_config_scenario_save.png b/doc/getting-started/images/gsg_config_scenario_save.png new file mode 100644 index 000000000..b35b4ed1d Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_save.png differ diff --git a/doc/getting-started/images/gsg_config_scenario_save.psd b/doc/getting-started/images/gsg_config_scenario_save.psd new file mode 100644 index 000000000..ccdec023c Binary files /dev/null and b/doc/getting-started/images/gsg_config_scenario_save.psd differ diff --git a/doc/getting-started/images/gsg_host_target.png b/doc/getting-started/images/gsg_host_target.png new file mode 100644 index 000000000..e6419e672 Binary files /dev/null and b/doc/getting-started/images/gsg_host_target.png differ diff --git a/doc/getting-started/images/gsg_nuc.png b/doc/getting-started/images/gsg_nuc.png new file mode 100644 index 000000000..7904483cc Binary files /dev/null and b/doc/getting-started/images/gsg_nuc.png differ diff --git a/doc/getting-started/images/gsg_overview_image_sources.pptx b/doc/getting-started/images/gsg_overview_image_sources.pptx new file mode 100644 index 000000000..a29b3f862 Binary files /dev/null and b/doc/getting-started/images/gsg_overview_image_sources.pptx differ diff --git a/doc/getting-started/images/gsg_scenario.png b/doc/getting-started/images/gsg_scenario.png new file mode 100644 index 000000000..373de0fbb Binary files 
/dev/null and b/doc/getting-started/images/gsg_scenario.png differ diff --git a/doc/getting-started/images/gsg_ubuntu_install_01.png b/doc/getting-started/images/gsg_ubuntu_install_01.png new file mode 100644 index 000000000..09910b902 Binary files /dev/null and b/doc/getting-started/images/gsg_ubuntu_install_01.png differ diff --git a/doc/getting-started/images/gsg_ubuntu_install_02.jpg b/doc/getting-started/images/gsg_ubuntu_install_02.jpg new file mode 100644 index 000000000..85e215c22 Binary files /dev/null and b/doc/getting-started/images/gsg_ubuntu_install_02.jpg differ diff --git a/doc/getting-started/images/icon_host.png b/doc/getting-started/images/icon_host.png new file mode 100644 index 000000000..e542f3103 Binary files /dev/null and b/doc/getting-started/images/icon_host.png differ diff --git a/doc/getting-started/images/icon_light.png b/doc/getting-started/images/icon_light.png new file mode 100644 index 000000000..2551d8dab Binary files /dev/null and b/doc/getting-started/images/icon_light.png differ diff --git a/doc/getting-started/images/icon_target.png b/doc/getting-started/images/icon_target.png new file mode 100644 index 000000000..1cde93259 Binary files /dev/null and b/doc/getting-started/images/icon_target.png differ diff --git a/doc/getting-started/images/native-ubuntu-on-NVME-1.png b/doc/getting-started/images/native-ubuntu-on-NVME-1.png deleted file mode 100644 index 7891e2aae..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-NVME-1.png and /dev/null differ diff --git a/doc/getting-started/images/native-ubuntu-on-NVME-2.png b/doc/getting-started/images/native-ubuntu-on-NVME-2.png deleted file mode 100644 index be1075e63..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-NVME-2.png and /dev/null differ diff --git a/doc/getting-started/images/native-ubuntu-on-NVME-3.png b/doc/getting-started/images/native-ubuntu-on-NVME-3.png deleted file mode 100644 index 253d83ba0..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-NVME-3.png and /dev/null differ diff --git a/doc/getting-started/images/native-ubuntu-on-SATA-1.png b/doc/getting-started/images/native-ubuntu-on-SATA-1.png deleted file mode 100644 index 84c69739d..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-SATA-1.png and /dev/null differ diff --git a/doc/getting-started/images/native-ubuntu-on-SATA-2.png b/doc/getting-started/images/native-ubuntu-on-SATA-2.png deleted file mode 100644 index 60c60875a..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-SATA-2.png and /dev/null differ diff --git a/doc/getting-started/images/native-ubuntu-on-SATA-3.png b/doc/getting-started/images/native-ubuntu-on-SATA-3.png deleted file mode 100644 index 297733fb2..000000000 Binary files a/doc/getting-started/images/native-ubuntu-on-SATA-3.png and /dev/null differ diff --git a/doc/getting-started/images/overview_flow.png b/doc/getting-started/images/overview_flow.png new file mode 100644 index 000000000..46f034a03 Binary files /dev/null and b/doc/getting-started/images/overview_flow.png differ diff --git a/doc/getting-started/images/overview_host_target.png b/doc/getting-started/images/overview_host_target.png new file mode 100644 index 000000000..dfc64f3d7 Binary files /dev/null and b/doc/getting-started/images/overview_host_target.png differ diff --git a/doc/getting-started/images/rt-ind-ubun-hw-1.png b/doc/getting-started/images/rt-ind-ubun-hw-1.png deleted file mode 100644 index 75ed9435e..000000000 Binary files 
a/doc/getting-started/images/rt-ind-ubun-hw-1.png and /dev/null differ diff --git a/doc/getting-started/images/rt-ind-ubun-hw-2.png b/doc/getting-started/images/rt-ind-ubun-hw-2.png deleted file mode 100644 index dc6c000bf..000000000 Binary files a/doc/getting-started/images/rt-ind-ubun-hw-2.png and /dev/null differ diff --git a/doc/getting-started/images/up2-gui.png b/doc/getting-started/images/up2-gui.png deleted file mode 100644 index af70d9b47..000000000 Binary files a/doc/getting-started/images/up2-gui.png and /dev/null differ diff --git a/doc/getting-started/overview_dev.rst b/doc/getting-started/overview_dev.rst new file mode 100644 index 000000000..e8fea1520 --- /dev/null +++ b/doc/getting-started/overview_dev.rst @@ -0,0 +1,309 @@ +.. _overview_dev: + +Configuration and Development Overview +###################################### + +This overview is for developers who are new or relatively new to ACRN. It will +help you get familiar with ACRN basics: ACRN components and general process for +building an ACRN hypervisor. + +The overview covers the process at an abstract and universal level. + +* Abstract: the overall structure rather than detailed instructions +* Universal: applicable to most use cases + +Although the overview describes the process as a series of steps, it's intended +to be a summary, not a step-by-step guide. Throughout the overview, you will see +links to the :ref:`gsg` for first-time setup instructions. Links to advanced +guides and additional information are also provided. + +.. _overview_dev_dev_env: + +Development Environment +*********************** + +The recommended development environment for ACRN consists of two machines: + +* **Development computer** where you configure and build ACRN images +* **Target system** where you install and run ACRN images + +.. image:: ./images/overview_host_target.png + :scale: 60% + +ACRN requires a serial output from the target system to the development computer +for :ref:`debugging and system messaging `. If your target doesn't +have a serial output, :ref:`here are some tips for connecting a serial output +`. + +You will need a way to copy the built ACRN images from the development computer +to the target system. A USB drive is recommended. + +General Process for Building an ACRN Hypervisor +*********************************************** + +The general process for configuring and building an ACRN hypervisor is +illustrated in the following figure. Additional details follow. + +.. image:: ./images/overview_flow.png + +.. _overview_dev_hw_scenario: + +|icon_light| Step 1: Select Hardware and Scenario +************************************************* + +.. |icon_light| image:: ./images/icon_light.png + :scale: 75% + +ACRN configuration is hardware and scenario specific. You will need to learn +about supported ACRN hardware and scenarios, and select the right ones for your +needs. + +Select Your Hardware +==================== + +ACRN supports certain Intel processors. Development kits are widely available. +See :ref:`hardware`. + +.. _overview_dev_select_scenario: + +Select Your Scenario +==================== + +A :ref:`scenario ` is a specific ACRN configuration, such as +the type and number of VMs that can be run, their attributes, and the resources +they have access to. + +This image shows an example of an ACRN scenario to illustrate the types of VMs +that ACRN offers: + +.. 
image:: ./images/acrn_terms.png + :scale: 75% + +ACRN offers three types of VMs: + +* **Pre-launched User VMs**: These VMs run independently of other VMs and own + dedicated hardware resources, such as a CPU core, memory, and I/O devices. + Other VMs may not even be aware of the existence of pre-launched VMs. The + configuration of these VMs is static and must be defined at build time. They + are well-suited for safety-critical applications. + +* **Service VM**: This VM is required for scenarios that have post-launched VMs. + It controls post-launched VMs and provides device sharing services to them. + ACRN supports one Service VM. + +* **Post-launched User VMs**: These VMs share hardware resources. Unlike + pre-launched VMs, you can change the configuration at run-time. They are + well-suited for non-safety applications, including human machine interface + (HMI), artificial intelligence (AI), computer vision, real-time, and others. + +The names "pre-launched" and "post-launched" refer to the boot order of these +VMs. The ACRN hypervisor launches the pre-launched VMs first, then launches the +Service VM. The Service VM launches the post-launched VMs. + +Due to the static configuration of pre-launched VMs, they are recommended only +if you need complete isolation from the rest of the system. Most use cases can +meet their requirements without pre-launched VMs. Even if your application has +stringent real-time requirements, start by testing the application on a +post-launched VM before considering a pre-launched VM. + +To help accelerate the configuration process, ACRN offers the following +:ref:`predefined scenarios `: + +* **Shared scenario:** A configuration in which the VMs share resources + (post-launched). + +* **Partitioned scenario:** A configuration in which the VMs are isolated from + each other and don't share resources (pre-launched). + +* **Hybrid scenario:** A configuration that has both pre-launched and + post-launched VMs. + +ACRN provides predefined configuration files and documentation to help you set +up these scenarios. + +* New ACRN users start with the shared scenario, as described in the :ref:`gsg`. + +* The other predefined scenarios are more complex. The :ref:`develop_acrn` + provide setup instructions. + +You can copy the predefined configuration files and customize them for your use +case, as described later in :ref:`overview_dev_config_editor`. + +|icon_host| Step 2: Prepare the Development Computer +**************************************************** + +.. |icon_host| image:: ./images/icon_host.png + :scale: 75% + +Your development computer requires certain dependencies to configure and build +ACRN: + +* Ubuntu OS +* Build tools +* ACRN hypervisor source code +* If your scenario has a Service VM: ACRN kernel source code + +The :ref:`gsg` provides step-by-step instructions for setting up your +development computer. + +In the next step, :ref:`overview_dev_board_config`, you will need the board +inspector tool found in the ACRN hypervisor source code to collect information +about the target hardware and generate a board configuration file. + +.. _overview_dev_board_config: + +|icon_target| Step 3: Generate a Board Configuration File +********************************************************* + +.. |icon_target| image:: ./images/icon_target.png + :scale: 75% + +A **board configuration file** is an XML file that stores hardware-specific +information extracted from the target system. 
It describes the capacity of +hardware resources (such as processors and memory), platform power states, +available devices, and BIOS settings. The file is used to configure the ACRN +hypervisor, because each hypervisor instance is specific to your target +hardware. + +The **board inspector tool** ``board_inspector.py`` enables you to generate a board +configuration file on the target system. The following sections provide an +overview and important information to keep in mind when using the tool. + +Configure BIOS Settings +======================= + +You must configure all of your target's BIOS settings before running the board +inspector tool, because the tool records the current BIOS settings in the board +configuration file. + +Some BIOS settings are required by ACRN. The :ref:`gsg` provides a list of the +settings. + +Use the Board Inspector to Generate a Board Configuration File +============================================================== + +The board inspector tool requires certain dependencies to be present on the +target system: + +* Ubuntu OS +* Tools and kernel command-line options that allow the board inspector to + collect information about the target hardware + +After setting up the dependencies, you run the board inspector via command-line. +The tool generates a board configuration file specific to your hardware. + +.. important:: Whenever you change the configuration of the board, such as BIOS + settings or PCI ports, you must generate a new board configuration file. + +The :ref:`gsg` provides step-by-step instructions for using the tool. For more +information about the tool, see :ref:`board_inspector_tool`. + +.. _overview_dev_config_editor: + +|icon_host| Step 4: Generate a Scenario Configuration File and Launch Scripts +***************************************************************************** + +As described in :ref:`overview_dev_select_scenario`, a scenario is a specific +ACRN configuration, such as the number of VMs that can be run, their attributes, +and the resources they have access to. These parameters are saved in a +**scenario configuration file** in XML format. + +A **launch script** is a shell script that is used to create a post-launched VM. + +The **ACRN configurator** ``acrn_configurator.py`` is a web-based user interface that +runs on your development computer. It enables you to customize, validate, and +generate scenario configuration files and launch scripts. The following sections +provide an overview and important information to keep in mind when using the +tool. + +Generate a Scenario Configuration File +====================================== + +Before using the ACRN configurator to generate a scenario configuration +file, be sure you have the board configuration file that you generated in +:ref:`overview_dev_board_config`. The tool needs the board configuration file to +validate that your custom scenario is supported by the target hardware. + +You can use the tool to create a new scenario configuration file or modify an +existing one, such as a predefined scenario described in +:ref:`overview_dev_hw_scenario`. The tool's GUI enables you to edit the +configurable items in the file, such as adding VMs, modifying VM attributes, or +deleting VMs. The tool validates your inputs against your board configuration +file. After validation is successful, the tool generates your custom scenario +configuration file. 
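+
+As a quick reference, here is a minimal sketch of this step, assuming the
+working directories used in the :ref:`gsg`. The configurator is started from
+the ACRN hypervisor source tree on the development computer:
+
+.. code-block:: bash
+
+   # Install the Python dependencies needed by the configurator,
+   # then launch its web-based UI (it opens in a browser).
+   cd ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app
+   sudo pip3 install -r requirements
+   python3 acrn_configurator.py
+
+In the browser UI, you import the board configuration file, load and edit a
+predefined scenario, and export the resulting scenario XML file, as shown in
+the :ref:`gsg`.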
+ +Generate Launch Scripts +======================= + +Before using the ACRN configurator to generate a launch script, be sure +you have your board configuration file and scenario configuration file. The tool +needs both files to validate your launch script configuration. + +The process of customizing launch scripts is similar to the process of +customizing scenario configuration files. You can choose to create a new launch +script or modify an existing one. You can then use the GUI to edit the +configurable parameters. The tool validates your inputs against your board +configuration file and scenario configuration file. After validation is +successful, the tool generates your custom launch script. + +.. note:: + The ACRN configurator may not show all editable + parameters for scenario configuration files and launch scripts. You can edit + the parameters manually. See :ref:`acrn_config_data`. + +The :ref:`gsg` walks you through a simple example of using the tool. For more +information about the tool, see :ref:`acrn_configurator_tool`. + +|icon_host| Step 5: Build ACRN +****************************** + +The ACRN hypervisor source code provides a makefile to build the ACRN hypervisor +binary and associated components. In the ``make`` command, you need to specify +your board configuration file and scenario configuration file. The build +typically takes a few minutes. + +If your scenario has a Service VM, you also need to build the ACRN kernel for +the Service VM. The ACRN kernel source code provides a predefined configuration +file and a makefile to build the ACRN kernel binary and associated components. +The build can take 1-3 hours depending on the performance of your development +computer and network. + +The :ref:`gsg` provides step-by-step instructions. + +For more information about the kernel, see :ref:`kernel-parameters`. + +.. _overview_dev_install: + +|icon_target| Step 6: Install and Run ACRN +****************************************** + +The last step is to make final changes to the target system configuration and +then boot ACRN. + +At a high level, you will: + +* Copy the built ACRN hypervisor files, kernel files, and launch scripts from + the development computer to the target. + +* Configure GRUB to boot the ACRN hypervisor, pre-launched VMs, and Service VM. + Reboot the target, and launch ACRN. + +* If your scenario contains a post-launched VM, install an OS image for the + post-launched VM and run the launch script you created in + :ref:`overview_dev_config_editor`. + +For a basic example, see the :ref:`gsg`. + +For details about GRUB, see :ref:`using_grub`. + +For more complex examples of post-launched VMs, see the +:ref:`develop_acrn_user_vm`. + +Next Steps +********** + +* To get ACRN up and running for the first time, see the :ref:`gsg` for + step-by-step instructions. + +* If you have already completed the :ref:`gsg`, see the :ref:`develop_acrn` for + more information about complex scenarios, advanced features, and debugging. diff --git a/doc/getting-started/roscube/roscube-gsg.rst b/doc/getting-started/roscube/roscube-gsg.rst index 6b2dde0ed..bdf5b9aae 100644 --- a/doc/getting-started/roscube/roscube-gsg.rst +++ b/doc/getting-started/roscube/roscube-gsg.rst @@ -1,711 +1,11 @@ +:orphan: + .. _roscube-gsg: Getting Started Guide for ACRN Industry Scenario With ROScube-I ############################################################### -.. 
contents:: - :local: - :depth: 1 - -Verified Version -**************** - -- Ubuntu version: **18.04** -- GCC version: **7.5.0** -- ACRN-hypervisor branch: **v2.1** -- ACRN-Kernel (Service VM kernel): **v2.1** -- RT kernel for Ubuntu User VM OS: **Linux kernel 4.19.59 with Xenomai 3.1** -- HW: `ROScube-I`_ - - ADLINK `ROScube-I`_ is a real-time `ROS 2`_-enabled robotic controller based - on Intel® Xeon® 9th Gen Intel® Core™ i7/i3 and 8th Gen Intel® Core™ i5 - processors. It features comprehensive I/O connectivity supporting a wide - variety of sensors and actuators for unlimited robotic applications. - -.. _ROScube-I: - https://www.adlinktech.com/Products/ROS2_Solution/ROS2_Controller/ROScube-I?lang=en - -.. _ROS 2: - https://index.ros.org/doc/ros2/ - -Architecture -************ - -This tutorial will show you how to install the ACRN Industry Scenario on ROScube-I. -The scenario is shown here: - -.. figure:: images/rqi-acrn-architecture.png - -* Service VM: Used to launch the User VM and real-time VM. -* User VM: Run `ROS 2`_ application in this VM, such as for SLAM or navigation. -* Real-time VM: Run critical tasks in this VM, such as the base driver. - -Prerequisites -************* - -* Connect the ROScube-I as shown here: - - - HDMI for monitor. - - Network on Ethernet port 1. - - Keyboard and mouse on USB. - - .. figure:: images/rqi-acrn-hw-connection.jpg - :width: 600px - -* Install Ubuntu 18.04 on ROScube-I. - -* Modify the following BIOS settings. - - .. csv-table:: - :widths: 15, 30, 10 - - "Hyper-threading", "Advanced -> CPU Configuration", "Disabled" - "Intel (VMX) Virtualization", "Advanced -> CPU Configuration", "Enabled" - "Intel(R) SpeedStep(tm)", "Advanced -> CPU Configuration", "Disabled" - "Intel(R) Speed Shift Technology", "Advanced -> CPU configuration", "Disabled" - "Turbo Mode", "Advanced -> CPU configuration", "Disabled" - "C States", "Advanced -> CPU configuration", "Disabled" - "VT-d", "Chipset -> System Agent (SA) Configuration", "Enabled" - "DVMT-Pre Allocated", "Chipset -> System Agent (SA) Configuration -> Graphics Configuration", "64M" - -.. rst-class:: numbered-step - -Install ACRN Hypervisor -*********************** - -Set Up Environment -================== - -#. Open ``/etc/default/grub/`` and add ``idle=nomwait intel_pstate=disable`` - to the end of ``GRUB_CMDLINE_LINUX_DEFAULT``. - - .. figure:: images/rqi-acrn-grub.png - -#. Update GRUB and then reboot. - - .. code-block:: bash - - sudo update-grub - sudo reboot - -#. Install the necessary libraries: - - .. code-block:: bash - - sudo apt update - sudo apt install -y gcc git make gnu-efi libssl-dev libpciaccess-dev \ - uuid-dev libsystemd-dev libevent-dev libxml2-dev libxml2-utils \ - libusb-1.0-0-dev python3 python3-pip libblkid-dev \ - e2fslibs-dev pkg-config libnuma-dev liblz4-tool flex bison - sudo pip3 install kconfiglib - -#. Get code from GitHub. - - .. code-block:: bash - - mkdir ~/acrn && cd ~/acrn - git clone https://github.com/projectacrn/acrn-hypervisor -b release_2.1 - cd acrn-hypervisor - -Configure Hypervisor -==================== - -#. Parse system information. - - .. code-block:: bash - - sudo apt install -y cpuid msr-tools - cd ~/acrn/acrn-hypervisor/misc/acrn-config/target/ - sudo python3 board_parser.py ros-cube-cfl - cp ~/acrn/acrn-hypervisor/misc/acrn-config/target/out/ros-cube-cfl.xml \ - ~/acrn/acrn-hypervisor/misc/acrn-config/xmls/board-xmls/ - -#. Run ACRN configuration app and it will open a browser page. - - .. 
code-block:: bash - - cd ~/acrn/acrn-hypervisor/misc/acrn-config/config_app - sudo pip3 install -r requirements - python3 app.py - - .. figure:: images/rqi-acrn-config-web.png - -#. Select "Import Board info". - - .. figure:: images/rqi-acrn-config-import-board.png - -#. Select target board name. - - .. figure:: images/rqi-acrn-config-select-board.png - -#. Select "Scenario Setting" and choose "Load a default scenario". - - .. figure:: images/rqi-acrn-config-scenario-settings.png - -#. Settings "HV": You can ignore this if your RAM is <= 16GB. - - .. figure:: images/rqi-acrn-config-hv-settings.png - -#. Settings "VM0": Select the hard disk currently used. - - .. figure:: images/rqi-acrn-config-vm0-settings.png - -#. Settings "VM1": Enable all the cpu_affinity. - You can press :kbd:`+` to increase CPU ID. - This doesn't mean to attach all CPUs to the VM. The CPU number can be - adjusted later. - - .. figure:: images/rqi-acrn-config-vm1-settings.png - -#. Settings "VM2": Set up RT flags and enable all the cpu_affinity. - - .. figure:: images/rqi-acrn-config-vm2-settings1.png - - .. figure:: images/rqi-acrn-config-vm2-settings2.png - -#. Export XML. - - .. figure:: images/rqi-acrn-config-export-xml.png - - .. figure:: images/rqi-acrn-config-export-xml-submit.png - -#. Generate configuration files. - - .. figure:: images/rqi-acrn-config-generate-config.png - - .. figure:: images/rqi-acrn-config-generate-config-submit.png - -#. Close the browser and stop the process (Ctrl+C). - -#. Optional: Patch the hypervisor if you want to passthrough GPIO to VM. - - .. code-block:: bash - - cd ~/acrn/acrn-hypervisor - wget https://raw.githubusercontent.com/Adlink-ROS/ROScube_ACRN_guide/v2.1/patch/0001-Fix-ROScube-I-gpio-pin-assignment-table.patch - git apply 0001-Fix-ROScube-I-gpio-pin-assignment-table.patch - -#. Build hypervisor - - .. code-block:: bash - - cd ~/acrn/acrn-hypervisor - make all \ - BOARD_FILE=misc/acrn-config/xmls/board-xmls/ros-cube-cfl.xml \ - SCENARIO_FILE=misc/acrn-config/xmls/config-xmls/ros-cube-cfl/user_defined/industry_ROS2SystemOS.xml \ - RELEASE=0 - -#. Install hypervisor - - .. code-block:: bash - - sudo make install - sudo mkdir /boot/acrn - sudo cp ~/acrn/acrn-hypervisor/build/hypervisor/acrn.bin /boot/acrn/ - -.. rst-class:: numbered-step - -Install Service VM Kernel -************************* - -Build Service VM Kernel -======================= - -#. Get code from GitHub - - .. code-block:: bash - - cd ~/acrn - git clone https://github.com/projectacrn/acrn-kernel -b release_2.1 - cd acrn-kernel - -#. Restore default ACRN configuration. - - .. code-block:: bash - - cp kernel_config_uefi_sos .config - make olddefconfig - sed -ri '/CONFIG_LOCALVERSION=/s/=.+/="-ROS2SystemSOS"/g' .config - sed -i '/CONFIG_PINCTRL_CANNONLAKE/c\CONFIG_PINCTRL_CANNONLAKE=m' .config - -#. Build Service VM kernel. It will take some time. - - .. code-block:: bash - - make all - -#. Install kernel and module. - - .. code-block:: bash - - sudo make modules_install - sudo cp arch/x86/boot/bzImage /boot/acrn-ROS2SystemSOS - -Update Grub -=========== - -#. Get the UUID and PARTUUID. - - .. code-block:: bash - - sudo blkid /dev/sda* - - .. note:: The UUID and PARTUUID we need should be ``/dev/sda2``, which is ``TYPE="ext4"``, - as shown in the following graph: - - .. figure:: images/rqi-acrn-blkid.png - -#. Update ``/etc/grub.d/40_custom`` as below. Remember to edit - ```` and ```` to your system's values. - - .. 
code-block:: bash - - menuentry "ACRN Multiboot Ubuntu Service VM" --id ubuntu-service-vm { - load_video - insmod gzio - insmod part_gpt - insmod ext2 - - search --no-floppy --fs-uuid --set - echo 'loading ACRN Service VM...' - multiboot2 /boot/acrn/acrn.bin root=PARTUUID="" - module2 /boot/acrn-ROS2SystemSOS Linux_bzImage - } - - .. figure:: images/rqi-acrn-grun-40_custom.png - -#. Update ``/etc/default/grub`` to make GRUB menu visible and load Service VM as default. - - .. code-block:: bash - - GRUB_DEFAULT=ubuntu-service-vm - #GRUB_TIMEOUT_STYLE=hidden - GRUB_TIMEOUT=5 - -#. Then update GRUB and reboot. - - .. code-block:: bash - - sudo update-grub - sudo reboot - -#. ``ACRN Multiboot Ubuntu Service VM`` entry will be shown in the GRUB - menu. Choose it to load ACRN. You can check that the installation is - successful by using ``dmesg``. - - .. code-block:: bash - - sudo dmesg | grep ACRN - - .. figure:: images/rqi-acrn-dmesg.png - -.. rst-class:: numbered-step - -Install User VM -*************** - -Before Create User VM -===================== - -#. Download Ubuntu image (Here we use `Ubuntu 18.04 LTS - `_ for example): - -#. Install necessary packages. - - .. code-block:: bash - - sudo apt install qemu-kvm libvirt-clients libvirt-daemon-system \ - bridge-utils virt-manager ovmf - sudo reboot - -Create User VM Image -==================== - -.. note:: Reboot into the **native Linux kernel** (not the ACRN kernel) - and create User VM image. - -#. Start virtual machine manager application. - - .. code-block:: bash - - sudo virt-manager - -#. Create a new virtual machine. - - .. figure:: images/rqi-acrn-kvm-new-vm.png - -#. Select your ISO image path. - - .. figure:: images/rqi-acrn-kvm-choose-iso.png - -#. Select CPU and RAM for the VM. You can modify as high as you can to - accelerate the installation time. The settings here are not related to - the resource of the User VM on ACRN, which can be decided later. - - .. figure:: images/rqi-acrn-kvm-cpu-ram.png - -#. Select disk size you want. **Note that this can't be modified after creating image!** - - .. figure:: images/rqi-acrn-kvm-storage.png - -#. Edit image name and select "Customize configuration before install". - - .. figure:: images/rqi-acrn-kvm-name.png - -#. Select correct Firmware, apply it, and Begin Installation. - - .. figure:: images/rqi-acrn-kvm-firmware.png - -#. Now you'll see the installation page of Ubuntu. - After installing Ubuntu, you can also install some necessary - packages, such as ssh, vim, and ROS 2. - We'll clone the image for real-time VM to save time. - -#. To install ROS 2, refer to `Installing ROS 2 via Debian Packages - `_ - -#. Optional: Use ACRN kernel if you want to passthrough GPIO to User VM. - - .. code-block:: bash - - sudo apt install git build-essential bison flex libelf-dev libssl-dev liblz4-tool - - # Clone code - git clone -b release_2.1 https://github.com/projectacrn/acrn-kernel - cd acrn-kernel - - # Set up kernel config - cp kernel_config_uos .config - make olddefconfig - export ACRN_KERNEL_UOS=`make kernelversion` - export UOS="ROS2SystemUOS" - export BOOT_DEFAULT="${ACRN_KERNEL_UOS}-${UOS}" - sed -ri "/CONFIG_LOCALVERSION=/s/=.+/=\"-${UOS}\"/g" .config - - # Build and install kernel and modules - make all - sudo make modules_install - sudo make install - - # Update Grub - sudo sed -ri \ - "/GRUB_DEFAULT/s/=.+/=\"Advanced options for Ubuntu>Ubuntu, with Linux ${BOOT_DEFAULT}\"/g" \ - /etc/default/grub - sudo update-grub - -#. When that completes, poweroff the VM. - - .. 
code-block:: bash - - sudo poweroff - -Run User VM -=========== - -Now back to the native machine to set up the environment for launching -the User VM. - -#. Manually fetch and install the ``iasl`` binary to ``/usr/bin`` (where - ACRN expects it) with a newer version of the - than what's included with Ubuntu 18.04: - - .. code-block:: bash - - cd /tmp - wget https://acpica.org/sites/acpica/files/acpica-unix-20210105.tar.gz - tar zxvf acpica-unix-20210105.tar.gz - cd acpica-unix-20210105 - make clean && make iasl - sudo cp ./generate/unix/bin/iasl /usr/sbin/ - -#. Convert KVM image file format. - - .. code-block:: bash - - mkdir -p ~/acrn/uosVM - cd ~/acrn/uosVM - sudo qemu-img convert -f qcow2 -O raw /var/lib/libvirt/images/ROS2SystemUOS.qcow2 ./ROS2SystemUOS.img - -#. Prepare a Launch Script File. - - .. code-block:: bash - - wget https://raw.githubusercontent.com/Adlink-ROS/ROScube_ACRN_guide/v2.1/scripts/launch_ubuntu_uos.sh - chmod +x ./launch_ubuntu_uos.sh - -#. Set up network and reboot to take effect. - - .. code-block:: bash - - mkdir -p ~/acrn/tools/ - cd ~/acrn/tools - wget https://raw.githubusercontent.com/Adlink-ROS/ROScube_ACRN_guide/v2.1/scripts/acrn_bridge.sh - chmod +x ./acrn_bridge.sh - ./acrn_bridge.sh - sudo reboot - -#. **Reboot to ACRN kernel** and now you can launch the VM. - - .. code-block:: bash - - cd ~/acrn/uosVM - sudo ./launch_ubuntu_uos.sh - -.. rst-class:: numbered-step - -Install Real-Time VM -******************** - -Copy Real-Time VM Image -======================= - -.. note:: Reboot into the **native Linux kernel** (not the ACRN kernel) - and create User VM image. - -#. Clone real-time VM from User VM. (Right-click User VM and then clone) - - .. figure:: images/rqi-acrn-rtos-clone.png - -#. You'll see the real-time VM is ready. - - .. figure:: images/rqi-acrn-rtos-ready.png - -Set Up Real-Time VM -=================== - -.. note:: The section will show you how to install Xenomai on ROScube-I. - If help is needed, `contact ADLINK - `_ for more - information, or ask a question on the `ACRN users mailing list - `_ - -#. Run the VM and modify your VM hostname. - - .. code-block:: bash - - hostnamectl set-hostname ros-RTOS - -#. Install Xenomai kernel. - - .. code-block:: bash - - # Install necessary packages - sudo apt install git build-essential bison flex kernel-package libelf-dev libssl-dev haveged - - # Clone code from GitHub - git clone -b F/4.19.59/base/ipipe/xenomai_3.1 https://github.com/intel/linux-stable-xenomai - - # Build - cd linux-stable-xenomai - cp arch/x86/configs/xenomai_test_defconfig .config - make olddefconfig - sed -i '/CONFIG_GPIO_VIRTIO/c\CONFIG_GPIO_VIRTIO=m' .config - CONCURRENCY_LEVEL=$(nproc) make-kpkg --rootcmd fakeroot --initrd kernel_image kernel_headers - - # Install - sudo dpkg -i ../linux-headers-4.19.59-xenomai+_4.19.59-xenomai+-10.00.Custom_amd64.deb \ - ../linux-image-4.19.59-xenomai+_4.19.59-xenomai+-10.00.Custom_amd64.deb - -#. Install Xenomai library and tools. For more details, refer to - `Xenomai Official Documentation - `_. - - .. code-block:: bash - - cd ~ - wget https://xenomai.org/downloads/xenomai/stable/xenomai-3.1.tar.bz2 - tar xf xenomai-3.1.tar.bz2 - cd xenomai-3.1 - ./configure --with-core=cobalt --enable-smp --enable-pshared - make -j`nproc` - sudo make install - -#. Allow non-root user to run Xenomai. - - .. code-block:: bash - - sudo addgroup xenomai --gid 1234 - sudo addgroup root xenomai - sudo usermod -a -G xenomai $USER - -#. Update ``/etc/default/grub``. - - .. 
code-block:: bash - - GRUB_DEFAULT="Advanced options for Ubuntu>Ubuntu, with Linux 4.19.59-xenomai+" - #GRUB_TIMEOUT_STYLE=hidden - GRUB_TIMEOUT=5 - ... - GRUB_CMDLINE_LINUX="xenomai.allowed_group=1234" - -#. Update GRUB. - - .. code-block:: bash - - sudo update-grub - -#. Poweroff the VM. - - .. code-block:: bash - - sudo poweroff - -Run Real-Time VM -================ - -Now back to the native machine and we'll set up the environment for -launching the real-time VM. - -#. Convert KVM image file format. - - .. code-block:: bash - - mkdir -p ~/acrn/rtosVM - cd ~/acrn/rtosVM - sudo qemu-img convert -f qcow2 \ - -O raw /var/lib/libvirt/images/ROS2SystemRTOS.qcow2 \ - ./ROS2SystemRTOS.img - -#. Create a new launch file - - .. code-block:: bash - - wget https://raw.githubusercontent.com/Adlink-ROS/ROScube_ACRN_guide/v2.1/scripts/launch_ubuntu_rtos.sh - chmod +x ./launch_ubuntu_rtos.sh - -#. **Reboot to ACRN kernel** and now you can launch the VM. - - .. code-block:: bash - - cd ~/acrn/rtosVM - sudo ./launch_ubuntu_rtos.sh - -.. note:: Use ``poweroff`` instead of ``reboot`` in the real-time VM. - In ACRN design, rebooting the real-time VM will also reboot the whole - system. - -Customizing the Launch File -*************************** - -The launch file in this tutorial has the following hardware resource allocation. - -.. csv-table:: - :header: "Resource", "Service VM", "User VM", "Real-time VM" - :widths: 15, 15, 15, 15 - - "CPU", "0", "1,2,3", "4,5" - "Memory", "Remaining", "8 GB", "2 GB" - "Ethernet", "Ethernet 1 & 2", "Ethernet 3", "Ethernet 4" - "USB", "Remaining", "1-2", "1-1" - -You can modify the launch file for your own hardware resource allocation. -We'll provide some modification methods below. -For more detail, see :ref:`acrn-dm_parameters`. - -CPU -=== - -Modify the ``--cpu-affinity`` value in the command ``acrn-dm`` command. -The number should be between 0 and max CPU ID. -For example, if you want to run VM with core 1 and 2, use ``--cpu-affinity 1,2``. - -Memory -====== - -Modify the ``mem_size`` in launch file. This variable will be passed to -``acrn-dm``. The possible values are 1024M, 2048M, 4096M, and 8192M. - -Ethernet -======== - -Run ``lspci -Dnn | grep "Ethernet controller"`` to get the ID of Ethernet port. - -.. figure:: images/rqi-acrn-ethernet-lspci.png - -You'll see 4 IDs, one for each Ethernet port. -Assign the ID of the port you want to passthrough in the launch file. -For example, if we want to passthrough Ethernet 3 to the VM: - -.. code-block:: bash - - passthru_vpid=( - ["ethernet"]="8086 1539" - ) - passthru_bdf=( - ["ethernet"]="0000:04:00.0" - ) - - # Passthrough ETHERNET - echo ${passthru_vpid["ethernet"]} > /sys/bus/pci/drivers/pci-stub/new_id - echo ${passthru_bdf["ethernet"]} > /sys/bus/pci/devices/${passthru_bdf["ethernet"]}/driver/unbind - echo ${passthru_bdf["ethernet"]} > /sys/bus/pci/drivers/pci-stub/bind - - acrn-dm - ⋮ - -s 4,passthru,04/00/0 \ - ⋮ - -USB -=== - -To passthrough USB to VM, we need to know the ID for each USB first. - -.. figure:: images/rqi-acrn-usb-port.png - -Then modify the launch file and add the USB ID. -For example, if you want to passthrough USB 1-2 and 1-4. - -.. code-block:: bash - - acrn-dm - ⋮ - -s 8,xhci,1-2,1-4 \ - ⋮ - -GPIO -==== - -This is the PIN definition of ROScube-I. - -.. figure:: images/rqi-pin-definition.png - -To pass GPIO to VM, you need to add the following section. - -.. code-block:: bash - - acrn-dm - ⋮ - -s X,virtio-gpio,@gpiochip0{=:=: ... :} \ - ⋮ - -The offset and pin mapping is as shown here: - -.. 
csv-table:: - :widths: 5, 10, 15 - - "Fn", "GPIO Pin", "In Chip Offset" - "DI0", "GPIO220", "72" - "DI1", "GPIO221", "73" - "DI2", "GPIO222", "74" - "DI3", "GPIO223", "75" - "DI4", "GPIO224", "76" - "DI5", "GPIO225", "77" - "DI6", "GPIO226", "78" - "DI7", "GPIO227", "79" - "DO0", "GPIO253", "105" - "DO1", "GPIO254", "106" - "DO2", "GPIO255", "107" - "DO3", "GPIO256", "108" - "DO4", "GPIO257", "109" - "DO5", "GPIO258", "110" - "DO6", "GPIO259", "111" - "DO7", "GPIO260", "112" - -For example, if you want to pass DI0 and DO0 to VM: - -.. code-block:: bash - - acrn-dm - ⋮ - -s X,virtio-gpio,@gpiochip0{72=gpi0:105=gpo0} \ - ⋮ +ROScube support is not verified with the latest release of ACRN. See the +`v2.5 release ROSCube documentation +`_ +for the latest instructions. diff --git a/doc/glossary.rst b/doc/glossary.rst index a654368bd..bfe1a653d 100644 --- a/doc/glossary.rst +++ b/doc/glossary.rst @@ -6,6 +6,15 @@ Glossary of Terms .. glossary:: :sorted: + AaaG + LaaG + WaaG + Acronyms for Android, Linux, and Windows as a Guest VM. ACRN supports a + variety of :term:`User VM` OS choices. Your choice depends on the + needs of your application. For example, Windows is popular for + Human-Machine Interface (HMI) applications in industrial applications, + while Linux is a likely OS choice for a VM running an AI application. + ACPI Advanced Configuration and Power Interface @@ -14,19 +23,6 @@ Glossary of Terms real-time and safety-criticality in mind, optimized to streamline embedded development through an open source platform. - AcrnGT - Intel GVT-g technology for ACRN. - - ACRN-DM - A user mode device model application running in Service OS to provide - device emulations in ACRN hypervisor. - - aperture - CPU-visible graphics memory - - low GM - see :term:`aperture` - API Application Program Interface: A defined set of routines and protocols for building application software. @@ -40,63 +36,37 @@ Glossary of Terms BIOS Basic Input/Output System. - Dom0 i915 - The Intel Graphics driver running in Domain 0 + DM + Device Model + An application within the Service VM responsible for creating and + launching a User VM and then performing device emulation for the devices + configured for sharing with that User VM. The Service VM and Device Model + can access hardware resources directly through native drivers and provide + device sharing services to User VMs. User VMs can access hardware devices + directly if they've been configured as passthrough devices. - ELSP - GPU's ExecList submission port + Development Computer + Host + As with most IoT development environments, you configure, compile, and + build your application on a separate system from where the application is + deployed and run (i.e., the :term:`Target`). ACRN recommends using Ubuntu + 18.04 as the OS on your development computer and that is an assumption in + our documentation. - GGTT - Global Graphic Translation Table. The virtual address page table - used by a GPU to reference system memory. - - GMA - Graphics Memory Address - - GPU - Graphics Processing Unit - - GTT - Graphic Translation Table - - GTTMMADR - Graphic Translation Table Memory Map Address - - GuC - Graphic Micro-controller - - GVT - Graphics Virtual Technology. GVT-g core device model module up-streamed - to the Linux kernel. + Guest + Guest VM + A term used to refer to any :term:`VM` that runs on the hypervisor. Both Service + and User VMs are considered Guest VMs from the hypervisor's perspective, + albeit with different properties. 
*(You'll find the term Guest used in the + names of functions and variables in the ACRN source code.)* GVT-d - Virtual dedicated graphics acceleration (one VM to one physical GPU) + Virtual dedicated graphics acceleration (one VM to one physical GPU). - GVT-g - Virtual graphics processing unit (multiple VMs to one physical GPU) - - GVT-s - Virtual shared graphics acceleration (multiple VMs to one physical GPU) - - Hidden GM - Hidden or High graphics memory, not visible to the CPU. - - High GM - See :term:`Hidden GM` - - Hybrid Mode - One of three operation modes (hybrid, partition, sharing) that ACRN supports. - In this mixed mode, physical hardware resources can be both partitioned to - individual user VMs and shared across user VMs. - - I2C - Inter-Integrated Circuit - - i915 - The Intel Graphics driver - - IC - Instrument Cluster + Hybrid + One of three operation scenarios (partitioned, shared, and hybrid) that ACRN supports. + In the hybrid mode, some physical hardware resources can be partitioned to + individual User VMs while others are shared across User VMs. IDT Interrupt Descriptor Table: a data structure used by the x86 @@ -107,64 +77,39 @@ Glossary of Terms Interrupt Service Routine: Also known as an interrupt handler, an ISR is a callback function whose execution is triggered by a hardware interrupt (or software interrupt instructions) and is used to handle - high-priority conditions that require interrupting the current code + high-priority conditions that require interrupting the code currently executing on the processor. - IVE - In-Vehicle Experience - - IVI - In-vehicle Infotainment - - OS - Operating System - - OSPM - Operating System Power Management - Passthrough Device - Physical devices (typically PCI) exclusively assigned to a guest. In - the Project ACRN architecture, passthrough devices are owned by the - foreground OS. + Physical I/O devices (typically PCI) exclusively assigned to a User VM so + that the VM can access the hardware device directly and with minimal (if any) + VM management involvement. Normally, the Service VM owns the hardware + devices shared among User VMs and virtualized access is done through + Device Model emulation. - Partition Mode - One of three operation modes (partition, sharing, hybrid) that ACRN supports. - Physical hardware resources are partitioned to individual user VMs. - - PCI - Peripheral Component Interface. - - PDE - Page Directory Entry - - PM - Power Management + Partitioned + One of three operation scenarios (partitioned, shared, and hybrid) that ACRN supports. + Physical hardware resources are dedicated to individual User VMs. Pre-launched VM - Pre-launched VMs are started by the ACRN hypervisor before the - Service VM is launched. (See :term:`Post-launched VM`) + A :term:`User VM` launched by the hypervisor before the :term:`Service VM` + is started. Such a User VM runs independently of and is partitioned from + the Service VM and other post-launched VMs. It has its own carefully + configured and dedicated hardware resources such as CPUs, memory, and I/O + devices. Other VMs, including the Service VM, may not even be aware of a + pre-launched VM's existence. A pre-launched VM can be used as a + special-case :term:`Safety VM` for reacting to critical system failures. + It cannot take advantage of the Service VM or Device Model services. Post-launched VM - Post-Launched VMs are launched and configured by the Service VM. 
- (See :term:`Pre-launched VM`) - - PTE - Page Table Entry - - PV - Para-virtualization (See - https://en.wikipedia.org/wiki/Paravirtualization) - - PVINFO - Para-Virtualization Information Page, a MMIO range used to - implement para-virtualization + A :term:`User VM` configured and launched by the Service VM and typically + accessing shared hardware resources managed by the Service VM and Device + Model. Most User VMs are post-launched while special-purpose User VMs are + pre-launched. QEMU Quick EMUlator. Machine emulator running in user space. - RSE - Rear Seat Entertainment - RDT Intel Resource Director Technology (Intel RDT) provides a set of monitoring and allocation capabilities to control resources such as @@ -172,36 +117,54 @@ Glossary of Terms Memory Bandwidth Allocation (MBA). RTVM - Real-time VM. A specially-designed VM that can run hard real-time or - soft real-time workloads (or applications) much more efficiently - than the typical User VM through the use of a passthrough interrupt - controller, polling-mode Virtio, Intel RDT allocation features (CAT, - MBA), and I/O prioritization. RTVMs are typically a :term:`Pre-launched VM`. - A non-:term:`Safety VM` with real-time requirements is a - :term:`Post-launched VM`. + Real-time VM + A :term:`User VM` configured specifically for real-time applications and + their performance needs. ACRN supports near bare-metal performance for a + post-launched real-time VM by configuring certain key technologies or + enabling device-passthrough to avoid common virtualization and + device-access overhead issues. Such technologies include: using a + passthrough interrupt controller, polling-mode Virtio, Intel RDT + allocation features (CAT, MBA), and I/O prioritization. RTVMs are + typically a :term:`Pre-launched VM`. A non-:term:`Safety VM` with + real-time requirements is a :term:`Post-launched VM`. Safety VM - A special VM with dedicated hardware resources, running in - partition mode, and providing overall system health-monitoring - functionality. Currently, a Safety VM is always a pre-launched User VM. + A special VM with dedicated hardware resources for providing overall + system health-monitoring functionality. A safety VM is always a + pre-launched User VM, either in a partitioned or hybrid scenario. - SDC - Software Defined Cockpit - - Service VM - The Service VM is generally the first VM launched by ACRN and can - access hardware resources directly by running native drivers and - provides device sharing services to User VMs via the Device Model. - - Sharing Mode - One of three operation modes (sharing, hybrid, partition) that ACRN supports. - Most of the physical hardware resources are shared across user VMs. + Scenario + A collection of hypervisor and VM configuration settings that define an + ACRN-based application's environment. A scenario configuration is stored + in a scenario XML file and edited using a GUI configuration tool. The + scenario configuration, along with the target board configuration, is used + by the ACRN build system to modify the source code to build tailored + images of the hypervisor and Service VM for the application. ACRN provides + example scenarios for shared, partitioned, and hybrid configurations that + developers can use to define a scenario configuration appropriate for + their own application. SOS - Obsolete, see :term:`Service VM` - Service OS - Obsolete, see :term:`Service VM` + Service VM + A special VM, directly launched by the hypervisor. 
The Service VM can + access hardware resources directly by running native drivers and provides + device sharing services to post-launched User VMs through the ACRN Device + Model (DM). Hardware resources include CPUs, memory, graphics memory, USB + devices, disk, and network mediation. *(Historically, the Service VM was + called the Service OS or SOS. You may still see these terms used in the + code and API interfaces.)* + + Industry + Shared + One of three operation scenarios (shared, hybrid, partitioned) that ACRN supports. + Most of the physical hardware resources are shared across User VMs. + *(Industry scenario is being renamed to Shared in the v2.7 release.)* + + Target + This is the hardware where the configured ACRN hypervisor and + developer-written application (built on the :term:`Development Computer`) are + deployed and run. UEFI Unified Extensible Firmware Interface. UEFI replaces the @@ -210,33 +173,25 @@ Glossary of Terms important, support Secure Boot, checking the OS validity to ensure no malware has tampered with the boot process. - User VM - User Virtual Machine. - UOS - Obsolete, see :term:`User VM` - User OS - Obsolete, see :term:`User VM` - - vGPU - Virtual GPU Instance, created by GVT-g and used by a VM - - HSM - Hypervisor Service Module - - Virtio-BE - Back-End, VirtIO framework provides front-end driver and back-end driver - for IO mediators, developer has habit of using Shorthand. So they say - Virtio-BE and Virtio-FE - - Virtio-FE - Front-End, VirtIO framework provides front-end driver and back-end - driver for IO mediators, developer has habit of using Shorthand. So - they say Virtio-BE and Virtio-FE + User VM + A :term:`VM` where user-defined environments and applications run. User VMs can + run different OSes based on their needs, including for example, Ubuntu for + an AI application, Android or Windows for a Human-Machine Interface, or a + real-time control OS such as Zephyr, VxWorks, or RT-Linux for soft or + hard real-time control. There are three types of ACRN User VMs: pre-launched, + post-launched standard, and post-launched real-time. *(Historically, a + User VM was also called a User OS, or simply UOS. You may still see these + other terms used in the code and API interfaces.)* VM - Virtual Machine, a guest OS running environment + Virtual Machine + A compute resource that uses software instead of physical hardware to run a + program. Multiple VMs can run independently on the same physical machine, + each with its own OS. A hypervisor uses direct access to the underlying + machine to create the software environment for sharing and managing + hardware resources.
VMM Virtual Machine Monitor diff --git a/doc/introduction/images/ACRN-Hybrid-RT.png b/doc/introduction/images/ACRN-Hybrid-RT.png deleted file mode 100644 index 7fde71b13..000000000 Binary files a/doc/introduction/images/ACRN-Hybrid-RT.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-Industry.png b/doc/introduction/images/ACRN-Industry.png deleted file mode 100644 index a77640d14..000000000 Binary files a/doc/introduction/images/ACRN-Industry.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-Logical-Partition.png b/doc/introduction/images/ACRN-Logical-Partition.png deleted file mode 100644 index e638f4158..000000000 Binary files a/doc/introduction/images/ACRN-Logical-Partition.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-V2-SDC-scenario.png b/doc/introduction/images/ACRN-V2-SDC-scenario.png deleted file mode 100644 index ba8c39216..000000000 Binary files a/doc/introduction/images/ACRN-V2-SDC-scenario.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-V2-industrial-scenario.png b/doc/introduction/images/ACRN-V2-industrial-scenario.png deleted file mode 100644 index a77640d14..000000000 Binary files a/doc/introduction/images/ACRN-V2-industrial-scenario.png and /dev/null differ diff --git a/doc/introduction/images/ACRN-hybrid-rt-example.png b/doc/introduction/images/ACRN-hybrid-rt-example.png new file mode 100644 index 000000000..30b679d04 Binary files /dev/null and b/doc/introduction/images/ACRN-hybrid-rt-example.png differ diff --git a/doc/introduction/images/ACRN-industry-example.png b/doc/introduction/images/ACRN-industry-example.png new file mode 100644 index 000000000..d027739e6 Binary files /dev/null and b/doc/introduction/images/ACRN-industry-example.png differ diff --git a/doc/introduction/images/ACRN-partitioned-example.png b/doc/introduction/images/ACRN-partitioned-example.png new file mode 100644 index 000000000..805022d22 Binary files /dev/null and b/doc/introduction/images/ACRN-partitioned-example.png differ diff --git a/doc/introduction/images/VMX-brief.png b/doc/introduction/images/VMX-brief.png deleted file mode 100644 index 878f7f63b..000000000 Binary files a/doc/introduction/images/VMX-brief.png and /dev/null differ diff --git a/doc/introduction/images/architecture.png b/doc/introduction/images/architecture.png deleted file mode 100644 index f728af177..000000000 Binary files a/doc/introduction/images/architecture.png and /dev/null differ diff --git a/doc/introduction/images/boot-flow-2.dot b/doc/introduction/images/boot-flow-2.dot index 819d38d07..6effe74b6 100644 --- a/doc/introduction/images/boot-flow-2.dot +++ b/doc/introduction/images/boot-flow-2.dot @@ -1,7 +1,7 @@ digraph G { rankdir=LR; bgcolor="transparent"; - UEFI -> "GRUB" -> "acrn.32.out" -> "Pre-launched\nVM Kernel" - "acrn.32.out" -> "Service VM\nKernel" -> "ACRN\nDevice Model" -> + UEFI -> "GRUB" -> "acrn.bin" -> "Pre-launched\nVM Kernel" + "acrn.bin" -> "Service VM\nKernel" -> "ACRN\nDevice Model" -> "Virtual\nBootloader"; } diff --git a/doc/introduction/images/boot-flow.dot b/doc/introduction/images/boot-flow.dot deleted file mode 100644 index d65c82a50..000000000 --- a/doc/introduction/images/boot-flow.dot +++ /dev/null @@ -1,6 +0,0 @@ -digraph G { - rankdir=LR; - bgcolor="transparent"; - UEFI -> "acrn.efi" -> "OS\nBootloader" -> - "SOS\nKernel" -> "ACRN\nDevice Model" -> "Virtual\nBootloader"; -} diff --git a/doc/introduction/images/device-model.png b/doc/introduction/images/device-model.png deleted file mode 100644 index 
62f5b0762..000000000 Binary files a/doc/introduction/images/device-model.png and /dev/null differ diff --git a/doc/introduction/images/io-emulation-path.png b/doc/introduction/images/io-emulation-path.png deleted file mode 100644 index 412e1caed..000000000 Binary files a/doc/introduction/images/io-emulation-path.png and /dev/null differ diff --git a/doc/introduction/images/virtio-architecture.png b/doc/introduction/images/virtio-architecture.png deleted file mode 100644 index 04f19ff1b..000000000 Binary files a/doc/introduction/images/virtio-architecture.png and /dev/null differ diff --git a/doc/introduction/images/virtio-framework-kernel.png b/doc/introduction/images/virtio-framework-kernel.png deleted file mode 100644 index baeb587fd..000000000 Binary files a/doc/introduction/images/virtio-framework-kernel.png and /dev/null differ diff --git a/doc/introduction/images/virtio-framework-userland.png b/doc/introduction/images/virtio-framework-userland.png deleted file mode 100644 index bcfb9e28f..000000000 Binary files a/doc/introduction/images/virtio-framework-userland.png and /dev/null differ diff --git a/doc/introduction/index.rst b/doc/introduction/index.rst index 72370f3a5..3c269edf4 100644 --- a/doc/introduction/index.rst +++ b/doc/introduction/index.rst @@ -3,392 +3,403 @@ What Is ACRN ############ -Introduction to Project ACRN -**************************** - -ACRN |trade| is a flexible, lightweight reference hypervisor, built with -real-time and safety-criticality in mind, and optimized to streamline -embedded development through an open source platform. ACRN defines a -device hypervisor reference stack and an architecture for running -multiple software subsystems, managed securely, on a consolidated system -using a virtual machine manager (VMM). It also defines a reference -framework implementation for virtual device emulation, called the "ACRN -Device Model". - -The ACRN Hypervisor is a Type 1 reference hypervisor stack, running -directly on the bare-metal hardware, and is suitable for a variety of -IoT and embedded device solutions. The ACRN hypervisor addresses the gap -that currently exists between datacenter hypervisors, and hard -partitioning hypervisors. The ACRN hypervisor architecture partitions -the system into different functional domains, with carefully selected -user VM sharing optimizations for IoT and embedded devices. - -ACRN Open Source Roadmap -************************ - -Stay informed on what's ahead for ACRN by visiting the -`ACRN Project Roadmap `_ on the -projectacrn.org website. - -For up-to-date happenings, visit the `ACRN blog `_. - -ACRN High-Level Architecture -**************************** - -The ACRN architecture has evolved since its initial v0.1 release in -July 2018. Beginning with the v1.1 release, the ACRN architecture has -flexibility to support *logical partitioning*, *sharing*, and a *hybrid* -mode. As shown in :numref:`V2-hl-arch`, hardware resources can be -partitioned into two parts: - -.. figure:: images/ACRN-V2-high-level-arch.png - :width: 700px - :align: center - :name: V2-hl-arch - - ACRN high-level architecture - -Shown on the left of :numref:`V2-hl-arch`, resources are partitioned and -used by a pre-launched user virtual machine (VM). Pre-launched here -means that it is launched by the hypervisor directly, even before the -service VM is launched. The pre-launched VM runs independently of other -virtual machines and owns dedicated hardware resources, such as a CPU -core, memory, and I/O devices. 
Other virtual machines may not even be -aware of the pre-launched VM's existence. Because of this, it can be -used as a safety OS virtual machine. Platform hardware failure -detection code runs inside this pre-launched VM and will take emergency -actions when system critical failures occur. - -Shown on the right of :numref:`V2-hl-arch`, the remaining hardware -resources are shared among the service VM and user VMs. The service VM -is similar to Xen's Dom0, and a user VM is similar to Xen's DomU. The -service VM is the first VM launched by ACRN, if there is no pre-launched -VM. The service VM can access hardware resources directly by running -native drivers and it provides device sharing services to the user VMs -through the Device Model. Currently, the service VM is based on Linux, -but it can also use other operating systems as long as the ACRN Device -Model is ported into it. A user VM can be Ubuntu*, Android*, -Windows* or VxWorks*. There is one special user VM, called a -post-launched real-time VM (RTVM), designed to run a hard real-time OS, -such as Zephyr*, VxWorks*, or Xenomai*. Because of its real-time capability, RTVM -can be used for soft programmable logic controller (PLC), inter-process -communication (IPC), or Robotics applications. - -.. _usage-scenarios: - -Usage Scenarios -*************** - -ACRN can be used for heterogeneous workload consolidation in -resource-constrained embedded platform, targeting for functional safety, -or hard real-time support. It can take multiple separate systems and -enable a workload consolidation solution operating on a single compute -platform to run both safety-critical applications and non-safety -applications, together with security functions that safeguard the -system. - -There are a number of predefined scenarios included in ACRN's source code. They -all build upon the three fundamental modes of operation that have been explained -above, i.e. the *logical partitioning*, *sharing*, and *hybrid* modes. They -further specify the number of VMs that can be run, their attributes and the -resources they have access to, either shared with other VMs or exclusively. - -The predefined scenarios are in the :acrn_file:`misc/config_tools/data` folder -in the source code. - -The :ref:`acrn_configuration_tool` tutorial explains how to use the ACRN -configuration toolset to create your own scenario or modify an existing one. - -Industrial Workload Consolidation -================================= - -.. figure:: images/ACRN-V2-industrial-scenario.png - :width: 600px - :align: center - :name: V2-industrial-scenario - - ACRN Industrial Workload Consolidation scenario - -Supporting Workload consolidation for industrial applications is even -more challenging. The ACRN hypervisor needs to run different workloads with no -interference, increase security functions that safeguard the system, run hard -real-time sensitive workloads together with general computing workloads, and -conduct data analytics for timely actions and predictive maintenance. - -Virtualization is especially important in industrial environments -because of device and application longevity. Virtualization enables -factories to modernize their control system hardware by using VMs to run -older control systems and operating systems far beyond their intended -retirement dates. - -As shown in :numref:`V2-industrial-scenario`, the Service VM can start a number -of post-launched User VMs and can provide device sharing capabilities to these. 
-In total, up to 7 post-launched User VMs can be started: - -- 5 regular User VMs, -- One `Kata Containers `_ User VM (see - :ref:`run-kata-containers` for more details), and -- One real-time VM (RTVM). - -In this example, one post-launched User VM provides Human Machine Interface -(HMI) capability, another provides Artificial Intelligence (AI) capability, some -compute function is run the Kata Container and the RTVM runs the soft -Programmable Logic Controller (PLC) that requires hard real-time -characteristics. - -:numref:`V2-industrial-scenario` shows ACRN's block diagram for an -Industrial usage scenario: - -- ACRN boots from the SoC platform, and supports firmware such as the - UEFI BIOS. -- The ACRN hypervisor can create VMs that run different OSes: - - - a Service VM such as Ubuntu*, - - a Human Machine Interface (HMI) application OS such as Windows*, - - an Artificial Intelligence (AI) application on Linux*, - - a Kata Container application, and - - a real-time control OS such as Zephyr*, VxWorks* or RT-Linux*. - -- The Service VM, provides device sharing functionalities, such as - disk and network mediation, to other virtual machines. - It can also run an orchestration agent allowing User VM orchestration - with tools such as Kubernetes*. -- The HMI Application OS can be Windows* or Linux*. Windows is dominant - in Industrial HMI environments. -- ACRN can support a soft real-time OS such as preempt-rt Linux for - soft-PLC control, or a hard real-time OS that offers less jitter. - -Automotive Application Scenarios -================================ - -As shown in :numref:`V2-SDC-scenario`, the ACRN hypervisor can be used -for building Automotive Software Defined Cockpit (SDC) and in-vehicle -experience (IVE) solutions. - -.. figure:: images/ACRN-V2-SDC-scenario.png - :width: 600px - :align: center - :name: V2-SDC-scenario - - ACRN Automotive SDC scenario - -As a reference implementation, ACRN provides the basis for embedded -hypervisor vendors to build solutions with a reference I/O mediation -solution. In this scenario, an automotive SDC system consists of the -instrument cluster (IC) system running in the Service VM and the in-vehicle -infotainment (IVI) system is running the post-launched User VM. Additionally, -one could modify the SDC scenario to add more post-launched User VMs that can -host rear seat entertainment (RSE) systems (not shown on the picture). - -An **instrument cluster (IC)** system is used to show the driver operational -information about the vehicle, such as: - -- the speed, fuel level, trip mileage, and other driving information of - the car; -- projecting heads-up images on the windshield, with alerts for low - fuel or tire pressure; -- showing rear-view and surround-view cameras for parking assistance. - -An **in-vehicle infotainment (IVI)** system's capabilities can include: - -- navigation systems, radios, and other entertainment systems; -- connection to mobile devices for phone calls, music, and applications - via voice recognition; -- control interaction by gesture recognition or touch. - -A **rear seat entertainment (RSE)** system could run: - -- entertainment system; -- virtual office; -- connection to the front-seat IVI system and mobile devices (cloud - connectivity); -- connection to mobile devices for phone calls, music, and applications - via voice recognition; -- control interaction by gesture recognition or touch. - -The ACRN hypervisor can support both Linux* VM and Android* VM as User -VMs managed by the ACRN hypervisor. 
Developers and OEMs can use this -reference stack to run their own VMs, together with IC, IVI, and RSE -VMs. The Service VM runs in the background and the User VMs run as -Post-Launched VMs. - -A block diagram of ACRN's SDC usage scenario is shown in -:numref:`V2-SDC-scenario` above. - -- The ACRN hypervisor sits right on top of the bootloader for fast booting - capabilities. -- Resources are partitioned to ensure safety-critical and - non-safety-critical domains are able to coexist on one platform. -- Rich I/O mediators allow sharing of various I/O devices across VMs, - delivering a comprehensive user experience. -- Multiple operating systems are supported by one SoC through efficient - virtualization. - -Best Known Configurations -************************* - -The ACRN GitHub codebase defines five best known configurations (BKC) -targeting SDC and Industry usage scenarios. Developers can start with -one of these predefined configurations and customize it to their own -application scenario needs. - -.. list-table:: Scenario-based Best Known Configurations - :header-rows: 1 - - * - Predefined BKC - - Usage Scenario - - VM0 - - VM1 - - VM2 - - VM3 - - * - Software Defined Cockpit - - SDC - - Service VM - - Post-launched VM - - One Kata Containers VM - - - - * - Industry Usage Config - - Industry - - Service VM - - Up to 5 Post-launched VMs - - One Kata Containers VM - - Post-launched RTVM (Soft or Hard real-time) - - * - Hybrid Usage Config - - Hybrid - - Pre-launched VM (Safety VM) - - Service VM - - Post-launched VM - - - - * - Hybrid real-time Usage Config - - Hybrid RT - - Pre-launched VM (real-time VM) - - Service VM - - Post-launched VM - - - - * - Logical Partition - - Logical Partition - - Pre-launched VM (Safety VM) - - Pre-launched VM (QM Linux VM) - - - - - -Here are block diagrams for each of these four scenarios. - -SDC Scenario -============ - -In this SDC scenario, an instrument cluster (IC) system runs with the -Service VM and an in-vehicle infotainment (IVI) system runs in a user -VM. - -.. figure:: images/ACRN-V2-SDC-scenario.png - :width: 600px - :align: center - :name: ACRN-SDC - - SDC scenario with two VMs - -Industry Scenario -================= - -In this Industry scenario, the Service VM provides device sharing capability for -a Windows-based HMI User VM. One post-launched User VM can run a Kata Container -application. Another User VM supports either hard or soft real-time OS -applications. Up to five additional post-launched User VMs support functions -such as human/machine interface (HMI), artificial intelligence (AI), computer -vision, etc. - -.. figure:: images/ACRN-Industry.png - :width: 600px - :align: center - :name: Industry - - Industry scenario - -Hybrid Scenario -=============== - -In this Hybrid scenario, a pre-launched Safety/RTVM is started by the -hypervisor. The Service VM runs a post-launched User VM that runs non-safety or -non-real-time tasks. - -.. figure:: images/ACRN-Hybrid.png - :width: 600px - :align: center - :name: ACRN-Hybrid - - Hybrid scenario - -Hybrid Real-Time (RT) Scenario -============================== - -In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the -hypervisor. The Service VM runs a post-launched User VM that runs non-safety or -non-real-time tasks. - -.. 
figure:: images/ACRN-Hybrid-RT.png - :width: 600px - :align: center - :name: ACRN-Hybrid-RT - - Hybrid RT scenario - -Logical Partition Scenario -========================== - -This scenario is a simplified configuration for VM logical -partitioning: both User VMs are independent and isolated, they do not share -resources, and both are automatically launched at boot time by the hypervisor. -The User VMs can be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs. - -.. figure:: images/ACRN-Logical-Partition.png - :width: 600px - :align: center - :name: logical-partition - - Logical Partitioning scenario - +Introduction +************ + +IoT and Edge system developers face mounting demands on the systems they build, as connected +devices are increasingly expected to support a range of hardware resources, +operating systems, and software tools and applications. Virtualization is key to +meeting these broad needs. Most existing hypervisor and Virtual Machine Manager +solutions don't offer the right size, boot speed, real-time support, and +flexibility for IoT and Edge systems. Data center hypervisor code is too big, doesn't +offer safety or hard real-time capabilities, and requires too much performance +overhead for embedded development. The ACRN hypervisor was built to fill this +need. + +ACRN is a type 1 reference hypervisor stack that runs on bare-metal hardware, +with fast booting, and is configurable for a variety of IoT, Edge, and embedded device +solutions. It provides a flexible, lightweight hypervisor, built with real-time +and safety-criticality in mind, optimized to streamline embedded development +through an open-source, scalable reference platform. It has an architecture that +can run multiple OSs and VMs, managed securely, on a consolidated system by +means of efficient virtualization. Resource partitioning ensures +co-existing heterogeneous workloads on one system hardware platform do not +interfere with each other. + +ACRN defines a reference framework implementation for virtual device emulation, +called the ACRN Device Model or DM, with rich I/O mediators. It also supports +non-emulated device passthrough access to satisfy time-sensitive requirements +and low-latency access needs of real-time applications. To keep the hypervisor +code base as small and efficient as possible, the bulk of the Device Model +implementation resides in the Service VM to provide sharing and other +capabilities. + +ACRN is built to virtualize embedded IoT and Edge development functions +(for a camera, audio, graphics, storage, networking, and more), so it's ideal +for a broad range of IoT and Edge uses, including industrial, automotive, and retail +applications. Licensing ********* .. _BSD-3-Clause: https://opensource.org/licenses/BSD-3-Clause -Both the ACRN hypervisor and ACRN Device model software are provided +The ACRN hypervisor and ACRN Device Model software are provided under the permissive `BSD-3-Clause`_ license, which allows *"redistribution and use in source and binary forms, with or without modification"* together with the intact copyright notice and disclaimers noted in the license. -ACRN Device Model, Service VM, and User VM -****************************************** +Key Capabilities +**************** -To keep the hypervisor code base as small and efficient as possible, the -bulk of the device model implementation resides in the Service VM to -provide sharing and other capabilities. 
The details of which devices are -shared and the mechanism used for their sharing is described in -`pass-through`_ section below. +ACRN has these key capabilities and benefits: + +* **Small Footprint**: The hypervisor is optimized for resource-constrained devices + with significantly fewer lines of code (about 40K) than datacenter-centric + hypervisors (over 150K). +* **Built with Real-time in Mind**: Low-latency, fast boot times, and responsive + hardware device communication supporting near bare-metal performance. Both + soft and hard real-time VM needs are supported, including no VMExit during + runtime operations, LAPIC and PCI passthrough, static CPU assignment, and + more. +* **Built for Embedded IoT and Edge Virtualization**: ACRN supports virtualization beyond the + basics and includes CPU, I/O, and networking virtualization of embedded IoT + and Edge + device functions and a rich set of I/O mediators to share devices across + multiple VMs. The Service VM communicates directly with the system hardware + and devices, ensuring low-latency access. The hypervisor is booted directly by the + bootloader for fast and secure booting. +* **Built with Safety-Critical Virtualization in Mind**: Safety-critical workloads + can be isolated from the rest of the VMs and have priority to meet their + design needs. Partitioning of resources supports safety-critical and + non-safety-critical domains coexisting on one SoC using Intel VT-backed + isolation. +* **Adaptable and Flexible**: ACRN has multi-OS support with efficient + virtualization for VM OSs including Linux, Android, Zephyr, and Windows, as + needed for a variety of application use cases. ACRN scenario configurations + support shared, partitioned, and hybrid VM models. +* **Truly Open Source**: With its permissive BSD licensing and reference + implementation, ACRN offers scalable support with a significant up-front R&D + cost saving, code transparency, and collaborative software development with + industry leaders. + +Background +********** + +The ACRN architecture has evolved since its initial v0.1 release in July 2018. +Beginning with the v1.1 release, the ACRN architecture has the flexibility to +support VMs with shared HW resources, partitioned HW resources, and a hybrid +VM model that simultaneously supports shared and partitioned resources. It enables a +workload consolidation solution, taking multiple separate systems and running +them as heterogeneous workloads on a single compute platform, with hard and +soft real-time support. + +Workload management and orchestration are also enabled with ACRN, allowing +open-source orchestrators such as OpenStack to manage ACRN VMs. ACRN supports +secure container runtimes such as Kata Containers orchestrated via Docker or +Kubernetes. + + +High-Level Architecture +*********************** + +ACRN is a Type 1 hypervisor, meaning it runs directly on bare-metal +hardware. It implements a hybrid Virtual Machine Manager (VMM) architecture, +using a privileged Service VM that manages the I/O devices and provides I/O +mediation. Multiple User VMs are supported, each potentially running a +different OS. By running systems in separate VMs, you can isolate VMs +and their applications, reducing potential attack surfaces and minimizing +interference, but potentially introducing additional latency for applications.
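ACRN's design depends on Intel hardware virtualization features, described in the next paragraphs. Before deploying, it can be worth confirming that the target board actually exposes Intel VT-x and VT-d. The following is a generic Linux sanity check run from a shell on the target, not an ACRN-specific tool:

.. code-block:: bash

   # Count logical CPUs reporting the VMX (Intel VT-x) feature flag;
   # a result of 0 means VT-x is missing or disabled in firmware.
   grep -c -w vmx /proc/cpuinfo

   # An ACPI DMAR table indicates a VT-d DMA remapping unit is present.
   ls /sys/firmware/acpi/tables/DMAR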
+ +ACRN relies on Intel Virtualization Technology (Intel VT) and runs in Virtual +Machine Extension (VMX) root operation, host mode, or VMM mode. All the User VMs +and the Service VM run in VMX non-root operation, or guest mode. The Service VM runs with the system's highest virtual machine priority to ensure required device time-sensitive requirements and system quality of service (QoS). Service VM tasks run with mixed priority. Upon a callback servicing a particular User VM request, the corresponding software (or mediator) in the Service VM inherits the User VM priority. -There may also be additional low-priority background tasks within the -Service OS. -In the automotive example we described above, the User VM is the central -hub of vehicle control and in-vehicle entertainment. It provides support -for radio and entertainment options, control of the vehicle climate -control, and vehicle navigation displays. It also provides connectivity -options for using USB, Bluetooth, and Wi-Fi for third-party device -interaction with the vehicle, such as Android Auto\* or Apple CarPlay*, -and many other features. +As mentioned earlier, hardware resources used by VMs can be configured into +two parts, as shown in this hybrid VM sample configuration: + +.. figure:: images/ACRN-V2-high-level-arch.png + :width: 700px + :align: center + :name: V2-hl-arch + + ACRN High-Level Architecture Hybrid Example + +Shown on the left of :numref:`V2-hl-arch`, we've partitioned resources dedicated +to a User VM launched by the hypervisor and before the Service VM is started. +This pre-launched VM runs independently of other virtual machines and owns +dedicated hardware resources, such as a CPU core, memory, and I/O devices. Other +VMs may not even be aware of the pre-launched VM's existence. Because of this, +it can be used as a Safety VM that runs hardware failure detection code and can +take emergency actions when system critical failures occur. Failures in other +VMs or rebooting the Service VM will not directly impact execution of this +pre-launched Safety VM. + +Shown on the right of :numref:`V2-hl-arch`, the remaining hardware resources are +shared among the Service VM and User VMs. The Service VM is launched by the +hypervisor after any pre-launched VMs are launched. The Service VM can access +remaining hardware resources directly by running native drivers and provides +device sharing services to the User VMs, through the Device Model. These +post-launched User VMs can run one of many OSs including Ubuntu, Android, +Windows, or a real-time OS such as Zephyr, VxWorks, or Xenomai. Because of its +real-time capability, a real-time VM (RTVM) can be used for software +programmable logic controller (PLC), inter-process communication (IPC), or +Robotics applications. These shared User VMs could be impacted by a failure in +the Service VM since they may rely on its mediation services for device access. + +The Service VM owns most of the devices including the platform devices, and +provides I/O mediation. The notable exceptions are the devices assigned to the +pre-launched User VM. Some PCIe devices may be passed through to the +post-launched User OSes via the VM configuration. + +The ACRN hypervisor also runs the ACRN VM manager to collect running +information of the User VMs, and controls the User VMs such as starting, +stopping, and pausing a VM, and pausing or resuming a virtual CPU. + +See the :ref:`hld-overview` developer reference material for more in-depth +information. 
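To make the sharing-versus-passthrough split described above more concrete, here is a minimal sketch of the kind of ``acrn-dm`` invocation a Service VM launch script might contain. The memory size, slot numbers, disk image path, tap device, and PCI BDF are illustrative placeholders, not a verified configuration for any particular board; see :ref:`acrn-dm_parameters` for the authoritative option list.

.. code-block:: bash

   # Hypothetical launch sketch: the virtio devices are emulated by the
   # Device Model in the Service VM, while the passthru slot hands a PCIe
   # device (identified by its BDF) directly to this post-launched User VM.
   acrn-dm -m 2048M \
      -s 0:0,hostbridge \
      -s 3,virtio-blk,/home/acrn/uos.img \
      -s 4,virtio-net,tap0 \
      -s 5,passthru,03/00/0 \
      post_launched_vm1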
+ +ACRN Device Model Architecture +****************************** + +Because devices may need to be shared between VMs, device emulation is +used to give VM applications (and their OSs) access to these shared devices. +Traditionally there are three architectural approaches to device +emulation: + +* **Device emulation within the hypervisor**: a common method implemented within + the VMware workstation product (an operating system-based hypervisor). In + this method, the hypervisor includes emulations of common devices that the + various guest operating systems can share, including virtual disks, virtual + network adapters, and other necessary platform elements. + +* **User space device emulation**: rather than the device emulation embedded + within the hypervisor, it is implemented in a separate user space application. + QEMU, for example, provides this kind of device emulation also used by other + hypervisors. This model is advantageous, because the device emulation is + independent of the hypervisor and can therefore be shared for other + hypervisors. It also permits arbitrary device emulation without having to + burden the hypervisor (which operates in a privileged state) with this + functionality. + +* **Paravirtualized (PV) drivers**: a hypervisor-based device emulation model + introduced by the `XEN Project`_. In this model, the hypervisor includes the + physical device drivers, and each guest operating system includes a + hypervisor-aware driver that works in concert with the hypervisor drivers. + +.. _XEN Project: + https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum + +There's a price to pay for sharing devices. Whether device emulation is +performed in the hypervisor, or in user space within an independent VM, overhead +exists. This overhead is worthwhile as long as the devices need to be shared by +multiple guest operating systems. If sharing is not necessary, then there are +more efficient methods for accessing devices, for example, "passthrough." + +All emulation, para-virtualization, and passthrough are used in ACRN project. +ACRN defines a device emulation model where the Service VM owns all devices not +previously partitioned to pre-launched User VMs, and emulates these devices for +the User VM via the ACRN Device Model. The ACRN Device Model is thereby a +placeholder of the User VM. It allocates memory for the User VM OS, configures +and initializes the devices used by the User VM, loads the virtual firmware, +initializes the virtual CPU state, and invokes the ACRN hypervisor service to +execute the guest instructions. ACRN Device Model is an application running in +the Service VM that emulates devices based on command line configuration. + +See the :ref:`hld-devicemodel` developer reference for more information. + +Device Passthrough +****************** + +At the highest level, device passthrough is about providing isolation +of a device to a given guest operating system so that the device can be +used exclusively by that User VM. + +.. figure:: images/device-passthrough.png + :align: center + :name: device-passthrough + + Device Passthrough + +Near-native performance can be achieved by using device passthrough. This is +ideal for networking applications (or those with high disk I/O needs) that have +not adopted virtualization because of contention and performance degradation +through the hypervisor (using a driver in the hypervisor or through the +hypervisor to a user space emulation). 
Assigning devices to specific User VMs is +also useful when those devices inherently wouldn't be shared. For example, if a +system includes multiple video adapters, those adapters could be passed through +to unique User VM domains. + +Finally, there may be specialized PCI devices that only one User VM uses, +so they should be passed through to the User VM. Individual USB ports could be +isolated to a given domain too, or a serial port (which is itself not shareable) +could be isolated to a particular User VM. In the ACRN hypervisor, we support USB +controller passthrough only, and we don't support passthrough for a legacy +serial port (for example, ``0x3f8``). + +Hardware Support for Device Passthrough +======================================= + +Intel's processor architectures provide support for device passthrough with +Virtual Technology for Directed I/O (VT-d). VT-d maps User VM physical addresses to +machine physical addresses, so devices can use User VM physical addresses directly. +When this mapping occurs, the hardware takes care of access (and protection), +and the User VM OS can use the device as if it were a +non-virtualized system. In addition to mapping User VM to physical memory, +isolation prevents this device from accessing memory belonging to other VMs +or the hypervisor. + +Another innovation that helps interrupts scale to large numbers of VMs is called +Message Signaled Interrupts (MSI). Rather than relying on physical interrupt +pins to be associated with a User VM, MSI transforms interrupts into messages that +are more easily virtualized, scaling to thousands of individual interrupts. MSI +has been available since PCI version 2.2 and is also available in PCI Express +(PCIe). MSI is ideal for I/O virtualization, as it allows isolation of +interrupt sources (as opposed to physical pins that must be multiplexed or +routed through software). + +Hypervisor Support for Device Passthrough +========================================= + +By using the latest virtualization-enhanced processor architectures, hypervisors +and virtualization solutions can support device passthrough (using VT-d), +including Xen, KVM, and ACRN hypervisor. In most cases, the User VM OS +must be compiled to support passthrough by using kernel +build-time options. + +.. _static-configuration-scenarios: + +Static Configuration Based on Scenarios +*************************************** + +Scenarios are a way to describe the system configuration settings of the ACRN +hypervisor, VMs, and resources they have access to that meet your specific +application's needs such as compute, memory, storage, graphics, networking, and +other devices. Scenario configurations are stored in an XML format file and +edited using the ACRN configurator. + +Following a general embedded-system programming model, the ACRN hypervisor is +designed to be statically customized at build time per hardware and scenario, +rather than providing one binary for all scenarios. Dynamic configuration +parsing is not used in the ACRN hypervisor for these reasons: + +* **Reduce complexity**. ACRN is a lightweight reference hypervisor, built for + embedded IoT and Edge. As new platforms for embedded systems are rapidly introduced, + support for one binary could require more and more complexity in the + hypervisor, which is something we strive to avoid. +* **Maintain small footprint**. Implementing dynamic parsing introduces hundreds or + thousands of lines of code. 
Avoiding dynamic parsing helps keep the + hypervisor's Lines of Code (LOC) in a desirable range (less than 40K). +* **Improve boot time**. Dynamic parsing at runtime increases the boot time. Using a + static build-time configuration and not dynamic parsing helps improve the boot + time of the hypervisor. + +The scenario XML file, together with a target board XML file, is used to build +the ACRN hypervisor image tailored to your hardware and application needs. The ACRN +project provides a board inspector tool to automatically create the board XML +file by inspecting the target hardware. ACRN also provides a +:ref:`configurator tool ` +to create and edit a tailored scenario XML file based on predefined sample +scenario configurations. + +.. _usage-scenarios: + +Predefined Sample Scenarios +*************************** + +Project ACRN provides some predefined sample scenarios to illustrate how you +can define your own configuration scenarios. + + +* **Industry** is a traditional computing, memory, and device resource sharing + model among VMs. The ACRN hypervisor launches the Service VM. The Service VM + then launches any post-launched User VMs and provides device and resource + sharing mediation through the Device Model. The Service VM runs the native + device drivers to access the hardware and provides I/O mediation to the User + VMs. + + .. figure:: images/ACRN-industry-example.png + :width: 700px + :align: center + :name: arch-shared-example + + ACRN High-Level Architecture Industry (Shared) Example + + Virtualization is especially important in industrial environments because of + device and application longevity. Virtualization enables factories to + modernize their control system hardware by using VMs to run older control + systems and operating systems far beyond their intended retirement dates. + + The ACRN hypervisor needs to run different workloads with little-to-no + interference, increase security functions that safeguard the system, run hard + real-time sensitive workloads together with general computing workloads, and + conduct data analytics for timely actions and predictive maintenance. + + In this example, one post-launched User VM provides Human Machine Interface + (HMI) capability, another provides Artificial Intelligence (AI) capability, + some compute function is run in the Kata Container, and the RTVM runs the soft + Programmable Logic Controller (PLC) that requires hard real-time + characteristics. + + - The Service VM provides device sharing functionalities, such as disk and + network mediation, to other virtual machines. It can also run an + orchestration agent allowing User VM orchestration with tools such as + Kubernetes. + - The HMI Application OS can be Windows* or Linux*. Windows is dominant in + Industrial HMI environments. + - ACRN can support a soft real-time OS such as preempt-rt Linux for soft-PLC + control, or a hard real-time OS that offers less jitter. + +* **Partitioned** is a VM resource partitioning model used when a User VM requires + independence and isolation from other VMs. A partitioned VM's resources are + statically configured and are not shared with other VMs. Partitioned User VMs + can be Real-Time VMs, Safety VMs, or standard VMs and are launched at boot + time by the hypervisor. There is no need for the Service VM or Device Model + since all partitioned VMs run native device drivers and directly access their + configured resources. + + .. 
figure:: images/ACRN-partitioned-example.png + :width: 700px + :align: center + :name: arch-partitioned-example + + ACRN High-Level Architecture Partitioned Example + + This scenario is a simplified configuration showing VM partitioning: both + User VMs are independent and isolated, they do not share resources, and both + are automatically launched at boot time by the hypervisor. The User VMs can + be Real-Time VMs (RTVMs), Safety VMs, or standard User VMs. + +* **Hybrid** scenario simultaneously supports both sharing and partitioning on + the consolidated system. The pre-launched (partitioned) User VMs, with their + statically configured and unshared resources, are started by the hypervisor. + The hypervisor then launches the Service VM. The post-launched (shared) User + VMs are started by the Device Model in the Service VM and share the remaining + resources. + + .. figure:: images/ACRN-hybrid-rt-example.png + :width: 700px + :align: center + :name: arch-hybrid-rt-example + + ACRN High-Level Architecture Hybrid-RT Example + + In this Hybrid real-time (RT) scenario, a pre-launched RTVM is started by the + hypervisor. The Service VM runs a post-launched User VM that runs non-safety or + non-real-time tasks. + +You can find the predefined scenario XML files in the +:acrn_file:`misc/config_tools/data` folder in the hypervisor source code. The +:ref:`acrn_configuration_tool` tutorial explains how to use the ACRN +configurator to create your own scenario, or to view and modify an existing one. Boot Sequence ************* @@ -420,448 +431,36 @@ The Boot process proceeds as follows: the ACRN Device Model and Virtual bootloader through ``dm-verity``. #. The virtual bootloader starts the User-side verified boot process. -In this boot mode, the boot options of pre-launched VM and service VM are defined +In this boot mode, the boot options of a pre-launched VM and the Service VM are defined in the variable of ``bootargs`` of struct ``vm_configs[vm id].os_config`` in the source code ``configs/scenarios/$(SCENARIO)/vm_configurations.c`` (which resides under the hypervisor build directory) by default. -Their boot options can be overridden by the GRUB menu. See :ref:`using_grub` for +These boot options can be overridden by the GRUB menu. See :ref:`using_grub` for details. The boot options of a post-launched VM are not covered by hypervisor -source code or a GRUB menu; they are defined in a guest image file or specified by +source code or a GRUB menu; they are defined in the User VM's OS image file or specified by launch scripts. -.. note:: +`Slim Bootloader`_ is an alternative boot firmware that can be used to +boot ACRN. The `Boot ACRN Hypervisor +`_ tutorial +provides more information on how to use SBL with ACRN. - `Slim Bootloader`_ is an alternative boot firmware that can be used to - boot ACRN. The `Boot ACRN Hypervisor - `_ tutorial - provides more information on how to use SBL with ACRN. +Learn More +********** +The ACRN documentation offers more details of topics found in this introduction +about the ACRN hypervisor architecture, Device Model, Service VM, and more. -ACRN Hypervisor Architecture -**************************** +These documents provide introductory information about development with ACRN: -ACRN hypervisor is a Type 1 hypervisor, running directly on bare-metal -hardware. It implements a hybrid VMM architecture, using a privileged -service VM, running the Service VM that manages the I/O devices and -provides I/O mediation. 
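As a concrete companion to the Boot Sequence section above, the following is a minimal sketch of a GRUB menu entry that loads the hypervisor image and the Service VM kernel as multiboot2 modules. The file paths and filesystem modules shown are placeholders, and the tag after the kernel path is assumed to match the kernel image tag expected by your scenario configuration; see :ref:`using_grub` for verified entries for your platform.

.. code-block:: none

   # Hypothetical GRUB entry: acrn.bin is the hypervisor image from the boot
   # flow above, and the Service VM kernel is handed to it as a module.
   menuentry "ACRN hypervisor" {
       insmod gzio
       insmod part_gpt
       insmod ext2
       multiboot2 /boot/acrn.bin
       module2 /boot/bzImage Linux_bzImage
   }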
Multiple User VMs are supported, with each of -them running different OSs. +* :ref:`overview_dev` +* :ref:`gsg` +* :ref:`acrn_configuration_tool` -Running systems in separate VMs provides isolation between other VMs and -their applications, reducing potential attack surfaces and minimizing -safety interference. However, running the systems in separate VMs may -introduce additional latency for applications. +These documents provide more details and in-depth discussions of the ACRN +hypervisor architecture and high-level design, and a collection of advanced +guides and tutorials: -:numref:`V2-hl-arch` shows the ACRN hypervisor architecture, with -all types of Virtual Machines (VMs) represented: +* :ref:`hld` +* :ref:`develop_acrn` -- Pre-launched User VM (Safety/RTVM) -- Pre-launched Service VM -- Post-launched User VM -- Kata Container VM (post-launched) -- real-time VM (RTVM) - -The Service VM owns most of the devices including the platform devices, and -provides I/O mediation. The notable exceptions are the devices assigned to the -pre-launched User VM. Some PCIe devices may be passed through -to the post-launched User OSes via the VM configuration. The Service VM runs -hypervisor-specific applications together, such as the ACRN device model, and -ACRN VM manager. - -ACRN hypervisor also runs the ACRN VM manager to collect running -information of the User OS, and controls the User VM such as starting, -stopping, and pausing a VM, pausing or resuming a virtual CPU. - -.. figure:: images/architecture.png - :width: 600px - :align: center - :name: ACRN-architecture - - ACRN Hypervisor Architecture - -ACRN hypervisor takes advantage of Intel Virtualization Technology -(Intel VT), and ACRN hypervisor runs in Virtual Machine Extension (VMX) -root operation, or host mode, or VMM mode. All the guests, including -User VM and Service VM, run in VMX non-root operation, or guest mode. (Hereafter, -we use the terms VMM mode and Guest mode for simplicity). - -The VMM mode has 4 protection rings, but runs the ACRN hypervisor in -ring 0 privilege only, leaving rings 1-3 unused. The guest (including -Service VM and User VM), running in Guest mode, also has its own four protection -rings (ring 0 to 3). The User kernel runs in ring 0 of guest mode, and -user land applications run in ring 3 of User mode (ring 1 & 2 are -usually not used by commercial OSes). - -.. figure:: images/VMX-brief.png - :align: center - :name: VMX-brief - - VMX Brief - -As shown in :numref:`VMX-brief`, VMM mode and guest mode are switched -through VM Exit and VM Entry. When the bootloader hands off control to -the ACRN hypervisor, the processor hasn't enabled VMX operation yet. The -ACRN hypervisor needs to enable VMX operation through a VMXON instruction -first. Initially, the processor stays in VMM mode when the VMX operation -is enabled. It enters guest mode through a VM resume instruction (or -first-time VM launch), and returns to VMM mode through a VM exit event. VM -exit occurs in response to certain instructions and events. - -The behavior of processor execution in guest mode is controlled by a -virtual machine control structure (VMCS). VMCS contains the guest state -(loaded at VM Entry, and saved at VM Exit), the host state, (loaded at -the time of VM exit), and the guest execution controls. ACRN hypervisor -creates a VMCS data structure for each virtual CPU, and uses the VMCS to -configure the behavior of the processor running in guest mode. 
- -When the execution of the guest hits a sensitive instruction, a VM exit -event may happen as defined in the VMCS configuration. Control goes back -to the ACRN hypervisor when the VM exit happens. The ACRN hypervisor -emulates the guest instruction (if the exit was due to privilege issue) -and resumes the guest to its next instruction, or fixes the VM exit -reason (for example if a guest memory page is not mapped yet) and resume -the guest to re-execute the instruction. - -Note that the address space used in VMM mode is different from that in -guest mode. The guest mode and VMM mode use different memory-mapping -tables, and therefore the ACRN hypervisor is protected from guest -access. The ACRN hypervisor uses EPT to map the guest address, using the -guest page table to map from guest linear address to guest physical -address, and using the EPT table to map from guest physical address to -machine physical address or host physical address (HPA). - -ACRN Device Model Architecture -****************************** - -Because devices may need to be shared between VMs, device emulation is -used to give VM applications (and OSes) access to these shared devices. -Traditionally there are three architectural approaches to device -emulation: - -* The first architecture is **device emulation within the hypervisor**, which - is a common method implemented within the VMware\* workstation product - (an operating system-based hypervisor). In this method, the hypervisor - includes emulations of common devices that the various guest operating - systems can share, including virtual disks, virtual network adapters, - and other necessary platform elements. - -* The second architecture is called **user space device emulation**. As the - name implies, rather than the device emulation being embedded within - the hypervisor, it is instead implemented in a separate user space - application. QEMU, for example, provides this kind of device emulation - also used by many independent hypervisors. This model is - advantageous, because the device emulation is independent of the - hypervisor and can therefore be shared for other hypervisors. It also - permits arbitrary device emulation without having to burden the - hypervisor (which operates in a privileged state) with this - functionality. - -* The third variation on hypervisor-based device emulation is - **paravirtualized (PV) drivers**. In this model introduced by the `XEN - Project`_, the hypervisor includes the physical drivers, and each guest - operating system includes a hypervisor-aware driver that works in - concert with the hypervisor drivers. - -.. _XEN Project: - https://wiki.xenproject.org/wiki/Understanding_the_Virtualization_Spectrum - -In the device emulation models discussed above, there's a price to pay -for sharing devices. Whether device emulation is performed in the -hypervisor, or in user space within an independent VM, overhead exists. -This overhead is worthwhile as long as the devices need to be shared by -multiple guest operating systems. If sharing is not necessary, then -there are more efficient methods for accessing devices, for example -"passthrough". - -ACRN device model is a placeholder of the User VM. It allocates memory for -the User OS, configures and initializes the devices used by the User VM, -loads the virtual firmware, initializes the virtual CPU state, and -invokes the ACRN hypervisor service to execute the guest instructions. 
-ACRN Device model is an application running in the Service VM that -emulates devices based on command line configuration, as shown in -the architecture diagram :numref:`device-model` below: - -.. figure:: images/device-model.png - :align: center - :name: device-model - - ACRN Device Model - -ACRN Device model incorporates these three aspects: - -**Device Emulation**: - ACRN Device model provides device emulation routines that register - their I/O handlers to the I/O dispatcher. When there is an I/O request - from the User VM device, the I/O dispatcher sends this request to the - corresponding device emulation routine. - -**I/O Path**: - see `ACRN-io-mediator`_ below - -**HSM**: - The Hypervisor Service Module is a kernel module in the - Service VM acting as a middle layer to support the device model. The HSM - client handling flow is described below: - - #. ACRN hypervisor IOREQ is forwarded to the HSM by an upcall - notification to the Service VM. - #. HSM will mark the IOREQ as "in process" so that the same IOREQ will - not pick up again. The IOREQ will be sent to the client for handling. - Meanwhile, the HSM is ready for another IOREQ. - #. IOREQ clients are either a Service VM Userland application or a Service VM - Kernel space module. Once the IOREQ is processed and completed, the - Client will issue an IOCTL call to the HSM to notify an IOREQ state - change. The HSM then checks and hypercalls to ACRN hypervisor - notifying it that the IOREQ has completed. - -.. note:: - * Userland: dm as ACRN Device Model. - * Kernel space: VBS-K, MPT Service, HSM itself - -.. _pass-through: - -Device Passthrough -****************** - -At the highest level, device passthrough is about providing isolation -of a device to a given guest operating system so that the device can be -used exclusively by that guest. - -.. figure:: images/device-passthrough.png - :align: center - :name: device-passthrough - - Device Passthrough - -Near-native performance can be achieved by using device passthrough. -This is ideal for networking applications (or those with high disk I/O -needs) that have not adopted virtualization because of contention and -performance degradation through the hypervisor (using a driver in the -hypervisor or through the hypervisor to a user space emulation). -Assigning devices to specific guests is also useful when those devices -inherently wouldn't be shared. For example, if a system includes -multiple video adapters, those adapters could be passed through to -unique guest domains. - -Finally, there may be specialized PCI devices that only one guest domain -uses, so they should be passed through to the guest. Individual USB -ports could be isolated to a given domain too, or a serial port (which -is itself not shareable) could be isolated to a particular guest. In -ACRN hypervisor, we support USB controller passthrough only, and we -don't support passthrough for a legacy serial port, (for example -0x3f8). - - -Hardware Support for Device Passthrough -======================================= - -Intel's current processor architectures provides support for device -passthrough with VT-d. VT-d maps guest physical address to machine -physical address, so device can use guest physical address directly. -When this mapping occurs, the hardware takes care of access (and -protection), and the guest operating system can use the device as if it -were a non-virtualized system. 
In addition to mapping guest to physical -memory, isolation prevents this device from accessing memory belonging -to other guests or the hypervisor. - -Another innovation that helps interrupts scale to large numbers of VMs -is called Message Signaled Interrupts (MSI). Rather than relying on -physical interrupt pins to be associated with a guest, MSI transforms -interrupts into messages that are more easily virtualized (scaling to -thousands of individual interrupts). MSI has been available since PCI -version 2.2 but is also available in PCI Express (PCIe), where it allows -fabrics to scale to many devices. MSI is ideal for I/O virtualization, -as it allows isolation of interrupt sources (as opposed to physical pins -that must be multiplexed or routed through software). - -Hypervisor Support for Device Passthrough -========================================= - -By using the latest virtualization-enhanced processor architectures, -hypervisors and virtualization solutions can support device -passthrough (using VT-d), including Xen, KVM, and ACRN hypervisor. -In most cases, the guest operating system (User -OS) must be compiled to support passthrough, by using -kernel build-time options. Hiding the devices from the host VM may also -be required (as is done with Xen using pciback). Some restrictions apply -in PCI, for example, PCI devices behind a PCIe-to-PCI bridge must be -assigned to the same guest OS. PCIe does not have this restriction. - -.. _ACRN-io-mediator: - -ACRN I/O Mediator -***************** - -:numref:`io-emulation-path` shows the flow of an example I/O emulation path. - -.. figure:: images/io-emulation-path.png - :align: center - :name: io-emulation-path - - I/O Emulation Path - -Following along with the numbered items in :numref:`io-emulation-path`: - -1. When a guest executes an I/O instruction (PIO or MMIO), a VM exit happens. - ACRN hypervisor takes control, and analyzes the VM - exit reason, which is a VMX_EXIT_REASON_IO_INSTRUCTION for PIO access. -2. ACRN hypervisor fetches and analyzes the guest instruction, and - notices it is a PIO instruction (``in AL, 20h`` in this example), and put - the decoded information (including the PIO address, size of access, - read/write, and target register) into the shared page, and - notify/interrupt the Service VM to process. -3. The hypervisor service module (HSM) in Service VM receives the - interrupt, and queries the IO request ring to get the PIO instruction - details. -4. It checks to see if any kernel device claims - ownership of the IO port: if a kernel module claimed it, the kernel - module is activated to execute its processing APIs. Otherwise, the HSM - module leaves the IO request in the shared page and wakes up the - device model thread to process. -5. The ACRN device model follows the same mechanism as the HSM. The I/O - processing thread of device model queries the IO request ring to get the - PIO instruction details and checks to see if any (guest) device emulation - module claims ownership of the IO port: if a module claimed it, - the module is invoked to execute its processing APIs. -6. After the ACRN device module completes the emulation (port IO 20h access - in this example), (say uDev1 here), uDev1 puts the result into the - shared page (in register AL in this example). -7. ACRN device model then returns control to ACRN hypervisor to indicate the - completion of an IO instruction emulation, typically through HSM/hypercall. -8. 
The ACRN hypervisor then knows IO emulation is complete, and copies - the result to the guest register context. -9. The ACRN hypervisor finally advances the guest IP to - indicate completion of instruction execution, and resumes the guest. - -The MMIO path is very similar, except the VM exit reason is different. MMIO -access is usually trapped through a VMX_EXIT_REASON_EPT_VIOLATION in -the hypervisor. - -Virtio Framework Architecture -***************************** - -.. _Virtio spec: - http://docs.oasis-open.org/virtio/virtio/v1.0/virtio-v1.0.html - -Virtio is an abstraction for a set of common emulated devices in any -type of hypervisor. In the ACRN reference stack, our -implementation is compatible with `Virtio spec`_ 0.9 and 1.0. By -following this spec, virtual environments and guests -should have a straightforward, efficient, standard and extensible -mechanism for virtual devices, rather than boutique per-environment or -per-OS mechanisms. - -Virtio provides a common frontend driver framework that not only -standardizes device interfaces, but also increases code reuse across -different virtualization platforms. - -.. figure:: images/virtio-architecture.png - :width: 500px - :align: center - :name: virtio-architecture - - Virtio Architecture - -To better understand Virtio, especially its usage in -the ACRN project, several key concepts of Virtio are highlighted -here: - -**Front-End Virtio driver** (a.k.a. frontend driver, or FE driver in this document) - Virtio adopts a frontend-backend architecture, which enables a simple - but flexible framework for both frontend and backend Virtio driver. The - FE driver provides APIs to configure the interface, pass messages, produce - requests, and notify backend Virtio driver. As a result, the FE driver - is easy to implement and the performance overhead of emulating device is - eliminated. - -**Back-End Virtio driver** (a.k.a. backend driver, or BE driver in this document) - Similar to FE driver, the BE driver, runs either in user-land or - kernel-land of host OS. The BE driver consumes requests from FE driver - and send them to the host's native device driver. Once the requests are - done by the host native device driver, the BE driver notifies the FE - driver about the completeness of the requests. - -**Straightforward**: Virtio devices as standard devices on existing Buses - Instead of creating new device buses from scratch, Virtio devices are - built on existing buses. This gives a straightforward way for both FE - and BE drivers to interact with each other. For example, FE driver could - read/write registers of the device, and the virtual device could - interrupt FE driver, on behalf of the BE driver, in case of something is - happening. Currently, Virtio supports PCI/PCIe bus and MMIO bus. In - ACRN project, only PCI/PCIe bus is supported, and all the Virtio devices - share the same vendor ID 0x1AF4. - -**Efficient**: batching operation is encouraged - Batching operation and deferred notification are important to achieve - high-performance I/O, since notification between FE and BE driver - usually involves an expensive exit of the guest. Therefore, batching - operating and notification suppression are highly encouraged if - possible. This will give an efficient implementation for performance - critical devices. - -**Standard: virtqueue** - All the Virtio devices share a standard ring buffer and descriptor - mechanism, called a virtqueue, shown in Figure 6. A virtqueue - is a queue of scatter-gather buffers. 
There are three important - methods on virtqueues: - - * ``add_buf`` is for adding a request/response buffer in a virtqueue - * ``get_buf`` is for getting a response/request in a virtqueue, and - * ``kick`` is for notifying the other side for a virtqueue to - consume buffers. - - The virtqueues are created in guest physical memory by the FE drivers. - The BE drivers only need to parse the virtqueue structures to obtain - the requests and get the requests done. Virtqueue organization is - specific to the User OS. In the implementation of Virtio in Linux, the - virtqueue is implemented as a ring buffer structure called - ``vring``. - - In ACRN, the virtqueue APIs can be leveraged - directly so users don't need to worry about the details of the - virtqueue. Refer to the User VM for - more details about the virtqueue implementations. - -**Extensible: feature bits** - A simple extensible feature negotiation mechanism exists for each virtual - device and its driver. Each virtual device could claim its - device-specific features while the corresponding driver could respond to - the device with the subset of features the driver understands. The - feature mechanism enables forward and backward compatibility for the - virtual device and driver. - -In the ACRN reference stack, we implement user-land and kernel -space as shown in :numref:`virtio-framework-userland`: - -.. figure:: images/virtio-framework-userland.png - :width: 600px - :align: center - :name: virtio-framework-userland - - Virtio Framework - User Land - -In the Virtio user-land framework, the implementation is compatible with -Virtio Spec 0.9/1.0. The VBS-U is statically linked with the Device Model, -and communicates with the Device Model through the PCIe interface: PIO/MMIO -or MSI/MSI-X. VBS-U accesses Virtio APIs through the user space ``vring`` service -API helpers. User space ``vring`` service API helpers access shared ring -through a remote memory map (mmap). HSM maps User VM memory with the help of -ACRN Hypervisor. - -.. figure:: images/virtio-framework-kernel.png - :width: 600px - :align: center - :name: virtio-framework-kernel - - Virtio Framework - Kernel Space - -VBS-U offloads data plane processing to VBS-K. VBS-U initializes VBS-K -at the right timings, for example. The FE driver sets -VIRTIO_CONFIG_S_DRIVER_OK to avoid unnecessary device configuration -changes while running. VBS-K can access shared rings through the VBS-K -virtqueue APIs. VBS-K virtqueue APIs are similar to VBS-U virtqueue -APIs. VBS-K registers as a HSM client to handle a continuous range of -registers. - -There may be one or more HSM-clients for each VBS-K, and there can be a -single HSM-client for all VBS-Ks as well. VBS-K notifies FE through HSM -interrupt APIs. diff --git a/doc/nocl.rst b/doc/nocl.rst deleted file mode 100644 index a4f4bad2f..000000000 --- a/doc/nocl.rst +++ /dev/null @@ -1,28 +0,0 @@ -:orphan: - -.. _nocl: - -.. comment This page is a common place holder for references to /latest/ - documentation that was removed from the 2.2 release but there are - lingering references to these docs out in the wild and in the Google - index. Give the reader a reference to the /2.1/ document instead. - -This Document Was Removed -######################### - -.. raw:: html - - - - -In ACRN v2.2, deprivileged boot mode is no longer the default and will -be completely removed in ACRN v2.3. We're focusing instead on using -multiboot2 boot (via Grub). 
Multiboot2 is not supported in Clear Linux -though, so we're removing Clear Linux as the Service VM of choice and -with that, tutorial documentation about Clear Linux. diff --git a/doc/reference/config-options-launch.rst b/doc/reference/config-options-launch.rst new file mode 100644 index 000000000..52cc43529 --- /dev/null +++ b/doc/reference/config-options-launch.rst @@ -0,0 +1,100 @@ +.. _launch-config-options: + +Launch Configuration Options +############################## + +As explained in :ref:`acrn_configuration_tool`, launch configuration files +define post-launched User VM settings. This document describes these option settings. + +``uos``: + Specify the User VM ``id`` to the Service VM. + +``uos_type``: + Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``, + ``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``YOCTO``, ``UBUNTU``, + ``ZEPHYR`` or ``VXWORKS``. + +``rtos_type``: + Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them. + +``mem_size``: + Specify the User VM memory size in megabytes. + +``gvt_args``: + GVT arguments for the VM. Set it to ``gvtd`` for GVT-d. Leave it blank + to disable the GVT. + +``vbootloader``: + Virtual bootloader type; currently only supports OVMF. + +``vuart0``: + Specify whether the device model emulates the vUART0 (vCOM1); refer to + :ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is + emulated by the device model; if set to ``Disable``, the vUART0 is + emulated by the hypervisor if it is configured in the scenario XML. + +``poweroff_channel``: + Specify whether the User VM power off channel is through the IOC, + power button, or vUART. + +``allow_trigger_s5``: + Allow the VM to trigger S5 shutdown flow. This flag works with + ``poweroff_channel`` + ``vuart1(pty)`` and ``vuart1(tty)`` only. + +``enable_ptm``: + Enable the Precision Timing Measurement (PTM) feature. + +``usb_xhci``: + USB xHCI mediator configuration. Input format: + ``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``. + Refer to :ref:`usb_virtualization` for details. + +``shm_regions``: + List of shared memory regions for inter-VM communication. + +``shm_region`` (a child node of ``shm_regions``): + Configure the shared memory regions for the current VM, input format: + ``[hv|dm]:/,``. Refer to :ref:`ivshmem-hld` + for details. + +``console_vuart``: + Enable a PCI-based console vUART. Refer to :ref:`vuart_config` for details. + +``communication_vuarts``: + List of PCI-based communication vUARTs. Refer to :ref:`vuart_config` for + details. + +``communication_vuart`` (a child node of ``communication_vuarts``): + Enable a PCI-based communication vUART with its ID. Refer to + :ref:`vuart_config` for details. + +``passthrough_devices``: + Select the passthrough device from the PCI device list. Currently we support: + ``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``, + ``cse``, ``wifi``, ``bluetooth``, ``sd_card``, + ``ethernet``, ``sata``, and ``nvme``. + +``network`` (a child node of ``virtio_devices``): + The virtio network device setting. + Input format: ``[tap_name|macvtap_name],[vhost],[mac=XX:XX:XX:XX:XX:XX]``. + +``block`` (a child node of ``virtio_devices``): + The virtio block device setting. + Input format: ``[blk partition:][img path]`` e.g.: ``/dev/sda3:./a/b.img``. + +``console`` (a child node of ``virtio_devices``): + The virtio console device setting. + Input format: + ``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``. 
+ +``cpu_affinity``: + List of pCPU that this VM's vCPUs are pinned to. + +.. note:: + + The ``configurable`` and ``readonly`` attributes are used to mark + whether the item is configurable for users. When ``configurable="n"`` + and ``readonly="y"``, the item is not configurable from the web + interface. When ``configurable="n"``, the item does not appear on the + interface. diff --git a/doc/reference/hardware.rst b/doc/reference/hardware.rst index 18d8e9baa..3ee4331c8 100644 --- a/doc/reference/hardware.rst +++ b/doc/reference/hardware.rst @@ -38,6 +38,7 @@ ACRN assumes the following conditions are satisfied from the Platform BIOS: * There should be no conflict in resources among the PCI devices or with other platform devices. +.. _hardware_tested: Tested Platforms by ACRN Release ******************************** @@ -107,7 +108,7 @@ For general instructions setting up ACRN on supported hardware platforms, visit If an XML file is not provided by project ACRN for your board, we recommend you use the board inspector tool to generate an XML file specifically for your board. -Refer to the :ref:`acrn_configuration_tool` for more details on using the board inspector +Refer to :ref:`board_inspector_tool` for more details on using the board inspector tool. diff --git a/doc/reference/hv-make-options.rst b/doc/reference/hv-make-options.rst new file mode 100644 index 000000000..df9f6d823 --- /dev/null +++ b/doc/reference/hv-make-options.rst @@ -0,0 +1,233 @@ +.. _hypervisor-make-options: + +Hypervisor Makefile Options +########################### + +The ACRN hypervisor source code provides a ``Makefile`` to build the ACRN +hypervisor binary and associated components. + +Assuming that you are at the top level of the ``acrn-hypervisor`` directory, +you can run the ``make`` command to start the build. See +:ref:`acrn_configuration_tool` for information about required input files. + +Build Options and Targets +************************** + +The following table shows ACRN-specific command-line options: + +.. list-table:: + :widths: 33 77 + :header-rows: 1 + + * - Option + - Description + + * - ``BOARD`` + - Required. Path to the board configuration file. + + * - ``SCENARIO`` + - Required. Path to the scenario configuration file. + + * - ``RELEASE`` + - Optional. Build a release version or a debug version. Valid values + are ``y`` for release version or ``n`` for debug version. (Default + is ``n``.) + + * - ``ASL_COMPILER`` + - Optional. Use an ``iasl`` compiler that is not in the default path + (``/usr/sbin``). + + * - ``O`` + - Optional. Path to the directory where the built files will be stored. + (Default is the ``build`` directory.) + +The following table shows ACRN-specific targets. The default target (if no target is specified on the command-line) is to build the ``hypervisor``, ``devicemodel``, and ``tools``. + +.. list-table:: + :widths: 33 77 + :header-rows: 1 + + * - Makefile Target + - Description + + * - ``hypervisor`` + - Optional. Build the hypervisor. + + * - ``devicemodel`` + - Optional. Build the Device Model. The ``tools`` will also be built as + a dependency. + + * - ``tools`` + - Optional. Build the tools. + + * - ``doc`` + - Optional. Build the project's HTML documentation (using Sphinx), output + to the ``build/doc`` folder. + + * - ``life_mngr`` + - Optional. Build the Lifecycle Manager daemon that runs in the User VM + to manage power state transitions (S5). + + * - ``targz-pkg`` + - Optional. 
Create a compressed tarball (``acrn-$(FULL_VERSION).tar.gz``) + in the build folder (default: ``build``) with all the build artifacts. + +Example of a command to build the debug version: + +.. code-block:: none + + make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml + +Example of a command to build the release version: + +.. code-block:: none + + make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml RELEASE=y + +Example of a command to build the release version (hypervisor only): + +.. code-block:: none + + make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml RELEASE=y hypervisor + +Example of a command to build the release version of the Device Model and tools: + +.. code-block:: none + + make RELEASE=y devicemodel tools + +Example of a command to put the built files in the specified directory +(``build-nuc``): + +.. code-block:: none + + make O=build-nuc BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml + +Example of a command that specifies ``iasl`` compiler: + +.. code-block:: none + + make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml ASL_COMPILER=/usr/local/bin/iasl + +ACRN uses XML files to summarize board characteristics and scenario settings. +The ``BOARD`` and ``SCENARIO`` variables accept board/scenario names as well +as paths to XML files. When board/scenario names are given, the build system +searches for XML files with the same names under ``misc/config_tools/data/``. +When paths (absolute or relative) to the XML files are given, the build system +uses the files pointed at. If relative paths are used, they are considered +relative to the current working directory. + +.. _acrn_makefile_targets: + +Makefile Targets for Configuration +*********************************** + +ACRN source also includes the following makefile targets to aid customization. + +.. list-table:: + :widths: 33 77 + :header-rows: 1 + + * - Target + - Description + + * - ``hvdefconfig`` + - Generate configuration files (a bunch of C source files) in the build + directory without building the hypervisor. This target can be used when + you want to customize the configurations based on a predefined scenario. + + * - ``hvshowconfig`` + - Print the target ``BOARD``, ``SCENARIO`` and build type (debug or + release) of a build. + + * - ``hvdiffconfig`` + - After modifying the generated configuration files, you can use this + target to generate a patch that shows the differences made. + + * - ``hvapplydiffconfig PATCH=/path/to/patch`` + - Register a patch to be applied on the generated configuration files + every time they are regenerated. The ``PATCH`` variable specifies the + path (absolute or relative to current working directory) of the patch. + Multiple patches can be registered by invoking this target multiple + times. + +Example of ``hvshowconfig`` to query the board, scenario, and build +type of an existing build: + +.. code-block:: none + + $ make BOARD=tgl-rvp SCENARIO=hybrid_rt hypervisor + ... + $ make hvshowconfig + Build directory: /path/to/acrn-hypervisor/build/hypervisor + This build directory is configured with the settings below. + - BOARD = tgl-rvp + - SCENARIO = hybrid_rt + - RELEASE = n + +Example of ``hvdefconfig`` to generate the configuration files in the +build directory, followed by an example of editing one of the configuration +files manually (``scenario.xml``) and then building the hypervisor: + +.. 
code-block:: none + + make BOARD=nuc7i7dnb SCENARIO=industry hvdefconfig + vim build/hypervisor/.scenario.xml + #(Modify the XML file per your needs) + make + +A hypervisor build remembers the board and scenario previously configured. +Thus, there is no need to duplicate ``BOARD`` and ``SCENARIO`` in the second +``make`` above. + +While the scenario configuration files can be changed manually, we recommend +you use the :ref:`ACRN configurator tool `, which +provides valid options and descriptions of the configuration entries. + +The targets ``hvdiffconfig`` and ``hvapplydiffconfig`` are provided for users +who already have offline patches to the generated configuration files. Prior to +v2.4, the generated configuration files are also in the repository. Some users +may already have chosen to modify these files directly to customize the +configurations. + +.. note:: + + We highly recommend new users save and maintain customized configurations in + XML, not in patches to generated configuration files. + +Example of how to use ``hvdiffconfig`` to generate a patch and save +it to ``config.patch``: + +.. code-block:: console + + acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig + ... + acrn-hypervisor$ vim build/hypervisor/configs/scenarios/hybrid_rt/pci_dev.c + (edit the file manually) + acrn-hypervisor$ make hvdiffconfig + ... + Diff on generated configuration files is available at /path/to/acrn-hypervisor/build/hypervisor/config.patch. + To make a patch effective, use 'hvapplydiffconfig PATCH=/path/to/patch' to + register it to a build. + ... + acrn-hypervisor$ cp build/hypervisor/config.patch config.patch + +Example of how to use ``hvapplydiffconfig`` to apply +``config.patch`` to a new build: + +.. code-block:: console + + acrn-hypervisor$ make clean + acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig + ... + acrn-hypervisor$ make hvapplydiffconfig PATCH=config.patch + ... + /path/to/acrn-hypervisor/config.patch is registered for build directory /path/to/acrn-hypervisor/build/hypervisor. + Registered patches will be applied the next time 'make' is invoked. + To unregister a patch, remove it from /path/to/acrn-hypervisor/build/hypervisor/configs/.diffconfig. + ... + acrn-hypervisor$ make hypervisor + ... + Applying patch /path/to/acrn-hypervisor/config.patch: + patching file scenarios/hybrid_rt/pci_dev.c + ... diff --git a/doc/release_notes/release_notes_0.1.rst b/doc/release_notes/release_notes_0.1.rst index a908239b0..ac06c389b 100644 --- a/doc/release_notes/release_notes_0.1.rst +++ b/doc/release_notes/release_notes_0.1.rst @@ -33,7 +33,7 @@ Virtual Graphics support added: and User OS can run GPU workload simultaneously. - Direct display supported by ACRNGT. Service OS and User OS are each assigned to different display. The display ports supports eDP and HDMI. -- See :ref:`APL_GVT-G-hld` documentation for more information. +- See the HLD GVT-g documentation for more information. Virtio Standard Is Supported ============================ diff --git a/doc/release_notes/release_notes_1.0.rst b/doc/release_notes/release_notes_1.0.rst index 6833935ac..f64c21c62 100644 --- a/doc/release_notes/release_notes_1.0.rst +++ b/doc/release_notes/release_notes_1.0.rst @@ -150,8 +150,8 @@ GVT-g for ACRN GVT-g for ACRN (a.k.a AcrnGT) is a feature to enable GPU sharing Service OS and User OS, so both can run GPU workload simultaneously. Direct display is supported by AcrnGT, where the Service OS and User OS are each assigned to -a different display. 
The display ports support eDP and HDMI. See :ref:`APL_GVT-g-hld` -for more information. +a different display. The display ports support eDP and HDMI. +See the HLD GVT-g documentation for more information. GPU - Preemption ================ @@ -214,9 +214,9 @@ We have many reference documents `available * ACRN Roadmap: look ahead in `2019 `_ * Performance analysis of `VBS-k framework - `_ + `_ * HLD design doc for `IOC virtualization - `_ + `_ * Additional project `coding guidelines `_ * :ref:`Zephyr RTOS as Guest OS ` @@ -480,7 +480,7 @@ Known Issues These steps reproduce the issue: 1) Build Zephyr image by follow the `guide - `_. + `_. 2) Copy the "Zephyr.img", "OVMF.fd" and "launch_zephyr.sh" to ISD. 3) execute the launch_zephyr.sh script. diff --git a/doc/release_notes/release_notes_2.5.rst b/doc/release_notes/release_notes_2.5.rst index 77fa92c16..6681897ac 100644 --- a/doc/release_notes/release_notes_2.5.rst +++ b/doc/release_notes/release_notes_2.5.rst @@ -91,7 +91,7 @@ installed by executing the following command: sudo pip3 install lxml .. note:: - Refer to :ref:`acrn_config_workflow` for a complete list of tools required to + Refer to :ref:`gsg` for a complete list of tools required to run the board inspector. With the prerequisites done, copy the entire board inspector folder from diff --git a/doc/release_notes/release_notes_2.6.rst b/doc/release_notes/release_notes_2.6.rst new file mode 100644 index 000000000..773362530 --- /dev/null +++ b/doc/release_notes/release_notes_2.6.rst @@ -0,0 +1,153 @@ +.. _release_notes_2.6: + +ACRN v2.6 (Sep 2021) +#################### + +We are pleased to announce the release of the Project ACRN hypervisor +version 2.6. + +ACRN is a flexible, lightweight reference hypervisor that is built with +real-time and safety-criticality in mind. It is optimized to streamline +embedded development through an open-source platform. See the +:ref:`introduction` introduction for more information. + +All project ACRN source code is maintained in the +https://github.com/projectacrn/acrn-hypervisor repository and includes +folders for the ACRN hypervisor, the ACRN device model, tools, and +documentation. You can download this source code either as a zip or +tar.gz file (see the `ACRN v2.6 GitHub release page +`_) or +use Git ``clone`` and ``checkout`` commands:: + + git clone https://github.com/projectacrn/acrn-hypervisor + cd acrn-hypervisor + git checkout v2.6 + +The project's online technical documentation is also tagged to +correspond with a specific release: generated v2.6 documents can be +found at https://projectacrn.github.io/2.6/. Documentation for the +latest development branch is found at https://projectacrn.github.io/latest/. + +ACRN v2.6 requires Ubuntu 18.04. Follow the instructions in the +:ref:`gsg` to get started with ACRN. + + +What's New in v2.6 +****************** + +Nested Virtualization Technology Performance Tuning + The performance of nested virtualization, a feature first introduced as a + preview in the v2.5 release, was improved. CPU and I/O performance of level 2 + virtual machines (for example, a VM running on a KVM/QEMU VM that itself is a + VM on ACRN hypervisor) is now on par with a VM running on KVM on bare metal. + Read more in the :ref:`nested_virt` tutorial. + +Support loading OSs in ELF format + ACRN hypervisor now can load OS images packed in ELF (Executable and Linkable + Format). This adds flexibility to OSs such as Zephyr running in pre-launched + VMs. 
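As a quick, illustrative check (the file name below is a placeholder), the
standard ``file`` utility can confirm that a guest kernel image is an ELF
binary before it is configured as the kernel of a pre-launched VM:

.. code-block:: none

   file zephyr.elf
   # an ELF image is reported along the lines of:
   #   zephyr.elf: ELF 64-bit LSB executable, x86-64, ...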
+ + +Upgrading to v2.6 From Previous Releases +**************************************** + +We highly recommended that you follow these instructions to +upgrade to v2.6 from previous ACRN releases. + +Generate New Board XML +====================== + +Board XML files, generated by ACRN board inspector, contain board information +that is essential to build ACRN. Compared to previous versions, ACRN v2.6 adds +the following hardware information to board XMLs to support new features and +fixes. + + - Maximum width of physical and linear addresses + - Device objects in the ACPI namespace + - Routing of PCI interrupt pins + - Number of requested vectors of MSI-capable PCI devices + +The new board XML can be generated using the ACRN board inspector in the same +way as ACRN v2.5. Refer to :ref:`acrn_config_workflow` for a complete list of +steps to deploy and run the tool. + +Add New Configuration Options +============================= + +In v2.6, the following elements are added to scenario XML files. + +- :option:`hv.FEATURES.ENFORCE_TURNOFF_GP` (Default value is ``n``) +- :option:`hv.FEATURES.SECURITY_VM_FIXUP` (Default value is ``n``) + +Document Updates +**************** + +We've made major improvements to the introductory ACRN documentation including: + +.. rst-class:: rst-columns2 + +* :ref:`introduction` +* :ref:`overview_dev` +* :ref:`gsg` +* :ref:`acrn_configuration_tool` + +We've also made edits throughout the documentation to improve clarity, +formatting, and presentation: + +.. rst-class:: rst-columns2 + +* :ref:`hld-devicemodel` +* :ref:`hld-overview` +* :ref:`hld-power-management` +* :ref:`hld-virtio-devices` +* :ref:`hld-io-emulation` +* :ref:`virtio-net` +* :ref:`acrn_on_qemu` +* :ref:`cpu_sharing` +* :ref:`nested_virt` +* :ref:`setup_openstack_libvirt` +* :ref:`using_hybrid_mode_on_nuc` +* :ref:`acrn_doc` + +Fixed Issues Details +******************** + +.. comment example item + - :acrn-issue:`5626` - [CFL][industry] Host Call Trace once detected + +- :acrn-issue:`6012` - [Mainline][PTCM] [ConfigTool]Obsolete terms cleanup for SSRAM +- :acrn-issue:`6284` - [v2.6] vulnerable coding style in hypervisor and DM +- :acrn-issue:`6340` - [EF]Invalid LPC entry prevents GOP driver from working properly in WaaG for DP3 +- :acrn-issue:`6342` - [v2.6] vulnerable coding style in config tool python source +- :acrn-issue:`6360` - ACRN Makefile missing dependencies +- :acrn-issue:`6366` - TPM pass-thru shall be able to support start method 6, not only support Start Method of 7 +- :acrn-issue:`6388` - [hypercube][tgl][ADL]AddressSanitizer: SEGV virtio_console +- :acrn-issue:`6389` - [hv ivshmem] map SHM BAR with PAT ignored +- :acrn-issue:`6405` - [ADL-S][Industry][Yocto] WaaG BSOD in startup when run reboot or create/destroy stability test. 
+- :acrn-issue:`6417` - ACRN ConfigTool improvement from DX view +- :acrn-issue:`6423` - ACPI NVS region might not be mapped on prelaunched-VM +- :acrn-issue:`6428` - [acrn-configuration-tool] Fail to generate launch script when disable CPU sharing +- :acrn-issue:`6431` - virtio_console use-after-free +- :acrn-issue:`6434` - HV panic when SOS VM boot 5.4 kernel +- :acrn-issue:`6442` - [EF]Post-launched VMs do not boot with "EFI Network" enabled +- :acrn-issue:`6461` - [config_tools] kernel load addr/entry addr should not be configurable for kernel type KERNEL_ELF +- :acrn-issue:`6473` - [HV]HV can't be used after dumpreg rtvm vcpu +- :acrn-issue:`6476` - [hypercube][TGL][ADL]pci_xhci_insert_event SEGV on read from NULL +- :acrn-issue:`6481` - ACRN on QEMU can't boot up with v2.6 branch +- :acrn-issue:`6482` - [ADL-S][RTVM]rtvm poweroff causes sos to crash +- :acrn-issue:`6502` - [ADL][HV][UC lock] SoS kernel panic when #GP for UC lock enabled +- :acrn-issue:`6507` - [TGL][HV][hybrid] during boot zephyr64.elf find HV error: "Unable to copy HPA 0x100000 to GPA 0x7fe00000 in VM0" +- :acrn-issue:`6508` - [HV]Refine pass-thru device PIO BAR handling +- :acrn-issue:`6510` - [ICX-RVP][SSRAM] No SSRAM entries in guest PTCT +- :acrn-issue:`6518` - [hypercube][ADL]acrn-dm program crash during hypercube testing +- :acrn-issue:`6528` - [TGL][HV][hybrid_rt] dmidecode Fail on pre-launched RTVM +- :acrn-issue:`6530` - [ADL-S][EHL][Hybrid]Path of sos rootfs in hybrid.xml is wrong +- :acrn-issue:`6533` - [hypercube][tgl][ADL] mem leak while poweroff in guest +- :acrn-issue:`6592` - [doc] failed to make hvdiffconfig + +Known Issues +************ + +- :acrn-issue:`6630` - Fail to enable 7 PCI based VUART on 5.10.56 RTVM +- :acrn-issue:`6631` - [KATA][5.10 Kernel]failed to start docker with Service VM 5.10 kernel + diff --git a/doc/static/acrn-custom.css b/doc/static/acrn-custom.css index 4b244cba5..38873b28a 100644 --- a/doc/static/acrn-custom.css +++ b/doc/static/acrn-custom.css @@ -312,3 +312,13 @@ a.reference.external::after { font-size: 80%; content: " \f08e"; } + +/* generic light gray box shadow (for use on images via class directive) */ +.drop-shadow { + box-shadow: 5px 5px 10px #aaaaaa; +} + +/* add some space after an image with a shadow style applied */ +img.drop-shadow { + margin-bottom: 2em !important; +} diff --git a/doc/try.rst b/doc/try.rst index ec6891f3a..0f145adf1 100644 --- a/doc/try.rst +++ b/doc/try.rst @@ -3,21 +3,19 @@ Getting Started ############### -After reading the :ref:`introduction`, use these guides to get started +After reading the :ref:`introduction`, use these documents to get started using ACRN in a reference setup. We'll show how to set up your development and target hardware, and then how to boot the ACRN -hypervisor, the Service VM, and a User VM on the Intel platform. +hypervisor, the Service VM, and a User VM on a supported Intel target platform. -ACRN is supported on platforms listed in :ref:`hardware`. - -Follow these getting started guides to give ACRN a try: .. toctree:: :maxdepth: 1 reference/hardware + getting-started/overview_dev getting-started/getting-started - getting-started/building-from-source - getting-started/roscube/roscube-gsg - tutorials/using_hybrid_mode_on_nuc - tutorials/using_partition_mode_on_nuc + +After getting familiar with ACRN development, check out these +:ref:`develop_acrn` for information about more-advanced scenarios and enabling +ACRN advanced capabilities. 
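For orientation, the core build step described in the Getting Started Guide
boils down to the following sketch; the board and scenario XML paths are
examples only, and the prerequisites, configuration, and deployment steps are
covered in the guide itself:

.. code-block:: none

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/industry.xml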
diff --git a/doc/tutorials/acrn-secure-boot-with-efi-stub.rst b/doc/tutorials/acrn-secure-boot-with-efi-stub.rst index b619001f6..179fbdac7 100644 --- a/doc/tutorials/acrn-secure-boot-with-efi-stub.rst +++ b/doc/tutorials/acrn-secure-boot-with-efi-stub.rst @@ -57,7 +57,7 @@ Building Build Dependencies ================== -- Build Tools and Dependencies described in the :ref:`getting-started-building` guide +- Build Tools and Dependencies described in the :ref:`gsg` guide - ``gnu-efi`` package - Service VM Kernel ``bzImage`` - pre-launched RTVM Kernel ``bzImage`` diff --git a/doc/tutorials/acrn_configuration_tool.rst b/doc/tutorials/acrn_configuration_tool.rst index b6f4f8685..19471a851 100644 --- a/doc/tutorials/acrn_configuration_tool.rst +++ b/doc/tutorials/acrn_configuration_tool.rst @@ -9,25 +9,24 @@ well as configure hypervisor capabilities and provision VMs. ACRN configuration consists of the following key components. - - Configuration data saved as XML files. - - A configuration toolset that helps users to generate and edit configuration - data. The toolset includes: +* Configuration data saved as XML files. +* A configuration toolset that helps users to generate and edit configuration + data. The toolset includes: - - A **board inspector** that collects board-specific information on target - machines. - - A **configuration editor** that lets you edit configuration data via a web-based UI. + - **Board inspector**: Collects board-specific information on target + machines. + - **ACRN configurator**: Enables you to edit configuration data via a + web-based UI. The following sections introduce the concepts and tools of ACRN configuration from the aspects below. - - :ref:`acrn_config_types` introduces the objectives and main contents of - different types of configuration data. - - :ref:`acrn_config_workflow` overviews the steps to customize ACRN - configuration using the configuration toolset. - - :ref:`acrn_config_data` explains the location and format of configuration - data saved as XML files. - - :ref:`acrn_config_tool_ui` gives detailed instructions on using the - configuration editor. +* :ref:`acrn_config_types` introduces the objectives and main contents of + different types of configuration data. +* :ref:`acrn_config_workflow` overviews the steps to customize ACRN + configuration using the configuration toolset. +* :ref:`acrn_config_data` explains the location and format of configuration + data saved as XML files. .. _acrn_config_types: @@ -35,232 +34,136 @@ Types of Configurations *********************** ACRN includes three types of configurations: board, scenario, and launch. The -following sections briefly describe the objectives and main contents of each -type. +configuration data are saved in three XML files. The following sections briefly +describe the objectives and main contents of each file. -Board Configuration -=================== +Board Configuration File +======================== -The board configuration stores hardware-specific information extracted on the -target platform. It describes the capacity of hardware resources (such as -processors and memory), platform power states, available devices, and BIOS -versions. This information is used by ACRN configuration tool to check feature -availability and allocate resources among VMs, as well as by ACRN hypervisor to -initialize and manage the platform at runtime. +The board configuration file stores hardware-specific information extracted +from the target platform. 
Examples of information: -The board configuration is scenario-neutral by nature. Thus, multiple scenario +* Capacity of hardware resources (such as processors and memory) +* Platform power states +* Available devices +* BIOS versions + +You need a board configuration file to create scenario configurations. The +board configuration is scenario-neutral by nature. Thus, multiple scenario configurations can be based on the same board configuration. -Scenario Configuration -====================== +You also need a board configuration file to build an ACRN hypervisor. The +build process uses the file to build a hypervisor that can +initialize and manage the platform at runtime. -The scenario configuration defines a working scenario by configuring hypervisor -capabilities and defining VM attributes and resources. You can specify the -following in scenario configuration. +Scenario Configuration File +=========================== - - Hypervisor capabilities +The scenario configuration file defines a working scenario by configuring +hypervisor capabilities and defining some VM attributes and resources. +We call these settings "static" because they are used to build the hypervisor. +You can specify the following information in a scenario configuration: - - Availability and settings of hypervisor features, such as debugging - facilities, scheduling algorithm, ivshmem, and security features. - - Hardware management capacity of the hypervisor, such as maximum PCI devices - and maximum interrupt lines supported. - - Memory consumption of the hypervisor, such as the entry point and stack - size. +* Hypervisor capabilities - - VM attributes and resources + - Availability and settings of hypervisor features, such as debugging + facilities, scheduling algorithm, inter-VM shared memory (ivshmem), + and security features. + - Hardware management capacity of the hypervisor, such as maximum PCI devices + and maximum interrupt lines supported. + - Memory consumption of the hypervisor, such as the entry point and stack + size. - - VM attributes, such as VM names. - - Maximum number of VMs supported. - - Resources allocated to each VM, such as number of vCPUs, amount of guest - memory, and pass-through devices. - - Guest OS settings, such as boot protocol and guest kernel parameters. - - Settings of virtual devices, such as virtual UARTs. +* VM attributes and resources -For pre-launched VMs, the VM attributes and resources are exactly the amount of -resource allocated to them. For post-launched VMs, the number of vCPUs define -the upper limit the Service VM can allocate to them and settings of virtual -devices still apply. Other resources are under the control of the Service VM and -can be dynamically allocated to post-launched VMs. + - VM attributes, such as VM names. + - Maximum number of VMs supported. + - Resources allocated to each VM, such as number of vCPUs, amount of guest + memory, and pass-through devices. + - User VM settings, such as boot protocol and VM OS kernel parameters. + - Settings of virtual devices, such as virtual UARTs. -The scenario configuration is used by ACRN configuration tool to reserve -sufficient memory for the hypervisor to manage the VMs at build time, as well as -by ACRN hypervisor to initialize its capabilities and set up the VMs at runtime. +You need a scenario configuration file to build an ACRN hypervisor. The +build process uses the file to build a hypervisor that can initialize its +capabilities and set up the VMs at runtime. 
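When building, the board and scenario configuration files are passed to
``make`` either as explicit XML paths or as the names of predefined
configurations shipped under ``misc/config_tools/data/``. The paths and names
below are examples only:

.. code-block:: none

   # explicit paths to board and scenario XML files
   make BOARD=~/acrn-work/my_board.xml SCENARIO=~/acrn-work/scenario.xml

   # names of predefined configurations under misc/config_tools/data/
   make BOARD=tgl-rvp SCENARIO=hybrid_rt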
-Launch Configuration -==================== +The scenario configuration defines User VMs as follows: -The launch configuration defines the attributes and resources of a -post-launched VM. The main contents are similar to the VM attributes and -resources in scenario configuration. The launch configuration is used to generate shell scripts that -invoke ``acrn-dm`` to create post-launched VMs. Unlike board and scenario -configurations used at build time or by ACRN hypervisor, launch -configuration are used dynamically in the Service VM. +* For pre-launched User VMs, the scenario configuration defines all attributes + and resources (these VMs have static configurations by nature). The VM + attributes and resources are exactly the amount + of resources allocated to them. + +* For post-launched User VMs, the scenario configuration defines only static + attributes and resources. Other resources are under the control of the + Service VM and can be dynamically allocated to these VMs via launch + scripts. + +Launch Configuration File for Launch Scripts +============================================ + +The launch configuration file applies only to scenarios that have +post-launched User VMs. The file defines certain attributes and +resources of the post-launched VMs specified in the scenario configuration +file. We call these settings "dynamic" because they are used at runtime. + +You need a launch configuration file to generate a launch script (shell script) +for each post-launched User VM. The launch script invokes the +Service VM's :ref:`Device Model ` ``acrn-dm`` to create +the VM. Unlike board and scenario configurations used at build time or by +ACRN hypervisor, launch configurations are used dynamically in the Service VM. .. _acrn_config_workflow: Using ACRN Configuration Toolset ******************************** -ACRN configuration toolset is provided to create and edit configuration -data. The toolset can be found in ``misc/config_tools``. +The ACRN configuration toolset enables you to create +and edit configuration data. The toolset consists of the following: -Here is the workflow to customize ACRN configurations using the configuration -toolset. +* :ref:`Board inspector tool ` +* :ref:`ACRN configurator tool ` -#. Get the board info. +As introduced in :ref:`overview_dev`, configuration takes place at +:ref:`overview_dev_board_config` and :ref:`overview_dev_config_editor` in +the overall development process: - a. Set up a native Linux environment on the target board. Make sure the - following tools are installed and the kernel boots with the following - command line options. +.. image:: ../getting-started/images/overview_flow.png - | **Native Linux requirement:** - | **Release:** Ubuntu 18.04+ - | **Tools:** cpuid, rdmsr, lspci, lxml, dmidecode (optional) - | **Kernel cmdline:** "idle=nomwait intel_idle.max_cstate=0 intel_pstate=disable" - - #. Copy the ``board_inspector`` directory into the target file system and then run the - ``sudo python3 cli.py $(BOARD)`` command. - #. A ``$(BOARD).xml`` that includes all needed hardware-specific information - is generated under the current working directory. Here, ``$(BOARD)`` is the - specified board name. - -#. Customize your needs. - - a. Copy ``$(BOARD).xml`` to the host development machine. - #. Run the ACRN configuration editor (available at - ``misc/config_tools/config_app/app.py``) on the host machine and import - the ``$(BOARD).xml``. Select your working scenario under **Scenario Setting** - and input the desired scenario settings. 
The tool will do validation checks - on the input based on the ``$(BOARD).xml``. The customized settings can be - exported to your own ``$(SCENARIO).xml``. If you have a customized scenario - XML file, you can also import it to the editor for modification. - #. In ACRN configuration editor, input the launch script parameters for the - post-launched User VM under **Launch Setting**. The editor will validate - the input based on both the ``$(BOARD).xml`` and ``$(SCENARIO).xml`` and then - export settings to your ``$(LAUNCH).xml``. - - .. note:: Refer to :ref:`acrn_config_tool_ui` for more details on - the configuration editor. - -#. Build with your XML files. Refer to :ref:`getting-started-building` to build - the ACRN hypervisor with your XML files on the host machine. - -#. Deploy VMs and run ACRN hypervisor on the target board. - -.. figure:: images/offline_tools_workflow.png - :align: center - - Configuration Workflow - -.. _acrn_makefile_targets: - -Makefile Targets for Configuration -================================== - -In addition to the ``BOARD`` and ``SCENARIO`` variables, ACRN source also -includes the following makefile targets to aid customization. - -.. list-table:: - :widths: 20 50 - :header-rows: 1 - - * - Target - - Description - - * - ``hvdefconfig`` - - Generate configuration files (a bunch of C source files) in the - build directory without building the hypervisor. This target can be used - when you want to customize the configurations based on a predefined - scenario. - - * - ``hvshowconfig`` - - Print the target ``BOARD``, ``SCENARIO`` and build type (debug or - release) of a build. - - * - ``hvdiffconfig`` - - After modifying the generated configuration files, you can use this - target to generate a patch that shows the differences made. - - * - ``hvapplydiffconfig PATCH=/path/to/patch`` - - Register a patch to be applied on the generated configuration files - every time they are regenerated. The ``PATCH`` variable specifies the path - (absolute or relative to current working directory) of the - patch. Multiple patches can be registered by invoking this target - multiple times. - -The targets ``hvdiffconfig`` and ``hvapplydiffconfig`` -are provided for users who already have offline patches to the generated -configuration files. Prior to v2.4, the generated configuration files are also -in the repository. Some users may already have chosen to modify these files -directly to customize the configurations. - -.. note:: - We highly recommend new users save and maintain customized configurations - in XML, not in patches to generated configuration files. - -Here is an example how to use the ``hvdiffconfig`` to generate a patch and save -it to ``config.patch``. - -.. code-block:: console - - acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig - ... - acrn-hypervisor$ vim build/hypervisor/configs/scenarios/hybrid_rt/pci_dev.c - (edit the file manually) - acrn-hypervisor$ make hvdiffconfig - ... - Diff on generated configuration files is available at /path/to/acrn-hypervisor/build/hypervisor/config.patch. - To make a patch effective, use 'applydiffconfig PATCH=/path/to/patch' to register it to a build. - ... - acrn-hypervisor$ cp build/hypervisor/config.patch config.patch - -The example below shows how to use ``hvapplydiffconfig`` to apply -``config.patch`` to a new build. - -.. code-block:: console - - acrn-hypervisor$ make clean - acrn-hypervisor$ make BOARD=ehl-crb-b SCENARIO=hybrid_rt hvdefconfig - ... 
- acrn-hypervisor$ make hvapplydiffconfig PATCH=config.patch - ... - /path/to/acrn-hypervisor/config.patch is registered for build directory /path/to/acrn-hypervisor/build/hypervisor. - Registered patches will be applied the next time 'make' is invoked. - To unregister a patch, remove it from /path/to/acrn-hypervisor/build/hypervisor/configs/.diffconfig. - ... - acrn-hypervisor$ make hypervisor - ... - Applying patch /path/to/acrn-hypervisor/config.patch: - patching file scenarios/hybrid_rt/pci_dev.c - ... +ACRN source also includes makefile targets to aid customization. See +:ref:`hypervisor-make-options`. .. _acrn_config_data: ACRN Configuration Data *********************** -ACRN configuration data are saved in three XML files: ``board``, ``scenario``, -and ``launch`` XML. The ``board`` XML contains board configuration and is -generated by the board inspector on the target machine. The ``scenario`` and -``launch`` XMLs, containing scenario and launch configurations respectively, can -be customized by using the configuration editor. End users can load their own -configurations by importing customized XMLs or by saving the configurations by -exporting XMLs. +The following sections explain the format of the board, scenario, and launch +configuration files. Although we recommend using the ACRN configuration toolset +to create these files, this reference may be useful for advanced usage and +troubleshooting. -The predefined XMLs provided by ACRN are located in the ``misc/config_tools/data/`` -directory of the ``acrn-hypervisor`` repo. +ACRN source code offers predefined XMLs, and the generic templates used for +new boards and scenarios, in the ``misc/config_tools/data/`` directory of +the ``acrn-hypervisor`` repo. Board XML Format ================ -The board XML has an ``acrn-config`` root element and a ``board`` attribute: +The board XML has an ``acrn-config`` root element and a +``board`` attribute: .. code-block:: xml -Board XML files are input to the configuration editor and the build system, and are not -intended for end users to modify. +The ``board`` attribute defines the board name and must match the +``board`` attribute in the scenario configuration file and the launch +configuration file. The file name of the board configuration file +(example: ``my_board.xml``) doesn't affect the board name. + +Board XML files are input to the ACRN configurator tool and the build system, +and are not intended for end users to modify. Scenario XML Format =================== @@ -272,290 +175,33 @@ The scenario XML has an ``acrn-config`` root element as well as ``board`` and -See :ref:`scenario-config-options` for a full explanation of available scenario -XML elements. Users are recommended to tweak the configuration data by using -ACRN configuration editor. +The ``board`` attribute specifies the board name and must match the ``board`` +attribute in the board configuration file. +The ``scenario`` attribute specifies the scenario name, followed by hypervisor +and VM settings. + +See :ref:`scenario-config-options` for a full explanation of available scenario +XML elements. Launch XML Format ================= -The launch XML has an ``acrn-config`` root element as well as ``board``, -``scenario`` and ``uos_launcher`` attributes: +The launch XML has an ``acrn-config`` root element as well as +``board``, ``scenario`` and ``uos_launcher`` attributes: .. 
code-block:: xml -Attributes of the ``uos_launcher`` specify the number of User VMs that the -current scenario has: +The ``board`` attribute specifies the board name and must match the ``board`` +attribute in the board configuration file and the scenario configuration file. -``uos``: - Specify the User VM with its relative ID to Service VM by the ``id`` attribute. +The ``scenario`` attribute specifies the scenario name and must match the +``scenario`` attribute in the scenario configuration file. -``uos_type``: - Specify the User VM type, such as ``CLEARLINUX``, ``ANDROID``, ``ALIOS``, - ``PREEMPT-RT LINUX``, ``GENERIC LINUX``, ``WINDOWS``, ``YOCTO``, ``UBUNTU``, - ``ZEPHYR`` or ``VXWORKS``. +The ``uos_launcher`` attribute specifies the number of post-launched User VMs +in a scenario. -``rtos_type``: - Specify the User VM Real-time capability: Soft RT, Hard RT, or none of them. - -``mem_size``: - Specify the User VM memory size in megabytes. - -``gvt_args``: - GVT arguments for the VM. Set it to ``gvtd`` for GVT-d, otherwise it's - for GVT-g arguments. The GVT-g input format: ``low_gm_size high_gm_size fence_sz``, - The recommendation is ``64 448 8``. Leave it blank to disable the GVT. - -``vbootloader``: - Virtual bootloader type; currently only supports OVMF. - -``vuart0``: - Specify whether the device model emulates the vUART0(vCOM1); refer to - :ref:`vuart_config` for details. If set to ``Enable``, the vUART0 is - emulated by the device model; if set to ``Disable``, the vUART0 is - emulated by the hypervisor if it is configured in the scenario XML. - -``poweroff_channel``: - Specify whether the User VM power off channel is through the IOC, - power button, or vUART. - -``allow_trigger_s5``: - Allow VM to trigger s5 shutdown flow, this flag works with ``poweroff_channel`` - ``vuart1(pty)`` and ``vuart1(tty)`` only. - -``enable_ptm``: - Enable the Precision Timing Measurement (PTM) feature. - -``usb_xhci``: - USB xHCI mediator configuration. Input format: - ``bus#-port#[:bus#-port#: ...]``, e.g.: ``1-2:2-4``. - Refer to :ref:`usb_virtualization` for details. - -``shm_regions``: - List of shared memory regions for inter-VM communication. - -``shm_region`` (a child node of ``shm_regions``): - configure the shared memory regions for current VM, input format: - ``hv:/<;shm name>; (or dm:/;), <;shm size in MB>;``. Refer to :ref:`ivshmem-hld` for details. - -``console_vuart``: - Enable a PCI-based console vUART. Refer to :ref:`vuart_config` for details. - -``communication_vuarts``: - List of PCI-based communication vUARTs. Refer to :ref:`vuart_config` for details. - -``communication_vuart`` (a child node of ``communication_vuarts``): - Enable a PCI-based communication vUART with its ID. Refer to :ref:`vuart_config` for details. - -``passthrough_devices``: - Select the passthrough device from the lspci list. Currently we support: - ``usb_xdci``, ``audio``, ``audio_codec``, ``ipu``, ``ipu_i2c``, - ``cse``, ``wifi``, ``bluetooth``, ``sd_card``, - ``ethernet``, ``sata``, and ``nvme``. - -``network`` (a child node of ``virtio_devices``): - The virtio network device setting. - Input format: ``tap_name,[vhost],[mac=XX:XX:XX:XX:XX:XX]``. - -``block`` (a child node of ``virtio_devices``): - The virtio block device setting. - Input format: ``[blk partition:][img path]`` e.g.: ``/dev/sda3:./a/b.img``. - -``console`` (a child node of ``virtio_devices``): - The virtio console device setting. - Input format: - ``[@]stdio|tty|pty|sock:portname[=portpath][,[@]stdio|tty|pty:portname[=portpath]]``. 
- -``cpu_affinity``: - List of pCPU that this VM's vCPUs are pinned to. - -.. note:: - - The ``configurable`` and ``readonly`` attributes are used to mark - whether the item is configurable for users. When ``configurable="n"`` - and ``readonly="y"``, the item is not configurable from the web - interface. When ``configurable="n"``, the item does not appear on the - interface. - -.. _acrn_config_tool_ui: - -Use the ACRN Configuration Editor -********************************* - -The ACRN configuration editor provides a web-based user interface for the following: - -- reads board info -- configures and validates scenario and launch configurations -- generates launch scripts for the specified post-launched User VMs. -- dynamically creates a new scenario configuration and adds or deletes VM - settings in it -- dynamically creates a new launch configuration and adds or deletes User VM - settings in it - -Prerequisites -============= - -.. _get acrn repo guide: - https://projectacrn.github.io/latest/getting-started/building-from-source.html#get-the-acrn-hypervisor-source-code - -- Clone the ACRN hypervisor repo - - .. code-block:: bash - - $ git clone https://github.com/projectacrn/acrn-hypervisor - -- Install ACRN configuration editor dependencies: - - .. code-block:: bash - - $ cd ~/acrn-hypervisor/misc/config_tools/config_app - $ sudo pip3 install -r requirements - - -Instructions -============ - -#. Launch the ACRN configuration editor: - - .. code-block:: bash - - $ python3 app.py - -#. Open a browser and navigate to the website - ``_ automatically, or you may need to visit this - website manually. Make sure you can connect to open network from browser - because the editor needs to download some JavaScript files. - - .. note:: The ACRN configuration editor is supported on Chrome, Firefox, - and Microsoft Edge. Do not use Internet Explorer. - - The website is shown below: - - .. figure:: images/config_app_main_menu.png - :align: center - :name: ACRN config tool main menu - -#. Set the board info: - - a. Click **Import Board info**. - - .. figure:: images/click_import_board_info_button.png - :align: center - - #. Upload the board XML you have generated from the ACRN board inspector. - - #. After board XML is uploaded, you will see the board name from the - Board info list. Select the board name to be configured. - - .. figure:: images/select_board_info.png - :align: center - -#. Load or create the scenario configuration by selecting among the following: - - - Choose a scenario from the **Scenario Setting** menu that lists all - user-defined scenarios for the board you selected in the previous step. - - - Click the **Create a new scenario** from the **Scenario Setting** menu to - dynamically create a new scenario configuration for the current board. - - - Click the **Load a default scenario** from the **Scenario Setting** menu, - and then select one default scenario configuration to load a predefined - scenario XML for the current board. - - The default scenario XMLs are located at - ``misc/config_tools/data/[board]/``. You can edit the scenario name when - creating or loading a scenario. If the current scenario name is duplicated - with an existing scenario setting name, rename the current scenario name or - overwrite the existing one after the confirmation message. - - .. figure:: images/choose_scenario.png - :align: center - - Note that you can also use a customized scenario XML by clicking **Import - XML**. 
The configuration editor automatically directs to the new scenario - XML once the import is complete. - -#. The configurable items display after one scenario is created, loaded, - or selected. Following is an industry scenario: - - .. figure:: images/configure_scenario.png - :align: center - - - You can edit these items directly in the text boxes, or you can choose - single or even multiple items from the drop-down list. - - - Read-only items are marked as gray. - - - Hover the mouse cursor over the item to display the description. - -#. Dynamically add or delete VMs: - - - Click **Add a VM below** in one VM setting, and then select one VM type - to add a new VM under the current VM. - - - Click **Remove this VM** in one VM setting to remove the current VM for - the scenario setting. - - When one VM is added or removed in the scenario, the configuration editor - reassigns the VM IDs for the remaining VMs by the order of Pre-launched VMs, - Service VMs, and Post-launched VMs. - - .. figure:: images/configure_vm_add.png - :align: center - -#. Click **Export XML** to save the scenario XML; you can rename it in the - pop-up model. - - .. note:: - All customized scenario XMLs will be in user-defined groups - located in ``misc/config_tools/data/[board]/user_defined/``. - - Before saving the scenario XML, the configuration editor validates the - configurable items. If errors exist, the configuration editor lists all - incorrectly configured items and shows the errors as below: - - .. figure:: images/err_acrn_configuration.png - :align: center - - After the scenario is saved, the page automatically directs to the saved - scenario XMLs. Delete the configured scenario by clicking **Export XML** -> **Remove**. - -The **Launch Setting** is quite similar to the **Scenario Setting**: - -#. Upload board XML or select one board as the current board. - -#. Load or create one launch configuration by selecting among the following: - - - Click **Create a new launch script** from the **Launch Setting** menu. - - - Click **Load a default launch script** from the **Launch Setting** menu. - - - Select one launch XML from the menu. - - - Import a local launch XML by clicking **Import XML**. - -#. Select one scenario for the current launch configuration from the **Select - Scenario** drop-down box. - -#. Configure the items for the current launch configuration. - -#. Add or remove User VM (UOS) launch scripts: - - - Add a UOS launch script by clicking **Configure an UOS below** for the - current launch configuration. - - - Remove a UOS launch script by clicking **Remove this VM** for the - current launch configuration. - -#. Save the current launch configuration to the user-defined XML files by - clicking **Export XML**. The configuration editor validates the current - configuration and lists all incorrectly configured items. - -#. Click **Generate Launch Script** to save the current launch configuration and - then generate the launch script. - - .. figure:: images/generate_launch_script.png - :align: center +See :ref:`launch-config-options` for a full explanation of available launch +XML elements. diff --git a/doc/tutorials/acrn_configurator_tool.rst b/doc/tutorials/acrn_configurator_tool.rst new file mode 100644 index 000000000..02b47e4cd --- /dev/null +++ b/doc/tutorials/acrn_configurator_tool.rst @@ -0,0 +1,217 @@ +.. _acrn_configurator_tool: + +ACRN Configurator Tool +###################### + +This guide describes all features and uses of the tool. 
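Before diving into the individual screens, here is a minimal sketch of where the configurator sits in the overall workflow. The commands are illustrative only: the ``~/acrn-work`` paths and the ``my_board`` and ``my_scenario`` names are placeholders, and the Getting Started Guide remains the authoritative reference for each step.

.. code-block:: bash

   # Illustrative workflow sketch; paths and the my_board/my_scenario names are placeholders.

   # On the target machine: generate the board configuration file with the board inspector.
   cd ~/acrn-work/board_inspector/
   sudo python3 board_inspector.py my_board

   # On the development computer: start the configurator GUI and create or edit
   # the scenario and launch configuration files for that board.
   python3 ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app/acrn_configurator.py

   # Build the hypervisor with the resulting board and scenario settings.
   cd ~/acrn-work/acrn-hypervisor
   make BOARD=my_board SCENARIO=my_scenario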
+ +About the ACRN Configurator Tool +********************************* + +The ACRN configurator tool ``acrn_configurator.py`` provides a web-based +user interface to help you customize your +:ref:`ACRN configuration `. Capabilities: + +- reads board information from the specified board configuration file +- provides a GUI to help you configure and validate scenario and + launch configuration files +- generates launch scripts for the specified post-launched User VMs +- dynamically creates a new scenario configuration and adds or deletes VM + settings in it +- dynamically creates a new launch configuration and adds or deletes User VM + settings in it + +The tool guides you to configure ACRN in a particular order, due to +dependencies among the different types of configuration files. Here's an +overview of what to expect: + +#. Import the board configuration file that you generated via the + :ref:`board inspector tool `. + +#. Customize your scenario configuration file by defining hypervisor and + VM settings that will be used to build the ACRN hypervisor. + +#. If your scenario has post-launched User VMs, customize launch scripts + that the Service VM will use to create the VMs + and allocate resources to them dynamically at runtime. + Customizing launch scripts involves these steps: + + a. Configure settings for all post-launched User VMs in your scenario + and save the configuration in a launch configuration file. + + #. Generate the launch scripts. The ACRN configurator creates one + launch script for each VM defined in the launch configuration file. + +Generate a Scenario Configuration File and Launch Scripts +********************************************************* + +The following steps describe all options in the ACRN configurator for generating +a custom scenario configuration file and launch scripts. + +#. Make sure the development computer is set up and ready to launch the ACRN + configurator, according to :ref:`gsg-dev-setup` in the Getting Started Guide. + +#. Launch the ACRN configurator. This example assumes the tool is in the + ``~/acrn-work/`` directory. Feel free to modify the command as needed. + + .. code-block:: bash + + python3 ~/acrn-work/acrn-hypervisor/misc/config_tools/config_app/acrn_configurator.py + +#. Your web browser should open the website ``_ + automatically, or you may need to visit this website manually. The ACRN + configurator is supported on Chrome and Firefox. + +#. Click the **Import Board info** button and browse to your board + configuration file. After the file is uploaded, make sure the board name + is selected in the **Board info** drop-down list and the board information + appears. + +#. Start the scenario configuration process by selecting an option from the + **Scenario Setting** menu on the top banner of the UI or by importing a + scenario configuration file via the **Import XML** button. The four options + are described below: + + * Click **Create a new scenario** from the **Scenario Setting** menu to + dynamically create a new scenario configuration for the current board. + + * Click **Load a default scenario** from the **Scenario Setting** menu to + select a :ref:`predefined scenario configuration `. + + * Click the **Scenario Setting** menu and select a scenario from the list + under **scenario setting list**. + + .. image:: images/choose_scenario.png + :align: center + + * Click the **Import XML** button to import a customized scenario + configuration file. + The file must be one that was written for the current board. 
Any mismatch + in the board name and the one found in the scenario configuration file you + are trying to import will lead to an error message. + +#. When the scenario configuration file is available for editing, the + configurable items appear below the **Scenario Setting** row. You may + need to scroll down to see them. Example: + + .. image:: images/configure_scenario.png + :align: center + + * You can edit these items directly in the text boxes, or you can choose + single or even multiple items from the drop-down list. + + * Read-only items are marked as gray. + + * Hover the mouse cursor over the item to see the description. + +#. Add or delete VMs: + + * Click **Add a VM below** in a VM's settings, and then select a VM type + to add a new VM under the current VM. + + * Click **Remove this VM** in a VM's settings to remove the VM from the + scenario. + + When a VM is added or removed, the configurator reassigns the VM IDs for + the remaining VMs by the order of pre-launched User VMs, Service VM, and + post-launched User VMs. + + .. image:: images/configure_vm_add.png + :align: center + +#. Click **Export XML** to save the scenario configuration file. A dialog box + appears, enabling you to save the file to a specific folder by inputting the + absolute path to this folder. If you don't specify a path, the file will be + saved to the default folder: ``acrn-hypervisor/../user_config/``. + + Before saving the scenario configuration file, the configurator validates + the configurable items. If errors exist, the configurator lists all + incorrectly configured items and shows the errors. Example: + + .. image:: images/err_acrn_configuration.png + :align: center + + After the scenario is saved, the page automatically displays the saved + scenario configuration file. + +#. To delete a scenario configuration file, click **Export XML** > **Remove**. + The configurator will delete the loaded file, even if you change the name of + the file in the dialog box. + +#. If your scenario has post-launched User VMs, continue to the next step + to create launch scripts for those VMs. If your scenario doesn't have + post-launched User VMs, you can skip to the final step to close the tool. + +#. Start the launch script configuration process by + selecting an option from the **Launch Setting** menu on the top banner of + the UI or by importing a launch configuration file via the **Import XML** + button. The four options are described below: + + * Click **Create a new launch script** from the **Launch Setting** menu to + dynamically create a new launch configuration for the current board. + + * Click **Load a default launch script** from the **Launch Setting** menu to + select a predefined launch configuration. + + * Click the **Launch Setting** menu and select a launch configuration + from the list under **launch setting list**. + + .. image:: images/choose_launch.png + :align: center + + * Click the **Import XML** button to import a customized launch + configuration file. + +#. Select a scenario for the current launch configuration from the + **Select Scenario** drop-down box. + +#. When the launch configuration file is available for editing, the + configurable items appear below the **Launch Setting** row. You may need + to scroll down to see them. Example: + + .. image:: images/configure_launch.png + :align: center + + * You can edit these items directly in the text boxes, or you can choose + single or even multiple items from the drop-down list. + + * Read-only items are marked as gray. 
+ + * Hover the mouse cursor over the item to see the description. + +#. Add or remove User VM (UOS) launch scripts: + + * Click **Configure an UOS below** to add a User VM launch script. + + * Click **Remove this VM** to remove a User VM launch script. + + .. image:: images/configure_launch_add.png + :align: center + +#. Click **Export XML** to save the launch configuration file. A dialog box + appears, enabling you to save the file to a specific folder by inputting the + absolute path to this folder. If you don't specify a path, the file will + be saved to the default folder: + ``acrn-hypervisor/../user_config/``. + + Before saving the launch configuration file, the configurator validates the + configurable items. If errors exist, the configurator lists all incorrectly + configured items and shows the errors. + +#. To delete a launch configuration file, click **Export XML** > **Remove**. + The configurator will delete the loaded file, even if you change the name of + the file in the dialog box. + +#. Click **Generate Launch Script** to save the current launch configuration + and then generate a launch script for each VM defined in the launch + configuration. + + .. image:: images/generate_launch_script.png + :align: center + +#. Confirm that the launch scripts appear in the + ``/output`` directory. + +#. When you are done using the tool, close the browser and press + :kbd:`CTRL` + :kbd:`C` to terminate the + ``acrn_configurator.py`` program running in the terminal window. diff --git a/doc/tutorials/acrn_on_qemu.rst b/doc/tutorials/acrn_on_qemu.rst index 243357f0c..b47afe1ea 100644 --- a/doc/tutorials/acrn_on_qemu.rst +++ b/doc/tutorials/acrn_on_qemu.rst @@ -11,8 +11,8 @@ with basic functionality such as running Service VM (SOS) and User VM (UOS) for This setup was tested with the following configuration, -- ACRN Hypervisor: ``v2.5`` tag -- ACRN Kernel: ``v2.5`` tag +- ACRN Hypervisor: ``v2.6`` tag +- ACRN Kernel: ``v2.6`` tag - QEMU emulator version 4.2.1 - Service VM/User VM is Ubuntu 20.04 - Platforms Tested: Kaby Lake, Skylake @@ -53,7 +53,6 @@ Prepare Service VM (L1 Guest) --connect qemu:///system \ --name ACRNSOS \ --machine q35 \ - --cpu host-passthrough,+invtsc \ --ram 4096 \ --disk path=/var/lib/libvirt/images/acrnsos.img,size=32 \ --vcpus 4 \ @@ -62,7 +61,7 @@ Prepare Service VM (L1 Guest) --os-variant ubuntu18.04 \ --graphics none \ --clock offset=utc,tsc_present=yes,kvmclock_present=no \ - --qemu-commandline="-machine kernel-irqchip=split -device intel-iommu,intremap=on,caching-mode=on,aw-bits=48" \ + --qemu-commandline="-machine kernel-irqchip=split -cpu Denverton,+invtsc,+lm,+nx,+smep,+smap,+mtrr,+clflushopt,+vmx,+x2apic,+popcnt,-xsave,+sse,+rdrand,+vmx-apicv-xapic,+vmx-apicv-x2apic,+vmx-flexpriority,+tsc-deadline,+pdpe1gb -device intel-iommu,intremap=on,caching-mode=on,aw-bits=48" \ --location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \ --extra-args "console=tty0 console=ttyS0,115200n8" @@ -124,16 +123,16 @@ Install ACRN Hypervisor .. important:: All the steps below are performed **inside** the Service VM guest that we built in the previous section. -#. Install the ACRN build tools and dependencies following the :ref:`install-build-tools-dependencies` +#. Install the ACRN build tools and dependencies following the :ref:`gsg` -#. Clone ACRN repo and check out the ``v2.5`` tag. +#. Clone ACRN repo and check out the ``v2.6`` tag. .. 
code-block:: none

      cd ~
      git clone https://github.com/projectacrn/acrn-hypervisor.git
      cd acrn-hypervisor
-     git checkout v2.5
+     git checkout v2.6

 #. Build ACRN for QEMU,

@@ -141,7 +140,7 @@ Install ACRN Hypervisor

      make BOARD=qemu SCENARIO=sdc

-   For more details, refer to :ref:`getting-started-building`.
+   For more details, refer to :ref:`gsg`.

 #. Install the ACRN Device Model and tools

@@ -156,9 +155,9 @@ Install ACRN Hypervisor
      sudo cp build/hypervisor/acrn.32.out /boot

 #. Clone and configure the Service VM kernel repository following the instructions at
-   :ref:`build-and-install-ACRN-kernel` and using the ``v2.5`` tag. The User VM (L2 guest)
+   :ref:`gsg` and using the ``v2.6`` tag. The User VM (L2 guest)
    uses the ``virtio-blk`` driver to mount the rootfs. This driver is included in the default
-   kernel configuration as of the ``v2.5`` tag.
+   kernel configuration as of the ``v2.6`` tag.

 #. Update Grub to boot the ACRN hypervisor and load the Service VM kernel. Append the
    following configuration to the :file:`/etc/grub.d/40_custom`.

@@ -218,61 +217,27 @@ Install ACRN Hypervisor
 Bring-Up User VM (L2 Guest)
 ***************************

-1. Build the ACRN User VM kernel.
+1. Build the User VM disk image (``UserVM.img``) following :ref:`build-the-ubuntu-kvm-image` and copy it to the ACRNSOS (L1 Guest).
+   Alternatively, you can use an `Ubuntu Desktop ISO image `_.
+   Rename the downloaded ISO image to ``UserVM.iso``.

-   .. code-block:: none
-
-      cd ~/acrn-kernel
-      cp kernel_config_uos .config
-      make olddefconfig
-      make
-
-#. Copy the User VM kernel to your home folder, we will use it to launch the User VM (L2 guest)
-
-   .. code-block:: none
-
-      cp arch/x86/boot/bzImage ~/bzImage_uos
-
-#. Build the User VM disk image (``UOS.img``) following :ref:`build-the-ubuntu-kvm-image` and copy it to the ACRNSOS (L1 Guest).
-   Alternatively you can also use ``virt-install`` **in the host environment** to create a User VM image similarly to how we built ACRNSOS previously.
-
-   .. code-block:: none
-
-      virt-install \
-         --name UOS \
-         --ram 1024 \
-         --disk path=/var/lib/libvirt/images/UOS.img,size=8,format=raw \
-         --vcpus 2 \
-         --virt-type kvm \
-         --os-type linux \
-         --os-variant ubuntu18.04 \
-         --graphics none \
-         --location 'http://archive.ubuntu.com/ubuntu/dists/bionic/main/installer-amd64/' \
-         --extra-args "console=tty0 console=ttyS0,115200n8"
-
-#. Transfer the ``UOS.img`` User VM disk image to the Service VM (L1 guest).
-
-   .. code-block::
-
-      sudo scp /var/lib/libvirt/images/UOS.img @
-
-   Where ```` is your username in the Service VM and ```` its IP address.
+#. Transfer the ``UserVM.img`` or ``UserVM.iso`` User VM disk image to the Service VM (L1 guest).

 #. Launch User VM using the ``launch_ubuntu.sh`` script.

    .. code-block:: none

       cp ~/acrn-hypervisor/misc/config_tools/data/samples_launch_scripts/launch_ubuntu.sh ~/
+      cp ~/acrn-hypervisor/devicemodel/bios/OVMF.fd ~/

-#. Update the script to use your disk image and kernel
+#. Update the script to use your disk image (``UserVM.img`` or ``UserVM.iso``).

   .. 
code-block:: none acrn-dm -A -m $mem_size -s 0:0,hostbridge \ - -s 3,virtio-blk,~/UOS.img \ + -s 3,virtio-blk,~/UserVM.img \ -s 4,virtio-net,tap0 \ -s 5,virtio-console,@stdio:stdio_port \ - -k ~/bzImage_uos \ - -B "earlyprintk=serial,ttyS0,115200n8 consoleblank=0 root=/dev/vda1 rw rootwait maxcpus=1 nohpet console=tty0 console=hvc0 console=ttyS0 no_timer_check ignore_loglevel log_buf_len=16M tsc=reliable" \ + --ovmf ~/OVMF.fd \ $logger_setting \ $vm_name diff --git a/doc/tutorials/board_inspector_tool.rst b/doc/tutorials/board_inspector_tool.rst new file mode 100644 index 000000000..52ac2008c --- /dev/null +++ b/doc/tutorials/board_inspector_tool.rst @@ -0,0 +1,115 @@ +.. _board_inspector_tool: + +Board Inspector Tool +#################### + +This guide describes all features and uses of the tool. + +About the Board Inspector Tool +****************************** + +The board inspector tool ``board_inspector.py`` enables you to generate a board +configuration file on the target system. The board configuration file stores +hardware-specific information extracted from the target platform and is used to +customize your :ref:`ACRN configuration `. + +Generate a Board Configuration File +*********************************** + +.. important:: + + Whenever you change the configuration of the board, such as BIOS settings, + additional memory, or PCI devices, you must generate a new board + configuration file. + +The following steps describe all options in the board inspector for generating +a board configuration file. + +#. Make sure the target system is set up and ready to run the board inspector, + according to :ref:`gsg-board-setup` in the Getting Started Guide. + +#. Load the ``msr`` driver, used by the board inspector: + + .. code-block:: bash + + sudo modprobe msr + +#. Run the board inspector tool (``board_inspector.py``) to generate the board + configuration file. This example assumes the tool is in the + ``~/acrn-work/`` directory and ``my_board`` is the desired file + name. Feel free to modify the commands as needed. + + .. code-block:: bash + + cd ~/acrn-work/board_inspector/ + sudo python3 board_inspector.py my_board + + Upon success, the tool displays the following message: + + .. code-block:: console + + PTCT table has been saved to PTCT successfully! + +#. Confirm that the board configuration file ``my_board.xml`` was generated in + the current directory. + +.. _board_inspector_cl: + +Command-Line Options +******************** + +You can configure the board inspector via command-line options. Running the +board inspector with the ``-h`` option yields the following usage message: + +.. code-block:: + + usage: board_inspector.py [-h] [--out OUT] [--basic] [--loglevel LOGLEVEL] + [--check-device-status] board_name + + positional arguments: + board_name the name of the board that runs the ACRN hypervisor + + optional arguments: + -h, --help show this help message and exit + --out OUT the name of board info file + --basic do not extract advanced information such as ACPI namespace + --loglevel LOGLEVEL choose log level, e.g. info, warning or error + --check-device-status + + filter out devices whose _STA object evaluates to 0 + +Details about certain arguments: + +.. list-table:: + :widths: 33 77 + :header-rows: 1 + + * - Argument + - Details + + * - ``board_name`` + - Required. The board name is used as the file name of the board + configuration file and is placed inside the file for other tools to read. + + * - ``--out`` + - Optional. 
Specify a file path where the board configuration file will be + saved (example: ``~/acrn_work``). If only a filename is provided in this + option, the board inspector will generate the file in the current + directory. + + * - ``--basic`` + - Optional. By default, the board inspector parses the ACPI namespace when + generating board configuration files. This option provides a way to + disable ACPI namespace parsing in case the parsing blocks the generation + of board configuration files. + + * - ``--loglevel`` + - Optional. Choose log level, e.g., info, warning or error. + (Default is warning.) + + * - ``--check-device-status`` + - Optional. On some boards, the device status (reported by the _STA + object) returns 0 while the device object is still useful for + pass-through devices. By default, the board inspector includes the + devices in the board configuration file. This option filters out the + devices, so that they cannot be used. diff --git a/doc/tutorials/cpu_sharing.rst b/doc/tutorials/cpu_sharing.rst index fd8bc7ed8..f93bb3bab 100644 --- a/doc/tutorials/cpu_sharing.rst +++ b/doc/tutorials/cpu_sharing.rst @@ -130,7 +130,7 @@ Scheduler configuration * The scheduler used at runtime is defined in the scenario XML file via the :option:`hv.FEATURES.SCHEDULER` option. The default scheduler - is **SCHED_BVT**. Use the :ref:`ACRN configuration tool ` + is **SCHED_BVT**. Use the :ref:`ACRN configurator tool ` if you want to change this scenario option value. diff --git a/doc/tutorials/debug.rst b/doc/tutorials/debug.rst index 13fa5f35f..fe7ff263c 100644 --- a/doc/tutorials/debug.rst +++ b/doc/tutorials/debug.rst @@ -90,7 +90,7 @@ noted above. For example, add the following code into function shell_cmd_help added information Once you have instrumented the code, you need to rebuild the hypervisor and -install it on your platform. Refer to :ref:`getting-started-building` +install it on your platform. Refer to :ref:`gsg` for detailed instructions on how to do that. We set console log level to 5, and mem log level to 2 through the @@ -205,8 +205,7 @@ shown in the following example: 4. After we have inserted the trace code addition, we need to rebuild the ACRN hypervisor and install it on the platform. Refer to - :ref:`getting-started-building` for - detailed instructions on how to do that. + :ref:`gsg` for detailed instructions on how to do that. 5. Now we can use the following command in the Service VM console to generate acrntrace data into the current directory:: diff --git a/doc/tutorials/enable_ivshmem.rst b/doc/tutorials/enable_ivshmem.rst index 09faaa9f5..848e3b2f0 100644 --- a/doc/tutorials/enable_ivshmem.rst +++ b/doc/tutorials/enable_ivshmem.rst @@ -37,7 +37,7 @@ steps: communication and separate it with ``:``. For example, the communication between VM0 and VM2, it can be written as ``0:2`` -- Build with the XML configuration, refer to :ref:`getting-started-building`. +- Build with the XML configuration, refer to :ref:`gsg`. Ivshmem DM-Land Usage ********************* @@ -199,7 +199,7 @@ Linux-based VMs (VM0 is a pre-launched VM and VM2 is a post-launched VM). .. code-block:: none :emphasize-lines: 2,3 - + y hv:/shm_region_0, 2, 0:2 diff --git a/doc/tutorials/enable_ptm.rst b/doc/tutorials/enable_ptm.rst deleted file mode 100644 index d83356da5..000000000 --- a/doc/tutorials/enable_ptm.rst +++ /dev/null @@ -1,87 +0,0 @@ -.. 
_enable-ptm: - -Enable PCIe Precision Time Management -##################################### - -The PCI Express (PCIe) specification defines a Precision Time Measurement (PTM) -mechanism that lets you coordinate and synchronize events across multiple PCI -components within the same system with very fine time precision. - -ACRN adds PCIe root port emulation in the hypervisor to support the PTM feature -and emulates a simple PTM hierarchy. ACRN enables PTM in a Guest VM if the user -sets the ``enable_ptm`` option when passing through a device to a post-launched -VM and :ref:`vm.PTM` is enabled in the scenario configuration. When you enable -PTM, the passthrough device is connected to a virtual root port instead of the host -bridge as it normally would. - -Here is an example launch script that configures a supported Ethernet card for -passthrough and enables PTM on it: - -.. code-block:: bash - :emphasize-lines: 9-11,17 - - declare -A passthru_vpid - declare -A passthru_bdf - passthru_vpid=( - ["ethptm"]="8086 15f2" - ) - passthru_bdf=( - ["ethptm"]="0000:aa:00.0" - ) - echo ${passthru_vpid["ethptm"]} > /sys/bus/pci/drivers/pci-stub/new_id - echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/devices/${passthru_bdf["ethptm"]}/driver/unbind - echo ${passthru_bdf["ethptm"]} > /sys/bus/pci/drivers/pci-stub/bind - - acrn-dm -A -m $mem_size -s 0:0,hostbridge \ - -s 3,virtio-blk,uos-test.img \ - -s 4,virtio-net,tap0 \ - -s 5,virtio-console,@stdio:stdio_port \ - -s 6,passthru,a9/00/0,enable_ptm \ - --ovmf /usr/share/acrn/bios/OVMF.fd - -.. important:: By default, the :ref:`vm.PTM` option is disabled in ACRN VMs. Use the - :ref:`ACRN configuration tool ` to enable PTM - in the scenario XML file that configures the Guest VM. - -Here is the bus hierarchy in the Guest VM (as shown by the ``lspci`` command):: - - lspci -tv - -[0000:00]-+-00.0 Network Appliance Corporation Device 1275 - +-03.0 Red Hat, Inc. Virtio block device - +-04.0 Red Hat, Inc. Virtio network device - +-05.0 Red Hat, Inc. Virtio console - \-06.0-[01]----00.0 Intel Corporation Device 15f2 - -(Instead of ``Device 15f2`` you might see ``Ethernet Controller I225LM``.) - -You can also verify that PTM was enabled by using ``dmesg`` in the guest VM:: - - dmesg | grep -i ptm - [ 1.555284] pci_ptm_init: 00:00.00, ispcie=1, type=0x4 - [ 1.555356] Cannot find PTM ext cap. - [ 1.561311] pci_ptm_init: 00:03.00, ispcie=0, type=0x0 - [ 1.567146] pci_ptm_init: 00:04.00, ispcie=0, type=0x0 - [ 1.572983] pci_ptm_init: 00:05.00, ispcie=0, type=0x0 - [ 1.718038] pci_ptm_init: 00:06.00, ispcie=1, type=0x4 - [ 1.722034] ptm is ptm_root. - [ 1.723033] Condition-2: ptm is enabled. - [ 1.723052] pci 0000:00:06.0: PTM enabled (root), 4ns granularity - [ 1.766438] pci_ptm_init: a9:00.00, ispcie=1, type=0x0 - [ 5.715000] igc_probe enable ptm. - [ 5.715068] pci_enable_ptm: a9:00.00, ispcie=1, type=0x0 - [ 5.715294] ptm is enabled on endpoint device. - [ 5.715371] igc 0000:a9:00.0: PTM enabled, 4ns granularity - -PTM Implementation Notes -************************ - -To simplify the implementation, the virtual root port only supports the most -basic PCIe configuration and operation, in addition to PTM capabilities. - -To use PTM in a virtualized environment, you may want to first verify that PTM -is supported by the device and is enabled on the bare metal machine and in the -Guest VM kernel (e.g., ``CONFIG_PCIE_PTM=y`` option is set in the Linux kernel). - -You can find more details about the PTM implementation in the -:ref:`ACRN HLD PCIe PTM documentation `. 
- diff --git a/doc/tutorials/gpu-passthru.rst b/doc/tutorials/gpu-passthru.rst index e1a153376..24ceea9e8 100644 --- a/doc/tutorials/gpu-passthru.rst +++ b/doc/tutorials/gpu-passthru.rst @@ -89,7 +89,7 @@ Passthrough the GPU to Guest echo "0000:00:02.0" > /sys/bus/pci/devices/0000:00:02.0/driver/unbind echo "0000:00:02.0" > /sys/bus/pci/drivers/pci-stub/bind - Replace ``-s 2,pci-gvt -G "$2" \`` with ``-s 2,passthru,0/2/0,gpu \`` + Replace ``-s 2,pci-gvt -G "$2" \`` with ``-s 2,passthru,0/2/0 \`` 4. Run ``launch_win.sh``. diff --git a/doc/tutorials/images/Celadon_apps.png b/doc/tutorials/images/Celadon_apps.png deleted file mode 100644 index 2f1110976..000000000 Binary files a/doc/tutorials/images/Celadon_apps.png and /dev/null differ diff --git a/doc/tutorials/images/Celadon_home.png b/doc/tutorials/images/Celadon_home.png deleted file mode 100644 index fcb136568..000000000 Binary files a/doc/tutorials/images/Celadon_home.png and /dev/null differ diff --git a/doc/tutorials/images/KBL-serial-port-header-to-RS232-cable.jpg b/doc/tutorials/images/KBL-serial-port-header-to-RS232-cable.jpg deleted file mode 100644 index b1832653a..000000000 Binary files a/doc/tutorials/images/KBL-serial-port-header-to-RS232-cable.jpg and /dev/null differ diff --git a/doc/tutorials/images/KBL-serial-port-header.png b/doc/tutorials/images/KBL-serial-port-header.png deleted file mode 100644 index b1b5d904d..000000000 Binary files a/doc/tutorials/images/KBL-serial-port-header.png and /dev/null differ diff --git a/doc/tutorials/images/NUC-serial-port.png b/doc/tutorials/images/NUC-serial-port.png deleted file mode 100644 index eaee85dc9..000000000 Binary files a/doc/tutorials/images/NUC-serial-port.png and /dev/null differ diff --git a/doc/tutorials/images/RT-NUC-setup.png b/doc/tutorials/images/RT-NUC-setup.png deleted file mode 100644 index a42a3bfa0..000000000 Binary files a/doc/tutorials/images/RT-NUC-setup.png and /dev/null differ diff --git a/doc/tutorials/images/The-GUI-of-weston.png b/doc/tutorials/images/The-GUI-of-weston.png deleted file mode 100644 index b85a46498..000000000 Binary files a/doc/tutorials/images/The-GUI-of-weston.png and /dev/null differ diff --git a/doc/tutorials/images/The-console-of-AGL.png b/doc/tutorials/images/The-console-of-AGL.png deleted file mode 100644 index 32cafebc5..000000000 Binary files a/doc/tutorials/images/The-console-of-AGL.png and /dev/null differ diff --git a/doc/tutorials/images/The-overview-of-AGL-as-UOS.png b/doc/tutorials/images/The-overview-of-AGL-as-UOS.png deleted file mode 100644 index 177c06549..000000000 Binary files a/doc/tutorials/images/The-overview-of-AGL-as-UOS.png and /dev/null differ diff --git a/doc/tutorials/images/USB-to-TTL-serial-cable.png b/doc/tutorials/images/USB-to-TTL-serial-cable.png deleted file mode 100644 index 876d29cc0..000000000 Binary files a/doc/tutorials/images/USB-to-TTL-serial-cable.png and /dev/null differ diff --git a/doc/tutorials/images/acrn-dm_qos_architecture.png b/doc/tutorials/images/acrn-dm_qos_architecture.png deleted file mode 100644 index c561cfec2..000000000 Binary files a/doc/tutorials/images/acrn-dm_qos_architecture.png and /dev/null differ diff --git a/doc/tutorials/images/acrn_cat_hld.png b/doc/tutorials/images/acrn_cat_hld.png deleted file mode 100644 index a9ba20d44..000000000 Binary files a/doc/tutorials/images/acrn_cat_hld.png and /dev/null differ diff --git a/doc/tutorials/images/acrn_qemu_3.png b/doc/tutorials/images/acrn_qemu_3.png deleted file mode 100644 index ce6b12c98..000000000 Binary files 
a/doc/tutorials/images/acrn_qemu_3.png and /dev/null differ diff --git a/doc/tutorials/images/adk_install_1.png b/doc/tutorials/images/adk_install_1.png deleted file mode 100644 index e748ef130..000000000 Binary files a/doc/tutorials/images/adk_install_1.png and /dev/null differ diff --git a/doc/tutorials/images/adk_install_2.png b/doc/tutorials/images/adk_install_2.png deleted file mode 100644 index e37eb3e46..000000000 Binary files a/doc/tutorials/images/adk_install_2.png and /dev/null differ diff --git a/doc/tutorials/images/agl-cables.jpg b/doc/tutorials/images/agl-cables.jpg deleted file mode 100644 index fda63db10..000000000 Binary files a/doc/tutorials/images/agl-cables.jpg and /dev/null differ diff --git a/doc/tutorials/images/agl-demo-concept.jpg b/doc/tutorials/images/agl-demo-concept.jpg deleted file mode 100644 index c86777432..000000000 Binary files a/doc/tutorials/images/agl-demo-concept.jpg and /dev/null differ diff --git a/doc/tutorials/images/agl-demo-setup.jpg b/doc/tutorials/images/agl-demo-setup.jpg deleted file mode 100644 index 976f5977f..000000000 Binary files a/doc/tutorials/images/agl-demo-setup.jpg and /dev/null differ diff --git a/doc/tutorials/images/choose_launch.png b/doc/tutorials/images/choose_launch.png new file mode 100644 index 000000000..3da086e9e Binary files /dev/null and b/doc/tutorials/images/choose_launch.png differ diff --git a/doc/tutorials/images/choose_scenario.png b/doc/tutorials/images/choose_scenario.png index 5095833e8..dceb8d2bb 100644 Binary files a/doc/tutorials/images/choose_scenario.png and b/doc/tutorials/images/choose_scenario.png differ diff --git a/doc/tutorials/images/click_import_board_info_button.png b/doc/tutorials/images/click_import_board_info_button.png index cb1775f8d..36a6ce6b3 100644 Binary files a/doc/tutorials/images/click_import_board_info_button.png and b/doc/tutorials/images/click_import_board_info_button.png differ diff --git a/doc/tutorials/images/config_app_main_menu.png b/doc/tutorials/images/config_app_main_menu.png index 90ebece7a..2d2cac44c 100644 Binary files a/doc/tutorials/images/config_app_main_menu.png and b/doc/tutorials/images/config_app_main_menu.png differ diff --git a/doc/tutorials/images/configure_launch.png b/doc/tutorials/images/configure_launch.png new file mode 100644 index 000000000..56ed805d7 Binary files /dev/null and b/doc/tutorials/images/configure_launch.png differ diff --git a/doc/tutorials/images/configure_launch_add.png b/doc/tutorials/images/configure_launch_add.png new file mode 100644 index 000000000..88625f852 Binary files /dev/null and b/doc/tutorials/images/configure_launch_add.png differ diff --git a/doc/tutorials/images/configure_scenario.png b/doc/tutorials/images/configure_scenario.png index 068dd9bf6..7e7a75273 100644 Binary files a/doc/tutorials/images/configure_scenario.png and b/doc/tutorials/images/configure_scenario.png differ diff --git a/doc/tutorials/images/configure_vm_add.png b/doc/tutorials/images/configure_vm_add.png index 442e1145d..08161d0aa 100644 Binary files a/doc/tutorials/images/configure_vm_add.png and b/doc/tutorials/images/configure_vm_add.png differ diff --git a/doc/tutorials/images/cpu_utilization_image.png b/doc/tutorials/images/cpu_utilization_image.png deleted file mode 100644 index 8a1e002d1..000000000 Binary files a/doc/tutorials/images/cpu_utilization_image.png and /dev/null differ diff --git a/doc/tutorials/images/debug_image26.png b/doc/tutorials/images/debug_image26.png deleted file mode 100644 index ed5682f07..000000000 Binary files 
a/doc/tutorials/images/debug_image26.png and /dev/null differ diff --git a/doc/tutorials/images/debug_image27.png b/doc/tutorials/images/debug_image27.png deleted file mode 100644 index a07fdf2c5..000000000 Binary files a/doc/tutorials/images/debug_image27.png and /dev/null differ diff --git a/doc/tutorials/images/debug_image5.png b/doc/tutorials/images/debug_image5.png deleted file mode 100644 index 800994106..000000000 Binary files a/doc/tutorials/images/debug_image5.png and /dev/null differ diff --git a/doc/tutorials/images/default-acrn-network.png b/doc/tutorials/images/default-acrn-network.png deleted file mode 100644 index 1edcf3fab..000000000 Binary files a/doc/tutorials/images/default-acrn-network.png and /dev/null differ diff --git a/doc/tutorials/images/enable_custom_boot.png b/doc/tutorials/images/enable_custom_boot.png deleted file mode 100644 index a992208b8..000000000 Binary files a/doc/tutorials/images/enable_custom_boot.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_1.png b/doc/tutorials/images/enroll_pk_key_1.png deleted file mode 100644 index 398bb1f3c..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_1.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_2.png b/doc/tutorials/images/enroll_pk_key_2.png deleted file mode 100644 index be80184a8..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_2.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_3.png b/doc/tutorials/images/enroll_pk_key_3.png deleted file mode 100644 index 3ca1920cb..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_3.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_4.png b/doc/tutorials/images/enroll_pk_key_4.png deleted file mode 100644 index 42d7f297d..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_4.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_5.png b/doc/tutorials/images/enroll_pk_key_5.png deleted file mode 100644 index f7864c97a..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_5.png and /dev/null differ diff --git a/doc/tutorials/images/enroll_pk_key_6.png b/doc/tutorials/images/enroll_pk_key_6.png deleted file mode 100644 index 921f94470..000000000 Binary files a/doc/tutorials/images/enroll_pk_key_6.png and /dev/null differ diff --git a/doc/tutorials/images/err_acrn_configuration.png b/doc/tutorials/images/err_acrn_configuration.png index 3226619e2..cf20fcffa 100644 Binary files a/doc/tutorials/images/err_acrn_configuration.png and b/doc/tutorials/images/err_acrn_configuration.png differ diff --git a/doc/tutorials/images/example-of-OVS-usage.png b/doc/tutorials/images/example-of-OVS-usage.png deleted file mode 100644 index 1bdcf3bed..000000000 Binary files a/doc/tutorials/images/example-of-OVS-usage.png and /dev/null differ diff --git a/doc/tutorials/images/exit_uefi_shell.png b/doc/tutorials/images/exit_uefi_shell.png deleted file mode 100644 index 42c2df043..000000000 Binary files a/doc/tutorials/images/exit_uefi_shell.png and /dev/null differ diff --git a/doc/tutorials/images/generate_launch_script.png b/doc/tutorials/images/generate_launch_script.png index cf6007a01..463e13f7a 100644 Binary files a/doc/tutorials/images/generate_launch_script.png and b/doc/tutorials/images/generate_launch_script.png differ diff --git a/doc/tutorials/images/gsg-successful-boot.png b/doc/tutorials/images/gsg-successful-boot.png deleted file mode 100644 index ebac108b6..000000000 Binary files a/doc/tutorials/images/gsg-successful-boot.png 
and /dev/null differ diff --git a/doc/tutorials/images/install_wim_index.png b/doc/tutorials/images/install_wim_index.png deleted file mode 100644 index 1a2889435..000000000 Binary files a/doc/tutorials/images/install_wim_index.png and /dev/null differ diff --git a/doc/tutorials/images/menuconfig-partition-mode.png b/doc/tutorials/images/menuconfig-partition-mode.png deleted file mode 100644 index f92a82860..000000000 Binary files a/doc/tutorials/images/menuconfig-partition-mode.png and /dev/null differ diff --git a/doc/tutorials/images/menuconfig-rdt.png b/doc/tutorials/images/menuconfig-rdt.png deleted file mode 100644 index b7d638367..000000000 Binary files a/doc/tutorials/images/menuconfig-rdt.png and /dev/null differ diff --git a/doc/tutorials/images/partition_mode_up2.png b/doc/tutorials/images/partition_mode_up2.png deleted file mode 100644 index a8d9c8a71..000000000 Binary files a/doc/tutorials/images/partition_mode_up2.png and /dev/null differ diff --git a/doc/tutorials/images/platformflashtool_start_to_flash.png b/doc/tutorials/images/platformflashtool_start_to_flash.png deleted file mode 100644 index 97c2aeb95..000000000 Binary files a/doc/tutorials/images/platformflashtool_start_to_flash.png and /dev/null differ diff --git a/doc/tutorials/images/reset_in_bios.png b/doc/tutorials/images/reset_in_bios.png deleted file mode 100644 index a4fb45b97..000000000 Binary files a/doc/tutorials/images/reset_in_bios.png and /dev/null differ diff --git a/doc/tutorials/images/reset_in_uefi_shell.png b/doc/tutorials/images/reset_in_uefi_shell.png deleted file mode 100644 index 3accc555a..000000000 Binary files a/doc/tutorials/images/reset_in_uefi_shell.png and /dev/null differ diff --git a/doc/tutorials/images/sbl_boot_flow_UP2.png b/doc/tutorials/images/sbl_boot_flow_UP2.png deleted file mode 100644 index d756dd920..000000000 Binary files a/doc/tutorials/images/sbl_boot_flow_UP2.png and /dev/null differ diff --git a/doc/tutorials/images/sdc2-defconfig.png b/doc/tutorials/images/sdc2-defconfig.png deleted file mode 100644 index 4ca1c7564..000000000 Binary files a/doc/tutorials/images/sdc2-defconfig.png and /dev/null differ diff --git a/doc/tutorials/images/sdc2-launch-2-laag.png b/doc/tutorials/images/sdc2-launch-2-laag.png deleted file mode 100644 index 8ec0a2902..000000000 Binary files a/doc/tutorials/images/sdc2-launch-2-laag.png and /dev/null differ diff --git a/doc/tutorials/images/sdc2-save-mini-config.png b/doc/tutorials/images/sdc2-save-mini-config.png deleted file mode 100644 index f498df9b1..000000000 Binary files a/doc/tutorials/images/sdc2-save-mini-config.png and /dev/null differ diff --git a/doc/tutorials/images/sdc2-selected.png b/doc/tutorials/images/sdc2-selected.png deleted file mode 100644 index 24ffc029d..000000000 Binary files a/doc/tutorials/images/sdc2-selected.png and /dev/null differ diff --git a/doc/tutorials/images/secure_boot_config_1.png b/doc/tutorials/images/secure_boot_config_1.png deleted file mode 100644 index 2f9f07eed..000000000 Binary files a/doc/tutorials/images/secure_boot_config_1.png and /dev/null differ diff --git a/doc/tutorials/images/secure_boot_config_2.png b/doc/tutorials/images/secure_boot_config_2.png deleted file mode 100644 index 1f5566761..000000000 Binary files a/doc/tutorials/images/secure_boot_config_2.png and /dev/null differ diff --git a/doc/tutorials/images/secure_boot_config_3.png b/doc/tutorials/images/secure_boot_config_3.png deleted file mode 100644 index fb28a788f..000000000 Binary files 
a/doc/tutorials/images/secure_boot_config_3.png and /dev/null differ diff --git a/doc/tutorials/images/secure_boot_enabled.png b/doc/tutorials/images/secure_boot_enabled.png deleted file mode 100644 index b748cf3b6..000000000 Binary files a/doc/tutorials/images/secure_boot_enabled.png and /dev/null differ diff --git a/doc/tutorials/images/select_board_info.png b/doc/tutorials/images/select_board_info.png index fe1bf0945..c517b47ee 100644 Binary files a/doc/tutorials/images/select_board_info.png and b/doc/tutorials/images/select_board_info.png differ diff --git a/doc/tutorials/images/select_custom_mode.png b/doc/tutorials/images/select_custom_mode.png deleted file mode 100644 index 44044cd61..000000000 Binary files a/doc/tutorials/images/select_custom_mode.png and /dev/null differ diff --git a/doc/tutorials/images/the-bottom-side-of-UP2-board.png b/doc/tutorials/images/the-bottom-side-of-UP2-board.png deleted file mode 100644 index e924aa786..000000000 Binary files a/doc/tutorials/images/the-bottom-side-of-UP2-board.png and /dev/null differ diff --git a/doc/tutorials/images/the-connection-of-serial-port.png b/doc/tutorials/images/the-connection-of-serial-port.png deleted file mode 100644 index 7382de954..000000000 Binary files a/doc/tutorials/images/the-connection-of-serial-port.png and /dev/null differ diff --git a/doc/tutorials/images/uefi_shell_boot_default.png b/doc/tutorials/images/uefi_shell_boot_default.png deleted file mode 100644 index 37844132e..000000000 Binary files a/doc/tutorials/images/uefi_shell_boot_default.png and /dev/null differ diff --git a/doc/tutorials/images/up2_sbl_cables_connections.png b/doc/tutorials/images/up2_sbl_cables_connections.png deleted file mode 100644 index d0d23e223..000000000 Binary files a/doc/tutorials/images/up2_sbl_cables_connections.png and /dev/null differ diff --git a/doc/tutorials/images/up2_sbl_connections.png b/doc/tutorials/images/up2_sbl_connections.png deleted file mode 100644 index 7f2521e59..000000000 Binary files a/doc/tutorials/images/up2_sbl_connections.png and /dev/null differ diff --git a/doc/tutorials/images/using_cat_up2.png b/doc/tutorials/images/using_cat_up2.png deleted file mode 100644 index bdd9790cc..000000000 Binary files a/doc/tutorials/images/using_cat_up2.png and /dev/null differ diff --git a/doc/tutorials/images/vm_console_login.png b/doc/tutorials/images/vm_console_login.png deleted file mode 100644 index 6b97dacb1..000000000 Binary files a/doc/tutorials/images/vm_console_login.png and /dev/null differ diff --git a/doc/tutorials/images/vuart-config-3.png b/doc/tutorials/images/vuart-config-3.png deleted file mode 100644 index 52d9491ab..000000000 Binary files a/doc/tutorials/images/vuart-config-3.png and /dev/null differ diff --git a/doc/tutorials/images/vuart-config-4.png b/doc/tutorials/images/vuart-config-4.png deleted file mode 100644 index 6b6755e6b..000000000 Binary files a/doc/tutorials/images/vuart-config-4.png and /dev/null differ diff --git a/doc/tutorials/images/vuart-config-5.png b/doc/tutorials/images/vuart-config-5.png deleted file mode 100644 index ddb8d9f78..000000000 Binary files a/doc/tutorials/images/vuart-config-5.png and /dev/null differ diff --git a/doc/tutorials/images/windows_install_A.png b/doc/tutorials/images/windows_install_A.png deleted file mode 100644 index 8042c17b1..000000000 Binary files a/doc/tutorials/images/windows_install_A.png and /dev/null differ diff --git a/doc/tutorials/images/windows_install_B.png b/doc/tutorials/images/windows_install_B.png deleted file mode 100644 index 
48adccb5f..000000000 Binary files a/doc/tutorials/images/windows_install_B.png and /dev/null differ diff --git a/doc/tutorials/images/windows_install_C.png b/doc/tutorials/images/windows_install_C.png deleted file mode 100644 index 9616a6f0b..000000000 Binary files a/doc/tutorials/images/windows_install_C.png and /dev/null differ diff --git a/doc/tutorials/nvmx_virtualization.rst b/doc/tutorials/nvmx_virtualization.rst index 5ea2b2f45..6820ad04c 100644 --- a/doc/tutorials/nvmx_virtualization.rst +++ b/doc/tutorials/nvmx_virtualization.rst @@ -91,6 +91,20 @@ Constraints on L1 guest configuration: * Only the ``SCHED_NOOP`` scheduler is supported. ACRN can't receive timer interrupts on LAPIC passthrough pCPUs +VPID allocation +=============== + +ACRN doesn't emulate L2 VPIDs and allocates VPIDs for L1 VMs from the reserved top +16-bit VPID range (``0x10000U - CONFIG_MAX_VM_NUM * MAX_VCPUS_PER_VM`` and up). +If the L1 hypervisor enables VPID for L2 VMs and allocates L2 VPIDs not in this +range, ACRN doesn't need to flush L2 VPID during L2 VMX transitions. + +This is the expected behavior in most of the time. But in special cases where a +L2 VPID allocated by L1 hypervisor is within this reserved range, it's possible +that this L2 VPID may conflict with a L1 VPID. In this case, ACRN flushes VPID +on L2 VMExit/VMEntry that are associated with this L2 VPID, which may significantly +negatively impact performances of this L2 VM. + Service OS VM configuration *************************** @@ -99,10 +113,10 @@ ACRN only supports enabling the nested virtualization feature on the Service VM, VMs. The nested virtualization feature is disabled by default in ACRN. You can -enable it using the :ref:`Use the ACRN Configuration Editor ` +enable it using the :ref:`ACRN configurator tool ` with these settings: -.. note:: Normally you'd use the configuration tool GUI to edit the scenario XML file. +.. note:: Normally you'd use the configurator tool GUI to edit the scenario XML file. The tool wasn't updated in time for the v2.5 release, so you'll need to manually edit the ACRN scenario XML configuration file to edit the ``SCHEDULER``, ``NVMX_ENABLED``, ``pcpu_id`` , ``guest_flags``, ``legacy_vuart``, and ``console_vuart`` settings for @@ -196,7 +210,7 @@ with these settings: Since CPU sharing is disabled, you may need to delete all ``POST_STD_VM`` and ``KATA_VM`` VMs from the scenario configuration file, which may share pCPU with the Service OS VM. -#. Follow instructions in :ref:`getting-started-building` and build with this XML configuration. +#. Follow instructions in :ref:`gsg` and build with this XML configuration. Prepare for Service VM Kernel and rootfs @@ -209,7 +223,7 @@ Instructions on how to boot Ubuntu as the Service VM can be found in The Service VM kernel needs to be built from the ``acrn-kernel`` repo, and some changes to the kernel ``.config`` are needed. Instructions on how to build and install the Service VM kernel can be found -in :ref:`Build and Install the ACRN Kernel `. +in :ref:`gsg`. Here is a summary of how to modify and build the kernel: diff --git a/doc/tutorials/pre-launched-rt.rst b/doc/tutorials/pre-launched-rt.rst index 4687c34f5..1310b93e2 100644 --- a/doc/tutorials/pre-launched-rt.rst +++ b/doc/tutorials/pre-launched-rt.rst @@ -50,7 +50,7 @@ install Ubuntu on the NVMe drive, and use grub to launch the Service VM. 
Install Pre-Launched RT Filesystem on SATA and Kernel Image on NVMe =================================================================== -Follow the :ref:`install-ubuntu-rtvm-sata` guide to install RT rootfs on SATA drive. +Follow the :ref:`gsg` to install RT rootfs on SATA drive. The Kernel should be on the NVMe drive along with GRUB. You'll need to copy the RT kernel @@ -75,15 +75,15 @@ Ethernet 03:00.0 devices to the Pre-Launched RT VM, build ACRN with: make BOARD_FILE=$PWD/misc/acrn-config/xmls/board-xmls/whl-ipc-i5.xml SCENARIO_FILE=$PWD/misc/acrn-config/xmls/config-xmls/whl-ipc-i5/hybrid_rt.xml RELEASE=0 -After the build completes, please update ACRN on NVMe. It is +After the build completes, update ACRN on NVMe. It is /boot/EFI/BOOT/acrn.bin, if /dev/nvme0n1p1 is mounted at /boot. Add Pre-Launched RT Kernel Image to GRUB Config =============================================== The last step is to modify the GRUB configuration file to load the Pre-Launched -kernel. (For more information about this, see :ref:`Update Grub for the Ubuntu Service VM -` section in the :ref:`gsg`.) The grub config file will look something +kernel. (For more information about this, see +the :ref:`gsg`.) The grub config file will look something like this: .. code-block:: none diff --git a/doc/tutorials/rdt_configuration.rst b/doc/tutorials/rdt_configuration.rst index 5931d514d..5aa97e952 100644 --- a/doc/tutorials/rdt_configuration.rst +++ b/doc/tutorials/rdt_configuration.rst @@ -149,20 +149,20 @@ Configure RDT for VM Using VM Configuration platform-specific XML file that helps ACRN identify RDT-supported platforms. RDT on ACRN is enabled by configuring the ``FEATURES`` sub-section of the scenario XML file as in the below example. For - details on building ACRN with a scenario, refer to :ref:`build-with-acrn-scenario`. + details on building ACRN with a scenario, refer to :ref:`gsg`. .. code-block:: none :emphasize-lines: 6 - y - SCHED_BVT - y - - *y* - n - - + y + SCHED_BVT + y + + y + n + + #. Once RDT is enabled in the scenario XML file, the next step is to program @@ -177,17 +177,17 @@ Configure RDT for VM Using VM Configuration :emphasize-lines: 8,9,10,11,12 - y - SCHED_BVT - y - - y - n - *0xff* - *0x3f* - *0xf* - *0x3* - *0* + y + SCHED_BVT + y + + y + n + 0xff + 0x3f + 0xf + 0x3 + 0 .. note:: @@ -206,12 +206,12 @@ Configure RDT for VM Using VM Configuration :emphasize-lines: 5,6,7,8 - PRE_STD_VM - ACRN PRE-LAUNCHED VM0 - 26c5e0d8-8f8a-47d8-8109-f201ebd61a5e - - *0* - *1* + PRE_STD_VM + ACRN PRE-LAUNCHED VM0 + 26c5e0d8-8f8a-47d8-8109-f201ebd61a5e + + 0 + 1 @@ -249,7 +249,7 @@ Configure RDT for VM Using VM Configuration per-LP CLOS is applied to the core. If HT is turned on, don't place high priority threads on sibling LPs running lower priority threads. -#. Based on our scenario, build and install ACRN. See :ref:`build-with-acrn-scenario` +#. Based on our scenario, build and install ACRN. See :ref:`gsg` for building and installing instructions. #. Restart the platform. diff --git a/doc/tutorials/rtvm_performance_tips.rst b/doc/tutorials/rtvm_performance_tips.rst index 2f35cb1d4..fa26cbb1c 100644 --- a/doc/tutorials/rtvm_performance_tips.rst +++ b/doc/tutorials/rtvm_performance_tips.rst @@ -148,7 +148,7 @@ Tip: Do not share CPUs allocated to the RTVM with other RT or non-RT VMs. However, for an RT VM, CPUs should be dedicatedly allocated for determinism. Tip: Use RDT such as CAT and MBA to allocate dedicated resources to the RTVM. 
- ACRN enables Intel® Resource Director Technology such as CAT, and MBA + ACRN enables Intel Resource Director Technology such as CAT, and MBA components such as the GPU via the memory hierarchy. The availability of RDT is hardware-specific. Refer to the :ref:`rdt_configuration`. diff --git a/doc/tutorials/running_deb_as_serv_vm.rst b/doc/tutorials/running_deb_as_serv_vm.rst index 12fe71006..c8f378b18 100644 --- a/doc/tutorials/running_deb_as_serv_vm.rst +++ b/doc/tutorials/running_deb_as_serv_vm.rst @@ -30,7 +30,7 @@ Use the following instructions to install Debian. `_ to install it on your board; we are using a Kaby Lake Intel NUC (NUC7i7DNHE) in this tutorial. -- :ref:`install-build-tools-dependencies` for ACRN. +- :ref:`gsg` for ACRN. - Update to the newer iASL: .. code-block:: bash diff --git a/doc/tutorials/running_deb_as_user_vm.rst b/doc/tutorials/running_deb_as_user_vm.rst index d9c0da07d..8780b3f8d 100644 --- a/doc/tutorials/running_deb_as_user_vm.rst +++ b/doc/tutorials/running_deb_as_user_vm.rst @@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions: - Install a `Ubuntu 18.04 desktop ISO `_ on your board. -- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` guide to setup the Service VM. +- Follow the instructions in :ref:`gsg` guide to setup the Service VM. We are using a Kaby Lake Intel NUC (NUC7i7DNHE) and Debian 10 as the User VM in this tutorial. diff --git a/doc/tutorials/running_ubun_as_user_vm.rst b/doc/tutorials/running_ubun_as_user_vm.rst index a29974b18..4d4d39c81 100644 --- a/doc/tutorials/running_ubun_as_user_vm.rst +++ b/doc/tutorials/running_ubun_as_user_vm.rst @@ -12,7 +12,7 @@ Intel NUC Kit. If you have not, refer to the following instructions: - Install a `Ubuntu 18.04 desktop ISO `_ on your board. -- Follow the instructions :ref:`install-ubuntu-Service VM-NVMe` to set up the Service VM. +- Follow the instructions in :ref:`gsg` to set up the Service VM. Before you start this tutorial, make sure the KVM tools are installed on the diff --git a/doc/tutorials/setup_openstack_libvirt.rst b/doc/tutorials/setup_openstack_libvirt.rst index f713f59ab..fb8123356 100644 --- a/doc/tutorials/setup_openstack_libvirt.rst +++ b/doc/tutorials/setup_openstack_libvirt.rst @@ -18,7 +18,7 @@ Install ACRN ************ #. Install ACRN using Ubuntu 20.04 as its Service VM. Refer to - :ref:`Build and Install ACRN on Ubuntu `. + :ref:`gsg`. #. Make the acrn-kernel using the `kernel_config_uefi_sos `_ @@ -37,9 +37,8 @@ Install ACRN available loop devices. Follow the `snaps guide `_ to clean up old snap revisions if you're running out of loop devices. -#. Make sure the networking bridge ``acrn-br0`` is created. If not, - create it using the instructions in - :ref:`Build and Install ACRN on Ubuntu `. +#. Make sure the networking bridge ``acrn-br0`` is created. See + :ref:`hostbridge_virt_hld` for more information. Set Up and Launch LXC/LXD ************************* @@ -155,7 +154,7 @@ Set Up ACRN Prerequisites Inside the Container $ lxc exec openstack -- su -l stack -2. Download and compile ACRN's source code. Refer to :ref:`getting-started-building`. +2. Download and compile ACRN's source code. Refer to :ref:`gsg`. .. note:: All tools and build dependencies must be installed before you run the first ``make`` command. 
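Referring back to the ``acrn-br0`` requirement earlier in this section: the referenced ACRN documentation describes the recommended bridge setup, but as a generic, non-ACRN-specific sketch you can check for the bridge and, if needed, create a bare one with standard ``iproute2`` commands:

.. code-block:: bash

   # Check whether the acrn-br0 bridge already exists.
   ip link show acrn-br0

   # If it does not, one generic way to create a bare Linux bridge.
   # This is only a sketch; follow the ACRN networking documentation for the
   # recommended bridge and tap configuration.
   sudo ip link add name acrn-br0 type bridge
   sudo ip link set acrn-br0 up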
diff --git a/doc/tutorials/sgx_virtualization.rst b/doc/tutorials/sgx_virtualization.rst index b6169cb5f..216d2311a 100644 --- a/doc/tutorials/sgx_virtualization.rst +++ b/doc/tutorials/sgx_virtualization.rst @@ -3,8 +3,8 @@ Enable SGX Virtualization ######################### -SGX refers to `Intel® Software Guard Extensions `_ (Intel® SGX). This is a set of instructions that can be used by +SGX refers to `Intel Software Guard Extensions `_ (Intel SGX). This is a set of instructions that can be used by applications to set aside protected areas for select code and data in order to prevent direct attacks on executing code or data stored in memory. SGX allows an application to instantiate a protected container, referred to as an diff --git a/doc/tutorials/using_hybrid_mode_on_nuc.rst b/doc/tutorials/using_hybrid_mode_on_nuc.rst index bf34e1e77..fb6b008cb 100644 --- a/doc/tutorials/using_hybrid_mode_on_nuc.rst +++ b/doc/tutorials/using_hybrid_mode_on_nuc.rst @@ -32,7 +32,7 @@ as shown in :numref:`hybrid_scenario_on_nuc`. Set-up base installation ************************ -- Use the `Intel NUC Kit NUC7i7DNHE `_. +- Use the `Intel NUC Kit NUC11TNBi5 `_. - Connect to the serial port as described in :ref:`Connecting to the serial port `. - Install Ubuntu 18.04 on your SATA device or on the NVME disk of your Intel NUC. @@ -46,21 +46,21 @@ Prepare the Zephyr kernel that you will run in VM0 later. - Follow step 1 from the :ref:`using_zephyr_as_uos` instructions - .. note:: We only need the binary Zephyr kernel, not the entire ``zephyr.img`` + .. note:: We only need the ELF binary Zephyr kernel, not the entire ``zephyr.img`` -- Copy the :file:`zephyr/zephyr.bin` to the ``/boot`` folder:: +- Copy the :file:`zephyr/zephyr.elf` to the ``/boot`` folder:: - sudo cp zephyr/zephyr.bin /boot + sudo cp zephyr/zephyr.elf /boot .. rst-class:: numbered-step Set-up ACRN on your device ************************** -- Follow the instructions in :Ref:`getting-started-building` to build ACRN using the - ``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC7i7DNHE `_:: +- Follow the instructions in :Ref:`gsg` to build ACRN using the + ``hybrid`` scenario. Here is the build command-line for the `Intel NUC Kit NUC11TNBi5 `_:: - make BOARD=nuc7i7dnb SCENARIO=hybrid + make clean && make BOARD=nuc11tnbi5 SCENARIO=hybrid - Install the ACRN hypervisor and tools @@ -103,24 +103,44 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo insmod ext2 echo 'Loading hypervisor Hybrid scenario ...' multiboot2 /boot/acrn.bin - module2 /boot/zephyr.bin xxxxxx + module2 /boot/zephyr.elf xxxxxx module2 /boot/bzImage yyyyyy module2 /boot/ACPI_VM0.bin ACPI_VM0 } - - .. note:: The module ``/boot/zephyr.bin`` is the VM0 (Zephyr) kernel file. + + .. note:: The module ``/boot/zephyr.elf`` is the VM0 (Zephyr) kernel file. The param ``xxxxxx`` is VM0's kernel file tag and must exactly match the - ``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml`` + ``kern_mod`` of VM0, which is configured in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml`` file. The multiboot module ``/boot/bzImage`` is the Service VM kernel file. The param ``yyyyyy`` is the bzImage tag and must exactly match the - ``kern_mod`` of VM1 in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml`` + ``kern_mod`` of VM1 in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml`` file. 
The kernel command-line arguments used to boot the Service VM are - ``bootargs`` of VM1 in the ``misc/config_tools/data/nuc7i7dnb/hybrid.xml``. + ``bootargs`` of VM1 in the ``misc/config_tools/data/nuc11tnbi5/hybrid.xml``. The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0 (Zephyr). The parameter ``ACPI_VM0`` is VM0's ACPI tag and should not be modified. +#. Correct example Grub configuration (with ``module2`` image paths set): + + .. code-block:: console + :emphasize-lines: 10,11,12 + + menuentry 'ACRN hypervisor Hybrid Scenario' --id ACRN_Hybrid --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' { + recordfail + load_video + gfxmode $linux_gfx_mode + insmod gzio + insmod part_gpt + insmod ext2 + echo 'Loading hypervisor Hybrid scenario ...' + multiboot2 /boot/acrn.bin + module2 /boot/zephyr.elf Zephyr_ElfImage + module2 /boot/bzImage Linux_bzImage + module2 /boot/ACPI_VM0.bin ACPI_VM0 + + } + #. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu visible when booting: @@ -143,6 +163,9 @@ Perform the following to update Ubuntu GRUB so it can boot the hypervisor and lo Hybrid Scenario Startup Check ***************************** +#. Connect to the serial port as described in this :ref:`Connecting to the + serial port ` tutorial. + #. Use these steps to verify that the hypervisor is properly running: a. Log in to the ACRN hypervisor shell from the serial console. diff --git a/doc/tutorials/using_partition_mode_on_nuc.rst b/doc/tutorials/using_partition_mode_on_nuc.rst index 2de64a402..48acc86c3 100644 --- a/doc/tutorials/using_partition_mode_on_nuc.rst +++ b/doc/tutorials/using_partition_mode_on_nuc.rst @@ -18,13 +18,12 @@ Validated Versions ****************** - Ubuntu version: **18.04** -- ACRN hypervisor tag: **v2.4** -- ACRN kernel tag: **v2.4** +- ACRN hypervisor tag: **v2.6** Prerequisites ************* -* `Intel Whiskey Lake `_ +* `Intel NUC Kit NUC11TNBi5 `_. * NVMe disk * SATA disk * Storage device with USB interface (such as USB Flash @@ -43,28 +42,14 @@ Prerequisites Update Kernel Image and Modules of Pre-Launched VM ************************************************** -#. On your development workstation, clone the ACRN kernel source tree, and - build the Linux kernel image that will be used to boot the pre-launched VMs: +#. On the local Ubuntu target machine, find the kernel file, + copy to your (``/boot`` directory) and name the file ``bzImage``. + The ``uname -r`` command returns the kernel release, for example, + ``4.15.0-55-generic``): .. code-block:: none - $ git clone https://github.com/projectacrn/acrn-kernel.git - Cloning into 'acrn-kernel'... - ... - $ cd acrn-kernel - $ cp kernel_config_uos .config - $ make olddefconfig - scripts/kconfig/conf --olddefconfig Kconfig - # - # configuration written to .config - # - $ make - $ make modules_install INSTALL_MOD_PATH=out/ - - The last two commands build the bootable kernel image as - ``arch/x86/boot/bzImage``, and loadable kernel modules under the ``./out/`` - folder. Copy these files to a removable disk for installing on the - Intel NUC later. + $ sudo cp /boot/vmlinuz-$(uname -r) /boot/bzImage #. 
The current ACRN logical partition scenario implementation requires a multi-boot capable bootloader to boot both the ACRN hypervisor and the @@ -75,6 +60,7 @@ Update Kernel Image and Modules of Pre-Launched VM default, the GRUB bootloader is installed on the EFI System Partition (ESP) that's used to bootstrap the ACRN hypervisor. + #. After installing the Ubuntu OS, power off the Intel NUC. Attach the SATA disk and storage device with the USB interface to the Intel NUC. Power on the Intel NUC and make sure it boots the Ubuntu OS from the NVMe SSD. Plug in @@ -90,20 +76,14 @@ Update Kernel Image and Modules of Pre-Launched VM # Mount the Ubuntu OS root filesystem on the SATA disk $ sudo mount /dev/sda3 /mnt - $ sudo cp -r /lib/modules/* /mnt/lib/modules + $ sudo cp -r /lib/modules/* /mnt/lib/modules $ sudo umount /mnt # Mount the Ubuntu OS root filesystem on the USB flash disk $ sudo mount /dev/sdb3 /mnt - $ sudo cp -r /lib/modules/* /mnt/lib/modules + $ sudo cp -r /lib/modules/* /mnt/lib/modules $ sudo umount /mnt -#. Copy the bootable kernel image to the /boot directory: - .. code-block:: none - - $ sudo cp /bzImage /boot/ - -.. rst-class:: numbered-step Update ACRN Hypervisor Image **************************** @@ -141,18 +121,18 @@ Update ACRN Hypervisor Image #. Clone the ACRN source code and configure the build options. - Refer to :ref:`getting-started-building` to set up the ACRN build + Refer to :ref:`gsg` to set up the ACRN build environment on your development workstation. - Clone the ACRN source code and check out to the tag v2.4: + Clone the ACRN source code and check out to the tag v2.6: .. code-block:: none $ git clone https://github.com/projectacrn/acrn-hypervisor.git $ cd acrn-hypervisor - $ git checkout v2.4 + $ git checkout v2.6 -#. Check the ``pci_devs`` sections in ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` +#. Check the ``pci_devs`` sections in ``misc/config_tools/data/nuc11tnbi5/logical_partition.xml`` for each pre-launched VM to ensure you are using the right PCI device BDF information (as reported by ``lspci -vv``). If you need to make changes to this file, create a copy of it and use it subsequently when building ACRN (``SCENARIO=/path/to/newfile.xml``). @@ -161,7 +141,7 @@ Update ACRN Hypervisor Image .. code-block:: none - $ make hypervisor BOARD=whl-ipc-i7 SCENARIO=logical_partition RELEASE=0 + $ make hypervisor BOARD=nuc11tnbi5 SCENARIO=logical_partition RELEASE=0 .. note:: The ``acrn.bin`` will be generated to ``./build/hypervisor/acrn.bin``. @@ -217,14 +197,33 @@ Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image Update the UUID (``--set``) and PARTUUID (``root=`` parameter) (or use the device node directly) of the root partition (e.g.``/dev/nvme0n1p2). Hint: use ``sudo blkid``. The kernel command-line arguments used to boot the pre-launched VMs is ``bootargs`` - in the ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` + in the ``misc/config_tools/data/nuc11tnbi5/logical_partition.xml`` The ``module2 /boot/bzImage`` param ``XXXXXX`` is the bzImage tag and must exactly match the ``kern_mod`` - in the ``misc/config_tools/data/whl-ipc-i7/logical_partition.xml`` file. + in the ``misc/config_tools/data/nuc11tnbi5/logical_partition.xml`` file. The module ``/boot/ACPI_VM0.bin`` is the binary of ACPI tables for pre-launched VM0, the parameter ``ACPI_VM0`` is VM0's ACPI tag and should not be modified. 
The module ``/boot/ACPI_VM1.bin`` is the binary of ACPI tables for pre-launched VM1 the parameter ``ACPI_VM1`` is VM1's ACPI tag and should not be modified. +#. Correct example Grub configuration (with ``module2`` image paths set): + + .. code-block:: console + + menuentry 'ACRN hypervisor Logical Partition Scenario' --id ACRN_Logical_Partition --class ubuntu --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-e23c76ae-b06d-4a6e-ad42-46b8eedfd7d3' { + recordfail + load_video + gfxmode $linux_gfx_mode + insmod gzio + insmod part_gpt + insmod ext2 + search --no-floppy --fs-uuid --set 9bd58889-add7-410c-bdb7-1fbc2af9b0e1 + echo 'Loading hypervisor logical partition scenario ...' + multiboot2 /boot/acrn.bin root=PARTUUID="e515916d-aac4-4439-aaa0-33231a9f4d83" + module2 /boot/bzImage Linux_bzImage + module2 /boot/ACPI_VM0.bin ACPI_VM0 + module2 /boot/ACPI_VM1.bin ACPI_VM1 + } + #. Modify the ``/etc/default/grub`` file as follows to make the GRUB menu visible when booting: @@ -253,6 +252,8 @@ Update Ubuntu GRUB to Boot Hypervisor and Load Kernel Image Logical Partition Scenario Startup Check **************************************** +#. Connect to the serial port as described in this :ref:`Connecting to the + serial port ` tutorial. #. Use these steps to verify that the hypervisor is properly running: diff --git a/doc/tutorials/using_serial_port.rst b/doc/tutorials/using_serial_port.rst index 57f706863..4a405bf2f 100644 --- a/doc/tutorials/using_serial_port.rst +++ b/doc/tutorials/using_serial_port.rst @@ -94,16 +94,16 @@ Convert the BDF to Hex Format Refer this :ref:`hv-parameters` to change bdf 01:00.1 to Hex format: 0x101; then add it to the grub menu: -.. Note:: +.. code-block:: bash - multiboot2 /boot/acrn.bin root=PARTUUID="b1bebafc-2b06-43e2-bf6a-323337daebc0“ uart=bdf@0x101 + multiboot2 /boot/acrn.bin root=PARTUUID="b1bebafc-2b06-43e2-bf6a-323337daebc0" uart=bdf@0x101 .. Note:: - uart=bdf@0x100 for port 1 + ``uart=bdf@0x100`` for port 1 - uart=bdf@0x101 for port 2 + ``uart=bdf@0x101`` for port 2 - uart=bdf@0x101 is preferred for the industry scenario; otherwise, it can't + ``uart=bdf@0x101`` is preferred for the industry scenario; otherwise, it can't input in the Hypervisor console after the Service VM boots up. There is no such limitation for the hybrid or hybrid_rt scenarios. diff --git a/doc/tutorials/using_vxworks_as_uos.rst b/doc/tutorials/using_vxworks_as_uos.rst index eed4259b1..0a1505d60 100644 --- a/doc/tutorials/using_vxworks_as_uos.rst +++ b/doc/tutorials/using_vxworks_as_uos.rst @@ -92,7 +92,7 @@ Steps for Using VxWorks as User VM You now have a virtual disk image with bootable VxWorks in ``VxWorks.img``. -#. Follow :ref:`install-ubuntu-Service VM-NVMe` to boot the ACRN Service VM. +#. Follow :ref:`gsg` to boot the ACRN Service VM. #. Boot VxWorks as User VM. diff --git a/doc/tutorials/using_windows_as_uos.rst b/doc/tutorials/using_windows_as_uos.rst index ec0b330c8..335157bdc 100644 --- a/doc/tutorials/using_windows_as_uos.rst +++ b/doc/tutorials/using_windows_as_uos.rst @@ -104,7 +104,7 @@ Prepare the Script to Create an Image #for memsize setting mem_size=4096M acrn-dm -A -m $mem_size -s 0:0,hostbridge -s 1:0,lpc -l com1,stdio \ - -s 2,passthru,0/2/0,gpu \ + -s 2,passthru,0/2/0 \ -s 8,virtio-net,tap0 \ -s 4,virtio-blk,/home/acrn/work/win10-ltsc.img -s 5,ahci,cd:/home/acrn/work/Windows10.iso \ @@ -262,7 +262,7 @@ Explanation for acrn-dm Popular Command Lines .. note:: Use these acrn-dm command line entries according to your real requirements. 
-* ``-s 2,passthru,0/2/0,gpu``: +* ``-s 2,passthru,0/2/0``: This is GVT-d to passthrough the VGA controller to Windows. You may need to change 0/2/0 to match the bdf of the VGA controller on your platform. diff --git a/doc/tutorials/using_zephyr_as_uos.rst b/doc/tutorials/using_zephyr_as_uos.rst index 36f4004ff..7e16bf493 100644 --- a/doc/tutorials/using_zephyr_as_uos.rst +++ b/doc/tutorials/using_zephyr_as_uos.rst @@ -92,7 +92,7 @@ Steps for Using Zephyr as User VM the ACRN Service VM, then you will need to transfer this image to the ACRN Service VM (via, e.g, a USB drive or network) -#. Follow :ref:`install-ubuntu-Service VM-NVMe` +#. Follow :ref:`gsg` to boot "The ACRN Service OS" based on Ubnuntu OS (ACRN tag: v2.2) diff --git a/doc/tutorials/waag-secure-boot.rst b/doc/tutorials/waag-secure-boot.rst index fc747687b..78712834b 100644 --- a/doc/tutorials/waag-secure-boot.rst +++ b/doc/tutorials/waag-secure-boot.rst @@ -22,7 +22,7 @@ the OEM can generate their own PK. Here we show two ways to generate a PK: ``openssl`` and Microsoft tools. -Generate PK Using Openssl +Generate PK Using OpenSSL ========================= - Generate a Self-Signed Certificate as PK from a new key using the @@ -139,7 +139,7 @@ which we'll summarize below. (CSP) For the detailed information of each Microsoft Cryptographic Service - Provider, please check the `Microsoft CRP document + Provider, check the `Microsoft CRP document `_ Here, we chose "Microsoft Strong Cryptographic Provider" for example:: @@ -393,12 +393,12 @@ Download KEK and DB From Microsoft KEK (Key Exchange Key): `Microsoft Corporation KEK CA 2011 `_: - allows updates to db and dbx. + allows updates to DB and DBX. DB (Allowed Signature database): `Microsoft Windows Production CA 2011 `_: - This CA in the Signature Database (db) allows Windows to boot. + This CA in the Signature Database (DB) allows Windows to boot. `Microsoft Corporation UEFI CA 2011 `_: @@ -407,25 +407,28 @@ DB (Allowed Signature database): Compile OVMF With Secure Boot Support ************************************* +.. code-block:: bash - git clone https://github.com/projectacrn/acrn-edk2.git + git clone https://github.com/projectacrn/acrn-edk2.git - cd acrn-edk2 + cd acrn-edk2 - git checkout -b ovmf b64fe247c434e2a4228b9804c522575804550f82 + git checkout -b ovmf b64fe247c434e2a4228b9804c522575804550f82 - git submodule update --init CryptoPkg/Library/OpensslLib/openssl + git submodule update --init CryptoPkg/Library/OpensslLib/openssl - source edksetup.sh - make -C BaseTools + source edksetup.sh + make -C BaseTools - vim Conf/target.txt +Edit the ``Conf/target.txt`` file and set these values:: - ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc - TARGET_ARCH = X64 - TOOL_CHAIN_TAG = GCC5 + ACTIVE_PLATFORM = OvmfPkg/OvmfPkgX64.dsc + TARGET_ARCH = X64 + TOOL_CHAIN_TAG = GCC5 - build -DFD_SIZE_2MB -DDEBUG_ON_SERIAL_PORT=TRUE -DSECURE_BOOT_ENABLE +Then continue doing the build:: + + build -DFD_SIZE_2MB -DDEBUG_ON_SERIAL_PORT=TRUE -DSECURE_BOOT_ENABLE Notes: diff --git a/doc/user-guides/acrn-dm-parameters.rst b/doc/user-guides/acrn-dm-parameters.rst index 8ff1569ee..e3cecd459 100644 --- a/doc/user-guides/acrn-dm-parameters.rst +++ b/doc/user-guides/acrn-dm-parameters.rst @@ -55,27 +55,6 @@ Here are descriptions for each of these ``acrn-dm`` command line parameters: ---- -``-G``, ``--gvtargs `` - ACRN implements GVT-g for graphics virtualization (aka AcrnGT). This - option allows you to set some of its parameters. 
- - GVT_args format: ``low_gm_sz high_gm_sz fence_sz`` - - Where: - - - ``low_gm_sz``: GVT-g aperture size, unit is MB - - ``high_gm_sz``: GVT-g hidden gfx memory size, unit is MB - - ``fence_sz``: the number of fence registers - - Example:: - - -G "10 128 6" - - sets up 10Mb for GVT-g aperture, 128M for GVT-g hidden memory, and 6 - fence registers. - ----- - ``-h``, ``--help`` Show a summary of commands. diff --git a/doc/user-guides/kernel-parameters.rst b/doc/user-guides/kernel-parameters.rst index 1de671682..211920519 100644 --- a/doc/user-guides/kernel-parameters.rst +++ b/doc/user-guides/kernel-parameters.rst @@ -339,93 +339,3 @@ relevant for configuring or debugging ACRN-based systems. settings to satisfy their particular workloads in Service VM, the ``hugepages`` and ``hugepagesz`` parameters could be redefined in GRUB menu to override the settings from ACRN config tool. - -Intel GVT-g (AcrnGT) Parameters -******************************* - -This table gives an overview of all the Intel GVT-g parameters that are -available to tweak the behavior of the graphics sharing (Intel GVT-g, aka -AcrnGT) capabilities in ACRN. The `GVT-g-kernel-options`_ -section below has more details on a few select parameters. - -.. list-table:: - :header-rows: 1 - :widths: 10,10,50,30 - - * - Parameter - - Used in Service VM or User VM - - Description - - Usage example - - * - i915.enable_gvt - - Service VM - - Enable Intel GVT-g graphics virtualization support in the host - - :: - - i915.enable_gvt=1 - - * - i915.nuclear_pageflip - - Service VM,User VM - - Force enable atomic functionality on platforms that don't have full support yet. - - :: - - i915.nuclear_pageflip=1 - - * - i915.enable_guc - - Service VM - - Enable GuC load for HuC load. - - :: - - i915.enable_guc=0x02 - - * - i915.enable_guc - - User VM - - Disable GuC - - :: - - i915.enable_guc=0 - - * - i915.enable_hangcheck - - User VM - - Disable check GPU activity for detecting hangs. - - :: - - i915.enable_hangcheck=0 - - * - i915.enable_fbc - - User VM - - Enable frame buffer compression for power savings - - :: - - i915.enable_fbc=1 - -.. _GVT-g-kernel-options: - -GVT-g (AcrnGT) Kernel Options Details -===================================== - -This section provides additional information and details on the kernel command -line options that are related to AcrnGT. - -i915.enable_gvt ---------------- - -This option enables support for Intel GVT-g graphics virtualization -support in the host. By default, it's not enabled, so we need to add -``i915.enable_gvt=1`` in the Service VM kernel command line. This is a Service -OS only parameter, and cannot be enabled in the User VM. - -i915.enable_hangcheck -===================== - -This parameter enable detection of a GPU hang. When enabled, the i915 -will start a timer to check if the workload is completed in a specific -time. If not, i915 will treat it as a GPU hang and trigger a GPU reset. - -In AcrnGT, the workload in Service VM and User VM can be set to different -priorities. If Service VM is assigned a higher priority than the User VM, the User VM's -workload might not be able to run on the HW on time. This may lead to -the guest i915 triggering a hangcheck and lead to a guest GPU reset. -This reset is unnecessary so we use ``i915.enable_hangcheck=0`` to -disable this timeout check and prevent guest from triggering unnecessary -GPU resets. 
diff --git a/misc/config_tools/schema/config.xsd b/misc/config_tools/schema/config.xsd index 367b5c92f..91ebcaa7d 100644 --- a/misc/config_tools/schema/config.xsd +++ b/misc/config_tools/schema/config.xsd @@ -400,6 +400,11 @@ Refer SDM 17.19.2 for details, and use with caution. Specify memory information for Service and User VMs. + + + Specify the VM vCPU priority for scheduling. + + General information for host kernel, boot diff --git a/misc/config_tools/schema/types.xsd b/misc/config_tools/schema/types.xsd index 3de5dc004..f00c7eb25 100644 --- a/misc/config_tools/schema/types.xsd +++ b/misc/config_tools/schema/types.xsd @@ -101,7 +101,7 @@ higher value (lower severity) are discarded. - Three scheduler options are supported: + Four scheduler options are supported: - ``SCHED_NOOP``: The NOOP (No-Operation) scheduler means there is a strict 1 to 1 mapping between vCPUs and pCPUs. @@ -113,6 +113,8 @@ higher value (lower severity) are discarded. earliest effective virtual time. *TODO: BVT scheduler will be built on top of a prioritized scheduling mechanism, i.e. higher priority threads get scheduled first, and same priority tasks are scheduled per BVT.* +- ``SCHED_PRIO``: The priority based scheduler. vCPU scheduling will be based on + their pre-configured priorities. Read more about the available scheduling options in :ref:`cpu_sharing`. @@ -120,6 +122,21 @@ Read more about the available scheduling options in :ref:`cpu_sharing`. + + + + + + + Two priorities are supported for priority based scheduler: + +- ``PRIO_LOW``: low priority for vCPU scheduling. +- ``PRIO_HIGH``: high priority for vCPU scheduling. + + + + +
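To illustrate the priority based scheduler added above, here is a minimal sketch of the scenario XML fragments involved, assuming the ``SCHEDULER`` element of the scenario ``FEATURES`` section and a hypothetical per-VM ``priority`` element (the actual tag name is defined by ``config.xsd``; check the generated scenario file for your board):

.. code-block:: xml

   <!-- Hypervisor level: select the priority based scheduler -->
   <SCHEDULER>SCHED_PRIO</SCHEDULER>

   <!-- Per-VM: hypothetical priority element name, for illustration only -->
   <vm id="0">
     <priority>PRIO_HIGH</priority>
   </vm>
   <vm id="1">
     <priority>PRIO_LOW</priority>
   </vm>

With ``SCHED_PRIO``, vCPUs are scheduled according to these pre-configured priorities, so ``PRIO_HIGH`` vCPUs are served ahead of ``PRIO_LOW`` vCPUs competing for the same pCPU.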