doc: remove guest_flag and NVMX_ENABLE

Remove the global parameter NVMX_ENABLE from the user interface,
and remove guest_flag.

Tracked-On: #6690
Signed-off-by: hangliu1 <hang1.liu@linux.intel.com>
Authored by hangliu1 on 2022-02-27 22:08:24 -05:00; committed by acrnsi-robot
parent c2695f290d
commit 5e6341ee89
2 changed files with 5 additions and 9 deletions


@@ -118,14 +118,12 @@ with these settings:

 .. note:: Normally you'd use the ACRN Configurator GUI to edit the scenario XML file.
    The tool wasn't updated in time for the v2.5 release, so you'll need to manually edit
-   the ACRN scenario XML configuration file to edit the ``SCHEDULER``, ``NVMX_ENABLED``,
-   ``pcpu_id``, ``guest_flags``, ``legacy_vuart``, and ``console_vuart`` settings for
+   the ACRN scenario XML configuration file to edit the ``SCHEDULER``, ``pcpu_id``,
+   ``guest_flags``, ``legacy_vuart``, and ``console_vuart`` settings for
    the Service VM, as shown below.

 #. Configure system level features:

-   - Edit :option:`hv.FEATURES.NVMX_ENABLED` to ``y`` to enable nested virtualization
    - Edit :option:`hv.FEATURES.SCHEDULER` to ``SCHED_NOOP`` to disable CPU sharing

    .. code-block:: xml
@@ -148,14 +146,13 @@ with these settings:

          <CLOS_MASK>0xfff</CLOS_MASK>
          <CLOS_MASK>0xfff</CLOS_MASK>
        </RDT>
-      <NVMX_ENABLED>y</NVMX_ENABLED>
       <HYPERV_ENABLED>y</HYPERV_ENABLED>

 #. In each guest VM configuration:

-   - Edit :option:`vm.guest_flags.guest_flag` on the Service VM section and add ``GUEST_FLAG_NVMX_ENABLED``
+   - Edit :option:`vm.nested_virtualization_support` on the Service VM section and set it to ``y``
      to enable the nested virtualization feature on the Service VM.
-   - Edit :option:`vm.guest_flags.guest_flag` and add ``GUEST_FLAG_LAPIC_PASSTHROUGH`` to enable local
+   - Edit :option:`vm.lapic_passthrough` and set it to ``y`` to enable local
      APIC passthrough on the Service VM.
    - Edit :option:`vm.cpu_affinity.pcpu_id` to assign ``pCPU`` IDs to run the Service VM. If you are
      using a debug build and need the hypervisor console, don't assign
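To illustrate the change, the per-VM settings referenced by these instructions would replace the old ``guest_flags`` entries in the scenario XML. The fragment below is a sketch only: the element names are inferred from the ``:option:`` references in this commit, and the surrounding structure (``vm`` element, ``pcpu_id`` values) is a hypothetical example, not taken from an actual generated scenario file.

```xml
<!-- Old style (removed by this change): flags listed under guest_flags -->
<!--
<guest_flags>
  <guest_flag>GUEST_FLAG_NVMX_ENABLED</guest_flag>
  <guest_flag>GUEST_FLAG_LAPIC_PASSTHROUGH</guest_flag>
</guest_flags>
-->

<!-- New style (assumed element names, per the :option: references above) -->
<nested_virtualization_support>y</nested_virtualization_support>
<lapic_passthrough>y</lapic_passthrough>
<cpu_affinity>
  <pcpu_id>1</pcpu_id>  <!-- example pCPU assignment; leave pCPU 0 free for the hypervisor console on debug builds -->
  <pcpu_id>2</pcpu_id>
</cpu_affinity>
```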


@@ -34,8 +34,7 @@ into XML in the scenario file:

 #. In each Guest VM configuration:

-   - Edit :option:`vm.guest_flags.guest_flag` and add ``GUEST_FLAG_VCAT_ENABLED``
-     to enable the vCAT feature on the VM.
+   - Edit :option:`vm.virtual_cat_support` to ``y`` to enable the vCAT feature on the VM.
    - Edit :option:`vm.clos.vcpu_clos` to assign COS IDs to the VM.
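For the vCAT instructions, the corresponding scenario XML might look like the fragment below. This is a hedged sketch: ``virtual_cat_support`` is taken from the ``:option:`` reference in this commit, while the enclosing ``vm`` element and the specific ``vcpu_clos`` values are illustrative assumptions.

```xml
<vm id="1">  <!-- hypothetical guest VM entry -->
  <virtual_cat_support>y</virtual_cat_support>  <!-- replaces GUEST_FLAG_VCAT_ENABLED in guest_flags -->
  <clos>
    <vcpu_clos>0</vcpu_clos>  <!-- example COS ID for vCPU 0 -->
    <vcpu_clos>1</vcpu_clos>  <!-- example COS ID for vCPU 1 -->
  </clos>
</vm>
```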