Refine the VM type sanity check to rely on the VM UUID.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Remove configurable="0" for pci_devs of the pre-launched VM so that the
passthru devices can be configured from the web UI.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Parse cpu_affinity from the launch config xmls and generate '--cpu_affinity'
as an acrn-dm argument.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
1. Remove the cpu_sharing item from the launch config xmls.
2. Add cpu_affinity to the launch config xmls so that post-launched VM CPU
affinity can be configured from the web UI.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Also clear all accessed bits in the page tables when the VM does wbinvd,
so that the next wbinvd will benefit from it.
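A minimal standalone model of that idea (illustrative only, not the
hypervisor's actual EPT walker; PAGE_ACCESSED is the EPT accessed flag):

#include <stdint.h>
#include <stddef.h>

#define PAGE_ACCESSED  (1UL << 8)   /* EPT accessed flag, bit 8 */

/* Flush only pages the guest actually touched, then clear the accessed
 * bit so the next emulated wbinvd only revisits pages accessed since
 * this one. */
static size_t flush_accessed_entries(uint64_t *entries, size_t count)
{
    size_t flushed = 0U;

    for (size_t i = 0U; i < count; i++) {
        if ((entries[i] & PAGE_ACCESSED) != 0UL) {
            /* real code would flush the backing page's cache lines here */
            entries[i] &= ~PAGE_ACCESSED;
            flushed++;
        }
    }
    return flushed;
}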
Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Change the tag name from "vcpu_affinity" to "cpu_affinity" in all the per
board scenario xml files, and update the descriptions.
Tracked-On: #4641
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The hypervisor is replacing the vm_configs[].vcpu_affinity[] array with
cpu_affinity_bitmap, whose meaning changes somewhat as well.
This patch changes the names in generated vm_configuration.c:
-.vcpu_affinity = VMx_CONFIG_VCPU_AFFINITY,
+.cpu_affinity_bitmap = VMx_CONFIG_CPU_AFFINITY,
Changes in vm_configuration.h:
-#define VMx_CONFIG_VCPU_AFFINITY {AFFINITY_CPU(xU), AFFINITY_CPU(xU)}
+#define VMx_CONFIG_CPU_AFFINITY (AFFINITY_CPU(xU) | AFFINITY_CPU(xU))
Also, don't generate the default assignment of vcpu_num in vm_configuration.c,
because the hypervisor can now easily calculate it from cpu_affinity_bitmap
(see the sketch below).
Accordingly, in the scenario xml files, the tag "vcpu_affinity" is changed
to "cpu_affinity".
Tracked-On: #4641
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The user can specify vCPU affinity through an acrn-dm command line
argument. Examples of the command line:
3 PCPUs: 1/2/3
--cpu_affinity 1-3
5 PCPUs: 2/3/6/7/8
--cpu_affinity 2,3,6-8
8 PCPUs: 2/3/6/7/9/10/11/12
--cpu_affinity 2,3,6-7,9,10-12
The specified pCPUs must be included in the guest VM's statically
defined vm_config[].cpu_affinity_bitmap.
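A standalone sketch of that containment rule (names are illustrative, not
the acrn-dm or hypervisor code): the pCPU set requested on the command line
must be a non-empty subset of the statically configured bitmap.

#include <stdbool.h>
#include <stdint.h>

/* requested: pCPU mask built from --cpu_affinity, e.g. 1-3 -> 0x0E
 * configured: vm_config[].cpu_affinity_bitmap for this VM */
static bool cpu_affinity_is_valid(uint64_t requested, uint64_t configured)
{
    return (requested != 0UL) && ((requested & ~configured) == 0UL);
}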
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
- add a new member cpu_affinity to struct acrn_create_vm, so that acrn-dm
is able to assign CPU affinity through the HC_CREATE_VM hypercall.
- if vm_create.cpu_affinity is zero, the hypervisor launches the VM with the
statically configured CPU affinity.
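An illustrative sketch of the new member and the zero-means-default rule
(the field layout and names below are simplified, not the authoritative
struct acrn_create_vm definition):

#include <stdint.h>

struct create_vm_sketch {
    uint8_t  uuid[16];
    uint64_t cpu_affinity;   /* new member; 0 means "use the static config" */
    /* ... remaining fields omitted ... */
};

/* Resolve the affinity the VM is actually launched with. */
static uint64_t effective_cpu_affinity(const struct create_vm_sketch *req,
                                       uint64_t configured_bitmap)
{
    return (req->cpu_affinity != 0UL) ? req->cpu_affinity : configured_bitmap;
}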
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the vcpu_affinity[] array fixes the vCPU to pCPU mapping. The new
cpu_affinity_bitmap doesn't explicitly specify this mapping; instead, it
implicitly assumes that vCPU0 maps to the pCPU with the lowest pCPU ID,
vCPU1 maps to the second lowest pCPU ID, and so on.
This makes it possible for a post-launched VM to run its vCPUs on only a
subset of these pCPUs, rather than all of them.
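A standalone sketch of that implicit mapping (illustrative only): vCPU i
gets the i-th lowest pCPU ID set in cpu_affinity_bitmap.

#include <stdint.h>
#include <stdio.h>

static int pcpu_of_vcpu(uint64_t cpu_affinity_bitmap, unsigned int vcpu_id)
{
    unsigned int found = 0U;

    for (int pcpu = 0; pcpu < 64; pcpu++) {
        if ((cpu_affinity_bitmap & (1UL << pcpu)) != 0UL) {
            if (found == vcpu_id) {
                return pcpu;
            }
            found++;
        }
    }
    return -1;   /* vcpu_id exceeds the number of pCPUs in the bitmap */
}

int main(void)
{
    uint64_t affinity = (1UL << 2) | (1UL << 3) | (1UL << 6);   /* pCPUs 2,3,6 */

    for (unsigned int v = 0U; v < 3U; v++) {
        printf("vCPU%u -> pCPU%d\n", v, pcpu_of_vcpu(affinity, v));
    }
    return 0;
}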
acrn-dm may launch post-launched VMs with the current approach: indicate the
VM UUID, and the hypervisor launches all vCPUs on the pCPUs that are masked
in cpu_affinity_bitmap.
Alternatively, acrn-dm can choose to launch the VM on a subset of the pCPUs
defined in cpu_affinity_bitmap; in that case, acrn-dm must specify the
subset of pCPUs in the CREATE_VM hypercall.
Additionally, with this change, a guest's vcpu_num can be easily calculated
from cpu_affinity_bitmap, so don't assign vcpu_num in vm_configuration.c.
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The hypervisor reports VM configuration information to the SOS, which can be
used to dynamically allocate vCPU affinity.
The Service OS can get the vm_configs in the following order:
1. Call the platform_info HC (with vm_configs_addr set to 0) to get max_vms
and vm_config_entry_size.
2. Allocate memory for the acrn_vm_config array based on the number of VMs
and the entry size obtained in step 1.
3. Call the platform_info HC again to collect the VM configurations.
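A sketch of that two-call flow from the Service OS side. The structure and
the call_platform_info_hc() wrapper below are hypothetical stand-ins; only
the calling sequence mirrors the steps above.

#include <stdint.h>
#include <stdlib.h>

struct platform_info_sketch {
    uint16_t max_vms;               /* filled in by the hypervisor           */
    uint16_t vm_config_entry_size;  /* size of one acrn_vm_config entry      */
    uint64_t vm_configs_addr;       /* 0: query sizes only; else: out buffer */
};

/* hypothetical wrapper around the platform_info hypercall */
int call_platform_info_hc(struct platform_info_sketch *info);

void *fetch_vm_configs(struct platform_info_sketch *info)
{
    void *vm_configs;

    /* step 1: query max_vms and vm_config_entry_size */
    info->vm_configs_addr = 0UL;
    if (call_platform_info_hc(info) != 0)
        return NULL;

    /* step 2: allocate the acrn_vm_config array */
    vm_configs = calloc(info->max_vms, info->vm_config_entry_size);
    if (vm_configs == NULL)
        return NULL;

    /* step 3: call again to collect the VM configurations */
    info->vm_configs_addr = (uint64_t)(uintptr_t)vm_configs;
    if (call_platform_info_hc(info) != 0) {
        free(vm_configs);
        return NULL;
    }
    return vm_configs;
}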
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Since we no longer need to print error messages when copy_to/from_gpa()
fails, in many cases we can simplify the function return handling.
In the following example, my fix could change the 'ret' value from
the original '-1' to the actual errno returned from copy_to_gpa(), but
this is valid. Ideally we would replace all '-1' values with the actual errno.
-	if (copy_to_gpa() < 0) {
-		pr_err("error messages");
-		ret = -1;
-	} else {
-		ret = 0;
-	}
+	ret = copy_to_gpa();
- In most cases, 'ret' is declared with a default value of 0 or -1, so the
redundant assignment statements can be removed.
- Replace white spaces with tabs.
Tracked-On: #3854
Signed-off-by: Zide Chen <zide.chen@intel.com>
When dynamically creating, adding, or deleting VMs, the acrn config app
assigns VM IDs in the order pre-launched VMs, then SOS VM, then post-launched
VMs, to stay consistent with the VM order of vm_configs in the VM
configuration. When dynamically creating launch scripts, the acrn config app
lets users specify the VM ID before the launch setting.
Tracked-On: #4641
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
There are no board/scenario/uos_launcher attributes for newly created
scenario and launch settings, which causes an xml mismatch error
when generating configuration files or launch scripts.
This patch adds the attributes for scenario and launch
settings.
Tracked-On: #4641
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Add max VM count check when generating scenario XML file.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Refine some items for configurable="0".
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Add 2 UUIDs for post-launched Standard VM.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Add SOS_IDLE to the SOS cmdline so that the "idle=halt" parameter is needed
only when CPU sharing is enabled;
Tracked-On: #4329
Signed-off-by: Victor Sun <victor.sun@intel.com>
The "idle=halt" parameter in the SOS cmdline is only needed when CPU sharing
is enabled; otherwise it will impact SOS power.
Tracked-On: #4329
Signed-off-by: Victor Sun <victor.sun@intel.com>
The SCENARIO XML file already includes the RELEASE or DEBUG info, so if
RELEASE is not specified on the make command line, the Makefile should not
override the RELEASE info in the SCENARIO XML. If RELEASE is specified on the
make command line, then the RELEASE info in the SCENARIO XML can be
overridden by the make command.
The patch also fixes an issue so that the correct board defconfig is picked
up when building the hypervisor from TARGET_DIR;
Tracked-On: #4688
Signed-off-by: Victor Sun <victor.sun@intel.com>
Pre-launched VMs need the vbar_base values pre-populated
for pass-thru PCI devices.
This patch moves the pci parser logic from board_cfg_gen into a library
so that the scenario_cfg_gen scripts can use the pci parser routines while
generating the pci_dev file for the logical partition scenario.
Tracked-On: #4666
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Remove the sdc2 scenario since the VM launch requirement under this scenario
will be satisfied by the industry scenario.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Leave HV_RAM_SIZE/HV_RAM_START blank by default so that the config tool will
generate values for them; otherwise, they are set by the UI.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
1. Set default values for the HV template.
2. Refine vuartx in the template VM setting xmls.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
1. Modify epc_section to configurable="0".
2. Add 'idle=halt' to the SOS bootargs to improve performance.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
The UI got a syntax warning while loading the new logical partition xml; add
the missing '</clos>' tag for it.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Remove the sdc2 scenario since the VM launch requirement under this scenario
can now be satisfied by the industry scenario;
Tracked-On: #4661
Signed-off-by: Victor Sun <victor.sun@intel.com>
In the industry scenario, the hypervisor will support 1 post-launched RT VM,
1 post-launched Kata VM, and up to 5 post-launched standard VMs;
Tracked-On: #4661
Signed-off-by: Victor Sun <victor.sun@intel.com>
gcc in the latest Clear Linux enables the pointer-sign warning by
default, which breaks the acrn-crashlog build.
Fix the acrn-crashlog build by converting to the correct
pointer type.
Tracked-On: #4636
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
The patch enables the CPU sharing feature by default; the default scheduler
is set to SCHED_BVT;
Tracked-On: #4661
Signed-off-by: Victor Sun <victor.sun@intel.com>
The template xmls serve as the default VM settings for dynamically
adding VMs.
Tracked-On: #4641
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
The config app implements interfaces to dynamically:
create new scenario settings based on templates;
create new launch settings based on templates;
add or delete VMs in scenario settings;
add or delete VMs in launch settings;
load default scenario or launch settings.
Tracked-On: #4641
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
To stay aligned with the hv source code changes, the config tool makes the
changes below:
1. Remove UUID from the scenario config files.
2. Remove severity from the scenario config files.
3. Use vm type instead of the load order type.
4. Use the mapped UUID database for launch VM script configurations.
5. Unify the ui_entry_api for '--out'.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Use vm_type instead of the combination of load_type/uuid/severity in the
config xmls.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Since it doesn't depend on the scenario, there are SOS/pre-launched VMs
in the config xmls: emulate a vhostbridge for the SOS VM and specify the
pass-thru PCI devices for the pre-launched VM.
Add support to parse pci_devs for the pre-launched VM.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
Add a pass-thru PCI device section for the pre-launched VM in the config xmls.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
To weaken the role of 'scenario' in the config tool, remove the 'scenario'
dependency from the config tool.
Tracked-On: #4641
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
Acked-by: Terry Zou <terry.zou@intel.com>
The SCENARIO xml file already includes all the information of the KCONFIG
file; using the KCONFIG_FILE and SCENARIO_FILE parameters at the same time
would introduce conflicts, so this case should be invalid.
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
CONFIG_MAX_KATA_VM_NUM is a scenario-specific configuration, so it is better
to put the macro directly in the scenario folder, replacing the Kconfig item
in the Kconfig file, which is meant to work for all scenarios;
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Basically, an ACRN scenario is a configuration name for a specific usage. By
giving a scenario name, ACRN loads the corresponding VM configurations to
build the hypervisor. But customers might have their own scenario names, so
changing the scenario type from choice to string is friendlier to them, since
no Kconfig source file change will be needed.
With this change, CONFIG_$(SCENARIO) will no longer exist in the kconfig file
and is replaced by CONFIG_SCENARIO, so the Makefile needs to be changed
accordingly;
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
The pci_dev config settings of the SOS are the same, so move the config
interface from vm_configurations.c into the CONFIG_SOS_VM macro;
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the VM uuid and severity are initialized separately in the
vm_config struct; the developer needs to take care of both items carefully,
otherwise the hypervisor will have trouble with the configurations.
Given that the VM load_order/uuid and severity are tightly bound, the
patch merges these three settings into one macro so that the developer will
have a simple interface to configure in the vm_config struct.
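An illustrative sketch of the merged-macro approach (the macro and field
names below are not the hypervisor's exact definitions): one macro fills the
three tightly bound fields so a vm_config entry cannot set them
inconsistently.

/* hypothetical macro; the real one in the vm_config headers may differ */
#define DECLARE_VM_TYPE(order, vm_uuid, sev) \
	.load_order = (order), .uuid = (vm_uuid), .severity = (sev)

/* usage in vm_configurations.c, e.g.:
 *   { DECLARE_VM_TYPE(SOS_VM, SOS_VM_UUID, SEVERITY_SOS), ... },
 */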
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current logic always puts hpa2 above GPA 4G, which is incorrect. The GPA
start of hpa2 needs to be set right after hpa1 when the hpa1 size is less
than 2G;
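A standalone sketch of the corrected placement (constant names are
illustrative, and it assumes hpa1 is mapped starting at GPA 0): when hpa1 is
smaller than 2G, hpa2's guest-physical start follows hpa1 directly;
otherwise it keeps the previous above-4G placement.

#include <stdint.h>

#define MEM_2G   (2UL * 1024UL * 1024UL * 1024UL)
#define GPA_4G   (4UL * 1024UL * 1024UL * 1024UL)

static uint64_t hpa2_gpa_start(uint64_t hpa1_size)
{
    return (hpa1_size < MEM_2G) ? hpa1_size : GPA_4G;
}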
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
Maintain a per-pCPU array of vCPUs (struct acrn_vcpu *vcpu_array[CONFIG_MAX_VM_NUM]).
One VM cannot have multiple vCPUs sharing one pCPU, so we can utilize this property
and use the containing VM's vm_id as the index into the vCPU array:
In create_vcpu(), we simply do:
per_cpu(vcpu_array, pcpu_id)[vm->vm_id] = vcpu;
In offline_vcpu():
per_cpu(vcpu_array, pcpuid_from_vcpu(vcpu))[vcpu->vm->vm_id] = NULL;
So basically we use the containing VM's vm_id as the index into the vCPU array,
as well as the index of the posted interrupt IRQ/vector pair that is assigned
to this vCPU:
0: first vCPU and first posted interrupt IRQs/vector pair
(POSTED_INTR_IRQ/POSTED_INTR_VECTOR)
...
CONFIG_MAX_VM_NUM-1: last vCPU and last posted interrupt IRQs/vector pair
((POSTED_INTR_IRQ + CONFIG_MAX_VM_NUM - 1U)/(POSTED_INTR_VECTOR + CONFIG_MAX_VM_NUM - 1U))
The posted interrupt handler does the following:
translate the IRQ into a zero-based index of where the vCPU
is located in the vCPU list for the current pCPU. Once the
vCPU is found, wake up the waiting thread and record
this request as ACRN_REQUEST_EVENT.
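A sketch of that handler flow (simplified; per_cpu(), the wake-up call and
the request helper are shown with illustrative signatures):

/* Sketch only: translate the posted interrupt IRQ into a zero-based index
 * into this pCPU's vcpu_array, then wake the vCPU's thread and record the
 * event request. */
static void handle_posted_intr_sketch(uint32_t irq, uint16_t pcpu_id)
{
    uint16_t idx = (uint16_t)(irq - POSTED_INTR_IRQ);
    struct acrn_vcpu *vcpu = per_cpu(vcpu_array, pcpu_id)[idx];

    if (vcpu != NULL) {
        wake_thread(&vcpu->thread_obj);               /* wake the waiting thread */
        vcpu_make_request(vcpu, ACRN_REQUEST_EVENT);  /* record the request */
    }
}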
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>