This reverts commit ce9d5e8779.
Revert this patch until a better solution is ready for the SOS VM e820 and EFI
memmap mismatch issue.
Tracked-On: #5626
Signed-off-by: Victor Sun <victor.sun@intel.com>
Providing EFI support for the SOS can cause weird issues. For example, the
hypervisor works based on the E820 table, whereas the memory map from the EFI
table may not be aligned with the E820 table. The SOS kernel KASLR will first
try to find a random address for the extracted kernel image in the EFI table,
so it is possible that an address which is non-RAM per E820 is picked for the
extracted kernel image. This will make the kernel fail to boot.
This patch removes EFI support for the SOS by not passing struct boot_efi_info
to the SOS kernel zeropage. Instead, it reserves memory to store the RSDP table
for the SOS and passes the RSDP address through the SOS kernel zeropage so the
SOS can locate the ACPI tables.
The patch requires the SOS kernel version to be higher than 4.20, otherwise the
kernel might fail to find the RSDP.
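As a sketch, the hand-off could look like the following; the zeropage field is
the boot protocol's acpi_rsdp_addr (added in kernel 4.20), and the offset here
is illustrative rather than taken from the ACRN source:
    #include <stdint.h>
    #include <string.h>

    /* Illustrative offset of the 64-bit acpi_rsdp_addr zeropage field. */
    #define ZEROPAGE_ACPI_RSDP_ADDR 0x070U

    static void pass_rsdp_to_sos(uint8_t *zeropage, uint64_t rsdp_gpa)
    {
        /* Hand the reserved RSDP GPA to the kernel so a 4.20+ SOS can
         * locate the ACPI tables without struct boot_efi_info. */
        memcpy(zeropage + ZEROPAGE_ACPI_RSDP_ADDR, &rsdp_gpa,
               sizeof(rsdp_gpa));
    }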
Tracked-On: #5626
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch denies the Service VM access permission to device resources
owned by the hypervisor.
HV may own these devices: (1) the debug UART PCI device in the debug version;
(2) type 1 PCI devices when there are pre-launched VMs.
The current implementation exposes the MMIO/PIO resources of HV-owned devices
to the SOS; they should be removed from the SOS.
Tracked-On: #5615
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
- Refactor pci_dev_c.py to insert device information per VM
- Add a function to get an unused vBDF from bus:dev.func 00:00.0 to 00:1F.7
- Add PCI device variables to vm_configurations.c
- To pass the PCI vuart information from the tool, add pci_dev_num and
pci_devs initialization by the tool
- Change CONFIG_SOS_VM in hypervisor/include/arch/x86/vm_config.h to be
compatible with vm_configurations.c
Tracked-On: #5426
Signed-off-by: Yang, Yu-chu <yu-chu.yang@intel.com>
The old method of building the pre-launched VM vACPI tables in HV source code
is deprecated, so remove the related source code.
Tracked-On: #5266
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Previously we used a pre-defined structure, initialized by HV code, as the
vACPI tables for a pre-launched VM. Now change the method to use a pre-loaded
multiboot module instead. The module file will be generated by the acrn-config
tool and loaded to GPA 0x7ff00000; a hardcoded RSDP table at GPA 0x000f2400
will point to the XSDT table at GPA 0x7ff00080.
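For reference, a sketch of the hardcoded RSDP using the standard ACPI 2.0+
layout with the GPAs above; checksum computation is omitted:
    #include <stdint.h>
    #include <string.h>

    struct acpi_rsdp {
        char     signature[8];  /* "RSD PTR " */
        uint8_t  checksum;
        char     oem_id[6];
        uint8_t  revision;      /* 2 for ACPI 2.0+ */
        uint32_t rsdt_addr;
        uint32_t length;
        uint64_t xsdt_addr;
        uint8_t  ext_checksum;
        uint8_t  reserved[3];
    } __attribute__((packed));

    #define VACPI_RSDP_GPA 0x000f2400UL
    #define VACPI_XSDT_GPA 0x7ff00080UL

    static void build_vrsdp(void *rsdp_hva)
    {
        struct acpi_rsdp rsdp;

        memset(&rsdp, 0, sizeof(rsdp));
        memcpy(rsdp.signature, "RSD PTR ", 8);
        rsdp.revision = 2U;
        rsdp.length = sizeof(rsdp);
        rsdp.xsdt_addr = VACPI_XSDT_GPA; /* into the pre-loaded module */
        memcpy(rsdp_hva, &rsdp, sizeof(rsdp));
    }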
Tracked-On: #5266
Signed-off-by: Victor Sun <victor.sun@intel.com>
Signed-off-by: Shuang Zheng <shuang.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch moves the VM configuration check to the pre-build stage:
a test program checks the pre-defined VM configuration data before the
hypervisor binary is made. If the test fails, the make process is
aborted. So once the hypervisor binary is built successfully or starts
to run, the VM configuration is known to have been sanitized.
The patch does not add any new VM configuration check; it just ports
the original sanitize_vm_config() function from cpu.c to static_checks.c
with the following changes:
1. remove runtime RDT detection for the clos check;
2. replace pr_err() from logmsg.h with printf() from stdio.h;
3. replace the runtime call get_pcpu_nums() in the ALL_CPUS_MASK macro
with the statically defined MAX_PCPU_NUM;
4. remove the cpu_affinity check since a pre-launched VM might share
a pCPU with the SOS VM.
The BOARD/SCENARIO parameter check and the configuration folder check are
also moved to the prebuild Makefile.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the sanitize_vm_config() function since the sanitizing process
is moved to the pre-build stage.
Once the hypervisor has booted, we assume all VM configurations are sanitized.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Previously the CPU affinity of the SOS VM was initialized at runtime during
the sanitize_vm_config() stage, following the policy that all physical CPUs
except those occupied by pre-launched VMs belong to the SOS VM. Now change
the process so that the SOS CPU affinity is initialized at build time, with
the assumption that its validity is guaranteed before runtime.
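A sketch of what such a build-time definition could look like; every macro
name other than MAX_PCPU_NUM is hypothetical:
    /* In a scenario's vm_configurations.h (illustrative names). */
    #define ALL_CPUS_MASK   ((1UL << MAX_PCPU_NUM) - 1UL)
    #define PRE_VM_CPUS     (VM0_CONFIG_CPU_AFFINITY) /* pre-launched VM */
    /* SOS gets every pCPU not claimed by pre-launched VMs. */
    #define SOS_VM_CONFIG_CPU_AFFINITY  (ALL_CPUS_MASK & ~PRE_VM_CPUS)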
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The old layout configuration source located in
hypervisor/arch/x86/configs/ is abandoned; remove it.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
There are 3 kinds of configurations in the ACRN hypervisor source code:
hypervisor overall settings, per-board settings and scenario-specific per-VM
settings. Currently Kconfig acts as the hypervisor overall setting and its
source is located at "hypervisor/arch/x86/configs/$(BOARD).config"; per-board
configs are located in the "hypervisor/arch/x86/configs/$(BOARD)" folder;
scenario-specific per-VM configs are located in the
"hypervisor/scenarios/$(SCENARIO)" folder.
This layout couples board configs and VM configs tightly. The board-specific
Kconfig file and misc_cfg.h are shared by all scenarios, and the
scenario-specific pci_dev.c is shared by all boards, so users have no way to
build hypervisor binaries for different scenarios on different boards from one
source code repo.
The patch sets up a new VM configurations layout as below:
misc/vm_configs
├── boards --> folder of supported boards
│   ├── <board_1> --> scenario-irrelevant board configs
│   │   ├── board.c --> C file of board configs
│   │   ├── board_info.h --> H file of board info
│   │   ├── pci_devices.h --> pBDF of PCI devices
│   │   └── platform_acpi_info.h --> native ACPI info
│   ├── <board_2>
│   ├── <board_3>
│   └── <board...>
└── scenarios --> folder of supported scenarios
    ├── <scenario_1> --> scenario specific VM configs
    │   ├── <board_1> --> board specific VM configs for <scenario_1>
    │   │   ├── <board_1>.config --> Kconfig for specific scenario on specific board
    │   │   ├── misc_cfg.h --> H file of board specific VM configs
    │   │   ├── pci_dev.c --> board specific VM pci devices list
    │   │   └── vbar_base.h --> vBAR base info of VM PT pci devices
    │   ├── <board_2>
    │   ├── <board_3>
    │   ├── <board...>
    │   ├── vm_configurations.c --> C file of scenario specific VM configs
    │   └── vm_configurations.h --> H file of scenario specific VM configs
    ├── <scenario_2>
    ├── <scenario_3>
    └── <scenario...>
The new layout decouples board configs and VM configs completely:
The boards folder stores the info of all supported boards; each board folder
stores scenario-irrelevant board configs only, which can be obtained entirely
from a physical platform and work for all scenarios.
The scenarios folder stores the VM configs of each supported working scenario.
In each scenario folder, besides the generic scenario-specific VM configs, the
board-specific VM configs are put in an embedded board folder.
In the new layout, all config files are moved out of the hypervisor folder
into a separate folder. This makes the hypervisor LoC calculation more precise
with the formula below:
typical LoC = LoC(hypervisor) + LoC(one vm_configs)
where
LoC(one vm_configs) = LoC(misc/vm_configs/boards/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.c)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.h)
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
On the WHL platform, we need to pass through the TPM to the secure
pre-launched VM. To do this, we need to add a TPM2 ACPI table and a TPM DSDT
ACPI table that includes the _CRS.
For now we only support TPM 2.0 devices (TPM 1.2 devices are not supported).
Besides, the TPM must use Start Method 7 (the Command Response Buffer
interface) to notify the TPM 2.0 device that a command is available for
processing.
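A sketch of the TPM2 table body per the TCG ACPI specification; all values
other than the start method are placeholders:
    #include <stdint.h>

    struct acpi_table_header {
        char     signature[4];
        uint32_t length;
        uint8_t  revision;
        uint8_t  checksum;
        char     oem_id[6];
        char     oem_table_id[8];
        uint32_t oem_revision;
        char     creator_id[4];
        uint32_t creator_revision;
    } __attribute__((packed));

    struct acpi_table_tpm2 {
        struct acpi_table_header header; /* signature "TPM2" */
        uint16_t platform_class;
        uint16_t reserved;
        uint64_t control_address;        /* CRB control area GPA */
        uint32_t start_method;           /* 7 = Command Response Buffer */
    } __attribute__((packed));

    #define TPM2_START_METHOD_CRB 7U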
Tracked-On: #5053
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Use the ACPI_TABLE_HEADER macro to initialize the ACPI table header.
Tracked-On: #5053
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Add the information needed to enable MSI-X emulation.
Only enable MSI-X emulation for the devices in the msix_emul_devs array.
Currently, only EHL needs MSI-X emulation enabled, for its TSN devices.
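A hypothetical sketch of the lookup; the array contents and BDF encoding are
assumptions, not the actual EHL entries:
    #include <stdbool.h>
    #include <stdint.h>

    /* Encoded bus:dev.func values of devices needing MSI-X emulation. */
    static const uint16_t msix_emul_devs[] = {
        0x00F8U, /* 00:1f.0, placeholder value */
    };

    static bool need_msix_emulation(uint16_t bdf)
    {
        uint32_t i;

        for (i = 0U; i < (sizeof(msix_emul_devs) / sizeof(msix_emul_devs[0])); i++) {
            if (msix_emul_devs[i] == bdf) {
                return true;
            }
        }
        return false;
    }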
Tracked-On: #4831
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to the PCI Code and ID Assignment Specification Revision 1.11, a PCI
device whose Base Class is 06h and Sub-Class is 00h is a host bridge.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We should check whether a PCI device is a host bridge by its Base Class (06h)
and Sub-Class (00h).
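A minimal sketch of the check, assuming the usual 24-bit class code layout
(base class in bits 23:16, sub-class in bits 15:8):
    #include <stdbool.h>
    #include <stdint.h>

    static bool is_host_bridge(uint32_t class_code)
    {
        /* Base Class 06h, Sub-Class 00h per the PCI Code and ID
         * Assignment Specification. */
        return (((class_code >> 16) & 0xFFU) == 0x06U) &&
               (((class_code >> 8) & 0xFFU) == 0x00U);
    }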
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If HV relocation is enabled, either the ACRN efi-stub or GRUB relocates the
hypervisor image above HPA 256MB, so we put the hvlog and ramoops buffers
under 256MB to avoid conflicts with hypervisor-owned addresses.
This patch hardcodes these addresses:
0xa00000 - 0xdfffff: 4MiB for the ramoops buffer
0xe00000 - 0xffffff: 2MiB for the hvlog buffer
However, users can customize them to other addresses as long as they are under
256MB, available in the host e820, and the SOS bootarg "nokaslr" is not
specified.
If HV relocation is disabled, we need to make sure that these buffer
addresses are not between HV_RAM_START and HV_RAM_START + HV_RAM_SIZE.
Tracked-On: #4760
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
For post-launched VMs, the configured CPU affinity can differ from the actual
running CPU affinity. The new field acrn_vm->cpu_affinity captures this
difference so that the CREATE_VM hypercall won't overwrite the configured CPU
affinity.
Rename cpu_affinity_bitmap in acrn_vm_config to cpu_affinity.
This is read-only at runtime and never overwritten by acrn-dm.
Remove vm_config->vcpu_num, which was the number of vCPUs of the configured
CPU affinity. This is not to be confused with the actual running vCPU number,
vm->hw.created_vcpus.
Rename get_vm_bsp_pcpu_id() to get_configured_bsp_pcpu_id() for less
confusion.
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This commit does some RDT code cleanup, mainly including:
- remove the clos_mask and mba_delay validation check in setup_res_clos_msr(); the check will be done at pre-build;
- rename platform_clos_num to valid_clos_num, which is set to the minimal clos_max of all enabled RDT resources;
- initialize the platform_clos_array in the res_cap_info[] definition;
- remove the unnecessary return values and return value checks.
Tracked-On: #4604
Signed-off-by: Yan, Like <like.yan@intel.com>
For the return value of local_gpa2hpa(), either INVALID_HPA or NULL
means an EPT walking failure. The current code only takes care of the
NULL return and treats INVALID_HPA as a correct case.
In some cases (if the guest page table is filled with invalid memory
addresses), this could let a guest crash ACRN.
Add the INVALID_HPA return check as well.
Also add @pre assumptions for some gpa2hpa() usages.
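A sketch of the hardened caller pattern; struct acrn_vm and local_gpa2hpa()
come from the hypervisor source, and the exact signature and sentinel value
may differ:
    #include <stdint.h>

    #define INVALID_HPA (~0UL) /* illustrative; ACRN defines its own */

    static int32_t checked_gpa2hpa(struct acrn_vm *vm, uint64_t gpa,
                                   uint64_t *hpa_out)
    {
        uint64_t hpa = local_gpa2hpa(vm, gpa, NULL);

        /* Both NULL and INVALID_HPA mean the EPT walk failed; treating
         * INVALID_HPA as valid let a crafted guest page table crash
         * ACRN. */
        if ((hpa == 0UL) || (hpa == INVALID_HPA)) {
            return -1;
        }
        *hpa_out = hpa;
        return 0;
    }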
Tracked-On: #4730
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
When booting the ACRN hypervisor from GRUB multiboot, HV will be loaded at
CONFIG_HV_RAM_START since relocation is not supported in GRUB
multiboot1. CONFIG_HV_RAM_SIZE in the industry scenario takes
~330MB (0x14000000); unfortunately the EFI memmap on NUC7i7DNB is
truncated at 0x6dba2000, although memory is still usable from 0x6dba2000. So
from GRUB's point of view, it cannot find a contiguous memory region starting
at 0x6000000 to load the industry scenario. Per the EFI memmap, there is a big
memory area available from 0x40400000, so setting CONFIG_HV_RAM_START to
0x41000000 is much safer for NUC7i7DNB.
Tracked-On: #4641
Signed-off-by: Victor Sun <victor.sun@intel.com>
Currently the vcpu_affinity[] array fixes the vCPU-to-pCPU mapping.
The new cpu_affinity_bitmap doesn't explicitly specify this
mapping; instead, it implicitly assumes that vCPU0 maps to the pCPU
with the lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and
so on.
This makes it possible for a post-launched VM to run vCPUs on only a subset
of these pCPUs, not all of them.
acrn-dm may launch post-launched VMs with the current approach: indicate the
VM UUID, and the hypervisor launches all vCPUs on the pCPUs that are masked
in cpu_affinity_bitmap.
acrn-dm can also choose to launch the VM on a subset of the pCPUs defined in
cpu_affinity_bitmap. In that case, acrn-dm must specify the subset of pCPUs in
the CREATE_VM hypercall.
Additionally, with this change, a guest's vcpu_num can be easily calculated
from cpu_affinity_bitmap, so don't assign vcpu_num in vm_configuration.c.
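A minimal sketch of the derivation and the implicit mapping, assuming GCC
builtins are available:
    #include <stdint.h>

    /* vcpu_num is just the number of set bits in the affinity bitmap. */
    static uint16_t get_vcpu_num(uint64_t cpu_affinity_bitmap)
    {
        return (uint16_t)__builtin_popcountl(cpu_affinity_bitmap);
    }

    /* vCPU i runs on the i-th lowest set bit; assumes
     * vcpu_id < popcount(bitmap). */
    static uint16_t vcpuid_to_pcpuid(uint64_t bitmap, uint16_t vcpu_id)
    {
        while (vcpu_id-- > 0U) {
            bitmap &= bitmap - 1UL; /* clear the lowest set bit */
        }
        return (uint16_t)__builtin_ctzl(bitmap);
    }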
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add FADT table support to support the guest S5 setting.
According to the ACPI 6.3 spec, the OSPM must ignore the DSDT and FACS fields
if they are zero. However, the Linux kernel does not seem to abide by this: it
still checks the DSDT. So add an empty DSDT to satisfy it.
Tracked-On: #4623
Signed-off-by: Li Fei1 <fei1.li@intel.com>
On most boards the MCFG base is set to 0xe0000000, so set this value in
platform_acpi_info.h for generic boards.
The description of ACPI_PARSE_ENABLED is also modified to match its usage.
Tracked-On: #4157
Signed-off-by: Victor Sun <victor.sun@intel.com>
Currently the VM uuid and severity are initialized separately in the
vm_config struct; developers need to take care of both items carefully,
otherwise the hypervisor would have trouble with the configurations.
Given that the VM load_order/uuid and severity are bound tightly, this
patch merges these three settings into one macro so that developers have
a simple interface to configure in the vm_config struct.
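An illustrative shape of the merged setting; the real macro and field names
live in vm_config.h and may differ:
    /* One macro binds load order, uuid and severity together. */
    #define CONFIG_SOS_VM           \
        .load_order = SOS_VM,       \
        .uuid = SOS_VM_UUID,        \
        .severity = SEVERITY_SOS

    /* usage in vm_configurations.c:
     *   struct acrn_vm_config vm_configs[] = {
     *       { CONFIG_SOS_VM, .name = "ACRN SOS VM", ... },
     *   };
     */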
Tracked-On: #4616
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add 64-bit MMIO window related macros to the supported board files
in the hypervisor source code.
Tracked-On: #4586
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
This commit allows the hypervisor to allocate cache to vCPUs by assigning
different CLOS to the vCPUs of the same VM.
For example, we could allocate different cache to the housekeeping core and
the real-time core of an RTVM in order to isolate interference from the
housekeeping core via the cache hierarchy.
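An illustrative per-vCPU CLOS table; the macro and array names are
assumptions:
    #include <stdint.h>

    /* vCPU0 (housekeeping) -> CLOS0, vCPU1 (real-time) -> CLOS1. */
    #define VM1_VCPU_CLOS {0U, 1U}

    static const uint16_t vm1_clos[2U] = VM1_VCPU_CLOS;
    /* referenced from the VM's acrn_vm_config, one CLOS entry per vCPU */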
Tracked-On: #4566
Signed-off-by: Yan, Like <like.yan@intel.com>
Reviewed-by: Chen, Zide <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are some cases where the SOS (higher severity guest) needs to access a
post-launched VM's (lower severity guest's) PCI CFG space:
1. The SR-IOV PF needs to reset the VF.
2. Some passthrough devices still need the DM to handle quirks.
When a device is assigned to a UOS and is not in a zombie state, the SOS
is able to access it if and only if the SOS has higher severity than the UOS.
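The rule reduces to a simple predicate; this standalone sketch mirrors the
description above rather than the exact hypervisor types:
    #include <stdbool.h>
    #include <stdint.h>

    static bool cfg_access_allowed(uint32_t sos_severity,
                                   uint32_t uos_severity, bool uos_zombie)
    {
        /* SOS may touch the UOS PCI CFG space only while the UOS is
         * alive and strictly less privileged. */
        return (!uos_zombie) && (sos_severity > uos_severity);
    }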
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
As pci_devices.h is included by <page.h>, we need to prepare pci_devices.h
for the nuc6cayh and apl-up2 boards.
Also the #error directive in generic/pci_devices.h should be removed,
otherwise the build will fail for the sdc/sdc2/industry scenarios.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
For a pre-launched VM, a region starting from PTDEV_HI_MMIO_START is used to
store the 64-bit vBARs of PT devices whose addresses are above 4GB. The region
should be located after all user memory space and be covered by the guest EPT
address space.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the useless per-board ve820.c files, as arch/x86/guest/ve820.c is now
common for all boards.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Remove the limit on CONFIG_HV_RAM_SIZE which was for a 2-VM scenario only;
the default size from Kconfig can build scenarios with up to 5 VMs.
- Rename whl-ipc-i5_acpi_info.h to platform_acpi_info.h, since the former
should be generated by the acrn-config tool.
- Add SOS related macros in misc.h, otherwise building scenarios that have
an SOS VM would fail.
Tracked-On: #4463
Signed-off-by: Victor Sun <victor.sun@intel.com>
This patch updates the board.c files for RDT MBA on existing
platforms. It also fixes setting the RDT flag in the WHL config file.
Tracked-On: #3725
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
init_one_dev_config() is used to initialize an acrn_vm_pci_dev_config.
SR-IOV needs an explicit acrn_vm_pci_dev_config to create a VF vdev, so
refine it to return the acrn_vm_pci_dev_config.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Make an SR-IOV-capable device invisible to the SOS if there is
no room for all of its virtual functions.
v2: fix an issue where, if a PF had been dropped, subsequent PFs would be
dropped too, even if there was room for their VFs.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch does the following:
1. Removes RDT code if the CONFIG_RDT_ENABLED flag is not set.
2. Sets the CONFIG_RDT_ENABLED flag only on platforms that support RDT,
so that build scripts will automatically reflect the config.
Tracked-On: #3715
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There can be times when a user unknowingly enables the
CONFIG_CAT_ENABLED SW flag, but the hardware might not support L3 or L2
CAT. In such a case, software can end up writing to the CAT MSRs, which
can cause undefined results. The patch fixes the issue by enabling CAT
only when both the hardware and the software (via CONFIG_CAT_ENABLED)
support CAT.
The patch also fixes a typo in the "clos2prq_msr" function name: it
should be "clos2pqr_msr" instead. PQR stands for Platform QoS Register.
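A sketch of the combined gate; the hardware-detection helper names are
illustrative:
    #include <stdbool.h>

    static bool is_cat_enabled(void)
    {
    #ifdef CONFIG_CAT_ENABLED
        /* Only program the CAT MSRs when the hardware reports L2 or L3
         * CAT, so a stray Kconfig setting cannot cause undefined MSR
         * writes. */
        return hw_supports_l2_cat() || hw_supports_l3_cat();
    #else
        return false;
    #endif
    }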
Tracked-On: #3715
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Upcoming Intel platforms can support both L2 and L3 CAT,
but our current code only supports either L2 or L3 CAT.
So split the MSRs so that we can support allocation
for both L2 and L3.
This patch does the following:
1. Splits the programming of L2 and L3 cache resources
based on the resource ID.
2. Replaces the generic platform_clos_array struct with resource-specific
structs in all the existing board.c files.
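An illustrative shape of the resource-specific tables; the struct layout is an
assumption, and the MSR indices follow the Intel SDM numbering
(IA32_L2_QOS_MASK_0 = 0xD10, IA32_L3_QOS_MASK_0 = 0xC90):
    #include <stdint.h>

    struct platform_clos_info {
        uint32_t clos_mask; /* capacity bitmask written to the MSR */
        uint32_t msr_index; /* IA32_L2_QOS_MASK_n or IA32_L3_QOS_MASK_n */
    };

    struct platform_clos_info platform_l2_clos_array[] = {
        { 0xffU, 0xD10U },  /* CLOS0, L2 */
    };

    struct platform_clos_info platform_l3_clos_array[] = {
        { 0x7ffU, 0xC90U }, /* CLOS0, L3 */
    };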
Tracked-On: #3715
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
As part of the RDT CAT refactoring, the goal is to combine all
RDT-specific features such as CAT under one module. So rename the RDT
resource-specific files cat.c/.h to generic rdt.c/.h files.
Tracked-On: #3715
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
apl-mrb needs to access the P2SB device, so add the 00:0d.0 P2SB device to
the whitelist of hidden platform PCI devices.
Tracked-On: #3475
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Victor Sun <victor.sun@intel.com>
On platforms that support CAT, when it is enabled by ACRN, i.e. the
IA32_resourceType_MASK_n registers are programmed with customized values,
it impacts the whole system.
The per-guest flag GUEST_FLAG_CLOS_REQUIRED suggests that CAT may be
enabled in some guests but not in others that don't have this flag,
which is conceptually incorrect.
This patch removes GUEST_FLAG_CLOS_REQUIRED and adds a new Kconfig
entry CAT_ENABLED for CAT enabling. When it's enabled, platform_clos_array[]
defines a set of system-wide Classes of Service (COS, or CLOS), and the
per-guest vm_configs[].clos associates the guest with a particular CLOS.
Tracked-On: #2462
Signed-off-by: Zide Chen <zide.chen@intel.com>
- The target vm_id of a vuart can't be an undefined VM, nor the VM itself.
- Fix a potential NULL pointer dereference in find_active_target_vuart().
Tracked-On: #3854
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add severity definitions for different scenarios. The static
guest severity is defined according to the guest configuration.
Also add a sanity check to make sure the severities of all guests
are correct.
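A sketch of such static severity levels; the exact names and values are
assumptions:
    /* Higher value means higher severity. */
    enum acrn_vm_severity {
        SEVERITY_SAFETY_VM   = 0x5U, /* pre-launched safety VM */
        SEVERITY_RTVM        = 0x4U, /* post-launched real-time VM */
        SEVERITY_SOS         = 0x3U, /* service OS */
        SEVERITY_STANDARD_VM = 0x1U, /* regular post-launched VM */
    };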
Tracked-On: #4270
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Rename the macro since MAX_PCPU_NUM can be parsed from the board file and
is not a configurable item anymore.
Tracked-On: #4230
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The default PCI mmcfg base is stored in the ACPI MCFG table. When
CONFIG_ACPI_PARSE_ENABLED is set, the acpi_fixup() function will
parse and fix up the platform mmcfg base in the ACRN boot stage;
when it is not set, the platform mmcfg base will be initialized to
DEFAULT_PCI_MMCFG_BASE, which is generated by the acrn-config tool.
Please note that we do not support platforms with multiple PCI
segment groups.
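A sketch of the resulting selection logic; names other than
CONFIG_ACPI_PARSE_ENABLED and DEFAULT_PCI_MMCFG_BASE are illustrative:
    #include <stdint.h>

    #define DEFAULT_PCI_MMCFG_BASE 0xE0000000UL /* generated per board */

    static uint64_t pci_mmcfg_base = DEFAULT_PCI_MMCFG_BASE;

    #ifdef CONFIG_ACPI_PARSE_ENABLED
    /* Called from acpi_fixup() at boot with the base parsed from MCFG. */
    static void set_mmcfg_base(uint64_t base)
    {
        pci_mmcfg_base = base;
    }
    #endif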
Tracked-On: #4157
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>