Invalidating the cache by scanning and flushing the whole guest memory is
inefficient and might cause a long execution time for WBINVD emulation.
A long execution in the hypervisor might cause a vCPU to appear stuck,
which impacts Windows guest booting.
This patch introduces a workaround that pauses all other vCPUs in the
same VM while doing WBINVD emulation (see the sketch below).
Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
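A minimal sketch of the workaround, assuming hypothetical pause_other_vcpus()/resume_other_vcpus() helpers on top of the existing ACRN vCPU types:

  /* Sketch only: park the sibling vCPUs, flush the physical caches once,
   * then let the siblings run again. Helper names are hypothetical. */
  static void emulate_wbinvd(struct acrn_vcpu *cur)
  {
      pause_other_vcpus(cur->vm, cur);        /* stop every other vCPU of this VM */
      asm volatile ("wbinvd" : : : "memory"); /* one flush instead of a guest-memory scan */
      resume_other_vcpus(cur->vm, cur);
  }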
Refine find_ptirq_entry() to use hashing instead of walking the PTIRQ entries one by one (see the sketch below).
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong<eddie.dong@Intel.com>
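A minimal sketch of the idea; the bucket array, field names and hash helper below are hypothetical, the real entry type lives in the ACRN ptdev code:

  #define PTIRQ_HASH_BITS  9U
  #define PTIRQ_BUCKETS    (1U << PTIRQ_HASH_BITS)

  /* one singly-linked bucket per hash value */
  static struct ptirq_remapping_info *ptirq_buckets[PTIRQ_BUCKETS];

  static inline uint32_t sid_hash(uint64_t sid_value)
  {
      /* Fibonacci hashing folded down to PTIRQ_HASH_BITS bits */
      return (uint32_t)((sid_value * 11400714819323198485UL) >> (64U - PTIRQ_HASH_BITS));
  }

  static struct ptirq_remapping_info *find_ptirq_entry(uint32_t intr_type, uint64_t sid_value)
  {
      struct ptirq_remapping_info *e = ptirq_buckets[sid_hash(sid_value)];

      /* walk one bucket instead of every allocated entry */
      while ((e != NULL) && ((e->intr_type != intr_type) || (e->sid_value != sid_value))) {
          e = e->hash_next;
      }
      return e;
  }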
RTVM (with LAPIC PT) boot hangs when maxcpus is assigned a value less
than the CPU number configured in the hypervisor.
In this case, vlapic_state (per VM) is left in the TRANSITION state
after BSP boot, which blocks interrupts from being injected into this
UOS.
Tracked-On: #4803
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Li, Fei <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There's no need to look up the MSI ptirq entry by virtual SID any more since
the MSI ptirq entry is removed before the device is assigned to a VM.
Now the logic of MSI interrupt remapping can be simplified as:
1. Add the MSI interrupt remapping first;
2. If step 1 is already done, just do the remapping part.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong<eddie.dong@Intel.com>
Reviewed-by: Grandhi, Sainath <sainath.grandhi@intel.com>
For post-launched VMs, the configured CPU affinity could be different
from the actual running CPU affinity. The new field acrn_vm->cpu_affinity
captures this difference so that the CREATE_VM hypercall won't overwrite
the configured CPU affinity.
Rename cpu_affinity_bitmap in acrn_vm_config to cpu_affinity.
It is read-only at run time and is never overwritten by acrn-dm.
Remove vm_config->vcpu_num, which represented the number of vCPUs in the
configured CPU affinity. This is not to be confused with the actual
running vCPU number: vm->hw.created_vcpus.
Changed get_vm_bsp_pcpu_id() to get_configured_bsp_pcpu_id() for less
confusion.
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN syncs PIR to vIRR in software in cases where the Posted Interrupt
notification happens while the pCPU is in root mode.
The sync can also be achieved by the processor hardware by sending the
posted interrupt notification vector.
This patch sends a self-IPI, if there are interrupts pending in the PIR,
which is serviced by the logical processor at the next VM entry (see the
sketch below).
Tracked-On: #4777
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
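A minimal sketch of that path; pir_has_pending() is a hypothetical helper, while send_single_ipi(), pcpuid_from_vcpu() and the pid.control.bits.nv field follow names used elsewhere in this series:

  static void sync_pir_by_self_ipi(struct acrn_vcpu *vcpu)
  {
      if (pir_has_pending(&vcpu->arch.pid)) {
          /* self-IPI with the notification vector: the CPU services it at
           * the next VM entry and moves PIR into vIRR in hardware */
          send_single_ipi(pcpuid_from_vcpu(vcpu),
                          (uint32_t)vcpu->arch.pid.control.bits.nv);
      }
  }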
The virtual MSI information can be included in the ptirq_remapping_info
structure, so there's no need to pass another input parameter for this
purpose. Remove the ptirq_msi_info input.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
We look up a PTIRQ entry only by SID, so the _by_sid suffix can be removed.
Also refine the function names to verb-object style.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
For the return value of local_gpa2hpa, either INVALID_HPA or NULL means
an EPT walking failure. The current code only takes care of the NULL
return and treats INVALID_HPA as a correct case.
In some cases (if the guest page table is filled with invalid memory
addresses), a guest could crash ACRN.
Add the INVALID_HPA return check as well (see the sketch below).
Also add @pre assumptions for some gpa2hpa usages.
Tracked-On: #4730
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
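A minimal sketch of the stricter caller-side check; gpa2hpa(), hpa2hva() and INVALID_HPA are existing ACRN names, the surrounding caller is illustrative:

  uint64_t hpa = gpa2hpa(vm, gpa);

  if (hpa == INVALID_HPA) {
      /* EPT walk failed: bail out instead of using a bogus address */
      pr_err("%s: no EPT mapping for gpa 0x%lx", __func__, gpa);
      ret = -EINVAL;
  } else {
      void *hva = hpa2hva(hpa);
      /* ... safe to use hva ... */
  }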
Currently the vcpu_affinity[] array fixes the vCPU-to-pCPU mapping.
The new cpu_affinity_bitmap doesn't explicitly specify this mapping;
instead, it implicitly assumes that vCPU0 maps to the pCPU with the
lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and so on.
This makes it possible for post-launched VM to run vCPUs on a subset of
these pCPUs only, and not all of them.
acrn-dm may launch post-launched VMs with the current approach: indicate
VM UUID and hypervisor launches all VCPUs from the PCPUs that are masked
in cpu_affinity_bitmap.
Also acrn-dm can choose to launch the VM on a subset of PCPUs that is
defined in cpu_affinity_bitmap. In this way, acrn-dm must specify the
subset of PCPUs in the CREATE_VM hypercall.
Additionally, with this change, a guest's vcpu_num can be easily calculated
from cpu_affinity_bitmap (see the sketch below), so don't assign vcpu_num in
vm_configuration.c.
Tracked-On: #4616
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
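A minimal sketch of deriving vcpu_num from the bitmap; ffs64() is the existing bit-scan utility, the function itself is illustrative:

  /* vCPU i maps to the i-th lowest set bit of cpu_affinity_bitmap,
   * so vcpu_num is simply the number of set bits. */
  static uint16_t vcpu_num_from_affinity(uint64_t cpu_affinity_bitmap)
  {
      uint64_t bitmap = cpu_affinity_bitmap;
      uint16_t num = 0U;

      while (bitmap != 0UL) {
          uint16_t pcpu_id = ffs64(bitmap);  /* lowest remaining pCPU */
          bitmap &= ~(1UL << pcpu_id);
          num++;
      }
      return num;
  }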
The current logic always puts hpa2 above GPA 4G, which is incorrect. The
GPA start of hpa2 needs to be set right after hpa1 when the hpa1 size is
less than 2G.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
If the CPU has MSR_TEST_CTL, show an emulated one to the vCPU.
Tracked-On: #4496
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove unnecessary state check and
add pre-condition for vcpu APIs.
Tracked-On: #4320
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
check the vm state in hypercall api,
add pre-condition for vm api.
Tracked-On: #4320
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently pause_vm is called inside shutdown_vm; move it out of
shutdown_vm to reduce coupling.
Tracked-On: #4320
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN disables Snoop Control in the VT-d DMAR engines to simplify the
implementation. Also, since the snoop behavior of PCIe transactions
can be controlled by guest drivers, some devices may take advantage
of the NO_SNOOP_ATTRIBUTE of PCIe transactions for better performance
when snooping is not needed. Whether ACRN enables or disables Snoop
Control, the DMA operations of passthrough devices behave correctly
from the guests' point of view.
This patch cleans up all the snoop-related code.
Tracked-On: #4509
Signed-off-by: Xiaoguang Wu <xiaoguang.wu@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
The i915 IOMMU doesn't support snooping, so it can't access memory when
the SNOOP bit of a Secondary Level page PTE (SL-PTE) is set; this causes
many undefined issues such as an invisible cursor in WaaG.
The current HV design uses the EPT as the Secondary Level page table for
the IOMMU, and this patch removes the code that sets the SNOOP bit in
both the EPT-PTE and the SL-PTE to avoid such errors.
According to SDM 28.2.2, the SNOOP bit (bit 11) is ignored by EPT, so
this does not affect CPU address translation.
Tracked-On: #4509
Signed-off-by: Xiaoguang Wu <xiaoguang.wu@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
WaaG sends NMIs to all its cores during reboot. But currently, an NMI
cannot be injected into a vCPU which is in HLT state.
To fix the problem, we need to wake up the target vCPU and inject the
NMI through the interrupt window (see the sketch below).
Tracked-On: #4620
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
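A minimal sketch of the fix, assuming the request/event helpers named elsewhere in this series (vcpu_make_request(), signal_event()); the queued NMI is then delivered via the interrupt window on the next VM entry:

  /* Sketch only: queue the NMI, then wake the vCPU if it is parked in HLT
   * so the pending request is noticed. */
  void vcpu_inject_nmi(struct acrn_vcpu *vcpu)
  {
      vcpu_make_request(vcpu, ACRN_REQUEST_NMI);
      signal_event(&vcpu->events[VCPU_EVENT_VIRTUAL_INTERRUPT]);
  }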
Align the implementation to SDM Vol.3 4.1.1.
This patch also fixes a bug where the paging status was not checked
first in some CPU modes.
Tracked-On: #4628
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently vlapic_build_id() uses vcpu_id to retrieve the lapic_id
per_cpu variable:
vlapic_id = per_cpu(lapic_id, vcpu->vcpu_id);
The SOS vcpu_id may not be equal to the pcpu_id, and in that case it runs
into problems, for example, if any pre-launched VMs are launched on pCPUs
whose IDs are smaller than any pCPU IDs used by the SOS.
This patch fixes the issue and simplifies the code that creates or gets
the vapic_id by:
- assign vapic_id in create_vlapic(), which now takes pcpu_id as input
argument, and save it in the new field: vlapic->vapic_id, which will
never be changed.
- simplify vlapic_get_apicid() by returning the saved vapic_id directly.
- remove vlapic_build_id().
- vlapic_init() is only called once, merge it into vlapic_create().
Tracked-On: #4268
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Maintain a per-pCPU array of vCPUs (struct acrn_vcpu *vcpu_array[CONFIG_MAX_VM_NUM]).
One VM cannot have multiple vCPUs sharing one pCPU, so we can utilize this
property and use the containing VM's vm_id as the index into the vCPU array:
In create_vcpu(), we simply do:
per_cpu(vcpu_array, pcpu_id)[vm->vm_id] = vcpu;
In offline_vcpu():
per_cpu(vcpu_array, pcpuid_from_vcpu(vcpu))[vcpu->vm->vm_id] = NULL;
so basically we use the containing VM's vm_id as the index to the vCPU array,
as well as the index of posted interrupt IRQ/vector pair that are assigned
to this vCPU:
0: first vCPU and first posted interrupt IRQs/vector pair
(POSTED_INTR_IRQ/POSTED_INTR_VECTOR)
...
CONFIG_MAX_VM_NUM-1: last vCPU and last posted interrupt IRQs/vector pair
((POSTED_INTR_IRQ + CONFIG_MAX_VM_NUM - 1U)/(POSTED_INTR_VECTOR + CONFIG_MAX_VM_NUM - 1U)
In the posted interrupt handler, we translate the IRQ into a zero-based
index of where the vCPU is located in the vCPU list for the current pCPU.
Once the vCPU is found, we wake up the waiting thread and record this
request as ACRN_REQUEST_EVENT (see the sketch below).
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
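A minimal sketch of that handler, reusing the per-CPU accessor shown above; the event/request helper names are the ones used elsewhere in this series:

  /* The zero-based index of the fired posted-interrupt IRQ equals the
   * owning VM's vm_id, which indexes the per-pCPU vCPU array. */
  static void handle_pi_notification(uint32_t irq)
  {
      uint16_t idx = (uint16_t)(irq - POSTED_INTR_IRQ);
      struct acrn_vcpu *vcpu = per_cpu(vcpu_array, get_pcpu_id())[idx];

      if (vcpu != NULL) {
          vcpu_make_request(vcpu, ACRN_REQUEST_EVENT);
          signal_event(&vcpu->events[VCPU_EVENT_VIRTUAL_INTERRUPT]);
      }
  }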
This is a preparation patch for adding support for VT-d PI
related vCPU scheduling.
ACRN does not support vCPU migration, one vCPU always runs on
the same pCPU, so PI's ndst is never changed after startup.
vCPUs of a VM won't share the same pCPU, so the maximum possible number
of vCPUs that can run on a pCPU is CONFIG_MAX_VM_NUM.
Allocate unique Activation Notification Vectors (ANV) for each vCPU
that belongs to the same pCPU, the ANVs need only be unique within each
pCPU, not across all vCPUs. This reduces # of pre-allocated ANVs for
posted interrupts to CONFIG_MAX_VM_NUM, and enables ACRN to avoid
switching between active and wake-up vector values in the posted
interrupt descriptor on vCPU scheduling state changes.
A total of CONFIG_MAX_VM_NUM consecutive IRQs/vectors are reserved
for posted interrupts use.
The code first initializes vcpu->arch.pid.control.bits.nv dynamically
(to be added in a subsequent patch); other code shall use
vcpu->arch.pid.control.bits.nv instead of the hard-coded notification vectors.
Rename some functions:
apicv_post_intr --> apicv_trigger_pi_anv
posted_intr_notification --> handle_pi_notification
setup_posted_intr_notification --> setup_pi_notification
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
Given the vcpumask, check if the IRQ is single-destination and return the
destination vCPU if so (see the sketch below); the address of the associated
PI descriptor for this vCPU can then be passed to dmar_assign_irte() to
set up the posted interrupt IRTE for this device.
For fixed-mode interrupt delivery, all vCPUs listed in vcpumask should
service the requested interrupt. But VT-d PI cannot support multicast or
broadcast IRQs; it only supports a single CPU destination. So the number
of vCPUs shall be 1 in order to handle the IRQ in posted mode for this device.
Add pid_paddr to struct intr_source. If platform_caps.pi is true and
the IRQ is single-destination, pass the physical address of the destination
vCPU's PID to ptirq_build_physical_msi and dmar_assign_irte
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
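A minimal sketch of the single-destination test; vcpu_from_vid() and ffs64() are assumed existing helpers, the wrapper itself is illustrative:

  /* VT-d PI needs exactly one destination vCPU; a mask with a single bit
   * set identifies it, anything else falls back to remapped mode. */
  static struct acrn_vcpu *get_single_dest_vcpu(struct acrn_vm *vm, uint64_t vcpumask)
  {
      struct acrn_vcpu *dest = NULL;

      if ((vcpumask != 0UL) && ((vcpumask & (vcpumask - 1UL)) == 0UL)) {
          dest = vcpu_from_vid(vm, ffs64(vcpumask));
      }
      return dest;
  }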
Add platform_caps.c to maintain platform-related information.
Set platform_caps.pi to true if all IOMMUs are posted-interrupt capable,
false otherwise.
If LAPIC passthrough is not configured and platform_caps.pi is true, the
VM may be able to use posted interrupts for a ptdev, if the ptdev's IRQ
is single-destination.
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
The EPT tables can be changed concurrently by more than one vCPU.
This patch adds a lock to protect the add/modify/delete operations
performed concurrently by different vCPUs (see the sketch below).
Tracked-On: #4253
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
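A minimal sketch of the serialization, assuming a per-VM spinlock field (the ept_lock name is illustrative) around the existing page-table update helpers:

  /* Every EPT add/modify/delete takes the same per-VM lock so concurrent
   * vCPUs cannot interleave their page-table updates. */
  void ept_modify_mr(struct acrn_vm *vm, uint64_t *pml4_page,
                     uint64_t gpa, uint64_t size,
                     uint64_t prot_set, uint64_t prot_clr)
  {
      spinlock_obtain(&vm->ept_lock);
      mmu_modify_or_del(pml4_page, gpa, size, prot_set, prot_clr,
                        &vm->arch_vm.ept_mem_ops, MR_MODIFY);
      spinlock_release(&vm->ept_lock);
  }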
The VMCS field is an embedded array for a vCPU. So there's no need to check for
NULL before use.
Tracked-On: #3813
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Conceptually, the device unregistration sequence of the shutdown process
should be the opposite of the creation sequence.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
This commit allows the hypervisor to allocate cache to vCPUs by assigning
different CLOS values to the vCPUs of the same VM.
For example, we could allocate different cache to the housekeeping core and
the real-time core of an RTVM in order to isolate interference from the
housekeeping core via the cache hierarchy.
Tracked-On: #4566
Signed-off-by: Yan, Like <like.yan@intel.com>
Reviewed-by: Chen, Zide <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
As ACRN prepares to support servers with large amounts of memory, the
current logic of allocating space for 4K EPT pages at compile time
will increase the size of the .bss section of the ACRN binary.
Bootloaders could run into a situation where they cannot
find enough contiguous space to load the ACRN binary under 4GB,
a region which is typically heavily fragmented with E820 types Reserved,
ACPI data, the 32-bit PCI hole, etc.
This patch does the following:
1) works only for "direct" mode of vboot;
2) reserves space for 4K EPT pages after boot, by parsing the
platform E820 table, for all types of VMs.
Size comparison:
Size of DRAM    .bss w/o patch            .bss w/ patch
48 GB           0xe1bbc98  (~226 MB)      0x1991c98 (~26 MB)
128 GB          0x222abc98 (~548 MB)      0x1a81c98 (~28 MB)
Tracked-On: #4563
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We can use container_of to get the vcpu structure pointer from a vmtrr
pointer (see the sketch below), so the vcpu structure pointer is not needed
in the vmtrr structure.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
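A minimal sketch of the pattern; container_of is the standard offsetof-based macro, and the nesting below assumes the vmtrr instance is embedded as arch.vmtrr inside struct acrn_vcpu:

  #include <stddef.h>   /* offsetof */

  /* recover the containing structure from a pointer to one of its members */
  #define container_of(ptr, type, member) \
          ((type *)((char *)(ptr) - offsetof(type, member)))

  static inline struct acrn_vcpu *vmtrr2vcpu(struct acrn_vmtrr *vmtrr)
  {
      /* assumed layout: acrn_vcpu { ... struct acrn_vcpu_arch arch; }
       *                 acrn_vcpu_arch { ... struct acrn_vmtrr vmtrr; } */
      return container_of(container_of(vmtrr, struct acrn_vcpu_arch, vmtrr),
                          struct acrn_vcpu, arch);
  }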
We can use container_of to get the vcpu/vm structure pointers from a vlapic
pointer, so the vcpu/vm structure pointers are not needed in the vlapic
structure.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
This function converts a pointer to a structure member into a pointer to
the containing structure, so renaming it to container_of is more readable.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Extend union dmar_ir_entry to support VT-d posted interrupts.
Rename some fields of union dmar_ir_entry:
entry --> value
sw_bits --> avail
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
Pass intr_src and dmar_ir_entry irte as pointers to dmar_assign_irte(),
which fixes the "Attempt to change parameter passed by value" MISRA C violation.
A few coding style fixes
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
For CPU-side posted interrupts, only bit 0 (ON) of the PI's 64-bit control
is used; the other bits are don't-cares. This is not the case for VT-d
posted interrupts, so define more bit fields for the PI's 64-bit control.
Use bitmap functions to manipulate the bit fields atomically.
Also some MISRA-C violation and coding style fixes.
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
The posted interrupt descriptor is more of a vmx/vmcs concept than a vlapic
concept. struct acrn_vcpu_arch stores the vmx/vmcs info, so put struct pi_desc
in struct acrn_vcpu_arch.
Remove the function apicv_get_pir_desc_paddr()
A few coding style/typo fixes
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
Rename struct vlapic_pir_desc to pi_desc
Rename struct member and local variable pir_desc to pid
pir=posted interrupt request, pi=posted interrupt
pid=posted interrupt descriptor
pir is part of pi descriptor, so it is better to use pi instead of pir
struct pi_desc will be moved to vmx.h in subsequent commit.
Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
cpuid() can be replaced with cpuid_subleaf(), which is clearer.
Having both APIs makes the code harder to read.
Tracked-On: #4526
Signed-off-by: Li Fei1 <fei1.li@intel.com>
For the SOS VM, when the target platform has multiple IO-APICs, there
should be an equal number of virtual IO-APICs.
This patch adds support for emulating multiple vIOAPICs per VM.
Tracked-On: #4151
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
As ACRN prepares to support platforms with multiple IO-APICs,
GSI is a better way to represent physical and virtual INTx interrupt
source.
1) This patch replaces the usage of "pin" with "gsi" wherever applicable
across the modules.
2) PIC pin to gsi is trickier and needs to consider the usage of
"Interrupt Source Override" structure in ACPI for the corresponding VM.
Tracked-On: #4151
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Reverts 538ba08c: hv:Add vpin to ptdev entry mapping for vpic/vioapic
ACRN uses a per-VM array to store ptirq entries against the vIOAPIC pins
and a per-VM array to store ptirq entries against the vPIC pins.
This is done to speed up "ptirq entry" lookup at runtime for level-triggered
interrupts in the API ptirq_intx_ack used on EOI.
This patch switches the lookup API for INTx interrupts to the API,
ptirq_lookup_entry_by_sid
This could add delay to processing EOI for level-triggered interrupts.
The trade-off here is the space saved for arrays of size
CONFIG_MAX_IOAPIC_LINES with 8 bytes per entry. On a server platform, ACRN
needs to emulate multiple vIOAPICs for the SOS VM, the same as the number
of physical IO-APICs; thereby ACRN would need around 10 such arrays per VM.
This also removes the need for "pic_pin" except for the APIs facing the
hypercalls hcall_set_ptdev_intr_info and hcall_reset_ptdev_intr_info.
Tracked-On: #4151
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
There are some cases where the SOS (higher severity guest) needs to access
a post-launched VM's (lower severity guest) PCI CFG space:
1. The SR-IOV PF needs to reset the VF.
2. Some passthrough devices still need the DM to handle quirks.
In the case where a device is assigned to a UOS and is not in a zombie
state, the SOS is able to access it if and only if the SOS has higher
severity than the UOS.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
ve820.c is a common file in arch/x86/guest/ now, so move the
create_sos_vm_e820() function to this file to make the code structure
clearer.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
hypervisor/arch/x86/configs/$(BOARD)/ve820.c is used to store pre-launched
VM specific e820 entries according to the customer's memory configuration.
It should be a scenario-based configuration, but we had to put it in a
per-board folder because of different board memory settings. This brings
concerns to customers about configuration organization.
Currently the file provides the same e820 layout for all pre-launched VMs,
but they should have different e820 maps when their memory is configured
differently. Although we have the acrn-config tool to generate ve820.c
automatically, it is not friendly to modify a hardcoded ve820 layout
manually, so the patch changes the entry initialization method by
calculating each entry item in C code.
Tracked-On: #4458
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Renames DEFINE_IOAPIC_SID to DEFINE_INTX_SID as the virtual source can
be IOAPIC or PIC
2. Renames the src member of source_id.intx_id to ctlr to indicate the
interrupt controller
3. Changes the type of the src member of source_id.intx_id from uint32_t to
an enum with INTX_CTLR_IOAPIC and INTX_CTLR_PIC
Tracked-On: #4447
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
To avoid violating rule 88 S in MISRA-C, move the xsaves and xrstors
assembly into individual functions (see the sketch below).
Tracked-On: #4436
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
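A minimal sketch of such a wrapper (xsaves shown; xrstors is symmetric), assuming a 64-byte-aligned 4K save area; the function name is illustrative and the operand constraints reflect the instruction's implicit EDX:EAX mask:

  /* Keep the inline assembly inside one small function so the surrounding
   * C code stays MISRA-friendly. */
  static inline void asm_xsaves(void *region, uint64_t mask)
  {
      asm volatile("xsaves %0"
                   : "+m" (*(char (*)[4096])region)   /* whole area is in/out */
                   : "a" ((uint32_t)mask), "d" ((uint32_t)(mask >> 32U))
                   : "memory");
  }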
The init values for XCR0 and XSS should match the spec:
In SDM Vol1 13.3:
XCR0[0] is associated with x87 state (see Section 13.5.1). XCR0[0] is
always 1. The other bits in XCR0 are all 0 coming out of RESET.
The IA32_XSS MSR (with MSR index DA0H) is zero coming out of RESET.
The previous code tried to fix the xsave area leak to other VMs during the
init phase, but introduced an error in Linux. Besides, it cannot avoid a
possible leak in the running phase. A better solution is needed.
Tracked-On: #4430
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There can be times when a user unknowingly enables the
CONFIG_CAT_ENABLED SW flag, but the hardware might
not support L3 or L2 CAT. In such a case software can
end up writing to the CAT MSRs, which can cause
undefined results. The patch fixes the issue by
enabling CAT only when both the HW and the software
(via CONFIG_CAT_ENABLED) support CAT.
The patch also addresses a typo in the "clos2prq_msr"
function name. It should be "clos2pqr_msr" instead.
PQR stands for platform QoS register.
Tracked-On: #3715
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
As part of the RDT CAT refactoring, the goal is to combine all RDT-specific
features such as CAT under one module. So rename the RDT resource-specific
files such as cat.c/.h to generic rdt.c/.h files.
Tracked-On: #3715
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Rename BOOT_CPU_ID to BSP_CPU_ID
2. Replace hardcoded values with BSP_CPU_ID where the ID of the BSP is
referenced.
Tracked-On: #4420
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Now that we have split passthrough PCI device handling from the DM to the
HV, we can remove all the unused passthrough PCI device code.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In this case, we can handle all the passthrough PCI devices in the ACRN
hypervisor. But we still need the DM to initialize BAR resources and INTx
for passthrough PCI devices of post-launched VMs, since this information
must be filled into ACPI tables. So:
1. Add a HC vm_assign_pcidev to pass the extra information, replacing the
old vm_assign_ptdev.
2. Remove HC vm_set_ptdev_msix_info since it can now be set by the
post-launched VM itself, the same as for the SOS.
3. Remove the vm_map_ptdev_mmio call for PTDev in the DM since the ACRN
hypervisor will handle these BAR accesses.
4. Most importantly, trap PCI configuration space access for PTDev in the
HV for post-launched VMs and pass virtual PCI device configuration space
access through to the DM.
This patch doesn't do the cleanup work; that will be done in the next patch.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Add assign/deassign PCI device hypercall APIs to assign a PCI device from
the SOS to a post-launched VM or deassign a PCI device from a post-launched
VM back to the SOS. This patch prepares for splitting passthrough PCI
device handling from the DM to the HV.
The old assign/deassign ptdev APIs will be discarded.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
On the UEFI UP2 board, APs might execute HLT before the SOS kernel INITs
them. After the SOS kernel takes over, it re-inits the APs directly. The
flow from the HV perspective is:
HLT trap:
wait_event(VCPU_EVENT_VIRTUAL_INTERRUPT) -> sleep_thread
SOS kernel INIT, SIPI APs:
pause_vcpu(ZOMBIE) -> sleep_thread
-> reset_vcpu
-> launch_vcpu -> wake_vcpu
However, the last wake_vcpu will fail because the cpu event
VCPU_EVENT_VIRTUAL_INTERRUPT has not been signaled.
This patch resets all vcpu events in reset_vcpu. If the thread was
previously waiting for an event, its waiting status will be cleared and
launch_vcpu will wake it to running.
Tracked-On: #4402
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Align the coding style for these MACROs
2. Align the values of fixed VECTORs
Tracked-On: #4348
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
In LAPIC passthrough mode, HLT/PAUSE execution should be passed through
too. This patch disables their emulation when switching to LAPIC
passthrough mode.
Tracked-On: #4329
Tested-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
is_polling_ioreq is more straightforward. Rename it.
Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The SOS will use PCIe ECAM to access the PCIe extended configuration
space. The HV should trap this access for security (pre-launched VMs don't
support PCIe ECAM for now; post-launched VMs trap PCIe ECAM access in the
DM).
Besides, update the PCIe MMCONFIG region to be owned by the hypervisor,
and expose and pass through to the SOS the PCI devices hidden by the BIOS.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
HLT emulation is important for maximizing CPU resource utilization. A vCPU
executing HLT means it is idle and can give up the pCPU proactively. Thus,
we pause the vCPU thread during HLT emulation and resume it when an event
happens.
When a vCPU enters HLT, its vCPU thread will sleep, but the vCPU state is
still 'Running':
VM ID PCPU ID VCPU ID VCPU ROLE VCPU STATE
===== ======= ======= ========= ==========
0 0 0 PRIMARY Running
0 1 1 SECONDARY Running
Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Sometimes the HV wants to know whether a vCPU has pending interrupts.
Add a .has_pending_intr interface in acrn_apicv_ops and return the pending
interrupt status by checking the IRRs of apicv.
Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Introduce two kinds of events for each vcpu:
VCPU_EVENT_IOREQ: for a vcpu waiting for I/O request completion
VCPU_EVENT_VIRTUAL_INTERRUPT: for a vcpu waiting for virtual interrupt events
A vcpu can wait for such events and resume running when the event gets
signaled (see the sketch below).
This patch also changes I/O request waiting/notifying to this mechanism.
Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
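A minimal sketch of how the I/O-request event is used, with wait_event()/signal_event() as the event primitives this series introduces (exact signatures assumed):

  /* vCPU side: block until the I/O request has been completed by the DM */
  void acrn_wait_ioreq_done(struct acrn_vcpu *vcpu)
  {
      wait_event(&vcpu->events[VCPU_EVENT_IOREQ]);
  }

  /* completion side (hypercall path): wake the waiting vCPU */
  void acrn_notify_ioreq_done(struct acrn_vcpu *vcpu)
  {
      signal_event(&vcpu->events[VCPU_EVENT_IOREQ]);
  }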
As we enabled cpu sharing, PAUSE-loop exiting can help vcpu
to release its pcpu proactively. It's good for performance.
VMX_PLE_GAP: upper bound on the amount of time between two successive
executions of PAUSE in a loop.
VMX_PLE_WINDOW: upper bound on the amount of time a guest is allowed to
execute in a PAUSE loop
Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In current code, wait_pcpus_offline() and make_pcpu_offline() are called by
both shutdown_vm() and reset_vm(), but this is not needed when lapic_pt is
not enabled for the vcpus of the VM.
The patch merges the pCPU offlining code into a common
offline_lapic_pt_enabled_pcpus() API used by both shutdown_vm() and
reset_vm(), and calls it only when lapic_pt is enabled.
Tracked-On: #4325
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. This patch passes through CR4.PCIDE to the guest VM.
2. This patch handles the invalidation of the TLB and the paging-structure caches.
According to SDM Vol.3 4.10.4.1, the following instructions invalidate
entries in the TLBs and the paging-structure caches:
- INVLPG: this instruction is passed-through to guest, no extra handling needed.
- INVPCID: this instruction is passed-through to guest, no extra handling needed.
- CR0.PG from 1 to 0: already handled by current code, change of CR0.PG will do
EPT flush.
- MOV to CR3: hypervisor doesn't trap this instruction, no extra handling needed.
- CR4.PGE changed: already handled by current code, change of CR4.PGE will do EPT
flush.
- CR4.PCIDE from 1 to 0: this patch handles this case, will do EPT flush.
- CR4.PAE changed: already handled by current code, change of CR4.PAE will do EPT
flush.
- CR4.SMEP from 1 to 0: already handled by current code, change of CR4.SMEP will
do EPT flush.
- Task switch: Task switch is not supported in VMX non-root mode.
- VMX transitions: already handled by current code with the support of VPID.
3. This patch checks the validity of CR0 and CR4 settings related to the PCID feature.
According to SDM Vol.3 4.10.1, CR4.PCIDE can be 1 only in IA-32e mode.
- MOV to CR4 causes a general-protection exception (#GP) if it would change CR4.PCIDE
from 0 to 1 and either IA32_EFER.LMA = 0 or CR3[11:0] ≠ 000H
- MOV to CR0 causes a general-protection exception if it would clear CR0.PG to 0
while CR4.PCIDE = 1
Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to SDM Vol.3 Section 25.3, behavior of the INVPCID
instruction is determined first by the setting of the “enable
INVPCID” VM-execution control:
- If the “enable INVPCID” VM-execution control is 0, INVPCID
causes an invalid-opcode exception (#UD).
- If the “enable INVPCID” VM-execution control is 1, treatment
is based on the setting of the “INVLPG exiting” VM-execution
control:
* If the “INVLPG exiting” VM-execution control is 0, INVPCID
operates normally.
* If the “INVLPG exiting” VM-execution control is 1, INVPCID
causes a VM exit.
In the current implementation, the hypervisor doesn't set the "INVLPG
exiting" VM-execution control; this patch sets the "enable INVPCID"
VM-execution control to 1 when the instruction is supported by the
physical CPU.
If INVPCID is supported by physical cpu, INVPCID will not cause VM
exit in VM.
If INVPCID is not supported by physical cpu, INVPCID causes an #UD
in VM.
When INVPCID is passed through to a VM, according to SDM Vol.3 28.3.3.1,
the INVPCID instruction invalidates linear mappings and combined mappings,
and it is required to do so only for the current VPID.
The HV assigns a unique VPID to each vCPU, so if a guest uses a wrong PCID,
it will not affect other vCPUs.
Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Pass-through PCID related capabilities to VMs:
- The support of PCID (CPUID.01H.ECX[17])
- The support of instruction INVPCID (CPUID.07H.EBX[10])
Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN relies on the capability of VPID to avoid EPT flushes during VMX transitions.
This capability is checked as a must have hardware capability, otherwise, ACRN will
refuse to boot.
Also, the current code has already made sure each vpid for a virtual cpu is valid.
So, no need to check the validity of vpid for vcpu and enable VPID for vCPU by default.
Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM 10.12.5.1 vol.3, local APIC should keep LAPIC state after receiving
INIT. The local APIC ID register should also be preserved.
Tracked-On: #4267
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The patch abstracts a vcpu_reset_internal() API for internal usage; the
function does not touch any vCPU state transition and just does the vCPU
reset processing. It will be called by create_vcpu() and reset_vcpu().
reset_vcpu() acts as a public API and should be called only when a vCPU
receives INIT or the VM resets/resumes from S3. It should not be called by
shutdown_vm() or hcall_sos_offline_cpu(), so the patch removes the
reset_vcpu() calls from shutdown_vm() and hcall_sos_offline_cpu().
The patch also introduces a reset_mode enum so that vcpu and vlapic can do
different context operations according to the reset mode.
Tracked-On: #4267
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Rename vlapic_xxx_write_handler() to vlapic_write_xxx() to make code more
readable;
Tracked-On: #4268
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM 10.4.7.1 Vol.3, the LVT registers should be reset to 0s except
that the mask bits are set to 1s.
In the current code, lvt_last[] has already been set to the correct value
(i.e. 0x10000) in vlapic_reset() before the loop that forces
vlapic->lvt_last[i] to 0U; keeping that loop would lead to reading zero
from the LVT registers after reset, which is non-compliant with the SDM.
Tracked-On: #4266
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For guest reset, a reset of the highest severity guest resets the system.
There is a VM flag to call out the highest severity guest in a specific
scenario, which is a static guest severity assignment.
There are cases where the static highest severity guest is shut down and
the highest severity role should be transferred to another guest.
For example, in the ISD scenario, if the RTVM (the static highest severity
guest) is shut down, the SOS should become the highest severity guest
instead.
is_highest_severity_vm() is updated to detect the highest severity guest
dynamically, and a reset of the highest severity guest is promoted to a
system reset.
Also remove the GUEST_FLAG_HIGHEST_SEVERITY definition.
Tracked-On: #4270
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
For system S5, ACRN assumed that an SOS shutdown would trigger a system
shutdown. So the system shutdown logic was:
1. Trap SOS shutdown
2. Wait for all other guests to shut down
3. Shut down the system
The new logic is refined as:
If all guests are shut down, shut down the whole system.
Tracked-On: #4270
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
We don't use the INIT signal notification method now. This patch
removes it.
Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
We have implemented a new notification method using NMI.
So replace the INIT notification method with the NMI one.
Then we can remove INIT notification related code later.
Tracked-On: #3886
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
There is a window where we may miss the current request in the
notification period when the work flow is as the following:
CPUx + + CPUr
| |
| +--+
| | | Handle pending req
| <--+
+--+ |
| | Set req flag |
<--+ |
+------------------>---+
| Send NMI | | Handle NMI
| <--+
| |
| |
| +--> vCPU enter
| |
+ +
So, this patch enables NMI-window exiting to trigger the next vmexit
once there is no "virtual-NMI blocking" after the vCPU enters VMX non-root
mode. Then we can process the pending request in time.
Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
The NMI used for notification should not be injected into the guest. So,
this patch drops the NMI injection request when we use an NMI
to notify vCPUs. Meanwhile, ACRN doesn't support vNMI well
and there is currently no well-designed way to check whether an NMI is
for notification or for the guest. So, we temporarily treat all NMIs as
notification NMIs for the hard RTVM. It means that the
hard RTVM will never receive an NMI with this patch applied.
TODO: vNMI support is not ready yet; we will add it later.
Tracked-On: #3886
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
ACRN hypervisor needs to kick vCPU off VMX non-root mode to do some
operations in hypervisor, such as interrupt/exception injection, EPT
flush etc. For non lapic-pt vCPUs, we can use IPI to do so. But, it
doesn't work for lapic-pt vCPUs as the IPI will be injected to VMs
directly without vmexit.
Without the way to kick the vCPU off VMX non-root mode to handle pending
request on time, there may be fatal errors triggered.
1) Certain operations may not be carried out on time, which may further
lead to fatal errors. Taking the EPT flush request as an example, once we
don't flush the EPT on time and the guest accesses the out-of-date EPT, a
fatal error happens.
2) ACRN currently sends an IPI with vector 0xF0 to the target vCPU to kick
it off VMX non-root mode if it wants to do some operations on that vCPU.
However, this doesn't work for lapic-pt vCPUs: the IPI will be delivered
to the guest directly without a vmexit, and the guest will receive an
unexpected interrupt. Consequently, if the guest can't handle this
interrupt properly, a fatal error may happen.
The NMI can be used as the notification signal to kick the vCPU off VMX
non-root mode for lapic-pt vCPUs. So, this patch uses NMI as notification signal
to address the above issues for lapic-pt vCPUs.
Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Reserved bits in an 8-bit PAT field have already been checked in
pat_mem_type_invalid. Remove the redundant check
"(PAT_FIELD_RSV_BITS & field) != 0UL" in write_pat_msr.
Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Port 0x64 is the status register of the i8042 keyboard controller. When
the i8042 is defined as an ACPI PnP device in the BIOS, forcing the read
handler to return 0xff would cause an infinite loop when booting the SOS
VM, so expose the physical port read in this case.
Tracked-On: #4228
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the current architecture, the maximum number of vCPUs per VM cannot
exceed the number of pCPUs. Given that the MAX_PCPU_NUM macro is provided
in the board configurations, remove MAX_VCPUS_PER_VM from Kconfig
and add a MAX_VCPUS_PER_VM macro that references MAX_PCPU_NUM directly.
Tracked-On: #4230
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Rename the macro since MAX_PCPU_NUM can be parsed from the board file and
is not a configurable item anymore.
Tracked-On: #4230
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In APICv advanced mode, a targeted vCPU running in non-root mode may get
an outdated TMR and EOI exit bitmap if another vCPU sends it an interrupt
whose trigger mode has changed.
This patch kicks the vCPU off so that it picks up the latest TMR and EOI
exit bitmap when it enters non-root mode again if the incoming interrupt's
trigger mode has changed, and then fills the interrupt into the PIR.
Tracked-On: #4200
Signed-off-by: Li Fei1 <fei1.li@intel.com>
On some platforms, HPA regions for a Virtual Machine cannot be
contiguous because of E820 reserved regions or PCI holes. In such
cases, pre-launched VMs need to be assigned non-contiguous memory
regions, and this patch addresses it.
To keep things simple, current design has the following assumptions,
1. HPA2 always will be placed after HPA1
2. HPA1 and HPA2 don’t share a single ve820 entry.
(Create multiple entries if needed but not shared)
3. Only support 2 non-contiguous HPA regions (can extend
at a later point for multiple non-contiguous HPA)
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Tracked-On: #4195
Acked-by: Anthony Xu <anthony.xu@intel.com>
To handle reboot requests from pre-launched VMs that don't have
GUEST_FLAG_HIGHEST_SEVERITY, we shut down the target VM explicitly
rather than ignoring the request.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
ptirq_prepare_msix_remap was called no matter whether MSI/MSI-X was
enabled or not, and it passed zero in the virtual MSI/MSI-X data field
input parameter to indicate that MSI/MSI-X was disabled. However, it did
almost nothing in that case.
Now ptirq_prepare_msix_remap is called only when MSI/MSI-X is enabled, so
it doesn't need to determine whether MSI/MSI-X is enabled by checking the
virtual MSI/MSI-X data field.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
It's meaningless to sleep a non-running vcpu. Add a state check before
sleeping the thread object of the vcpu.
Tracked-On: #4178
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With CPU sharing enabled, there can be more than one vCPU on one pCPU, so
the smp_call handler should switch the VMCS to the target vCPU's VMCS
before getting the info.
dump_vcpu_reg and dump_guest_mem must run on the correct VMCS; otherwise,
there will be a #GP error.
Renaming:
vcpu_dumpreg -> dump_vcpu_reg
switch_vmcs -> load_vmcs
Tracked-On: #4178
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We care more about leaf and subleaf of cpuid than vcpu_id.
So, this patch changes the cpuid trace-entry to trace the leaf
and subleaf of this cpuid vmexit.
Tracked-On: #4175
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
The PMU is hidden from all guests, so a #UD is expected when a guest
tries to execute the 'rdpmc' instruction.
This patch sets 'RDPMC exiting' in the processor-based
VM-execution controls.
Tracked-On: #3453
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
After changing init_vmcs to the smp-call approach and doing it before
launch_vcpu, it works with the noop scheduler. With a real sharing
scheduler, it has a problem. Consider vmAvcpu1 and vmBvcpu1 sharing pcpu1
while vmBvcpu0 runs on pcpu0:
- vmAvcpu1 on pcpu1 performs a vmentry;
- vmBvcpu0 on pcpu0 requests init_vmcs(vmBvcpu1) via smp call, which is
handled in vmAvcpu1's vmexit (do_init_vmcs) and corrupts the currently
loaded VMCS, so the following vmentry fails;
- pcpu0 then proceeds to launch_vcpu(vmBvcpu1).
This patch marks an event flag when VMCS init is requested for a specific
vcpu. When that vcpu is running and checking pending events, it will do
init_vmcs first.
Tracked-On: #4178
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Starting with TSC_DEADLINE MSR interception disabled, the virtual
TSC_DEADLINE MSR is always 0.
When the interception is enabled, the physical TSC_DEADLINE value needs to
be synced to the virtual TSC_DEADLINE.
When the interception is disabled, there are 2 cases:
- if the timer hasn't expired, sync the virtual TSC_DEADLINE to the
physical TSC_DEADLINE, to make the guest read the same tsc_deadline as it
writes. This may change when the timer actually triggers.
- if the timer has expired, write 0 to the virtual TSC_DEADLINE.
Tracked-On: #4162
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When write to virtual TSC_DEADLINE, if virtual TSC_ADJUST is not zero:
- when guest intends to disarm the tsc_deadline timer, should not arm the timer falsely;
- when guest intends to arm the tsc_deadline timer, should not disarm the timer falsely.
When read from virtual TSC_DEADLINE, if virtual TSC_ADJUST is not zero:
- if physical TSC_DEADLINE is not zero, return the virtual TSC_DEADLINE value;
- if physical TSC_DEADLINE is zero which means it's not armed (automatically disarmed after
timer triggered), return 0 and reset the virtual TSC_DEADLINE.
Tracked-On: #4162
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
xsave area:
legacy region: 512 bytes
xsave header: 64 bytes
extended region: < 3K bytes
So, pre-allocate a 4K area for xsave (see the sketch below). Use the
appropriate instruction to save or restore the area according to the
hardware xsave feature set.
Tracked-On: #4166
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
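A minimal sketch of the pre-allocated area with the sizes from the commit; field and type names are illustrative, not the exact ACRN definitions:

  #define XSAVE_LEGACY_SIZE   512U   /* FXSAVE-compatible legacy region */
  #define XSAVE_HEADER_SIZE   64U    /* XSAVE header (XSTATE_BV, XCOMP_BV, ...) */
  #define XSAVE_AREA_SIZE     4096U  /* 512 + 64 + extended (< 3K) fits in 4K */

  struct xsave_area {
      uint8_t legacy_region[XSAVE_LEGACY_SIZE];
      uint8_t xsave_header[XSAVE_HEADER_SIZE];
      uint8_t extended_region[XSAVE_AREA_SIZE - XSAVE_LEGACY_SIZE - XSAVE_HEADER_SIZE];
  } __attribute__((aligned(64)));    /* XSAVE/XSAVES require 64-byte alignment */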
ptirq_msix_remap doesn't do the real remapping; that is done by vmsi_remap
and vmsix_remap_entry. ptirq_msix_remap only does the preparation.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Issue description:
-----------------
Machine Check Error on Page Size Change
Instruction fetch may cause a machine check error if the page size
and memory type were changed without invalidation on some
processors[1][2]. A malicious guest kernel could trigger this issue.
This issue applies to both primary page table and extended page
tables (EPT), however the primary page table is controlled by
hypervisor only. This patch mitigates the situation in EPT.
Mitigation details:
------------------
Implement non-execute huge pages in EPT.
This patch series clears the execute permission (bit 2) in the
EPT entries for large pages. When EPT violation is triggered by
guest instruction fetch, hypervisor converts the large page to
smaller 4 KB pages and restore the execute permission, and then
re-execute the guest instruction.
The current patch turns on the mitigation by default.
The follow-up patches will conditionally turn on/off the feature
per processor model.
[1] Refer to erratum KBL002 in "7th Generation Intel Processor
Family and 8th Generation Intel Processor Family for U Quad Core
Platforms Specification Update"
https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/7th-gen-core-family-spec-update.pdf
[2] Refer to erratum SKL002 in "6th Generation Intel Processor
Family Specification Update"
https://www.intel.com/content/www/us/en/products/docs/processors/core/desktop-6th-gen-core-family-spec-update.html
Tracked-On: #4101
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
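A minimal sketch of the two halves of the mitigation; EPT_RD/EPT_WR/EPT_EXE follow ACRN's EPT permission macros, while the violation-bit name and the split/restore helpers are illustrative:

  /* 1) When building a 2M/1G EPT mapping, withhold execute permission. */
  uint64_t prot = EPT_RD | EPT_WR;            /* no EPT_EXE on large pages */

  /* 2) On an EPT violation caused by an instruction fetch from such a page,
   *    split it into 4K pages, restore execute permission, and return to the
   *    guest so the faulting instruction is re-executed. */
  if ((exit_qual & EPT_VIOLATION_INST_FETCH) != 0UL) {
      split_large_page(vm, gpa);              /* hypothetical: 2M -> 512 x 4K */
      ept_add_exec_permission(vm, gpa);       /* hypothetical: set bit 2 on 4K leaves */
  }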
In non-64-bit mode, CS segment base address should be considered when
determining the linear address of the vcpu's instruction pointer. Use
vie_calculate_gla() for instruction address translation which also takes
care of 64-bit mode.
Tracked-On: #4064
Signed-off-by: Peter Fang <peter.fang@intel.com>
MISRA C requires specified bounds for array declarations; the previous
declaration of platform_clos_array in board.h does not meet the requirement.
Tracked-On: #3987
Signed-off-by: Victor Sun <victor.sun@intel.com>
hypcall_id has a type of uint64_t and should use '%llx' as the
formatting flag instead of '%d'. Otherwise, we get a
confusing error log when a not-allowed hypercall occurs.
Without this patch:
[96707209us][cpu=1][sev=3][seq=2386]:hypercall -2147483548 is only allowed from SOS_VM!
With this patch:
[84613395us][cpu=1][sev=3][seq=2136]:hypercall 0x80000064 is only allowed from SOS_VM!
So, we can figure out which not-allowed hypercall has been triggered more conveniently.
BTW, this patch also adds the hypcall_id to the error log for hypercalls
triggered from non-ring0.
Tracked-On: #4012
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Now the e820 structure stores the ACRN HV memory layout, not the physical
memory layout. Rename e820 to hv_e820 to show this explicitly.
Tracked-On: #4007
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The AP trampoline code should be accessible
to the hypervisor only; this patch unmaps
this region from the service VM's EPT for security
reasons.
Tracked-On: #3992
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We should use the INIT signal to notify the vcpu threads when
powering off the hard RTVM. To achieve this, we should set
vcpu->thread_obj.notify_mode to SCHED_NOTIFY_INIT.
Patch (27163df9 hv: sched: add sleep/wake for thread object)
tries to set the notify_mode according to `is_lapic_pt_enabled(vcpu)`
in function prepare_vcpu. But at this point, is_lapic_pt_enabled(vcpu)
will always return false. Consequently, it sets notify_mode
to SCHED_NOTIFY_IPI, which then leads to the failure of powering off the
hard RTVM.
This patch fixes it by:
- Initializing notify_mode to SCHED_NOTIFY_IPI in prepare_vcpu.
- Setting notify_mode to SCHED_NOTIFY_INIT when the guest enables the
x2APIC mode of the passthrough LAPIC.
Tracked-On: #3975
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
After adding PCI BAR remap support, an mmio_node may be unregistered
while others are accessing it. This patch adds a lock to protect
mmio_node access.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Since a guest could re-program a PCI device's MSI-X table BAR, we should
add MMIO emulation handler unregistration.
However, after adding the unregister_mmio_emulation_handler API,
emul_mmio_regions is no longer accurate. Just replace it with
max_emul_mmio_regions, which records the max index of the emul_mmio_node.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
In theory, a guest could re-program a PCI BAR to any address. However, the
ACRN hypervisor only supports EPT memory mapping in [0, top_address_space).
So we need to check whether the re-programmed PCI BAR address is within
this scope.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Kick means to notify one thread_object. If the target thread object is
running, send an IPI to notify it; if the target thread object is
runnable, trigger a reschedule for it.
Also add a kick_vcpu API in the vcpu layer to notify a vcpu.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch decouples some scheduling logic and abstracts it into a
scheduler. Then we have a scheduler and a schedule framework. From a
modularization perspective, the schedule framework provides some APIs for
other layers to use, and it interacts with the scheduler through scheduler
interfaces.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- The default behaviors of the PIO & MMIO handlers are the same
for all VMs; there is no need to expose dedicated APIs to register
default handlers for the SOS and pre-launched VMs.
Tracked-On: #3904
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Currently the parameter of init_ept_mem_ops is
'struct acrn_vm *vm'; change it to
'struct memory_ops *mem_ops' and 'vm_id' to avoid
a reversed dependency: page.c is the hardware layer and the vm structure
is upper-layer stuff.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Let the init thread end with run_idle_thread(); the idle thread then takes
over and starts scheduling.
Change enter_guest_mode() to init_guest_mode() as run_idle_thread() is
moved out of it. Also add run_thread() in the schedule module to run a
thread_object's thread loop directly.
Rename: switch_to_idle -> run_idle_thread
Tracked-On: #3813
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Sleeping a thread_object means preventing it from being scheduled;
waking a thread_object is the opposite operation.
This patch also adds notify_mode in thread_object to indicate how to
deliver the request.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
To support CPU sharing, multiple vCPUs can run on the same pCPU, so we
need to do the necessary vCPU context switch. This patch adds the actions
below to the context switch:
1) fxsave/fxrstor;
2) save/restore MSRs: MSR_IA32_STAR, MSR_IA32_LSTAR,
MSR_IA32_FMASK, MSR_IA32_KERNEL_GS_BASE;
3) switch vmcs.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With CPU sharing enabled, the per_cpu vcpu cannot work properly as we
might have multiple vCPUs running on one pCPU.
Add a schedule API sched_get_current to get the current thread_object on a
specific pCPU, and also add a vcpu API get_running_vcpu to get the
corresponding vcpu of the thread_object (see the sketch below).
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
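A minimal sketch of the two APIs, assuming the per-CPU scheduler control keeps a curr_obj pointer, the vCPU embeds its thread object in a field named thread_obj, and an is_idle_thread() helper exists:

  /* schedule module: the thread object currently running on a pCPU */
  struct thread_object *sched_get_current(uint16_t pcpu_id)
  {
      return per_cpu(sched_ctl, pcpu_id).curr_obj;
  }

  /* vcpu layer: map that thread object back to its vCPU (NULL for idle) */
  struct acrn_vcpu *get_running_vcpu(uint16_t pcpu_id)
  {
      struct thread_object *curr = sched_get_current(pcpu_id);

      return ((curr != NULL) && !is_idle_thread(curr)) ?
              container_of(curr, struct acrn_vcpu, thread_obj) : NULL;
  }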
With CPU sharing enabled, we map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, we'd better hide the
pcpu_id in acrn_vcpu and move it to thread_object.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The vcpu thread's stack shouldn't be reset along with reset_vcpu.
There is also a bug here:
when the vcpu B thread sets vcpu->running to false, another vcpu A thread
will treat vcpu B as paused even though it has not been switched out
completely; reset_vcpu will then reset the vcpu B thread's stack and
corrupt its running context.
This patch removes the vcpu thread's stack reset from reset_vcpu.
With the change, we need to do init_vmcs between the time the vcpu startup
address is settled and the time it is scheduled in. And switch_to_idle()
is not needed anymore as the S3 thread's stack will not be reset.
Tracked-On: #3813
Signed-off-by: Fengwei Yin <fengwei.yin@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Two time-related synthetic MSRs are implemented in this patch. Both of
them are partition-wide MSRs (see the sketch of the reference TSC page
below).
- HV_X64_MSR_TIME_REF_COUNT is read only and it is used to return the
partition's reference counter value in 100ns units.
- HV_X64_MSR_REFERENCE_TSC is used to set/get the reference TSC page,
a sequence number, an offset and a multiplier are defined in this
page by hypervisor and guest OS can use them to calculate the
normalized reference time since partition creation, in 100ns units.
Tracked-On: #3831
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
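A minimal sketch of the reference TSC page layout and the guest-side formula; the structure follows the Hyper-V TLFS definition, integration with ACRN's MSR emulation is omitted:

  /* Guest computes the 100ns reference time as
   *   ref_time = ((tsc * tsc_scale) >> 64) + tsc_offset
   * re-reading tsc_sequence before and after to detect concurrent updates. */
  struct hv_reference_tsc_page {
      uint32_t tsc_sequence;   /* 0 means the page is invalid: fall back to the MSR */
      uint32_t reserved;
      uint64_t tsc_scale;      /* 64.64 fixed-point multiplier set by the hypervisor */
      int64_t  tsc_offset;     /* signed offset added after scaling */
      uint8_t  rsvd2[4072];    /* pad the structure to a full 4 KiB page */
  };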
This patch implements the minimum set of TLFS functionality. It
includes 6 vCPUID leaves and 3 vMSRs.
- 0x40000001 Hypervisor Vendor-Neutral Interface Identification
- 0x40000002 Hypervisor System Identity
- 0x40000003 Hypervisor Feature Identification
- 0x40000004 Implementation Recommendations
- 0x40000005 Hypervisor Implementation Limits
- 0x40000006 Implementation Hardware Features
- HV_X64_MSR_GUEST_OS_ID Reporting the guest OS identity
- HV_X64_MSR_HYPERCALL Establishing the hypercall interface
- HV_X64_MSR_VP_INDEX Retrieve the vCPU ID from hypervisor
Tracked-On: #3832
Signed-off-by: wenwumax <wenwux.ma@intel.com>
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Consider the following case when TPR shadow is used with vlapic
basic mode:
1) 2 interrupts are pending in vlapic. INTa's priority > TPR and
INTb's priority <= TPR.
2) TPR threshold is set to zero and INTa is injected to guest.
3) The guest sets TPR to the priority of INTa.
4) EOI of INTa. PPR is updated to TPR, which equals INTa's priority.
INTb cannot be injected because its priority <= PPR.
5) The guest sets TPR to zero. Because the TPR threshold is still zero,
there is no TPR threshold vmexit. But since both TPR and ISRV are zero at
this time, PPR is zero as well. INTb still cannot be injected.
This is a bug.
By adding vcpu_make_request(vlapic->vcpu, ACRN_REQUEST_EVENT) in EOI,
TPR threshold will be updated before vm_resume.
Tracked-On: #3795
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently we use a 1:1 pcpu:vcpu mapping, so we don't need
a runqueue. Remove it as preparation work for abstracting the scheduler
framework.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
PRMRR related MSRs need to be configured by platform BIOS / bootloader.
These settings are not allowed to be changed by guest.
VMs currently have no requirement to access these MSRs even when vSGX is enabled.
So, this patch disables PRMRR related MSRs in VM.
Tracked-On: #3739
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
--remove unnecessary includes
--remove unnecessary forward-declaration for 'struct vhm_request'
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
When a VM is configured with LAPIC PT mode and its vCPU is in x2APIC
mode, the corresponding pCPU needs to be reset during VM shutdown/reset
as its physical LAPIC was used by its guest.
This commit fixes an issue where this reset never happens.
is_lapic_pt_enabled() needs to be called before reset_vcpu() to be able
to correctly reflect a vCPU's APIC mode.
A vCPU with LAPIC PT mode but in xAPIC mode does not require such reset,
since its physical LAPIC was not touched by its guest directly.
v2 -> v3:
- refine edge case detection logic
v1 -> v2:
- use a separate function to return the bitmap of LAPIC PT enabled pCPUs
Tracked-On: #3708
Signed-off-by: Peter Fang <peter.fang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Jack Ren <jack.ren@intel.com>
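A rough sketch of the ordering constraint above: collect the LAPIC pass-through state before reset_vcpu() wipes it. The helper names follow ACRN naming style but are used here as assumptions, not verified code.

    /* Hedged sketch: build the bitmap of pCPUs that need a reset because their
     * physical LAPIC was used by an x2APIC, LAPIC-PT guest vCPU. Must run
     * before reset_vcpu() so the APIC mode is still observable. */
    static uint64_t lapic_pt_reset_bitmap(struct acrn_vm *vm)
    {
        uint64_t bitmap = 0UL;
        uint16_t i;
        struct acrn_vcpu *vcpu;

        foreach_vcpu(i, vm, vcpu) {
            if (is_lapic_pt_enabled(vcpu)) {
                bitmap_set_nolock(pcpuid_from_vcpu(vcpu), &bitmap);
            }
        }
        return bitmap;
    }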
To enable static configuration of different scenarios, we configure VMs
in HV code and prepare all necessary resources for this VM in the create VM
hypercall. It means when we create one VM through hypercall, HV will
read all its configuration and run it automatically.
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
As we introduced vcpu_affinity[] to assign vcpus to different pcpus, the
old policy and functions are not needed. Remove them.
Tracked-On: #3663
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add this vcpu_affinity[] for each VM to indicate the assignment policy.
With it, pcpu_bitmap is not needed, so remove it from vm_config.
Instead, vcpu_affinity is a must for each VM.
This patch also adds some sanity checks of vcpu_affinity[]. Here are
some rules:
1) only one bit can be set for each vcpu_affinity of a vcpu.
2) two vcpus in the same VM cannot be set with the same vcpu_affinity.
3) vcpu_affinity cannot be set to a pcpu which is used by a pre-launched VM.
v4: config SDC with CONFIG_MAX_KATA_VM_NUM
v5: config SDC with CONFIG_MAX_PCPU_NUM
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
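The three rules above can be captured in a small check; the sketch below is a hypothetical helper (the function name and the prelaunch_pcpus bitmap are assumptions), not the exact ACRN code.

    /* Hedged sketch of the vcpu_affinity[] sanity rules listed above. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool sanitize_vcpu_affinity(const uint64_t *vcpu_affinity, uint32_t vcpu_num,
                                       uint64_t prelaunch_pcpus)
    {
        uint64_t used = 0UL;

        for (uint32_t i = 0U; i < vcpu_num; i++) {
            uint64_t aff = vcpu_affinity[i];

            /* rule 1: exactly one bit set per vCPU */
            if ((aff == 0UL) || ((aff & (aff - 1UL)) != 0UL)) {
                return false;
            }
            /* rule 2: no two vCPUs of the same VM share a pCPU;
             * rule 3: the pCPU must not be used by a pre-launched VM */
            if (((aff & used) != 0UL) || ((aff & prelaunch_pcpus) != 0UL)) {
                return false;
            }
            used |= aff;
        }
        return true;
    }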
Now, we create vcpus while the VM is being created in the hypervisor. The
create vcpu hypercall will not be used any more. For compatibility,
keep the hypercall HC_CREATE_VCPU but let it do nothing.
v4: Don't remove HC_CREATE_VCPU hypercall, let it do nothing.
Tracked-On: #3663
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Now the structures (run_context & ext_context) are defined
in vcpu.h, and they are used in the lower-layer module (wakeup.S).
This patch moves the structures down from vcpu.h to cpu.h
to avoid a reversed dependency.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now the structures (union source & struct intr_source) are defined
in ptdev.h and used in vtd.c and assign.c. From the modularization
perspective, vtd is the hardware layer and ptdev is the upper-layer
module, so this patch moves these structures down
to avoid a reversed dependency.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Using a per_cpu list to record ptdev interrupts is more reasonable than
recording them per-VM. It makes dispatching such interrupts easier,
as we now do it in softirq, which runs following the interrupt context of
each pcpu.
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
From the modularization perspective, it's not suitable to put pcpu- and vm-related
request operations in schedule. So move them to the pcpu and vm
modules respectively. Also change the need_offline return value to bool.
Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Now, we have the assumption that SOS controls whether the platform
should enter S5 or not. So when SOS tries to enter S5, we just
forward the S5 request to the native port, which makes sure platform
S5 is totally aligned with SOS S5.
With higher severity guests introduced, this assumption is not
true any more. We need to extend the platform S5 process to
handle higher severity guests:
- For DM launched RTVM, we need to make sure these guests
are off before putting the whole platform into S5.
- For pre-launched VM, there are two cases:
* if the OS running in it supports S5, we wait for the guest to be off.
* if the OS running in it doesn't support S5, we expect it
to invoke one hypercall to notify HV to shut it down.
NOTE: this case is not supported yet. Will add it in the
future.
Tracked-On: #3564
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Change the input parameter from vcpu to eptp in order to make this api
more generic; no need to care about the normal world or the secure world.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Currently, the clos id of the cpu cores in vmx root mode is the same as in non-root mode.
For RTVM, if the hypervisor shares the same clos id with non-root mode, the cacheline may
be polluted due to hypervisor code execution on vmexit.
The patch adds hv_clos in vm_configurations.c.
The hypervisor initializes the clos setting according to hv_clos during physical cpu core initialization.
For RTVM, MSR auto load/store areas are used to switch the different settings for VMX root/non-root
mode.
Tracked-On: #2462
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
-- remove some unnecessary includes
-- fix a typo
-- remove unnecessary void before launch_vms
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Fix the tsc_deadline issue by trapping TSC_DEADLINE msr writes if VMX_TSC_OFFSET is not 0,
because there is an assumption in the ACRN vART design that pTSC_Adjust and vTSC_Adjust are
both 0.
We can leave the TSC_DEADLINE write pass-through without a correctness issue because there is
no offset between the pTSC and vTSC, and no write to vTSC or vTSC_Adjust has been observed
in the RTOS so far.
This commit fixes the potential correctness issue, but the RT performance will be badly affected
if vTSC or vTSC_Adjust is not zero, which we will address if such a case happens.
Tracked-On: #3636
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now that ACPI is enabled for pre-launched VMs, we can remove all mptable code.
Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Statically define the per-VM RSDP/XSDT/MADT ACPI template tables in vacpi.c.
The RSDP/XSDT tables are copied to guest physical memory after the checksum is
calculated. For the MADT table, first fix up the process id/lapic id in its lapic
subtable, then calculate the MADT table's checksum before it is copied to
guest physical memory.
Add an 8-bit checksum function in util.h.
Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
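The 8-bit checksum mentioned above follows the usual ACPI rule that all bytes of a table, including the checksum byte, sum to zero modulo 256; a minimal sketch (not necessarily the exact util.h signature):

    /* Hedged sketch: value to store in the checksum byte so that the sum of
     * all bytes in the table wraps to zero (mod 256). */
    #include <stddef.h>
    #include <stdint.h>

    static uint8_t calculate_checksum8(const void *buf, size_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint8_t sum = 0U;

        for (size_t i = 0U; i < len; i++) {
            sum += p[i];
        }
        return (uint8_t)(0x100U - sum);
    }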
EPT tables are shared by the MMU and IOMMU.
Some IOMMUs don't support page-walk coherency, so the cpu cache of EPT entries
should be flushed to memory after modifications, so that the modifications
are visible to the IOMMUs.
This patch adds a new interface to flush the cache of modified EPT entries.
There are different implementations for EPT/PPT entries:
- For PPT, there is no need to flush the cpu cache after an update.
- For EPT, call iommu_flush_cache to make the modifications visible
to the IOMMUs.
Tracked-On: #3607
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
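For IOMMUs without page-walk coherency, the flush boils down to pushing the dirty cache lines that hold the modified entries out to memory. A hedged sketch using CLFLUSHOPT follows; the helper names are illustrative, and a real implementation would also fall back to CLFLUSH when CLFLUSHOPT is absent.

    /* Hedged sketch: write modified EPT entries back to memory so that a
     * non-coherent IOMMU observes them. */
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64UL

    static inline void clflushopt_line(volatile void *p)
    {
        asm volatile ("clflushopt %0" : "+m" (*(volatile char *)p));
    }

    static void flush_ept_range(void *base, uint64_t size)
    {
        volatile char *p = (volatile char *)base;
        volatile char *end = p + size;

        for (; p < end; p += CACHE_LINE_SIZE) {
            clflushopt_line(p);
        }
        asm volatile ("sfence" ::: "memory");  /* order flushes before IOMMU invalidation */
    }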
-- move 'RFLAGS_AC' to cpu.h
-- move 'VMX_SUPPORT_UNRESTRICTED_GUEST' to msr.h
and rename it to 'MSR_IA32_MISC_UNRESTRICTED_GUEST'
-- move 'get_vcpu_mode' to vcpu.h
-- remove deadcode 'vmx_eoi_exit()'
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Move some data structures and APIs related to host reset
from vm_reset.c to pm.c; these are not related to the guest.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Now, we use the native gdt saved in the boot context for the guest and assume
it can be put at the same address in the guest. But this may not be true
after the pre-launched VM is introduced. The gdt for the guest could
be overwritten by guest images.
This patch makes the 32bit protected mode boot not use the saved boot context.
Instead, we use predefined vcpu_regs values for the protected-mode guest to
initialize the guest bsp registers and copy the pre-defined gdt table
to a safe place in guest memory to avoid the gdt table being overwritten by
guest images.
Tracked-On: #3532
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently, 'flags' is defined and set but never used afterwards
in the flow of handling the i/o request.
Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the definition of the port i/o handler, the struct acrn_vm * pointer
is redundant as input, as the context of acrn_vm is already linked
in struct acrn_vcpu * by vcpu->vm, so 'vm' is not required as input.
This patch removes the argument '*vm' from 'io_read_fn_t' &
'io_write_fn_t' and uses '*vcpu' for them instead.
Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Check whether the address area pointed by the guest
cr3 is valid or not before loading pdptrs. Inject #GP(0)
to guest if there are any invalid cases.
Tracked-On: #3572
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Fix the violations touched below:
1.Cast operation on a constant value
2.signed/unsigned implicit conversion
3.return value unused.
V1->V2:
1.The bitmap api returns a boolean type, so there is no need to check "!= 0"; deleted.
2.The behaviors ~(uint32_t)X and (uint32_t)~X are not defined in the ACRN hypervisor Coding Guidelines,
so the change to it was removed.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
This patch moves vmx_rdmsr_pat/vmx_wrmsr_pat from vmcs.c to vmsr.c,
so that these two functions would become internal functions inside
vmsr.c.
This approach improves the modularity.
v1 -> v2:
* remove 'vmx_rdmsr_pat'
* rename 'vmx_wrmsr_pat' with 'write_pat_msr'
Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Add an emulated PCI device configuration for SOS to prepare for adding support for customizing
special pci operations for each emulated PCI device.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Create an iommu domain for every guest in vpci_init, no matter whether there's a PTDev
in it.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
Align the SOS pci device configuration with pre-launched VMs and filter pre-launched VMs'
PCI PT devices out of the SOS pci device configuration.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
For non-trusty hypercalls, HV should inject #GP(0) to the vCPU if they come
from non-ring0, or inject #UD if they come from ring0 of a non-SOS VM. Also
we should not modify the RAX of the vCPU for these invalid vmcalls.
Tracked-On: #3497
Signed-off-by: Victor Sun <victor.sun@intel.com>
In some cases, the guest needs to get more information under a virtual environment,
like guest capabilities. Basically this could be done by hypercalls, but
hypercalls are designed for the trusted VM/SOS VM, so we need a mechanism to report
this information to normal VMs. In this patch, vCPUID leaf 0x40000001 is
used to satisfy this need by reporting some extended information for the guest
via CPUID.
Tracked-On: #3498
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The policy of vART is that software which runs in native can run in a
VM too. And on the native side, the relationship between the
ART hardware and TSC is:
pTSC = (pART * M) / N + pAdjust
The vART solution is:
- Present the ART capability to the guest through CPUID leaf
15H with M/N identical to the physical values.
- PT devices see the pART (vART = pART).
- Guest expects: vTSC = vART * M / N + vAdjust.
- VMCS.OFFSET = vTSC - pTSC = vAdjust - pAdjust.
So to support vART, we should do the following:
1. If vAdjust and vTSC are changed by the guest, we should change
VMCS.OFFSET accordingly.
2. Make the assumption that pAdjust is never touched by ACRN.
For #1, commit "a958fea hv: emulate IA32_TSC_ADJUST MSR" has implemented
it. And for #2, ACRN never touches pAdjust.
--
v2 -> v3:
- Add comment when handle guest TSC_ADJUST and TSC accessing.
- Initialize the VMCS.OFFSET = vAdjust - pAdjust.
v1 -> v2
Refine commit message to describe the whole vART solution.
Tracked-On: #3501
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
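The offset bookkeeping above reduces to a single assignment once pAdjust is assumed to be zero, as the commit states. The sketch below is illustrative only; the VMCS write helper and field names follow ACRN's style but are assumptions, not verified against the tree.

    /* Hedged sketch: keep VMCS.TSC_OFFSET equal to vAdjust - pAdjust.
     * With pAdjust assumed to be 0, a guest write of vAdjust maps directly
     * onto the offset field. */
    static void on_guest_tsc_adjust_write(struct acrn_vcpu *vcpu, uint64_t v_adjust)
    {
        exec_vmwrite64(VMX_TSC_OFFSET_FULL, v_adjust);   /* assumed helper and field name */
        vcpu_set_guest_msr(vcpu, MSR_IA32_TSC_ADJUST, v_adjust);
    }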
After "commit f0e1c5e init vcpu host stack when reset vcpu", SOS resume form S3
wants to schedule to vcpu_thread not the point where SOS enter S3. So we should
schedule to idel first then reschedule to execute vcpu_thread.
Tracked-On: #3387
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
PMC is hidden from the guest and the hypervisor should
inject #UD to the guest on a 'rdpmc' vmexit.
Tracked-On: #3453
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Per the SDM, writing 0 to MSR_IA32_MCG_STATUS is allowed; HV should not
return -EACCES in this case.
Tracked-On: #3454
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
'invept' is not expected in guest and hypervisor should
inject UD when 'invept' VM exit happens.
Tracked-On: #3444
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Otherwise, the previous local variables in the host stack are not reset.
Tracked-On: #3387
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
softirq shouldn't be bound to the vcpu thread. One issue with this
is that the shell (based on timer) can't work if we don't start any guest.
This change also tries its best to make the softirq handler run
with irq enabled.
Also update the irq disable/enable in the vmexit handler to align
with the usage in vcpu_thread.
Tracked-On: #3387
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Because we depend on the guest OS switching to x2apic mode to enable lapic pass-thru, the vlapic is working at the early stage of booting, e.g. in the virtual boot loader.
After lapic pass-thru is enabled, no interrupt should be injected via the vlapic any more.
This commit resets the vlapic to clear the pending status and adds ptapic_ops to enforce that no more interrupts are accepted/injected via the vlapic.
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
This commit adds ops to vlapic structure, and add an *ops parameter to vlapic_reset().
At vlapic reset, the ops is set to the global apicv_ops, and may be assigned
to other ops later.
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vCPU schedule state changes are under schedule lock protection, so there's no need
for them to be atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For pre-launched VMs and SOS, vCPUs are created on the BSP one by one; for post-launched VMs,
vCPUs are created under vmm_hypercall_lock protection. So vcpu_create is called sequentially,
and operations in vcpu_create don't need to be atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
For each vCPU, the EOI exit bitmap is a global parameter which should be set or cleared
atomically since there's no lock to protect this critical variable.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Now sched_object and sched_context are protected by scheduler_lock. There's no
need to use runqueue_lock to protect the schedule runqueue if we have no plan to
support schedule migration.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
'pcpu_id' should be less than CONFIG_MAX_PCPU_NUM,
else 'per_cpu_data' will overflow. This commit fixes
this potential overflow issue.
Tracked-On: #3397
Signed-off-by: Tianhua Sun <tianhuax.s.sun@intel.com>
Reviewed-by: Yonghua Huang <yonghua.huang@intel.com>
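A tiny sketch of the guard described above; the array and config macro names follow the commit text, but the accessor itself is a hypothetical helper.

    /* Hedged sketch: refuse out-of-range ids instead of indexing past
     * per_cpu_data[]. */
    static inline struct per_cpu_region *get_cpu_region(uint16_t pcpu_id)
    {
        if (pcpu_id >= CONFIG_MAX_PCPU_NUM) {
            return NULL;
        }
        return &per_cpu_data[pcpu_id];
    }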
ACRN Coding guidelines require that pointers of two different types can't
be converted to each other, except void *.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
ACRN Coding guidelines require that type conversion shall be explicit.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In most cases, the vLAPIC will only be accessed by its related vCPU, and there's no
synchronization issue in that case. However, other vCPUs could deliver interrupts
to the current vCPU; in this case, the IRR (for the APICv basic situation) or PIR
(for the APICv advanced situation), and the TMR in both cases, could be accessed by more
than one vCPU simultaneously. So operations on IRR or PIR should be atomic
and visible to other vCPUs immediately. In another case, the vLAPIC could be accessed
by another vCPU when creating or resetting the vCPU, which is supposed to happen
sequentially.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
cancel_event_injection is not needed any more if we do 'schedule' prior to
acrn_handle_pending_request. Commit "921288a6672: hv: fix interrupt
lost when do acrn_handle_pending_request twice" brought 'schedule'
forward, so remove the cancel_event_injection related stuff.
Tracked-On: #3374
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
ACRN Coding guidelines requires no dead code.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Microarchitectural Data Sampling (MDS) is a hardware vulnerability
which allows unprivileged speculative access to data which is available
in various CPU internal buffers.
1. Mitigation on ACRN:
1) Microcode update is required.
2) Clear CPU internal buffers (store buffer, load buffer and
load port) if the current CPU is affected by MDS, on VM entry,
to avoid any information leakage to the guest through the above buffers.
3) Mitigation is not needed if the ARCH_CAP_MDS_NO bit (bit5)
is set in IA32_ARCH_CAPABILITIES MSR (10AH); in this case, the
current processor is not affected by the MDS vulnerability. In other
cases mitigation for MDS is required.
2. Methods to clear CPU buffers (microcode update is required):
1) L1D cache flush
2) VERW instruction
Either of the above operations will trigger clearing of all
CPU internal buffers if this CPU is affected by MDS.
The above mechanism is enumerated by:
CPUID.(EAX=7H, ECX=0):EDX[MD_CLEAR=10].
3. Mitigation details on ACRN:
if (processor is affected by MDS)
if (processor is not affected by L1TF OR
L1D flush is not launched on VM Entry)
execute VERW instruction when VM entry.
endif
endif
4. Reference:
Deep Dive: Intel Analysis of Microarchitectural Data Sampling
https://software.intel.com/security-software-guidance/insights/
deep-dive-intel-analysis-microarchitectural-data-sampling
Deep Dive: CPUID Enumeration and Architectural MSRs
https://software.intel.com/security-software-guidance/insights/
deep-dive-cpuid-enumeration-and-architectural-msrs
Tracked-On: #3317
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Reviewed-by: Jason CJ Chen <jason.cj.chen@intel.com>
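The VERW-based clearing in step 2 above is just the instruction executed with a valid data-segment selector as its memory operand; a hedged sketch (the function name is illustrative):

    /* Hedged sketch: on MD_CLEAR-capable microcode, VERW with a valid data
     * segment selector as operand flushes the affected CPU internal buffers. */
    #include <stdint.h>

    static inline void clear_cpu_internal_buffers(void)
    {
        uint16_t ds_sel;

        asm volatile ("mov %%ds, %0" : "=r" (ds_sel));
        asm volatile ("verw %0" : : "m" (ds_sel) : "cc");
    }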
1. Reset the polarity of ptirq_remapping_info to zero.
This helps to set the correct initial pin state, and fixes the interrupt lost issue
when assigning a ptirq to the uos.
2. Since vioapic_generate_intr relies on the rte, we should build the rte before
generating an interrupt; this fixes the redundant interrupt.
Tracked-On: #3362
Signed-off-by: Cai Yulong <yulongc@hwtc.com.cn>
According to the SDM, xsetbv writes the contents of registers EDX:EAX into the 64-bit
extended control register (XCR) specified in the ECX register. (On processors
that support the Intel 64 architecture, the high-order 32 bits of RCX are ignored.)
In the current code, RCX is checked; the high-order 32 bits should be ignored.
Tracked-On: #3360
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Yonghua Huang <yonghua.huang@intel.com>
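A hedged sketch of the handler-side fix; the register accessor and injection helper names follow ACRN's style but are assumptions here, not verified signatures.

    /* Hedged sketch: only the low 32 bits of RCX select the XCR index. */
    static int32_t xsetbv_exit_handler_sketch(struct acrn_vcpu *vcpu)
    {
        uint64_t idx = vcpu_get_gpreg(vcpu, CPU_REG_RCX) & 0xFFFFFFFFUL;

        if (idx != 0UL) {               /* only XCR0 is implemented */
            vcpu_inject_gp(vcpu, 0U);
            return 0;
        }
        /* ... validate EDX:EAX against the supported XCR0 bits and load it ... */
        return 0;
    }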
The current implementation triggers the shutdown vm request on the BSP vCPU of the VM,
not on the vCPU that traps out because of the triple fault. However, if the BSP vCPU of the VM
is handling another IO emulation, it may overwrite the triple fault IO request in
the vhm_request_buffer in the acrn_insert_request function. The atomic operation of
get_vhm_req_state can't guarantee the vhm_request_buffer will not be accessed by another
IO request if it is not running on the corresponding vCPU. So the triple fault shutdown
VM IO request should be triggered on the vCPU that traps out because of the triple
fault exception.
Besides, rt_vm_pm1a_io_write will do the right thing, which we shouldn't do in
triple_fault_shutdown_vm.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
According to the SDM, bit N (physical address width) to bit 63 should be masked when calculating
the host page frame number.
Currently, the hypervisor doesn't set any of these bits, so gpa2hpa can work as expected.
However, if any of these bits is set, gpa2hpa returns a wrong value.
The hypervisor never sets bit N to bit 51 (reserved bits), so for simplicity, just mask bit 52 to bit 63.
Tracked-On: #3352
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
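The masking above amounts to keeping only the address bits of the EPT entry; a hedged sketch follows (the mask constant and helper name are illustrative, for a 4K leaf mapping):

    /* Hedged sketch: strip bits 52..63 (and the low attribute bits) before
     * forming the host physical address of a 4K mapping. */
    #include <stdint.h>

    #define EPT_PFN_MASK   0x000FFFFFFFFFF000UL   /* keep address bits 12..51 */

    static inline uint64_t ept_leaf_to_hpa(uint64_t entry, uint64_t page_offset)
    {
        /* page_offset is gpa & 0xFFF, the offset within the mapped page */
        return (entry & EPT_PFN_MASK) + page_offset;
    }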
ACRN coding guideline requires function shall have only one return entry.
Fix it.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vcpu.c was never scanned because the scan tool crashed on it!
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
1.No brackets to then/else.
2.Function return value not checked.
3.Signed/unsigned conversion without cast.
V1->V2:
change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
add "void *" prefix on the 1st parameter of memset.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The sbuf is allocated for each pcpu by a hypercall from SOS. Before launching
the Guest OS, the script will offline cpus, which will trigger vcpu reset and
then reset the sbuf pointer. But the sbuf is only initialized once by SOS, so these
cpus for the Guest OS have no sbuf to use. Thus, when running 'acrntrace' on SOS,
there is no trace data for the Guest OS.
To fix the issue, only reset the sbuf for SOS.
Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
The current implementation caches each ISR vector in the ISR vector stack and does
an ISR vector stack check when updating the PPR. However, there is no need to do this
because:
1) We will not touch vlapic->isrvec_stk[0] except when doing vlapic_reset:
So we don't need to do the vlapic->isrvec_stk[0] check.
2) We only deliver higher priority interrupts from IRR to ISR:
So we don't need to check whether the vlapic->isrvec_stk interrupts are always increasing.
3) There are only 15 different interrupt priorities, so it will never happen that more than
15 interrupts are delivered to ISR:
So we don't need to check whether vlapic->isrvec_stk_top will be larger than
ISRVEC_STK_SIZE, which is 16.
This patch removes the ISR vector stack and uses isrv to cache the vector number for
the highest priority bit that is set in the ISR.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Fix the violations without touching the logic.
1.Function return value not checked.
2.Logical conjunctions need brackets.
3.No brackets to then/else.
4.Type conversion without cast.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
MISRA-C requires that inline functions should be declared static.
These APIs are external interfaces, so remove inline.
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
modified: arch/x86/guest/vcpu.c
MISRA-C standard requires the type of result of expression in if/while pattern shall be boolean.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
There is a lot of work to do between create_vm (HV will mark the vm's state
as VM_CREATED at this stage) and vm_run (HV will mark the vm's state as VM_STARTED),
like building the mptable/acpi table, initializing mevent and vdevs. If anything
goes wrong between create_vm and vm_run, the devicemodel jumps
to the deinit process and tries to destroy the vm. For example, if
vm_init_vdevs fails, the devicemodel jumps to dev_fail and then destroys
the vm.
For a normal vm in the above situation, it is fine to destroy the vm, and we can create and
start it next time. But for an RTVM, we can't destroy the vm as the vm's state is
VM_CREATED, and we can only destroy a vm when its state is VM_POWERING_OFF. So the
vm will stay in the VM_CREATED state and we will never have a chance to destroy it.
Consequently, we can't create and start the vm next time.
This patch fixes it by allowing to pause and then destroy an RTVM when its state is VM_CREATED.
Tracked-On: #3069
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
wbinvd is used to write back all modified cache lines in the processor's
internal cache to main memory and invalidate (flush) the internal caches.
Use clflushopt instructions to emulate wbinvd by flushing each
guest vm's memory; if CLFLUSHOPT is not supported, boot will fail.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
has_rt_vm walks through all VMs to check the RT VM flag; if
there is no RT VM it returns false, otherwise it returns true.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
The ept_flush_leaf_page API is used to flush the address space covered
by an ept page entry; users can use it together with walk_ept_mr to
flush a VM's address space.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The walk_ept_table API is used to walk through the EPT table to get
all present pages; the user can get each page entry and its size
from the walk_ept_table callback.
get_ept_entry is used to get the EPT pointer of the vm: if the current
context of the vm is the secure world, it returns the secure world EPT pointer, otherwise
it returns the normal world EPT pointer.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation doesn't make clear which access types we support for
APIC-Access VM Exit:
1) linear access for an instruction fetch
-- The APIC-access page is mapped as UC, which doesn't support fetch.
2) linear access (read or write) during event delivery
-- This doesn't happen in the normal case unless the guest goes wrong, such as
setting the IDT table in the APIC-access page. In this case, we don't need to support it.
3) guest-physical access during event delivery;
guest-physical access for an instruction fetch or during instruction execution
-- Do we plan to support enabling the APIC in real mode? I don't think so.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This patch introduces check_vm_vlapic_state API instead of is_lapic_pt_enabled
to check if all the vCPUs of a VM are using x2APIC mode and LAPIC
pass-through is enabled on all of them.
When the VM is in VM_VLAPIC_TRANSITION or VM_VLAPIC_DISABLED state,
following conditions apply.
1) For pass-thru MSI interrupts, interrupt source is not programmed.
2) For DM emulated device MSI interrupts, interrupt is not delivered.
3) For IPIs, it will work only if the sender and destination are both in x2APIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch introduces vLAPIC state for a VM. The VM vLAPIC state can
be one of the following
* VM_VLAPIC_X2APIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use x2APIC mode
* VM_VLAPIC_XAPIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use xAPIC mode
* VM_VLAPIC_DISABLED - All the vCPUs/vLAPICs of this VM are in Disabled mode
* VM_VLAPIC_TRANSITION - Some of the vCPUs/vLAPICs of this VM (Except for those in Disabled mode)
are in xAPIC and the others in x2APIC
Upon a vCPU updating the IA32_APIC_BASE MSR to switch LAPIC mode, this
API is called to sync the vLAPIC state of the VM. Upon VM creation and reset,
vLAPIC state is set to VM_VLAPIC_XAPIC, as ACRN starts the vCPUs vLAPIC in
XAPIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
is_xapic_enabled API returns true if vLAPIC is in xAPIC mode. In
all other cases, it returns false.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the static and inline attributes from the API is_x2apic_enabled
and declare a prototype in vlapic.h. Also fix the check performed on the guest
APICBASE_MSR value to query the vLAPIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch changes the code in vlapic_set_apicbase for the following reasons
1) Better readability as it first checks if the new value programmed into
MSR is any different from the existing value cached in guest structures
2) Check if both bits 11:10 are set before enabling x2APIC mode for guest.
Current code does not check if Bit 11 is set.
3) Add TODO in the comments, to detail about the current gaps in
IA32_APIC_BASE MSR emulation.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
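Point 2) above is a two-bit check on IA32_APIC_BASE; a hedged sketch (the macro names are illustrative):

    /* Hedged sketch: x2APIC mode requires both EN (bit 11) and EXTD (bit 10). */
    #include <stdbool.h>
    #include <stdint.h>

    #define APICBASE_EN     (1UL << 11U)   /* global xAPIC enable */
    #define APICBASE_EXTD   (1UL << 10U)   /* x2APIC mode enable */

    static bool apicbase_requests_x2apic(uint64_t new_value)
    {
        return ((new_value & (APICBASE_EN | APICBASE_EXTD)) ==
                (APICBASE_EN | APICBASE_EXTD));
    }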
V1: Initial Patch
Modularize vpic. The current patch reduces the usage
of acrn_vm inside the vpic.c file.
Due to the global nature of register_pio_handler, where
acrn_vm is being passed, some usage remains.
These need to be in a separate "interface" file.
That will come in a smaller, newer patch provided
this patch is accepted.
V2:
Incorporated comments from Jason.
V3:
Fixed some MISRA-C Violations.
Tracked-On: #1842
Signed-off-by: Arindam Roy <arindam.roy@intel.com>
Reviewed-by: Xu, Anthony <anthony.xu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This commit extracts the pm io handler registration code into register_pm_io_handler()
to reduce the cyclomatic complexity of create_vm() in order to be compliant with
MISRA-C rules.
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
When the lapic is passthru, the vpic and vioapic cannot be used anymore. In the
current code, users can still inject vpic interrupts to the Guest OS; this is
not allowed.
This patch removes the vpic and vioapic init functions when
creating a VM with lapic passthru. But the APIs in vpic and vioapic are
called in many places; for these APIs, follow the principles below:
1. For the APIs which will access uninitialized variables, and may cause a
hypervisor hang, add @pre to make sure the caller only calls them after the vpic or
vioapic is initialized.
2. For the APIs which only return some static value, do nothing with them.
3. For the APIs which users call to inject interrupts, such as
vioapic_set_irqline_lock or vpic_set_irqline, add a condition in these
APIs to make sure they only inject an interrupt when the vpic or vioapic is
initialized. This change is to make sure the vuart or hypercall need not
care whether the lapic is passthru or whether the vpic and vioapic are
initialized.
Tracked-On: #3227
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
In the SDC scenario, the SOS VM id is fixed to 0 so some hypercalls from guests
use a hardcoded "0" to represent the SOS VM; this would bring issues
for the HYBRID scenario in which the SOS VM id is non-zero.
Now introduce a new VM id concept for the DM/VHM hypercall APIs: when creating
post-launched VMs, return a relative VM id which is from the SOS point of view.
DM/VHM can always treat their own vm id as "0". When they
make hypercalls, the hypervisor will convert the VM id to the absolute id
when dispatching the hypercalls.
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Changes:
- In the current design, hypercalls are only allowed to be called from SOS or
the trusty VM, so separate the trusty hypercalls from dispatch_hypercall().
The vm parameter referenced by hcall_xxx() should be the SOS VM;
- do not inject #UD for hypercalls from non-SOS, just return -ENODEV;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
According to SDM vol1 13.3:
Writing 1 to a reserved bit of XCR0 will trigger a GP.
This patch makes ACRN behavior align with the SDM definition.
Tracked-On: #3239
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
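A hedged sketch of the check, assuming a cached bitmap of the XCR0 bits the platform actually supports (the variable and function names are assumptions):

    /* Hedged sketch: writing any unsupported (reserved) XCR0 bit, or clearing
     * the x87 state bit, must result in #GP(0). */
    #include <stdbool.h>
    #include <stdint.h>

    static bool xcr0_write_is_valid(uint64_t new_xcr0, uint64_t supported_xcr0)
    {
        if ((new_xcr0 & ~supported_xcr0) != 0UL) {
            return false;                      /* reserved bit set -> #GP(0) */
        }
        return ((new_xcr0 & 0x1UL) != 0UL);    /* x87 state bit must stay set */
    }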
This patch fixes potential null pointer dereferences:
1. A null pointer would be accessed if 'context' is null.
2. If an entry has already been added to the VM when adding an
intx entry for this vm, but the parameter virt_pin
is not equal to entry->virt_sid.intx_id.pin, then
this entry's address is saved to both
vpin_to_pt_entry[entry->virt_sid.intx_id.pin] and
vpin_to_pt_entry[virt_pin]. In this case, this entry
will be freed twice.
Tracked-On: #3217
Signed-off-by: Tianhua Sun <tianhuax.s.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the vlapic id of the SOS VM is virtualized; it is indexed by vcpuid in
physical APIC id sequence, but the CPUID 0BH leaf still reports the physical
APIC ID. In the SDC/INDUSTRY scenarios they are an identity mapping, so no issue
occurred. In hybrid mode this would be a problem because the vAPIC ID might
be different from the pAPIC ID. We need to make the APIC ID returned from
CPUID consistent with the one returned from the LAPIC register.
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The vcpu num could be calculated based on pcpu_bitmap when prepare_vcpu()
is done, so remove this redundant configuration item;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
vm_apicid2vcpu_id() might return invalid vcpu id, when this happens
we should return -1 in vlapic_x2apic_pt_icr_access();
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The print message of source and target vcpu id is incorrect, fix it.
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Previously multiboot mods[0] was designed as the kernel module for all
pre-launched VMs including the SOS VM, and mods[0].mm_string was used
to store the kernel cmdline. This design could not satisfy the requirement
of hybrid mode scenarios where each VM might use its own kernel image
and ramdisk image. To resolve this problem, we will use a tag in
the mods mm_string field to specify the module type. If the tag
matches the os_config of the VM configuration, the corresponding
module will be loaded;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Add vuart_deinit to the vm shutdown flow so that the vuart resource can be
reset, and when the Guest VM restarts, it has the right state.
Tracked-On: #2987
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to Chap 23.8 RESTRICTIONS ON VMX OPERATION, Vol 3, SDM:
"Any attempt to set one of these bits to an unsupported value while in VMX
operation (including VMX root operation) using any of the CLTS, LMSW, or
MOV CR instructions causes a general-protection exception."
So we don't need to trap them out and then inject the GP in the hypervisor.
Tracked-On: #2561
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
FuSa requires that setting CR4.SMAP/SMEP/PKE invalidates the TLB. However,
setting CR4.SMAP invalidates the TLB on native but not in non-root mode.
To make sure of this, we trap CR4.SMAP/SMEP/PKE setting to invalidate the TLB
in root mode.
Tracked-On: #2561
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Since the vapic_id is from VM, need to check for pre-condition before passing
vcpu_id to vcpu_from_vid. This is in the path of LAPIC passthrough ICR
access
Tracked-On: #3170
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Present SGX related MSRs to guest if SGX is supported.
- MSR_IA32_SGXLEPUBKEYHASH0 ~ MSR_IA32_SGXLEPUBKEYHASH3:
SGX Launch Control is not supported, so these MSRs are read only.
- MSR_IA32_SGX_SVN_STATUS:
read only
- MSR_IA32_FEATURE_CONTROL:
If SGX is supported in the VM, opt-in to SGX in this MSR.
- MSR_SGXOWNEREPOCH0 ~ MSR_SGXOWNEREPOCH1:
The two MSRs' scope is package level; the guest is not allowed to change them.
Still leave them in the unsupported_msrs array.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If sgx is supported in the guest, present SGX capabilities to the guest.
There will be only one EPC section presented to the guest, even if the EPC
memory for a guest is from multiple physical EPC sections.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Build EPT entries for the SGX EPC resource for VMs.
- SOS: the EPC resource is removed from the EPT of SOS; SGX virtualization for SOS is not supported.
- Non-SOS: build the EPT mapping of the EPC resource for the guest.
The guest base address and size are specified in the vm configuration.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When we call reset_vm() function to reset vm, the vm state
should be reset to VM_CREATED as well.
Tracked-On: #3182
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the current code, vpci does the pci enumeration and adds pci devices to the
context table of the iommu.
We need to enable iommu DMA address translation later than vpci init.
Otherwise, on UEFI platforms, there will be a short time during which address translation
is enabled but the context table is not set up.
The devices active in the UEFI environment will then have problems with address translation.
Tracked-On: #3160
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
At the time the guest is switching to X2APIC mode, different VCPUs in the
same VM could expect the setting of the per VM msr_bitmap differently,
which could cause problems.
Considering different approaches to address this issue:
1) only guest BSP can update msr_bitmap when switching to X2APIC.
2) carefully re-write the update_msr_bitmap_x2apic_xxx() functions to
make sure any bit in the bitmap won't be toggled by the VCPUs.
3) make msr_bitmap as per VCPU.
We chose option 3) because it's simple and clean, though it takes more
memory than other options.
BTW, need to remove const modifier from update_msr_bitmap_x2apic_xxx()
functions to get it compiled.
Tracked-On: #3166
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
The previous function just checks the pstate target value in the PERF_CTL msr
by indexing the px data control value which comes from the ACPI table; this brings
a bug in the case that the guest is running the intel_pstate driver:
the turbo pstate target value from the intel_pstate driver is in a range
instead of a fixed value in the ACPI _PSS table, thus the turbo px request would
be rejected. This patch fixes this issue.
Tracked-On: #3158
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Filter out pre-launched vm memory from the e820 table
and unmap pre-launched vm memory from the ept table
before booting the Service OS.
Tracked-On: #3148
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
- add the GUEST_FLAG_HIGHEST_SEVERITY flag to indicate that the guest has
privilege to reboot the host system.
- this flag is statically assigned to guest(s) in vm_configurations.c in
different scenarios.
- implement reset_host() function to reset the host. First try the ACPI
reset register if available, then try the 0xcf9 PIO.
Tracked-On: #3145
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN supports LAPIC emulation for guests using x86 APICv. When guest OS/BIOS
switches from xAPIC to x2APIC mode of operation, ACRN also supports switching
from LAPIC emulation to LAPIC passthrough to the guest. The user/developer needs to
configure GUEST_FLAG_LAPIC_PASSTHROUGH for guest_flags in the corresponding
VM's config for ACRN to enable LAPIC passthrough.
This patch does the following
1) Fixes a bug in the above-mentioned feature. For a guest that is
configured with GUEST_FLAG_LAPIC_PASSTHROUGH, during the time period the guest is
using the xAPIC mode of the LAPIC, virtual interrupts are not delivered. This can
manifest as a guest hang when it does not receive virtual timer interrupts.
2) ACRN exposes the physical topology via CPUID leaf 0xb to LAPIC PT VMs. This patch
removes that condition and exposes the virtual topology via CPUID leaf 0xb.
Tracked-On: #3136
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
The pcpu just writes its own vmcs, so no spinlock is needed.
And arch.lock is not used in other places; remove it too.
Tracked-On: #3130
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now only SOS needs to decide whether to boot with de-privilege or direct boot mode, while
other pre-launched VMs should use direct boot mode.
This patch merges boot/guest/direct_boot_info.c &
boot/guest/deprivilege_boot_info.c into boot/guest/vboot_info.c,
and changes the init_direct_vboot_info() function name to init_general_vm_boot_info().
In init_vm_boot_info(), depending on get_sos_boot_mode(), SOS may choose to init
vm boot info by setting the vm_sw_loader to the deprivilege-specific one; SOS
using DIRECT_BOOT_MODE and all other VMs will use general_sw_loader as
vm_sw_loader and go through init_general_vm_boot_info() for virtual boot vm
info filling.
This patch also moves the spurious handler initialization for de-privilege mode from
boot/guest/deprivilege_boot.c to boot/guest/vboot_info.c, and just sets it in the
deprivilege sw_loader before irq enabling.
Changes to be committed:
modified: Makefile
modified: arch/x86/guest/vm.c
modified: boot/guest/deprivilege_boot.c
deleted: boot/guest/deprivilege_boot_info.c
modified: boot/guest/direct_boot.c
renamed: boot/guest/direct_boot_info.c -> boot/guest/vboot_info.c
modified: boot/guest/vboot_wrapper.c
modified: boot/include/guest/deprivilege_boot.h
modified: boot/include/guest/direct_boot.h
modified: boot/include/guest/vboot.h
new file: boot/include/guest/vboot_info.h
modified: common/vm_load.c
modified: include/arch/x86/guest/vm.h
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Handle the PIO reset register that is defined in host ACPI:
Parse host FADT table to get the host reset register info, and emulate
it for Service OS:
- return all '1' for guest reads because the read behavior is not defined
in ACPI.
- ignore guest writes with the reset value to stop it from resetting host;
if guest writes other values, passthru it to hardware in case the reset
register supports other functionalities.
Tracked-On: #2700
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch implements the triple fault vmexit handler, based on VM types:
- post-launched VMs: shutdown_target_vm() injects an S5 PIO write to request the
DM to shut down the target VM.
- pre-launched VMs: shut down the guest.
- SOS: similarly, but shut down all the non real-time post-launched VMs that
depend on SOS before shutting down SOS.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- post-launched RTVM: intercept both PIO ports so that hypervisor has a
chance to set VM_POWERING_OFF flag.
- all other types of VMs: deny these 2 ports from guest access so that
guests are not able to reset host.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For pre-launched VMs and SOS, VM shutdown should not be executed in the
current VM context.
- implement NEED_SHUTDOWN_VM request so that the BSP of the target VM can shut
down the guest in idle thread.
- implement shutdown_vm_from_idle() to shut down target VM.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) According to SDM Vol 3, Chap 29.1.2, any VM exit caused by TPR virtualization
is trap-like: the instruction causing TPR virtualization completes before the VM
exit occurs (for example, the value of CS:RIP saved in the guest-state area of
the VMCS references the next instruction). So we need to retain the RIP.
2) The previous implementation only considers the situation where the guest reduces the
TPR. However, the guest may also increase it. So we need to update the PPR before we
find a deliverable interrupt. For example: a) if the guest increases the TPR before
an irq window vmexit, we need to update the PPR when checking whether there is a pending
delivery interrupt or not; b) if the guest increases the TPR, and then an external irq is raised,
we need to update the PPR when checking whether there is a deliverable interrupt to inject
to the guest.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yu Wang <yu1.wang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
For pre-launched VMs, the Service OS and post-launched RT VMs, we only need the vRTC
to provide a backed-up date, so we can use the simple vRTC implemented
in the hypervisor. For a post-launched VM which is not an RT VM, we need the device
model to emulate the vRTC for it.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yu Wang <yu1.wang@intel.com>
There are some instructions which don't support the bit 0 (w bit) flag but whose
memory operand size is fixed and not equal to the
register operand size. In our code, there is movzx (whose opcode is 0F B7)
whose memory operand size is fixed to 16 bits. So add a flag VIE_OP_F_WORD_OP
to indicate an instruction whose memory operand size is fixed to 16 bits.
Tracked-On: #1337
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Add MSR_IA32_MISC_ENABLE to emulated_guest_msrs to enable the emulation.
Init MSR_IA32_MISC_ENABLE for guest.
Tracked-On: #2834
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to SDM Vol4 2.1, modify vcpuid according to msr ia32_misc_enable:
- Clear CPUID.01H: ECX[3] if the guest disabled monitor/mwait.
- Clear CPUID.80000001H: EDX[20] if the guest set XD Bit Disable.
- Limit the CPUID leaf maximum value to 2 if the guest set Limit CPUID MAXVal.
Tracked-On: #2834
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
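A hedged sketch of the three fixups above, driven by the guest's IA32_MISC_ENABLE value; the function shape and the LIMIT_CPUID macro name are assumptions, not verified ACRN identifiers.

    /* Hedged sketch: adjust virtual CPUID output according to the guest's
     * IA32_MISC_ENABLE setting. */
    static void adjust_guest_cpuid(uint64_t guest_misc_enable, uint32_t leaf,
                                   uint32_t *eax, uint32_t *ecx, uint32_t *edx)
    {
        if ((leaf == 0x1U) && ((guest_misc_enable & MSR_IA32_MISC_ENABLE_MONITOR_ENA) == 0UL)) {
            *ecx &= ~(1U << 3U);          /* hide MONITOR/MWAIT */
        }
        if ((leaf == 0x80000001U) && ((guest_misc_enable & MSR_IA32_MISC_ENABLE_XD_DISABLE) != 0UL)) {
            *edx &= ~(1U << 20U);         /* hide the XD/NX bit */
        }
        if ((leaf == 0x0U) && ((guest_misc_enable & MSR_IA32_MISC_ENABLE_LIMIT_CPUID) != 0UL)) {
            *eax = 2U;                    /* cap the maximum basic leaf at 2 */
        }
    }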
Guest MSR_IA32_MISC_ENABLE read simply returns the value set by guest.
Guest MSR_IA32_MISC_ENABLE write:
- Clear EFER.NXE if MSR_IA32_MISC_ENABLE_XD_DISABLE set.
- MSR_IA32_MISC_ENABLE_MONITOR_ENA:
Allow guest to control this feature when HV doesn't use this feature and hw has no bug.
vcpuid update according to the change of the msr will be covered in following patch.
Tracked-On: #2834
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The interface struct & API changes like below:
struct uefi_context->struct depri_boot_context
init_firmware_operations()->init_vboot_operations()
init_firmware()->init_vboot()
firmware_init_irq()->init_vboot_irq()
firmware_get_rsdp()->get_rsdp_ptr()
firmware_get_ap_trampoline()->get_ap_trampoline_buf()
firmware_init_vm_boot_info()->init_vm_boot_info()
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently, the ACRN hypervisor can boot either from sbl/abl or uefi; that's
why we have different firmware methods under the bsp & boot dirs.
But the fact is that we actually have two different operations based on
different guest boot modes:
1. de-privilege-boot: ACRN hypervisor will boot VM0 in the same context as
native (before entering the hypervisor) - it means the hypervisor will co-work with
the ACRN UEFI bootloader, restore the context env and de-privilege this env
to the VM0 guest.
2. direct-boot: ACRN hypervisor will directly boot different pre-launched
VMs (including SOS); it will set up the guest env by pre-defined configuration,
and prepare the guest kernel image and ramdisk fetched from multiboot modules.
this patch is trying to:
- rename files related to firmware, changing them to guest vboot related
- restructure all guest boot stuff in the boot & bsp dirs into a new boot/guest dir
- use de-privilege & direct boot to distinguish two different boot operations
this patch is pure file movement; the rename of functions based on the old assumption
will be in the following patch.
Changes to be committed:
modified: ../efi-stub/Makefile
modified: ../efi-stub/boot.c
modified: Makefile
modified: arch/x86/cpu.c
modified: arch/x86/guest/vm.c
modified: arch/x86/init.c
modified: arch/x86/irq.c
modified: arch/x86/trampoline.c
modified: boot/acpi.c
renamed: bsp/cmdline.c -> boot/cmdline.c
renamed: bsp/firmware_uefi.c -> boot/guest/deprivilege_boot.c
renamed: boot/uefi/uefi_boot.c -> boot/guest/deprivilege_boot_info.c
renamed: bsp/firmware_sbl.c -> boot/guest/direct_boot.c
renamed: boot/sbl/multiboot.c -> boot/guest/direct_boot_info.c
renamed: bsp/firmware_wrapper.c -> boot/guest/vboot_wrapper.c
modified: boot/include/acpi.h
renamed: bsp/include/firmware_uefi.h -> boot/include/guest/deprivilege_boot.h
renamed: bsp/include/firmware_sbl.h -> boot/include/guest/direct_boot.h
renamed: bsp/include/firmware.h -> boot/include/guest/vboot.h
modified: include/arch/x86/multiboot.h
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Replace the vm state VM_STATE_INVALID to VM_POWERED_OFF.
Also replace is_valid_vm() with is_poweroff_vm().
Add API is_created_vm() to identify VM created state.
Tracked-On: #3082
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Previously the ACPI PM1A register info was hardcoded to 0 in HV for generic boards,
but SOS still can know the real PM1A info, so the system would hang if the user
triggered S3 in SOS. Enabling the PM1A register info fixup lets HV
intercept the operation on PM1A and then makes the basic S3 function work for
all boards;
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
IRTE is freed if ptirq entry is released from remove_msix_remapping() or
remove_intx_remapping(). But if it's called from ptdev_release_all_entries(),
e.g. SOS shutdown/reboot, IRTE is not freed.
This patch adds a release_cb() callback function to do any architectural
specific cleanup. In x86, it's used to release IRTE.
On VM shutdown, vpci_cleanup() needs to remove MSI/MSI-X remapping on
ptirq entries, thus it should be called before ptdev_release_all_entries().
Tracked-On: #2700
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Rename 'type' in struct io_request to 'io_type' to be more readable.
Tracked-On: #3061
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Rename structure acrn_vm_type to acrn_vm_load_order as it is used to
indicate the load order instead of the VM type.
Tracked-On: #2291
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The shared buffer is allocated by the VM and is protected by SMAP.
Accessing the shared buffer between a stac/clac pair would invalidate
the SMAP protection. This patch removes these cases.
Fix minor stac/clac mis-usage, and add comments as a stac/clac usage BKM.
Tracked-On: #2526
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The name NORMAL_VM does not clearly reflect the attribute that these VMs
are launched "later". POST_LAUNCHED_VM is closer to the fact
that these VMs are launched "later" by one of the VMs launched by ACRN.
Tracked-On: #3034
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) For some instructions (like movsx, movzx which we support), there are two operands
and the source operand size is not equal to the dest operand size. In this case,
if we update the memory operand size according to bit 0 (the w bit) of the opcode,
we will lose the register operand size. This patch tries to fix this by calculating
the memory operand size when we want to use it.
2) Calculate the memory operand size from the operand size and bit 0 (the w bit) of the opcode
when we want to operate on the memory operand.
Tracked-On: #1337
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If the vmcall param passed from guest is representing a vmid, we should
make sure it is a valid one because it is a pre-condition of following
get_vm_from_vmid(). And then we don't need to do NULL VM pointer check
in is_valid_vm() because get_vm_from_vmid() would never return NULL.
Tracked-On: #2978
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are 2 reasons why SOS ve820 has to be separated from host e820:
- in hybrid mode, SOS may not own all the host memory.
- when SOS is being re-launched, it needs an untainted version of host
e820 to create SOS ve820.
This patch creates sos_e820 table for SOS and keeps host e820 intact
during SOS creation.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the previous code, the hypervisor would create a vuart console for each
pre-launched VM only; for post-launched VMs, no vuart was
created.
In this patch, create the vuart according to the configuration in structure
acrn_vm_config. As the new configuration is set for pre-launched VMs and
post-launched VMs, and the vuart initialization process is common to each
VM, remove CONFIG_PARTITION_MODE from the vuart related code.
Tracked-On: #2987
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The guest flags of GUEST_FLAG_IO_COMPLETION_POLLING work for NORMAL_VM only;
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to SDM 10.6.1, if the dest field is 0xffU for a register
ICR operation, that means the IPI is a broadcast.
According to SDM 10.12.9, 0xffffffffU in the dest field for x2APIC
means the IPI is a broadcast.
We add a new parameter to vlapic_calc_dest() to show whether the
dest is a broadcast. For IPIs, we set it according to the
dest field. For ioapic and MSI, we hardcode it to false because
there is no broadcast for ioapic and MSI.
Tracked-On: #3003
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
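A hedged sketch of the broadcast test the new parameter is derived from (the real vlapic_calc_dest() carries more arguments; this helper is hypothetical):

    /* Hedged sketch: an ICR destination of 0xFF (xAPIC) or 0xFFFFFFFF (x2APIC)
     * means the IPI is a broadcast. */
    #include <stdbool.h>
    #include <stdint.h>

    static bool icr_dest_is_broadcast(uint32_t dest, bool is_x2apic)
    {
        return is_x2apic ? (dest == 0xFFFFFFFFU) : (dest == 0xFFU);
    }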
This patch adds prefix 'p' before 'cpu' to physical cpu related functions.
And there is no code logic change.
Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The problem: the system crashes when running crashme.
The root cause is that when the ACRN_REQUEST_EXCP flag is set by calling
vcpu_make_request(), the flag is never cleared.
Add the following statement to vcpu_inject_exception() to fix the problem:
bitmap_test_and_clear_lock(ACRN_REQUEST_EXCP, &vcpu->arch.pending_req);
Tested overnight; no crash observed.
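In context, the fix amounts to consuming the request with a test-and-clear (a simplified sketch; the surrounding logic and vcpu_deliver_exception() are illustrative):
static int32_t vcpu_inject_exception(struct acrn_vcpu *vcpu)
{
    int32_t ret = 0;
    /* consume the bit so it does not stay set in pending_req forever */
    if (bitmap_test_and_clear_lock(ACRN_REQUEST_EXCP, &vcpu->arch.pending_req)) {
        ret = vcpu_deliver_exception(vcpu);
    }
    return ret;
}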
Tracked-On: #2527
Signed-off-by: bing.li<bingx.li@intel.com>
Acked-by: Eddie Dong<eddie.dong@intel.com>
In theory, we should trap all x2apic MSR accesses if APICv is not enabled.
When "Use TPR shadow" and "Virtualize x2APIC mode" are enabled, we can disable
TPR interception; when APICv is fully enabled, besides TPR, we can also disable
interception of all MSR reads as well as EOI and self-IPI writes; when the LAPIC
is passed through to the guest, we can disable interception of all MSR accesses
except XAPICID/LDR reads and ICR writes.
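A hedged sketch that summarizes the interception policy (mode names and the helper are illustrative shorthand for the three cases above; the MSR numbers are the architectural x2APIC MSRs):
#include <stdbool.h>
#include <stdint.h>
enum x2apic_virt_mode {
    X2APIC_TPR_SHADOW_ONLY,   /* "Use TPR shadow" + "Virtualize x2APIC mode" */
    X2APIC_APICV_FULL,        /* full APICv */
    X2APIC_LAPIC_PASSTHRU     /* LAPIC passed through to the guest */
};
/* returns true if an access to x2APIC MSR 'msr' must still cause a VM exit */
static bool must_intercept_x2apic_msr(enum x2apic_virt_mode mode,
                                      uint32_t msr, bool is_write)
{
    bool intercept = true;    /* default: trap every x2APIC MSR access */
    switch (mode) {
    case X2APIC_TPR_SHADOW_ONLY:
        intercept = (msr != 0x808U);                      /* TPR */
        break;
    case X2APIC_APICV_FULL:
        /* reads never trap; TPR, EOI and self-IPI writes never trap */
        intercept = is_write && (msr != 0x808U) &&
                    (msr != 0x80BU) && (msr != 0x83FU);   /* EOI, SELF_IPI */
        break;
    case X2APIC_LAPIC_PASSTHRU:
        /* only XAPICID/LDR reads and ICR writes still trap */
        intercept = is_write ? (msr == 0x830U)            /* ICR */
                             : ((msr == 0x802U) || (msr == 0x80DU));
        break;
    default:
        break;
    }
    return intercept;
}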
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
When in full APICv mode, we enable VID, so all pending-delivery interrupts are
injected into the VM before VM entry and none is left pending.
However, if VID is not enabled, we can only inject pending-delivery interrupts
one by one, so we always need to do this check.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
apicv_advanced_inject_intr is used if the full APICv feature set is supported;
it uses the PIR to inject interrupts. Otherwise, apicv_basic_inject_intr is used,
which uses the VMCS interruption-information field to inject the interrupt.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
The APICv ops are decided once, when the APICv features of the physical platform
are detected. We use apicv_advanced_ops if the physical platform supports the
full APICv feature set; otherwise, we use apicv_basic_ops.
This patch only wraps the accept-interrupt API with them.
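A hedged sketch of the wrapping (structure layout and setter are illustrative; only the accept-interrupt hook is wrapped by this patch):
#include <stdbool.h>
#include <stdint.h>
struct acrn_vlapic;
struct acrn_apicv_ops {
    void (*accept_intr)(struct acrn_vlapic *vlapic, uint32_t vector, bool level);
};
void apicv_basic_accept_intr(struct acrn_vlapic *vlapic, uint32_t vector, bool level);
void apicv_advanced_accept_intr(struct acrn_vlapic *vlapic, uint32_t vector, bool level);
static const struct acrn_apicv_ops apicv_basic_ops = {
    .accept_intr = apicv_basic_accept_intr,
};
static const struct acrn_apicv_ops apicv_advanced_ops = {
    .accept_intr = apicv_advanced_accept_intr,
};
static const struct acrn_apicv_ops *apicv_ops = &apicv_basic_ops;
/* called once, when the platform's APICv capabilities are detected */
void vlapic_set_apicv_ops(bool full_apicv_supported)
{
    apicv_ops = full_apicv_supported ? &apicv_advanced_ops : &apicv_basic_ops;
}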
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
A possible buffer overflow could happen in vlapic_set_tmr()
and vlapic_update_ppr(); this patch fixes them.
Tracked-On: #1252
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) We shouldn't try to set APIC-register virtualization if the physical platform
doesn't support APICv advanced mode.
2) Remove all APICv feature VMCS settings when the LAPIC is passed through to the guest.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Not every instruction supports the operand-size bit (w). This patch tries to correct
what was done in commit 9df8790 by setting a flag, VIE_OP_F_BYTE_OP, to indicate which
instructions support the operand-size bit (w).
This bug was found by removing the VMX_PROCBASED_CTLS2_VAPIC_REGS VMCS setting when
the physical platform doesn't support this APICv feature. However, when emulating
this on the MRB board, Android can't boot because, when switching to the trusty world,
it checks "Delivery Status" in the ICR first, and it turns out that this Bit Test
instruction was not emulated correctly.
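A hedged sketch of the corrected size calculation (the flag value and helper are illustrative):
#include <stdint.h>
#define VIE_OP_F_BYTE_OP (1U << 4)    /* illustrative flag value */
/* consult the w bit only for instructions flagged as supporting it; for
 * instructions like bt, which encode no w bit, keep the decoded size */
static uint8_t mem_opsize_with_flag(uint32_t op_flags, uint8_t opcode,
                                    uint8_t decoded_opsize)
{
    uint8_t size = decoded_opsize;
    if (((op_flags & VIE_OP_F_BYTE_OP) != 0U) && ((opcode & 0x1U) == 0U)) {
        size = 1U;
    }
    return size;
}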
Tracked-On: #1337
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
The physical core of a lapic_pt VM should be reset for security and
correctness when the VM is shut down.
Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently, the guest_flags previously set by the DM are not cleared when the
VM is shut down, which might cause issues for the next DM-launched VM.
For example, if we create one VM with the LAPIC_PASSTHROUGH flag and shut it down,
the next DM-launched VM will have the LAPIC_PASSTHROUGH flag set no matter
whether we set it in the DM.
This patch clears all the DM-set flags when shutting down a VM.
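A minimal sketch of the cleanup (flag values and the structure are illustrative stand-ins; the point is that only DM-settable flags are masked off):
#include <stdint.h>
#define GUEST_FLAG_LAPIC_PASSTHROUGH     (1UL << 1U)    /* illustrative values */
#define GUEST_FLAG_IO_COMPLETION_POLLING (1UL << 2U)
struct vm_flags {    /* minimal stand-in for the relevant acrn_vm_config field */
    uint64_t guest_flags;
};
static void clear_dm_guest_flags(struct vm_flags *cfg)
{
    /* drop everything the DM may have set; statically configured flags are
     * not part of the mask and therefore survive the shutdown */
    cfg->guest_flags &= ~(GUEST_FLAG_LAPIC_PASSTHROUGH |
                          GUEST_FLAG_IO_COMPLETION_POLLING);
}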
Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the vpid is not released in reset_vcpu(), hence the vpid resource
could be exhausted easily if guests are re-launched.
This patch assigns the vpid according to a fixed mapping of the runtime vm_id
and vcpu_id to guarantee the uniqueness of each vpid.
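A minimal sketch of such a fixed mapping (the formula is illustrative; any injective function of (vm_id, vcpu_id) that avoids VPID 0 works):
#include <stdint.h>
#define MAX_VCPUS_PER_VM 8U    /* illustrative */
static inline uint16_t vcpu_vpid(uint16_t vm_id, uint16_t vcpu_id)
{
    /* VPID 0 is reserved for the host, so offset by 1; the result is unique
     * per (vm_id, vcpu_id) pair and identical across VM re-launches */
    return (uint16_t)((vm_id * MAX_VCPUS_PER_VM) + vcpu_id + 1U);
}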
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
- The target vm in most hypercalls should be a NORMAL_VM; in some
exceptions it might be a SOS_VM. We should validate both cases.
- Please be aware that some hypercalls might have limitations on specific
target VMs like RT or SAFETY VMs; this is left as a "TODO" for the future.
- Unify the coding style:
int32_t hcall_foo(vm, target_vm_id, ...)
{
    int32_t ret = -1;
    ...
    if (is_valid_vm(target_vm) && is_normal_vm(target_vm)) {
        ret = ....
    }
    return ret;
}
Tracked-On: #2978
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If two UOS are launched with the same UUID by acrn-dm, the current code path
returns the same VM instance to both acrn-dm instances, which crashes both UOS.
Check the VM state and make sure it is in VM_STATE_INVALID state before
creating a VM.
Tracked-On: #2984
Signed-off-by: Cai Yulong <yulongc@hwtc.com.cn>
When shutting down the SOS VM, the shared sbuf is released by the guest OS, but
the per-cpu sbuf pointers in the hypervisor are left untouched. This creates a
problem: after SOS is re-launched, the hypervisor could write to a shared
buffer that no longer exists.
This patch implements sbuf_reset() and calls it from reset_vcpu() to
reset the sbuf pointers.
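A hedged sketch of the reset (the per-cpu bookkeeping shown here is heavily simplified):
#include <stddef.h>
#include <stdint.h>
#define ACRN_SBUF_ID_MAX 8U    /* illustrative */
static uint64_t *per_cpu_sbuf[ACRN_SBUF_ID_MAX];    /* stand-in for the per-cpu region */
void sbuf_reset(void)
{
    uint32_t i;
    /* drop every reference to the guest-provided shared buffers so the
     * hypervisor cannot write into memory that SOS has already freed */
    for (i = 0U; i < ACRN_SBUF_ID_MAX; i++) {
        per_cpu_sbuf[i] = NULL;
    }
}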
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Add TPR-below-threshold handling for platforms where "Virtual-interrupt delivery"
is not supported. Windows uses it to delay interrupt handling.
Complete all the interrupts in the IRR as long as they have higher priority than
the current TPR. Once the highest pending IRR priority is no longer above the
current TPR, program the TPR threshold from the IRR, so that if the guest later
lowers the TPR below the threshold, a TPR-below-threshold exit is taken and the
pending interrupts can go through.
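A hedged sketch of the threshold update (the vlapic helpers and the VMCS write wrapper are illustrative; the field written is the 32-bit TPR-threshold VM-execution control):
#include <stdint.h>
#define VMX_TPR_THRESHOLD 0x401CU    /* VMCS field encoding, per the SDM */
struct acrn_vlapic;
uint32_t vlapic_find_highest_irr(const struct acrn_vlapic *vlapic);    /* illustrative */
uint32_t vlapic_get_tpr(const struct acrn_vlapic *vlapic);             /* illustrative */
void exec_vmwrite32(uint32_t field, uint32_t value);                   /* illustrative */
static inline uint32_t prio(uint32_t v)
{
    return (v >> 4U) & 0xFU;    /* priority class, bits 7:4 */
}
static void update_tpr_threshold(const struct acrn_vlapic *vlapic)
{
    uint32_t irr_prio = prio(vlapic_find_highest_irr(vlapic));
    uint32_t tpr_prio = prio(vlapic_get_tpr(vlapic));
    uint32_t threshold = 0U;
    /* while pending interrupts outrank the TPR, keep injecting with the
     * threshold at 0; once they no longer do, arm the threshold so that a
     * later TPR drop below it causes a TPR-below-threshold exit */
    if (irr_prio <= tpr_prio) {
        threshold = irr_prio;
    }
    exec_vmwrite32(VMX_TPR_THRESHOLD, threshold);
}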
Tracked-On: #1842
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Enable the vMCE feature to boot a Windows guest.
vMCE is set in EDX per the Microsoft TLFS spec; to support a Windows guest,
vMCA and vMCE should be reported through the guest CPUID.
Support reading MSR_IA32_MCG_CAP and MSR_IA32_MCG_STATUS when vMCE is enabled;
they are not emulated yet, so return 0 directly.
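A minimal sketch of the read path (the handler shape is illustrative; the MSR indices are the architectural IA32_MCG_CAP/IA32_MCG_STATUS):
#include <stdbool.h>
#include <stdint.h>
#define MSR_IA32_MCG_CAP    0x00000179U
#define MSR_IA32_MCG_STATUS 0x0000017AU
/* returns true if the read was handled, storing the value in *val */
static bool vmce_rdmsr(uint32_t msr, uint64_t *val)
{
    bool handled = true;
    switch (msr) {
    case MSR_IA32_MCG_CAP:
    case MSR_IA32_MCG_STATUS:
        /* not emulated yet: report no MC banks and no machine check */
        *val = 0UL;
        break;
    default:
        handled = false;
        break;
    }
    return handled;
}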
Tracked-On: #1867
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Yu Wang <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the presence of SOS, ACRN uses fallback_iommu_domain, the same domain used
by SOS, to assign a domain to devices during ACRN init. It also uses
fallback_iommu_domain when the DM requests ACRN to remove a device from a UOS domain.
This patch changes the design of assign/remove_iommu_device to avoid the
concept of fallback_iommu_domain and its setup. This way ACRN can treat
pre-launched VM bringup uniformly with respect to IOMMU domain creation.
Tracked-On: #2965
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Add a new hypercall to get platform information,
such as the physical CPU number.
Tracked-On: #2538
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN builds an mptable for pre-launched VMs. It uses CONFIG_PARTITION_MODE
to compile the mptable source code and related support. This patch removes
the macro and builds the mptable when the VM type is pre-launched.
Tracked-On: #2941
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the VM id of a NORMAL_VM is allocated dynamically; we need to assign
VM ids statically for FuSa compliance.
This patch pre-configures a UUID for every VM, so a NORMAL_VM gets its
VM id/configuration from the vm_configs array by indexing with the UUID.
If a UUID collision is found in the vm configs array, the HV will refuse to
load the VM.
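A minimal sketch of the UUID-indexed lookup (layout and constants are illustrative):
#include <stdint.h>
#include <string.h>
#define CONFIG_MAX_VM_NUM 8U     /* illustrative */
#define INVALID_VM_ID     0xFFFFU
struct vm_config_min {           /* minimal illustrative layout */
    uint8_t uuid[16];
};
static struct vm_config_min vm_configs[CONFIG_MAX_VM_NUM];
/* returns the statically configured vm_id for the given UUID,
 * or INVALID_VM_ID if no configuration matches */
static uint16_t get_vmid_by_uuid(const uint8_t *uuid)
{
    uint16_t vm_id;
    for (vm_id = 0U; vm_id < CONFIG_MAX_VM_NUM; vm_id++) {
        if (memcmp(vm_configs[vm_id].uuid, uuid, 16U) == 0) {
            break;
        }
    }
    return (vm_id < CONFIG_MAX_VM_NUM) ? vm_id : INVALID_VM_ID;
}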
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The code mixed the usage of the terms UUID and GUID; now use UUID to make the
code more consistent, and also use lowercase (i.e. uuid) in variable name
definitions.
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) In x2apic mode, when reading the ICR, we want to read a 64-bit value.
2) In x2apic mode, a self-IPI write traps out through an MSR write when VID isn't enabled.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>