This commit adds an ops field to the vlapic structure, and adds an *ops parameter
to vlapic_reset(). At vlapic reset, ops is set to the global apicv_ops, and may
be reassigned to other ops later.
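A minimal sketch of the shape of this change (member and parameter names are
illustrative, not the exact ACRN definitions):

    struct acrn_vlapic {
        /* ... existing fields ... */
        const struct acrn_apicv_ops *ops;
    };

    void vlapic_reset(struct acrn_vlapic *vlapic, const struct acrn_apicv_ops *ops)
    {
        /* ... existing reset logic ... */
        vlapic->ops = ops;    /* normally &apicv_ops; may be replaced later */
    }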
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vCPU schedule state changes are made under schedule lock protection, so there's
no need for them to be atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For pre-launched VMs and the SOS, vCPUs are created on the BSP one by one; for
post-launched VMs, vCPUs are created under vmm_hypercall_lock protection. So
vcpu_create is called sequentially, and the operations in vcpu_create don't need
to be atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
The per-vCPU EOI exit bitmap is a shared variable which should be set or cleared
atomically, since there's no lock to protect this critical variable.
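A minimal sketch of the atomic update, assuming LOCK-prefixed bit helpers named
bitmap_set_lock/bitmap_clear_lock (names and field layout are illustrative):

    /* set the EOI-exit bit for 'vector' atomically, without taking a lock */
    void vcpu_set_eoi_exit_bitmap(struct acrn_vcpu *vcpu, uint32_t vector)
    {
        bitmap_set_lock((uint16_t)(vector & 0x3FU),
                &vcpu->arch.eoi_exit_bitmap[(vector & 0xFFU) >> 6U]);
    }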
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Now sched_object and sched_context are protected by scheduler_lock. There's no
need for a separate runqueue_lock to protect the schedule runqueue as long as
we have no plan to support schedule migration.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
cancel_event_injection is not needed any more since we do 'schedule' prior to
acrn_handle_pending_request. Commit "921288a6672: hv: fix interrupt
lost when do acrn_handle_pending_request twice" brought 'schedule'
forward, so remove the cancel_event_injection related stuff.
Tracked-On: #3374
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Microarchitectural Data Sampling (MDS) is a hardware vulnerability
which allows unprivileged speculative access to data which is available
in various CPU internal buffers.
1. Mitigation on ACRN:
   1) Microcode update is required.
   2) Clear CPU internal buffers (store buffer, load buffer and
      load port) on VM entry if the current CPU is affected by MDS,
      to avoid any information leakage to the guest through those buffers.
   3) Mitigation is not needed if the ARCH_CAP_MDS_NO bit (bit 5)
      is set in the IA32_ARCH_CAPABILITIES MSR (10AH); in that case
      the current processor is not affected by the MDS vulnerability.
      In all other cases, mitigation for MDS is required.
2. Methods to clear CPU buffers (microcode update is required):
   1) L1D cache flush
   2) VERW instruction
   Either of the above operations will clear all CPU internal
   buffers if the CPU is affected by MDS.
   The above mechanism is enumerated by:
   CPUID.(EAX=7H, ECX=0):EDX[MD_CLEAR=10].
3. Mitigation details on ACRN:
   if (processor is affected by MDS)
       if (processor is not affected by L1TF OR
           L1D flush is not launched on VM entry)
           execute VERW instruction on VM entry.
       endif
   endif
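A minimal sketch of the VM-entry buffer clearing decided by the logic above; the
flag name and the segment selector used are assumptions, not the exact ACRN
implementation:

    /* Clear CPU internal buffers via VERW before VM entry.  With the
     * MD_CLEAR microcode update, VERW on any valid writable segment
     * selector also flushes store buffers, fill buffers and load ports. */
    static inline void cpu_internal_buffers_clear(void)
    {
        uint16_t sel = HOST_GDT_RING0_DATA_SEL;    /* illustrative selector */

        if (cpu_md_buffers_clear_required) {       /* flag computed as above */
            asm volatile ("verw %0" : : "m" (sel) : "cc");
        }
    }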
4. Reference:
Deep Dive: Intel Analysis of Microarchitectural Data Sampling
https://software.intel.com/security-software-guidance/insights/
deep-dive-intel-analysis-microarchitectural-data-sampling
Deep Dive: CPUID Enumeration and Architectural MSRs
https://software.intel.com/security-software-guidance/insights/
deep-dive-cpuid-enumeration-and-architectural-msrs
Tracked-On: #3317
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Reviewed-by: Jason CJ Chen <jason.cj.chen@intel.com>
The ACRN coding guideline requires that a function have only one return point.
Fix it.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vcpu.c was never scanned because the scan tool crashed on it.
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
1. No brackets around then/else.
2. Function return value not checked.
3. Signed/unsigned conversion without cast.
V1->V2:
change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
add a "(void *)" cast on the 1st parameter of memset.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The sbuf is allocated for each pcpu by a hypercall from the SOS. Before launching
a guest OS, the launch script offlines CPUs, which triggers a vcpu reset and
then resets the sbuf pointers. But the sbuf is only initialized once by the SOS,
so the CPUs handed to the guest OS have no sbuf to use. Thus, when running
'acrntrace' on the SOS, there is no trace data for the guest OS.
To fix the issue, only reset the sbuf for the SOS.
Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
MISRA-C requires that inline functions be declared static. Since
these APIs are external interfaces, remove the inline keyword.
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
modified: arch/x86/guest/vcpu.c
The MISRA-C standard requires that the result of an expression used in an if/while condition be boolean.
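For example, with an unsigned counter and a hypothetical do_work() helper:

    /* non-compliant: the controlling expression is an integer */
    if (count) {
        do_work();
    }

    /* compliant: the controlling expression is essentially boolean */
    if (count != 0U) {
        do_work();
    }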
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
This patch introduces the check_vm_vlapic_state API, instead of is_lapic_pt_enabled,
to check whether all the vCPUs of a VM are using x2APIC mode and LAPIC
pass-through is enabled on all of them (a sketch of this check follows the list below).
When the VM is in the VM_VLAPIC_TRANSITION or VM_VLAPIC_DISABLED state,
the following conditions apply:
1) For pass-through MSI interrupts, the interrupt source is not programmed.
2) For DM-emulated device MSI interrupts, the interrupt is not delivered.
3) For IPIs, delivery works only if both the sender and the destination are in x2APIC mode.
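A conceptual sketch of the condition check_vm_vlapic_state needs to establish;
the loop bounds, accessors and field names below are assumptions, not the exact
ACRN API:

    /* true only when every created vCPU is in x2APIC mode with LAPIC
     * pass-through enabled */
    static bool all_vcpus_lapic_pt_x2apic(struct acrn_vm *vm)
    {
        uint16_t i;
        bool ret = true;

        for (i = 0U; i < vm->hw.created_vcpus; i++) {
            struct acrn_vcpu *vcpu = &vm->hw.vcpu_array[i];

            if (!is_x2apic_enabled(vcpu_vlapic(vcpu)) || !is_lapic_pt_enabled(vcpu)) {
                ret = false;
                break;
            }
        }
        return ret;
    }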
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN supports LAPIC emulation for guests using x86 APICv. When the guest OS/BIOS
switches from xAPIC to x2APIC mode of operation, ACRN also supports switching
from LAPIC emulation to LAPIC pass-through to the guest. The user/developer needs
to add GUEST_FLAG_LAPIC_PASSTHROUGH to guest_flags in the corresponding
VM's config for ACRN to enable LAPIC pass-through.
This patch does the following:
1) Fixes a bug in the above-mentioned feature. For a guest that is
configured with GUEST_FLAG_LAPIC_PASSTHROUGH, virtual interrupts were not
delivered during the time the guest was still using xAPIC mode. This can
manifest as a guest hang when it does not receive virtual timer interrupts.
2) ACRN used to expose the physical topology via CPUID leaf 0xB to LAPIC PT VMs.
This patch removes that condition and exposes the virtual topology via CPUID leaf 0xB.
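For reference, enabling the flag in a VM config might look like this
(illustrative only; the exact config layout depends on the ACRN release):

    struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
        {
            .guest_flags = GUEST_FLAG_LAPIC_PASSTHROUGH,
        },
    };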
Tracked-On: #3136
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Each pCPU only writes its own VMCS, so no spinlock is needed.
And arch.lock is not used anywhere else, so remove it too.
Tracked-On: #3130
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch adds the prefix 'p' before 'cpu' in physical-CPU-related function names.
There is no code logic change.
Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the vpid is not released in reset_vcpu(), hence the vpid resource
could be exhausted easily if guests are re-launched.
This patch assigns the vpid according to a fixed mapping of the runtime vm_id
and vcpu_id, to guarantee the uniqueness of each vpid.
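A sketch of such a fixed mapping (the exact formula and the MAX_VCPUS_PER_VM
bound are assumptions):

    /* one unique, non-zero VPID per (vm_id, vcpu_id) pair; VPID 0 is
     * reserved for the hypervisor itself */
    vcpu->arch.vpid = 1U + (vcpu->vm->vm_id * MAX_VCPUS_PER_VM) + vcpu->vcpu_id;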
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
When the SOS VM is shut down, the shared sbuf is released by the guest OS, but
the per-cpu sbuf pointers in the hypervisor are left untouched. This creates a
problem: after the SOS is re-launched, the hypervisor could write to a shared
buffer that no longer exists.
This patch implements sbuf_reset() and calls it from reset_vcpu() to
reset the sbuf pointers.
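A minimal sketch of sbuf_reset(), assuming a per_cpu() accessor, a get_pcpu_id()
helper and an ACRN_SBUF_ID_MAX bound (all names illustrative):

    void sbuf_reset(void)
    {
        uint16_t pcpu_id = get_pcpu_id();
        uint16_t sbuf_id;

        /* drop the stale shared-buffer pointers registered by the SOS */
        for (sbuf_id = 0U; sbuf_id < ACRN_SBUF_ID_MAX; sbuf_id++) {
            per_cpu(sbuf, pcpu_id)[sbuf_id] = NULL;
        }
    }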
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
When the RTVM is trying to power itself off, we use INIT to
kick the vCPUs out of non-root mode.
For the RTVM, only if the vm state equals VM_POWERING_OFF do we take action
to pause the vCPUs with an INIT signal. Otherwise, we reject the pause request.
Tracked-On: #2865
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch makes make_reschedule_request support kicking
vCPUs off via INIT.
Tracked-On: #2865
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Assume run_ctx.cr0/cr4 are correct when doing the world switch, so call
vcpu_set_cr0/cr4() to update cr0/cr4 directly before resuming the guest.
This design is only for Trusty world switching.
Tracked-On: #2773
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now we only configure "hide MTRR" explicitly (to false) for the SOS. For other VMs,
we don't configure it, which means hide_mtrr is false by default.
Also remove the global config MTRR_ENABLED.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Add two functions to combine the constraint checks for APICv:
is_apicv_basic_feature_supported: check whether the physical platform supports
"Use TPR shadow", "Virtualize APIC accesses" and "Virtualize x2APIC mode".
is_apicv_advanced_feature_supported: check whether the physical platform supports
"APIC-register virtualization", "Virtual-interrupt delivery" and
"Process posted interrupts".
If the physical platform only supports the APICv basic features, enable "Use TPR shadow"
and "Virtualize APIC accesses" for xAPIC mode, and enable "Use TPR shadow" and
"Virtualize x2APIC mode" for x2APIC mode. Otherwise, if the physical platform supports
the APICv advanced features, enable them for both xAPIC mode and x2APIC mode.
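A minimal sketch of the basic-feature check, assuming the individual capability
bits have been collected into a cached feature mask during platform capability
detection (all names below are illustrative):

    #define APICV_BASIC_FEATURE  (APICV_TPR_SHADOW | APICV_APIC_ACCESSES | APICV_X2APIC_MODE)

    static bool is_apicv_basic_feature_supported(void)
    {
        return ((vmx_caps.apicv_features & APICV_BASIC_FEATURE) == APICV_BASIC_FEATURE);
    }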
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
When CAT is supported, a UOS can set acrn_vm_config.clos to use the CAT
feature, e.g.:
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
    {
        .guest_flags |= CLOS_REQUIRED,
        .clos = 1,
    },
};
sanitize_vm_config() will check whether CAT is supported and whether
vm_configs.clos is valid.
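A hedged sketch of the kind of check sanitize_vm_config() performs; the helper
names and whether an invalid clos is rejected or silently dropped are assumptions:

    if ((vm_config->guest_flags & CLOS_REQUIRED) != 0U) {
        if (!is_cat_supported() || (vm_config->clos >= platform_clos_num)) {
            /* CAT unavailable or clos out of range: drop the request */
            vm_config->guest_flags &= ~CLOS_REQUIRED;
            vm_config->clos = 0U;
        }
    }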
Tracked-On: #2462
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) The previous implementation recalculated the whole EOI-exit bitmap for
each RTE once the destination, trigger mode, delivery mode or vector of an RTE
had changed, and then updated the EOI-exit bitmap for each vcpu of the VM.
With this patch, only the corresponding bit of a vcpu's EOI-exit bitmap is
set when a level-triggered interrupt has been accepted in the IRR, or cleared
when an edge-triggered interrupt has been accepted in the IRR. In other words,
updating the TMR now only updates one bit of the EOI-exit bitmap in one vcpu.
2) Rename the set eoi_exit related APIs to set eoi_exit_bitmap.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove hypervisor.h from per_cpu.h
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
For vcpu.c and vcpu.h, only include the necessary
header files; do not include hypervisor.h.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- move `vcpumask2pcpumask` from `guest.c` to `vcpu.c`
- move `prepare_sos_vm_memmap` from `guest.c` to `vm.c`
- rename `guest.c` to `guest_memory.c`
Tracked-On: #2484
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add this magic number to prevent potential overflow when dumping
host stack.
Tracked-On: #2455
Signed-off-by: Tw <wei.tan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. In the UEFI bsp code, the UEFI macro is not needed; it is controlled in the Makefile.
2. In the vm/acpi/interrupt code, unify the API names for SBL and UEFI.
3. Remove unnecessary header includes and unused code.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
'vector' should be no greater than 0xFF, otherwise
'eoi_exit_bitmap[]' will overflow.
Tracked-On: #1252
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The fields below are defined but not used:
- sync, pause_cnt & dbg_req_state.
Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Now we do not need pending_pre_work anymore, as we can make sure the IO request
vCPU resumes from where it paused.
Now only three fixed points will try to schedule:
- vcpu_thread: before VM entry, checks the reschedule flag and reschedules if needed
- default_idle: loops checking the reschedule flag to see if it needs to switch out
- io request: if an IO request needs the DM's handling, it will schedule out
Tracked-On: #2394
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
This patch adds full context switch support for scheduling. It makes sure
each vCPU has its own stack, and adds a host_sp field in struct sched_object
to record the host stack pointer of each switched-out object.
An arch-specific function, arch_switch_to, is added for the context switch.
To aid debugging, a name[] field is also added to struct sched_object.
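Conceptually, the switch only needs to exchange stack pointers, since all
callee-saved state is kept on each object's own stack. A rough sketch of what
arch_switch_to does (not the real assembly):

    void arch_switch_to(void *prev_sp, void *next_sp);
    /*  push callee-saved registers      ; onto the current (prev) stack
     *  mov  [prev_sp], rsp              ; record where prev stopped
     *  mov  rsp, [next_sp]              ; continue on next's stack
     *  pop  callee-saved registers
     *  ret                              ; return into next's saved context
     */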
Tracked-On: #2394
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Under sharing mode, VM0 is identical to the SOS VM. But the coupling of the
SOS VM and VM 0 is not friendly to partition mode.
This patch is a pure terminology change from vm0 to sos VM; it does not change
any code logic or semantics.
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Move the CR-related code from vmcs/vcpu to the vCR source files.
2. Also add virtual_cr.h to acrn.doxyfile to avoid a documentation build failure.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This commit changes the EOI_EXIT_BITMAP handling as follows:
- add an eoi_exit_bitmap to the vlapic structure;
- go through all the RTEs and set eoi_exit_bitmap in the vlapic structure when the related RTE fields are modified;
- add ACRN_REQUEST_EOI_EXIT_UPDATE: if eoi_exit_bitmap has changed, request the corresponding vcpu to write the bitmap
  to the VMCS.
Tracked-On: #2343
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Add static to local functions and variables.
2. Move vm_sw_loader from vcpu to vm.
3. Refine uefi.c to follow the coding rules.
4. Separate uefi.c into two parts: vm0 boot and bsp. The bsp layer only
accesses native HW-related data and cannot access vm/vcpu; the vm0 boot
part can access the vm/vcpu data structures.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Use struct sched_object as the main interface of scheduling, and make the
scheduler an independent module with respect to vcpu (a rough sketch of the
resulting structure follows the list):
- add struct sched_object as one field in struct vcpu
- define sched_object.thread as the switch_to thread
- define sched_object.prepare_switch_out/in for preparation before switch_to
- move context_switch_out/context_switch_in into vcpu.c as
  vcpu.sched_obj.prepare_switch_out/in
- make default_idle the global idle.thread for the idle thread
- make vcpu_thread the vcpu.sched_obj.thread for each vcpu thread
- simplify switch_to based on sched_object
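A rough sketch of the shape of struct sched_object described above (field types
and the list type are assumptions):

    struct sched_object {
        struct list_head run_list;                            /* runqueue linkage */
        void (*thread)(struct sched_object *obj);             /* body run after switch_to */
        void (*prepare_switch_out)(struct sched_object *obj);
        void (*prepare_switch_in)(struct sched_object *obj);
    };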
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <edide.dong@intel.com>
Just using pcpu_id for make_reschedule_request is enough.
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <edide.dong@intel.com>
Add struct sched_object, and use it instead of vcpu as the input parameter for
the functions below:
- add_to_cpu_runqueue, renamed from add_vcpu_to_runqueue
- remove_from_cpu_runqueue, renamed from remove_vcpu_from_runqueue
- get_next_sched_obj, added to get the next sched object
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <edide.dong@intel.com>
Add a get_ibrs_type API to get the IBRS type.
This patch fixes a MISRA-C violation:
filename:/hypervisor/arch/x86/security.c function:None offset:19:
reason:Variable should be declared static. : ibrs_type
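A minimal sketch of the fix (the exact type of ibrs_type is an assumption):

    static int32_t ibrs_type;    /* now static, satisfying the MISRA rule */

    int32_t get_ibrs_type(void)
    {
        return ibrs_type;
    }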
Tracked-On: #861
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
There are still some security-related functions in cpu_caps.c and cpu.c;
move them out into security.c.
Changes to be committed:
modified: Makefile
modified: arch/x86/cpu.c
modified: arch/x86/cpu_caps.c
modified: arch/x86/guest/vcpu.c
new file: arch/x86/security.c
modified: arch/x86/trusty.c
modified: arch/x86/vmx_asm.S
modified: include/arch/x86/cpu.h
modified: include/arch/x86/cpu_caps.h
modified: include/arch/x86/per_cpu.h
new file: include/arch/x86/security.h
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Currently there are two fields in ext_context to emulate the IA32_PAT MSR:
- ia32_pat: holds the value of the emulated IA32_PAT MSR
- vmx_ia32_pat: used to load/store the IA32_PAT MSR during world switches
This patch moves ext_context->ia32_pat to the common placeholder for
emulated MSRs, acrn_vcpu_arch->guest_msrs[].
It also renames ext_context->vmx_ia32_pat to ext_context->ia32_pat to
retain the same naming convention in struct ext_context.
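Illustrative only (the MSR_IA32_PAT_IDX index macro is an assumption): after
this change the emulated value is accessed through the common array, e.g.

    /* write the guest's emulated IA32_PAT value */
    vcpu->arch.guest_msrs[MSR_IA32_PAT_IDX] = value;

    /* read it back, e.g. when the guest executes RDMSR on IA32_PAT */
    value = vcpu->arch.guest_msrs[MSR_IA32_PAT_IDX];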
Tracked-On: #1867
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>