In the current architecture, the maximum number of vCPUs per VM cannot
exceed the number of pCPUs. Given that the MAX_PCPU_NUM macro is provided
in board configurations, remove MAX_VCPUS_PER_VM from Kconfig
and add a MAX_VCPUS_PER_VM macro that references MAX_PCPU_NUM directly.
Tracked-On: #4230
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
It's meaningless to sleep a non-running vcpu. Add a state check before
sleeping the vcpu's thread object.
Tracked-On: #4178
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With cpu-sharing enabled, more than one vcpu may run on one pcpu, so the
smp_call handler should switch to the target vcpu's vmcs before fetching
the information.
dump_vcpu_reg and dump_guest_mem must run with the right vmcs loaded;
otherwise a #GP error occurs.
Renaming:
vcpu_dumpreg -> dump_vcpu_reg
switch_vmcs -> load_vmcs
Tracked-On: #4178
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
xsave area:
legacy region: 512 bytes
xsave header: 64 bytes
extended region: < 3k bytes
So, pre-allocate a 4K area for xsave. Use the appropriate instruction to
save or restore the area according to the hardware xsave feature set.
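A minimal sketch of the buffer layout and save/restore, assuming XSAVES/XRSTORS
are available (names here are illustrative, not the ACRN code):

    #include <stdint.h>

    /* 512-byte legacy region + 64-byte header + extended region all fit in 4 KiB. */
    #define XSAVE_AREA_SIZE 4096U

    struct xsave_area_sketch {
        uint8_t buf[XSAVE_AREA_SIZE] __attribute__((aligned(64)));
    };

    /* EDX:EAX carries the requested-feature bitmap (RFBM) for both instructions. */
    static inline void save_xsave_sketch(struct xsave_area_sketch *area, uint64_t rfbm)
    {
        asm volatile("xsaves %0"
                     : "=m"(*area)
                     : "a"((uint32_t)rfbm), "d"((uint32_t)(rfbm >> 32U))
                     : "memory");
    }

    static inline void restore_xsave_sketch(const struct xsave_area_sketch *area, uint64_t rfbm)
    {
        asm volatile("xrstors %0"
                     :
                     : "m"(*area), "a"((uint32_t)rfbm), "d"((uint32_t)(rfbm >> 32U))
                     : "memory");
    }

On hardware without XSAVES, the plain XSAVE/XRSTOR pair would be used instead.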
Tracked-On: #4166
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We should use an INIT signal to notify the vcpu threads when
powering off the hard RTVM. To achieve this, we should set
vcpu->thread_obj.notify_mode to SCHED_NOTIFY_INIT.
Patch (27163df9 hv: sched: add sleep/wake for thread object)
tries to set the notify_mode according to `is_lapic_pt_enabled(vcpu)`
in prepare_vcpu. But at that point, is_lapic_pt_enabled(vcpu)
always returns false. Consequently, notify_mode is set to
SCHED_NOTIFY_IPI, which then leads to a failure to power off the
hard RTVM.
This patch fixes it by:
- Initializing notify_mode to SCHED_NOTIFY_IPI in prepare_vcpu.
- Setting notify_mode to SCHED_NOTIFY_INIT once the guest enables
  x2APIC mode of the pass-through LAPIC.
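A minimal sketch of the two assignment points, using stand-in types rather than
the real ACRN definitions:

    #include <stdbool.h>

    enum sched_notify_mode { SCHED_NOTIFY_IPI, SCHED_NOTIFY_INIT };

    struct thread_object_sketch {
        enum sched_notify_mode notify_mode;
    };

    struct vcpu_sketch {
        struct thread_object_sketch thread_obj;
        bool lapic_pt_enabled;  /* only becomes true after the guest enables x2APIC */
    };

    /* prepare_vcpu runs before the guest can enable x2APIC, so default to IPI here. */
    static void prepare_vcpu_sketch(struct vcpu_sketch *vcpu)
    {
        vcpu->thread_obj.notify_mode = SCHED_NOTIFY_IPI;
    }

    /* Called when the guest enables x2APIC mode of the pass-through LAPIC. */
    static void on_x2apic_pt_enabled_sketch(struct vcpu_sketch *vcpu)
    {
        vcpu->lapic_pt_enabled = true;
        vcpu->thread_obj.notify_mode = SCHED_NOTIFY_INIT;
    }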
Tracked-On: #3975
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
kick means to notify one thread_object. If the target thread object is
running, send an IPI to notify it; if the target thread object is
runnable, make a reschedule request for it.
Also add a kick_vcpu API in the vcpu layer to notify a vcpu.
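A rough sketch of that kick logic with stand-in types and assumed helpers (not
the ACRN implementation):

    #include <stdint.h>

    enum thread_status_sketch { THREAD_STS_RUNNING, THREAD_STS_RUNNABLE, THREAD_STS_BLOCKED };

    struct thread_object_sketch {
        uint16_t pcpu_id;
        enum thread_status_sketch status;
    };

    void send_single_ipi(uint16_t pcpu_id);          /* assumed notification helper */
    void make_reschedule_request(uint16_t pcpu_id);  /* assumed scheduler helper */

    /* kick: a running target gets an IPI so it re-evaluates pending requests;
     * a runnable target triggers a reschedule request on its pCPU. */
    void kick_thread_sketch(const struct thread_object_sketch *obj)
    {
        if (obj->status == THREAD_STS_RUNNING) {
            send_single_ipi(obj->pcpu_id);
        } else if (obj->status == THREAD_STS_RUNNABLE) {
            make_reschedule_request(obj->pcpu_id);
        } else {
            /* blocked objects are woken elsewhere, nothing to do here */
        }
    }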
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch decouples some scheduling logic and abstracts it into a
scheduler, so we now have a scheduler and a schedule framework. From a
modularization perspective, the schedule framework provides APIs for
other layers to use and interacts with the scheduler through the
scheduler interfaces.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Let the init thread end with run_idle_thread(), then the idle thread takes
over and starts scheduling.
Rename enter_guest_mode() to init_guest_mode() since run_idle_thread() is
moved out of it. Also add run_thread() in the schedule module to run a
thread_object's thread loop directly.
rename: switch_to_idle -> run_idle_thread
Tracked-On: #3813
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Sleeping a thread_object means preventing it from being scheduled;
waking a thread_object is the opposite operation.
This patch also adds notify_mode in thread_object to indicate how to
deliver the request.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
To support cpu sharing, multiple vcpus can run on the same pcpu, so a
vcpu context switch is needed. This patch adds the following actions to
the context switch:
1) fxsave/fxrstor;
2) save/restore MSRs: MSR_IA32_STAR, MSR_IA32_LSTAR,
MSR_IA32_FMASK, MSR_IA32_KERNEL_GS_BASE;
3) switch vmcs.
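A rough sketch of those three steps, with stand-in types and assumed helper
names (msr_read/msr_write/load_vmcs_of), not the ACRN code:

    #include <stdint.h>

    /* Stand-in per-vCPU extended context; not the ACRN struct. */
    struct fxsave_region_sketch {
        uint8_t data[512];
    } __attribute__((aligned(16)));

    struct ext_ctx_sketch {
        uint64_t star, lstar, fmask, kernel_gs_base;
        struct fxsave_region_sketch fx;
    };

    #define MSR_IA32_STAR           0xC0000081U
    #define MSR_IA32_LSTAR          0xC0000082U
    #define MSR_IA32_FMASK          0xC0000084U
    #define MSR_IA32_KERNEL_GS_BASE 0xC0000102U

    uint64_t msr_read(uint32_t msr);         /* assumed MSR accessors */
    void msr_write(uint32_t msr, uint64_t val);
    void load_vmcs_of(void *vcpu);           /* assumed: make this vCPU's VMCS current */

    static void ctx_switch_out_sketch(struct ext_ctx_sketch *ctx)
    {
        ctx->star = msr_read(MSR_IA32_STAR);
        ctx->lstar = msr_read(MSR_IA32_LSTAR);
        ctx->fmask = msr_read(MSR_IA32_FMASK);
        ctx->kernel_gs_base = msr_read(MSR_IA32_KERNEL_GS_BASE);
        asm volatile("fxsave %0" : "=m"(ctx->fx) : : "memory");   /* 1) fxsave */
    }

    static void ctx_switch_in_sketch(struct ext_ctx_sketch *ctx, void *next_vcpu)
    {
        load_vmcs_of(next_vcpu);                                   /* 3) switch vmcs */
        msr_write(MSR_IA32_STAR, ctx->star);                       /* 2) restore MSRs */
        msr_write(MSR_IA32_LSTAR, ctx->lstar);
        msr_write(MSR_IA32_FMASK, ctx->fmask);
        msr_write(MSR_IA32_KERNEL_GS_BASE, ctx->kernel_gs_base);
        asm volatile("fxrstor %0" : : "m"(ctx->fx));               /* 1) fxrstor */
    }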
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With cpu sharing enabled, per_cpu vcpu cannot work properly as we might
have multiple vcpus running on one pcpu.
Add a schedule API sched_get_current to get the current thread_object on a
specific pcpu, and add a vcpu API get_running_vcpu to get the corresponding
vcpu of a thread_object.
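A minimal sketch of the container_of-style lookup, with stand-in types and
assumed APIs:

    #include <stddef.h>
    #include <stdint.h>

    struct thread_object_sketch { int dummy; };

    struct vcpu_sketch {
        uint16_t vcpu_id;
        struct thread_object_sketch thread_obj;   /* embedded scheduling handle */
    };

    struct thread_object_sketch *sched_get_current(uint16_t pcpu_id); /* assumed API */
    int is_idle_thread(const struct thread_object_sketch *obj);       /* assumed API */

    /* Map the thread_object currently running on a pCPU back to its vCPU;
     * the idle thread is not embedded in a vCPU, so it yields NULL. */
    struct vcpu_sketch *get_running_vcpu_sketch(uint16_t pcpu_id)
    {
        struct thread_object_sketch *cur = sched_get_current(pcpu_id);

        if ((cur == NULL) || is_idle_thread(cur)) {
            return NULL;
        }
        return (struct vcpu_sketch *)((char *)cur - offsetof(struct vcpu_sketch, thread_obj));
    }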
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With cpu sharing enabled, we will map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, we'd better hide
pcpu_id from acrn_vcpu and move it to thread_object.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The vcpu thread's stack shouldn't be reset along with reset_vcpu.
There is also a bug here:
when the vcpu B thread sets vcpu->running to false, the vcpu A thread
will treat vcpu B as paused even though it has not been switched out
completely; reset_vcpu will then reset the vcpu B thread's stack and
corrupt its running context.
This patch removes the vcpu thread's stack reset from reset_vcpu.
With this change, init_vmcs must be done after the vcpu startup address
is settled and before the vcpu is scheduled in. switch_to_idle() is not
needed anymore as the S3 thread's stack will not be reset.
Tracked-On: #3813
Signed-off-by: Fengwei Yin <fengwei.yin@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Currently we use a 1:1 pcpu:vcpu mapping, so no runqueue is needed.
Remove it as preparation work for abstracting the scheduler framework.
Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
--remove unnecessary includes
--remove unnecessary forward-declaration for 'struct vhm_request'
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
As we introduced vcpu_affinity[] to assign vcpus to different pcpus, the
old policy and functions are not needed. Remove them.
Tracked-On: #3663
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The structures (run_context & ext_context) are currently defined in
vcpu.h, and they are used by lower-layer modules (wakeup.S). This patch
moves the structures down from vcpu.h to cpu.h to avoid a reversed
dependency.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently, the CLOS id of the cpu cores in VMX root mode is the same as in
non-root mode. For an RTVM, if the hypervisor shares the same CLOS id with
non-root mode, cachelines may be polluted by hypervisor code execution on
vmexit.
This patch adds hv_clos in vm_configurations.c. The hypervisor initializes
the CLOS setting according to hv_clos during physical cpu core
initialization. For RTVMs, the MSR auto load/store areas are used to switch
between the different settings for VMX root and non-root mode.
Tracked-On: #2462
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
-- remove some unnecessary includes
-- fix a typo
-- remove unnecessary void before launch_vms
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Currently we use the native GDT saved in the boot context for the guest
and assume it can be placed at the same address in the guest. But that
may not be true after pre-launched VMs are introduced: the guest's GDT
could be overwritten by guest images.
This patch makes 32-bit protected mode boot not use the saved boot
context. Instead, we use predefined vcpu_regs values for a protected-mode
guest to initialize the guest BSP registers, and copy the pre-defined GDT
table to a safe place in guest memory to avoid it being overwritten by
guest images.
Tracked-On: #3532
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Fix the violations listed below:
1. Cast operation on a constant value.
2. Implicit signed/unsigned conversion.
3. Return value unused.
V1->V2:
1. The bitmap API returns a boolean, so the "!= 0" check is not needed; deleted.
2. The behaviors of ~(uint32_t)X and (uint32_t)~X are not defined in the ACRN
hypervisor Coding Guidelines, so that change was dropped.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
After "commit f0e1c5e init vcpu host stack when reset vcpu", SOS resume form S3
wants to schedule to vcpu_thread not the point where SOS enter S3. So we should
schedule to idel first then reschedule to execute vcpu_thread.
Tracked-On: #3387
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Otherwise, the previous local variables on the host stack are not reset.
Tracked-On: #3387
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This commit adds ops to the vlapic structure and adds an *ops parameter to
vlapic_reset(). At vlapic reset, ops is set to the global apicv_ops and may
be assigned to other ops later.
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vCPU schedule state changes are protected by the schedule lock, so they
don't need to be atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For pre-launched VMs and the SOS, vCPUs are created on the BSP one by one;
for post-launched VMs, vCPUs are created under vmm_hypercall_lock protection.
So vcpu_create is called sequentially, and its operations don't need to be
atomic.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
The per-vCPU EOI exit bitmap is shared state that should be set or cleared
atomically, since there is no lock to protect this critical variable.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Now that sched_object and sched_context are protected by scheduler_lock,
there is no need for runqueue_lock to protect the schedule runqueue as long
as we have no plan to support schedule migration.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
cancel_event_injection is no longer needed if we do 'schedule' prior to
acrn_handle_pending_request. Commit "921288a6672: hv: fix interrupt
lost when do acrn_handle_pending_request twice" brings 'schedule'
forward, so remove the cancel_event_injection related code.
Tracked-On: #3374
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Microarchitectural Data Sampling (MDS) is a hardware vulnerability
which allows unprivileged speculative access to data which is available
in various CPU internal buffers.
1. Mitigation on ACRN:
1) Microcode update is required.
2) Clear CPU internal buffers (store buffer, load buffer and
load ports) on VM entry if the current CPU is affected by MDS,
to avoid any information leakage to the guest through these buffers.
3) Mitigation is not needed if the ARCH_CAP_MDS_NO bit (bit 5)
is set in the IA32_ARCH_CAPABILITIES MSR (10AH); in this case the
processor is not affected by the MDS vulnerability. In all other
cases mitigation for MDS is required.
2. Methods to clear CPU buffers (microcode update is required):
1) L1D cache flush
2) VERW instruction
Either of the above operations will clear all CPU internal buffers
if this CPU is affected by MDS.
The above mechanism is enumerated by:
CPUID.(EAX=7H, ECX=0):EDX[MD_CLEAR=10].
3. Mitigation details on ACRN:
   if (processor is affected by MDS)
       if (processor is not affected by L1TF OR
           L1D flush is not launched on VM Entry)
           execute VERW instruction on VM entry
       endif
   endif
   (A code sketch of the VERW flush follows the references below.)
4. Reference:
Deep Dive: Intel Analysis of Microarchitectural Data Sampling
https://software.intel.com/security-software-guidance/insights/deep-dive-intel-analysis-microarchitectural-data-sampling
Deep Dive: CPUID Enumeration and Architectural MSRs
https://software.intel.com/security-software-guidance/insights/deep-dive-cpuid-enumeration-and-architectural-msrs
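A minimal sketch of the VERW-based buffer clear described in item 3 above (the
segment selector value and function name are assumptions, not the ACRN code):

    #include <stdint.h>

    /* With the MD_CLEAR microcode update, VERW on a valid writable data-segment
     * selector (memory-operand form) also overwrites the affected internal buffers. */
    static inline void cpu_internal_buffers_clear_sketch(void)
    {
        static const uint16_t data_sel = 0x10U; /* assumed ring-0 data segment selector */

        asm volatile("verw %0" : : "m"(data_sel) : "cc");
    }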
Tracked-On: #3317
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Reviewed-by: Jason CJ Chen <jason.cj.chen@intel.com>
The ACRN coding guidelines require that a function shall have only one
return point. Fix it.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vcpu.c was never scanned because it crashed the scan tool!
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
1. No brackets around then/else.
2. Function return value not checked.
3. Signed/unsigned conversion without cast.
V1->V2:
change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
add a "void *" cast to the 1st parameter of memset.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The sbuf is allocated for each pcpu via hypercall from the SOS. Before
launching a guest OS, the launch script offlines cpus, which triggers vcpu
reset and then resets the sbuf pointers. But the sbuf is only initialized
once by the SOS, so the cpus assigned to the guest OS have no sbuf to use.
Thus, when running 'acrntrace' on the SOS, there is no trace data for the
guest OS.
To fix the issue, only reset the sbuf for the SOS.
Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
MISRA-C requires that inline functions be declared static. These APIs
are external interfaces, so remove inline.
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
modified: arch/x86/guest/vcpu.c
The MISRA-C standard requires that the result of an expression used in an
if/while condition shall be of boolean type.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
This patch introduces the check_vm_vlapic_state API, instead of
is_lapic_pt_enabled, to check whether all the vCPUs of a VM are using x2APIC
mode and LAPIC pass-through is enabled on all of them.
When the VM is in the VM_VLAPIC_TRANSITION or VM_VLAPIC_DISABLED state, the
following conditions apply:
1) For pass-thru MSI interrupts, the interrupt source is not programmed.
2) For DM-emulated device MSI interrupts, the interrupt is not delivered.
3) IPIs work only if the sender and destination are both in x2APIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN supports LAPIC emulation for guests using x86 APICv. When the guest
OS/BIOS switches from xAPIC to x2APIC mode of operation, ACRN also supports
switching from LAPIC emulation to LAPIC passthrough to the guest. The
user/developer needs to configure GUEST_FLAG_LAPIC_PASSTHROUGH in guest_flags
of the corresponding VM's config for ACRN to enable LAPIC passthrough.
This patch does the following:
1) Fixes a bug in the above feature. For a guest configured with
GUEST_FLAG_LAPIC_PASSTHROUGH, virtual interrupts are not delivered during the
time period the guest is using the xAPIC mode of the LAPIC. This can manifest
as a guest hang when it does not receive virtual timer interrupts.
2) ACRN exposes the physical topology via CPUID leaf 0xB to LAPIC PT VMs. This
patch removes that condition and exposes the virtual topology via CPUID leaf
0xB.
Tracked-On: #3136
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
The pcpu only writes its own vmcs, so no spinlock is needed.
Since arch.lock is not used anywhere else, remove it too.
Tracked-On: #3130
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch adds the prefix 'p' before 'cpu' for physical-cpu-related
functions. There is no code logic change.
Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the vpid is not released in reset_vcpu(), hence the vpid resource
could easily be exhausted if guests are re-launched.
This patch assigns vpid according to the fixed mapping of runtime vm_id
and vcpu_id to guarantee the uniqueness of vpid.
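A minimal sketch of such a fixed mapping (the per-VM vCPU limit below is an
assumed placeholder, not the configured ACRN value):

    #include <stdint.h>

    #define MAX_VCPUS_PER_VM_SKETCH 8U  /* placeholder for the configured per-VM limit */

    /* VM entry requires a non-zero VPID when VPID is enabled, so offset by 1. */
    static inline uint16_t vpid_from_ids_sketch(uint16_t vm_id, uint16_t vcpu_id)
    {
        return (uint16_t)(1U + (vm_id * MAX_VCPUS_PER_VM_SKETCH) + vcpu_id);
    }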
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
When shutting down the SOS VM, the shared sbuf is released by the guest OS,
but the per-cpu sbuf pointers in the hypervisor remain intact. This creates a
problem: after the SOS is re-launched, the hypervisor could write to a shared
buffer that no longer exists.
This patch implements sbuf_reset() and calls it from reset_vcpu() to
reset the sbuf pointers.
Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
When the RTVM tries to power itself off, we use INIT to kick the vCPUs
out of non-root mode.
For an RTVM, we only pause the vCPUs with an INIT signal if the vm state
equals VM_POWERING_OFF; otherwise, we reject the pause request.
Tracked-On: #2865
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch makes make_reschedule_request support kicking a vCPU off
using INIT.
Tracked-On: #2865
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Assume run_ctx.cr0/cr4 are correct when doing the world switch, so call
vcpu_set_cr0/cr4() to update cr0/cr4 directly before resuming to the guest.
This design is only for trusty world switching.
Tracked-On: #2773
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now we only explicitly configure "hide MTRR" to false for the SOS. For other
VMs, we don't configure it, which means hide_mtrr is false by default.
Also remove the global config MTRR_ENABLED.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Add two functions to combine the constraints for APICv:
is_apicv_basic_feature_supported: checks whether the physical platform
supports "Use TPR shadow", "Virtualize APIC accesses" and "Virtualize x2APIC
mode".
is_apicv_advanced_feature_supported: checks whether the physical platform
supports "APIC-register virtualization", "Virtual-interrupt delivery" and
"Process posted interrupts".
If the physical platform only supports the APICv basic features, enable "Use
TPR shadow" and "Virtualize APIC accesses" for xAPIC mode, and enable "Use TPR
shadow" and "Virtualize x2APIC mode" for x2APIC mode. Otherwise, if the
physical platform supports the APICv advanced features, enable the full APICv
feature set for both xAPIC and x2APIC mode.
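A minimal sketch of combining those checks; each boolean stands in for the
corresponding VMX capability probe, not the ACRN helpers:

    #include <stdbool.h>

    struct apicv_caps_sketch {
        bool tpr_shadow;           /* "Use TPR shadow" */
        bool apic_access;          /* "Virtualize APIC accesses" */
        bool x2apic_mode;          /* "Virtualize x2APIC mode" */
        bool apic_reg_virt;        /* "APIC-register virtualization" */
        bool virt_intr_delivery;   /* "Virtual-interrupt delivery" */
        bool posted_intr;          /* "Process posted interrupts" */
    };

    static bool is_apicv_basic_feature_supported_sketch(const struct apicv_caps_sketch *c)
    {
        return c->tpr_shadow && c->apic_access && c->x2apic_mode;
    }

    static bool is_apicv_advanced_feature_supported_sketch(const struct apicv_caps_sketch *c)
    {
        return c->apic_reg_virt && c->virt_intr_delivery && c->posted_intr;
    }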
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
When CAT is supported, a UOS can set acrn_vm_config.clos to use the CAT
feature. E.g.,
    struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
        {
            .guest_flags |= CLOS_REQUIRED,
            .clos = 1,
        },
    };
sanitize_vm_config() will check whether CAT is supported and whether
vm_configs.clos is valid.
Tracked-On: #2462
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1) The previous implementation recalculated the whole EOI-exit bitmap for
each RTE once the destination, trigger mode, delivery mode or vector of an
RTE had changed, and updated the EOI-exit bitmap for each vcpu of the VM.
In this patch, only set the corresponding EOI-exit bitmap bit for a vcpu
when a level-triggered interrupt has been accepted in the IRR, or clear the
corresponding bit when an edge-triggered interrupt has been accepted in the
IRR; that is, only update one bit of a vcpu's EOI-exit bitmap when updating
the TMR.
2) Rename the eoi_exit related APIs to eoi_exit_bitmap.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove hypervisor.h from per_cpu.h
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
For vcpu.c and vcpu.h, only include the necessary header files;
don't include hypervisor.h.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- move `vcpumask2pcpumask` from `guest.c` to `vcpu.c`
- move `prepare_sos_vm_memmap` from `guest.c` to `vm.c`
- rename `guest.c` to `guest_memory.c`
Tracked-On: #2484
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add this magic number to prevent a potential overflow when dumping
the host stack.
Tracked-On: #2455
Signed-off-by: Tw <wei.tan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. In the UEFI bsp code, the UEFI macro is not needed; it is controlled in
the makefile.
2. In the vm/acpi/interrupt code, unify the API names for SBL & UEFI.
3. Remove unnecessary header includes and unused code.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
'vector' should be no greater than 0xFF, otherwise
'eoi_exit_bitmap[]' will overflow.
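A minimal sketch of the bound check, assuming the bitmap is laid out as four
64-bit words covering vectors 0-255:

    #include <stdint.h>

    static void set_eoi_exit_bit_sketch(uint64_t eoi_exit_bitmap[4], uint32_t vector)
    {
        if (vector <= 0xFFU) {
            eoi_exit_bitmap[vector >> 6U] |= (1UL << (vector & 0x3FU));
        }
    }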
Tracked-On: #1252
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The fields below are defined but not used:
- sync, pause_cnt & dbg_req_state.
Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Now we do not need pending_pre_work anymore, as we can make sure an
IO-request VCPU resumes from where it paused.
Now only three fixed points try to schedule:
- vcpu_thread: before VM entry, check the reschedule flag and reschedule if
  needed
- default_idle: loop checking the reschedule flag to see whether to switch out
- io request: if the IO request needs the DM's handling, schedule out
Tracked-On: #2394
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
This patch adds full context switch support for scheduling. It makes sure
each VCPU has its own stack, and adds a host_sp field in struct sched_object
to record the host stack pointer for each switched-out object.
An arch-specific function arch_switch_to is added for the context switch.
To help debugging, a name[] field is also added to struct sched_object.
Tracked-On: #2394
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Under sharing mode, VM0 is identical to the SOS VM. But the coupling of the
SOS VM and VM0 is not friendly for partition mode.
This patch is a pure terminology change from vm0 to SOS VM; it does not
change any code logic or semantics.
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Move the CR-related code from vmcs/vcpu to the vCR source files.
2. Also add virtual_cr.h to acrn.doxyfile to avoid a doc build failure.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This commit changes the EOI_EXIT_BITMAP handling as follows:
- add an eoi_exit_bitmap to the vlapic structure;
- go through all the RTEs and set eoi_exit_bitmap in the vlapic structure when
related RTE fields are modified;
- add ACRN_REQUEST_EOI_EXIT_UPDATE: if eoi_exit_bitmap changed, request the
corresponding vcpu to write the bitmap to the VMCS.
Tracked-On: #2343
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
1. Add static for local functions and variables.
2. Move vm_sw_loader from vcpu to vm.
3. Refine uefi.c to follow the code rules.
4. Separate uefi.c into two parts, vm0 boot and bsp: the bsp layer only
accesses native HW and cannot access vm/vcpu; the vm0 boot part can access
vm/vcpu data structures.
Tracked-On: #1842
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Use struct sched_object as the main interface of scheduling, making the
scheduler a module independent of vcpu:
- add struct sched_object as one field in struct vcpu
- define sched_object.thread as the switch_to thread
- define sched_object.prepare_switch_out/in for preparation before
  switch_to
- move context_switch_out/context_switch_in into vcpu.c as
  vcpu.sched_obj.prepare_switch_out/in
- make default_idle the global idle.thread for the idle thread
- make vcpu_thread the vcpu.sched_obj.thread for each vcpu thread
- simplify switch_to based on sched_object
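A rough stand-in for the embedding described above (not the ACRN definitions):

    struct sched_object_sketch {
        char name[16];                                      /* for debugging */
        void (*thread)(struct sched_object_sketch *obj);    /* switch-to entry, e.g. vcpu_thread */
        void (*prepare_switch_out)(struct sched_object_sketch *obj);
        void (*prepare_switch_in)(struct sched_object_sketch *obj);
    };

    struct vcpu_sketch {
        /* ... other vCPU state ... */
        struct sched_object_sketch sched_obj;   /* embedded scheduling handle */
    };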
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Just using pcpu_id for make_reschedule_request is enough.
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add struct sched_object, and use it as the input parameter instead of vcpu
for the functions below:
- add_to_cpu_runqueue, renamed from add_vcpu_to_runqueue
- remove_from_cpu_runqueue, renamed from remove_vcpu_from_runqueue
- get_next_sched_obj, added to get the next sched object
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add a get_ibrs_type API to get the IBRS type.
This patch fixes a MISRA C violation:
filename:/hypervisor/arch/x86/security.c function:None offset:19:
reason:Variable should be declared static. : ibrs_type
Tracked-On: #861
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
There are still some security-related functions in cpu_caps.c & cpu.c;
move them out into security.c.
Changes to be committed:
modified: Makefile
modified: arch/x86/cpu.c
modified: arch/x86/cpu_caps.c
modified: arch/x86/guest/vcpu.c
new file: arch/x86/security.c
modified: arch/x86/trusty.c
modified: arch/x86/vmx_asm.S
modified: include/arch/x86/cpu.h
modified: include/arch/x86/cpu_caps.h
modified: include/arch/x86/per_cpu.h
new file: include/arch/x86/security.h
Tracked-On: #1842
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Currently there are two fields in ext_context to emulate the IA32_PAT MSR:
- ia32_pat: holds the value of the emulated IA32_PAT MSR
- vmx_ia32_pat: used to load/store the IA32_PAT MSR during world switch
This patch moves ext_context->ia32_pat to the common placeholder for
emulated MSRs, acrn_vcpu_arch->guest_msrs[].
It also renames ext_context->vmx_ia32_pat to ext_context->ia32_pat to
retain the same naming convention in struct ext_context.
Tracked-On: #1867
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- implement unified APIs to access guest_msrs[] under struct acrn_vcpu.
- use these new APIs to read/write emulated TSC_DEADLINE MSR
- switch world_msrs[] and guest_msrs[] during world switch for MSRs that
need world isolation
- remove the old guest_msrs[] array and its index macros.
Tracked-On: #1867
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Replace CPU_PAGE_SIZE with PAGE_SIZE.
These two macros are duplicates, and PAGE_SIZE is the more reasonable name.
Tracked-On: #861
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the guest IA32_TSC_AUX MSR is loaded manually right before VM
entry and saved right after VM exit.
This patch enables the VM-Entry and VM-Exit controls to switch the
IA32_TSC_AUX MSR between host and guest automatically. This helps keep the
vcpu_thread() function and struct acrn_vcpu cleaner.
It also removes the dead code for intercepting IA32_TSC_AUX.
Tracked-On: #1867
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Decrease the value of 'create_vcpus' in the failure case.
Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Fix the integer violations related to the following rules:
  1. The operands to shift operations (<<, >>) shall be unsigned integers.
  2. The operands to bit operations (&, |, ~) shall be unsigned integers.
- Replace 12U with CPU_PAGE_SHIFT when it is an address shift.
v1 -> v2:
 * use existing macros to get bus/slot/func values
 * update the PCI_SLOT macro to make it more straightforward
 * remove the incorrect replacement of 12U with CPU_PAGE_SHIFT in
   dmar_fault_msi_write
Tracked-On: #861
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For data structure types "struct vm", its name is identical
with variable name in the same scope. This is a MISRA C violation.
Naming convention rule:If the data structure type is used by multi
modules, its corresponding logic resource is exposed to external
components (such as SOS, UOS), and its name meaning is simplistic
(such as vcpu, vm), its name needs prefix "acrn_".
The following udpates are made:
struct vm *vm-->struct acrn_vm *vm
Tracked-On: #861
Signed-off-by: Xiangyang Wu <xiangyang.wu@linux.intel.com>
For data structure types "struct vcpu_arch", its name
shall follow Naming convention.
Naming convention rule:If the data structure type is
used by multi modules, its corresponding logic resource
is exposed to external components (such as SOS, UOS),
and its name meaning is simplistic (such as vcpu, vm),
its name needs prefix "acrn_". Variable name can be
shortened from its data structure type name.
The following udpates are made:
struct vcpu_arch arch_vcpu-->struct acrn_vcpu_arch arch
Tracked-On: #861
Signed-off-by: Xiangyang Wu <xiangyang.wu@linux.intel.com>
For data structure types "struct vcpu", its name is identical
with variable name in the same scope. This is a MISRA C violation.
Naming convention rule:If the data structure type is used by multi
modules, its corresponding logic resource is exposed to external
components (such as SOS, UOS), and its name meaning is simplistic
(such as vcpu, vm), its name needs prefix "acrn_".
The following udpates are made:
struct vcpu *vcpu-->struct acrn_vcpu *vcpu
Tracked-On: #861
Signed-off-by: Xiangyang Wu <xiangyang.wu@linux.intel.com>
Add a size check for the other hypervisor console commands;
they could overflow the shell log buffer output.
Tracked-On: #1587
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For the SOS BSP, we reuse the natively saved cs.limit.
For the UOS BSP, we set cs.limit in the DM.
For APs, we use the initialized data from realmode_init_regs.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
- flush the L1 data cache before VM entry only on platforms
  affected by L1TF
- the flush operation is configurable via the macro below:
  --CONFIG_L1D_FLUSH_VMENTRY_ENABLED
Tracked-On: #1672
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Fix violations where a parameter can be read-only.
This patch only fixes parameters named vcpu.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Guest software loading is per VM instead of per vcpu, so we move it
from prepare_vcpu to prepare_vm, and make sure it's called for
all VMs in partition mode.
Tracked-On: #1565
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
MISRA-C requires the suffix (U/UL), such as:
(1) ---> (1U)
(1) ---> (1UL)
(1U << 0) ---> (1U << 0U)
This patch adds the suffix (U/UL) to comply with MISRA-C in the
hypervisor/include directory.
Tracked-On: #1468
Signed-off-by: Wei Liu <weix.w.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now that the UOS uses a hypercall to init the BSP state, we can remove
set_bsp_real_mode_entry() and set_bsp_protect_mode_regs().
For the SOS, the GDT is inherited from SBL or UEFI. For the UOS, the DM
prepares the GDT. We don't need the hypervisor to prepare a GDT for the
guest. The entry_addr field of the vcpu struct can be removed; the guest
entry is set through the BSP rip register.
GUEST_CFG_OFFSET is not needed any more after this patchset.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Move vcpu mode set to function vcpu_set_regs.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Now that we make the UOS set the BSP init state by using a hypercall, we
can drop the old UOS loader in the HV and make the VM loader in the HV
only for the SOS.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The vcpu state is initialized outside of init_guest_state:
- SOS BSP state is initialized in SOS loader
- UOS BSP state is initialized in UOS loader
- AP state is initialized during SIPI signal emulation
We can make init_guest_state only write the vcpu state
into the VMCS structure.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
NOTE: this patch is only a workaround for UOS BSP state init.
Eventually, the DM will call a hypercall to init the UOS BSP state.
We use this workaround here to simplify init_guest_state.
We will make the caller of init_guest_state call init_guest_vmx
directly.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With reset_vcpu_regs as a pre-condition, we only need to set
cs_selector and cs_base for an AP.
We call set_ap_entry in two places:
1. When emulating an AP SIPI.
2. When the SOS BSP resumes from S3. The BSP is resumed into real
mode with the entry set to wakeup_vec. We call the set_ap_entry
API here with the entry derived from wakeup_vec.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The reset_vcpu_regs function resets the vcpu registers to their
default values: real mode with entry 0xFFFFFFF0.
Call reset_vcpu_regs during create_vcpu and reset_vcpu.
Tracked-On: #1231
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add is_long_mode to check whether the processor is operating in IA-32e mode.
Add is_paging_enabled to check whether paging is enabled.
Add is_pae to check whether physical address extension is enabled.
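Minimal sketches of those checks based on the architectural bits (the accessor
functions are assumptions, not the ACRN APIs):

    #include <stdbool.h>
    #include <stdint.h>

    #define EFER_LMA (1UL << 10U)  /* IA32_EFER.LMA: IA-32e mode active */
    #define CR0_PG   (1UL << 31U)  /* CR0.PG: paging enabled */
    #define CR4_PAE  (1UL << 5U)   /* CR4.PAE: physical address extension */

    uint64_t vcpu_get_efer(void *vcpu);  /* assumed guest-register accessors */
    uint64_t vcpu_get_cr0(void *vcpu);
    uint64_t vcpu_get_cr4(void *vcpu);

    static bool is_long_mode_sketch(void *vcpu)
    {
        return (vcpu_get_efer(vcpu) & EFER_LMA) != 0UL;
    }

    static bool is_paging_enabled_sketch(void *vcpu)
    {
        return (vcpu_get_cr0(vcpu) & CR0_PG) != 0UL;
    }

    static bool is_pae_sketch(void *vcpu)
    {
        return (vcpu_get_cr4(vcpu) & CR4_PAE) != 0UL;
    }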
Tracked-On: #1379
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
-- Replace dynamic memory allocation with static memory
-- Remove the parameter check for whether vm is NULL
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>