Commit Graph

51 Commits

Author SHA1 Message Date
Fei Li
20061b7c39 hv: remove xsave dependence
ACRN can run on platforms without the XSAVE capability, so remove the XSAVE dependence to support
more (hardware or virtual) platforms.

Tracked-On: #6287
Signed-off-by: Fei Li <fei1.li@intel.com>
2021-08-10 16:36:15 +08:00
Tao Yuhong
171856c46b hv: uc-lock: Fix do not trap #GP
If the HV enables triggering #GP for UC lock and is about to emulate guest UC-lock
instructions, it should also trap the guest #GP. A guest UC-lock instruction then
triggers #GP, which causes a VM exit; the HV handles this VM exit and emulates the
UC-lock instruction.

Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
2021-08-09 15:33:12 +08:00
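A minimal sketch of the #GP trapping described in the entry above, assuming hypothetical VMCS accessors (vmcs_read32/vmcs_write32); the exception-bitmap encoding comes from the Intel SDM, not from the ACRN sources:

    /* Illustrative only: trap guest #GP when UC-lock #GP triggering is enabled. */
    #include <stdint.h>
    #include <stdbool.h>

    #define VMCS_EXCEPTION_BITMAP  0x00004004U   /* 32-bit control field (SDM) */
    #define IDT_GP_VECTOR          13U           /* #GP */

    extern uint32_t vmcs_read32(uint32_t field);              /* hypothetical accessor */
    extern void     vmcs_write32(uint32_t field, uint32_t v); /* hypothetical accessor */

    static void update_gp_trap(bool uc_lock_gp_enabled)
    {
        uint32_t bitmap = vmcs_read32(VMCS_EXCEPTION_BITMAP);

        if (uc_lock_gp_enabled) {
            bitmap |= (1U << IDT_GP_VECTOR);    /* guest #GP now causes a VM exit */
        } else {
            bitmap &= ~(1U << IDT_GP_VECTOR);   /* let the guest handle #GP itself */
        }
        vmcs_write32(VMCS_EXCEPTION_BITMAP, bitmap);
    }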
Minggui Cao
80ae3224d9 hv: expose PMC to core partition VM
For a core partition VM (like an RTVM), the PMC is typically used for performance
profiling / tuning, so expose the PMC capability and pass through its MSRs
to the VM.

Tracked-On: #6307
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-07-27 14:58:28 +08:00
Zide Chen
6428ca8f5b hv: VMPTRLD and VMCLEAR VMCS with the common APIs
Remove the direct calls to exec_vmptrld() and exec_vmclear(), and replace
them with the wrapper APIs load_va_vmcs() and clear_va_vmcs().

Tracked-On: #5923
Signed-off-by: Zide Chen <zide.chen@intel.com>
2021-05-26 11:22:26 +08:00
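A sketch of what such wrappers typically do, assuming hva2hpa() and the exec_vmptrld()/exec_vmclear() prototypes shown here (signatures are assumptions, not the exact ACRN declarations): callers pass a host virtual address and never touch physical addresses or raw VMX instructions directly.

    #include <stdint.h>

    extern uint64_t hva2hpa(void *hva);          /* HVA -> host physical address */
    extern void exec_vmptrld(uint64_t *vmcs_pa); /* wraps the VMPTRLD instruction */
    extern void exec_vmclear(uint64_t *vmcs_pa); /* wraps the VMCLEAR instruction */

    void load_va_vmcs(const uint8_t *vmcs_va)
    {
        uint64_t vmcs_pa = hva2hpa((void *)vmcs_va);
        exec_vmptrld(&vmcs_pa);   /* make this VMCS current and active */
    }

    void clear_va_vmcs(const uint8_t *vmcs_va)
    {
        uint64_t vmcs_pa = hva2hpa((void *)vmcs_va);
        exec_vmclear(&vmcs_pa);   /* make this VMCS inactive and flush it to memory */
    }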
Zide Chen
f5744174b5 hv: nested: support for VMPTRLD emulation
This patch emulates the VMPTRLD instruction. L0 hypervisor (ACRN) caches
the VMCS12 that is passed down from the VMPTRLD instruction, and merges it
with VMCS01 to create VMCS02 to run the nested VM.

- Currently ACRN can't cache multiple VMCS12s per vCPU, so it needs to
  flush any active-but-not-current VMCS12 back to L1 guest memory.
- ACRN creates VMCS02 to run nested VM based on VMCS12:
  1) copy VMCS12 from guest memory to the per vCPU cache VMCS12
  2) initialize VMCS02 revision ID and host-state area
  3) load shadow fields from cache VMCS12 to VMCS02
  4) enable VMCS shadowing before L1 VM entry

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
2021-05-24 10:34:01 +08:00
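A rough outline of the four emulation steps listed in the entry above. The structure and helper names are simplified stand-ins, not the ACRN definitions:

    #include <stdint.h>
    #include <stddef.h>

    struct vmcs12_cache {
        uint64_t gpa;              /* guest-physical address of the cached VMCS12 */
        uint8_t  data[4096];       /* cached copy of the guest's VMCS12 */
    };

    extern void copy_from_gpa(void *hva, uint64_t gpa, size_t len);   /* guest memory read (assumed) */
    extern void init_vmcs02(struct vmcs12_cache *c);                  /* set revision ID + host state */
    extern void sync_shadow_fields(struct vmcs12_cache *c);           /* cached VMCS12 -> VMCS02 shadow */
    extern void enable_vmcs_shadowing(void);

    static void emulate_vmptrld(struct vmcs12_cache *cache, uint64_t vmcs12_gpa)
    {
        /* 1) copy VMCS12 from guest memory into the per-vCPU cache */
        copy_from_gpa(cache->data, vmcs12_gpa, sizeof(cache->data));
        cache->gpa = vmcs12_gpa;

        /* 2) initialize VMCS02 revision ID and host-state area */
        init_vmcs02(cache);

        /* 3) load shadow fields from the cached VMCS12 into VMCS02 */
        sync_shadow_fields(cache);

        /* 4) enable VMCS shadowing before the next L1 VM entry */
        enable_vmcs_shadowing();
    }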
Shuo A Liu
3fffa68665 hv: Support WAITPKG instructions in guest VM
Execution of the TPAUSE, UMONITOR, or UMWAIT instructions in a guest VM causes
a #UD if "enable user wait and pause" (bit 26) of VMX_PROCBASED_CTLS2
is not set. To fix this issue, set bit 26 of VMX_PROCBASED_CTLS2.

Besides, these WAITPKG instructions use MSR_IA32_UMWAIT_CONTROL, so the
corresponding vMSR value is loaded during the context-switch-in of a vCPU.

Please note, the TPAUSE or UMWAIT instruction causes a VM exit only if
"RDTSC exiting" and "enable user wait and pause" are both 1. In the ACRN
hypervisor, "RDTSC exiting" is always 0, so TPAUSE or UMWAIT doesn't
cause a VM exit.

Performance impact:
    MSR_IA32_UMWAIT_CONTROL read costs ~19 cycles;
    MSR_IA32_UMWAIT_CONTROL write costs ~63 cycles.

Tracked-On: #6006
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2021-05-13 14:19:50 +08:00
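A sketch of the two pieces described in the entry above, with assumed helper names. Bit 26 of the secondary processor-based controls is "enable user wait and pause" and 0xE1 is MSR_IA32_UMWAIT_CONTROL, both per the Intel SDM:

    #include <stdint.h>

    #define VMX_PROCBASED_CTLS2            0x0000401EU  /* VMCS field encoding */
    #define VMX_PROCBASED2_USER_WAIT_PAUSE (1U << 26)
    #define MSR_IA32_UMWAIT_CONTROL        0x000000E1U

    extern uint32_t vmcs_read32(uint32_t field);
    extern void vmcs_write32(uint32_t field, uint32_t v);
    extern void msr_write(uint32_t msr, uint64_t v);

    struct vcpu_umwait_state { uint64_t umwait_control; };

    static void enable_guest_waitpkg(void)
    {
        uint32_t val = vmcs_read32(VMX_PROCBASED_CTLS2);
        vmcs_write32(VMX_PROCBASED_CTLS2, val | VMX_PROCBASED2_USER_WAIT_PAUSE);
    }

    static void waitpkg_context_switch_in(const struct vcpu_umwait_state *s)
    {
        /* restore this vCPU's UMWAIT control so TPAUSE/UMWAIT honor its limits */
        msr_write(MSR_IA32_UMWAIT_CONTROL, s->umwait_control);
    }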
Liang Yi
688a41c290 hv: mod: do not use explicit arch name when including headers
Instead of "#include <x86/foo.h>", use "#include <asm/foo.h>".

In other words, we are adopting the same practice as the Linux kernel.

Tracked-On: #5920
Signed-off-by: Liang Yi <yi.liang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2021-05-08 11:15:46 +08:00
Liang Yi
33ef656462 hv/mod-irq: use arch specific header files
This requires an explicit arch path name in the include directive.

The config scripts were also updated to reflect this change.

Tracked-On: #5825
Signed-off-by: Peter Fang <peter.fang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2021-03-24 11:38:14 +08:00
Shuo A Liu
c11c07e0fe hv: keylocker: Support Key Locker feature for guest VM
KeyLocker is a new security feature available in new Intel CPUs that
protects data-encryption keys for the Advanced Encryption Standard (AES)
algorithm. These keys are more valuable than the data they guard: if stolen
once, a key can be used repeatedly, even on another system and even
after the vulnerability is closed.

It also introduces a CPU-internal wrapping key (IWKey), which is a key-
encryption key used to wrap AES keys into handles. The IWKey itself is
inaccessible to software, and randomizing its value at boot time helps
keep it unpredictable.

Keylocker usage:
 - The new “ENCODEKEY” instruction takes the original key as input and returns a HANDLE
   encrypted with an internal wrapping key (IWKey, initialized by the “LOADIWKEY” instruction)
 - Software can then delete the original key from memory
 - Early in boot, software is less likely to contain a vulnerability that allows
   stealing the original key
 - Later encrypt/decrypt operations can use the HANDLE through the new AES KeyLocker
   instructions
 - Note:
      * Software can use the original key without knowing it (by using the HANDLE)
      * A HANDLE cannot be used on other systems or after a warm/cold reset
      * The IWKey cannot be read from the CPU after it is loaded (this is the
        nature of the feature), and only one copy of the IWKey exists inside the CPU.

The virtualization implementation of Key Locker on ACRN is:
 - Each vCPU has a 'struct iwkey' to store its IWKey in struct
   acrn_vcpu_arch.
 - At initialization, every vCPU is created with a random IWKey.
 - The hypervisor traps the execution of LOADIWKEY (via the 'LOADIWKEY exiting'
   VM-execution control) to capture and save the IWKey whenever the guest
   sets a new one. Randomization of LOADIWKEY is not supported (CPUID is
   emulated to disable it), because the hypervisor cannot capture and save a
   hardware-random IWKey. From the keylocker spec:
   "Note that a VMM may wish to enumerate no support for HW random IWKeys
   to the guest (i.e. enumerate CPUID.19H:ECX[1] as 0) as such IWKeys
   cannot be easily context switched. A guest ENCODEKEY will return the
   type of IWKey used (IWKey.KeySource) and thus will notice if a VMM
   virtualized a HW random IWKey with a SW specified IWKey."
 - In context_switch_in() of each vCPU, the hypervisor loads that vCPU's
   IWKey into the pCPU with the LOADIWKEY instruction.
 - There is an assumption that ACRN hypervisor will never use the
   KeyLocker feature itself.

This patch implements the vCPU's IWKey management and the next patch
implements host context save/restore IWKey logic.

Tracked-On: #5695
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-02-03 13:54:45 +08:00
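A sketch of the per-vCPU IWKey bookkeeping described above. The struct layout mirrors the LOADIWKEY operands (an integrity key plus a 256-bit encryption key); asm_loadiwkey() and fill_random() are assumed wrappers, and the struct names are simplified, not ACRN's:

    #include <stdint.h>

    struct iwkey {
        uint8_t integrity_key[16];   /* taken from XMM0 by LOADIWKEY */
        uint8_t encryption_key[32];  /* taken from XMM1/XMM2 */
    };

    struct vcpu_arch_sketch {
        struct iwkey iwkey;          /* captured from the guest's trapped LOADIWKEY */
    };

    extern void asm_loadiwkey(const struct iwkey *key);    /* assumed instruction wrapper */
    extern void fill_random(void *buf, unsigned int len);  /* assumed RNG helper */

    /* vCPU creation: start with a random IWKey so guests never share one. */
    static void vcpu_init_iwkey(struct vcpu_arch_sketch *arch)
    {
        fill_random(&arch->iwkey, sizeof(arch->iwkey));
    }

    /* "LOADIWKEY exiting" VM exit: capture the key the guest tried to load. */
    static void handle_loadiwkey_vmexit(struct vcpu_arch_sketch *arch,
                                        const struct iwkey *guest_key)
    {
        arch->iwkey = *guest_key;
        asm_loadiwkey(&arch->iwkey);   /* make it effective immediately */
    }

    /* context_switch_in(): reload this vCPU's IWKey into the pCPU. */
    static void vcpu_load_iwkey(const struct vcpu_arch_sketch *arch)
    {
        asm_loadiwkey(&arch->iwkey);
    }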
Shuo A Liu
4483e93bd1 hv: keylocker: Enable the tertiary VM-execution controls
In order for a VMM to capture the IWKey values of guests, processors
that support Key Locker also support a new "LOADIWKEY exiting"
VM-execution control in bit 0 of the tertiary processor-based
VM-execution controls.

This patch enables the tertiary VM-execution controls.

Tracked-On: #5695
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-02-03 13:54:45 +08:00
Jie Deng
27d5711b62 hv: add a cache register for VMX_PROC_VM_EXEC_CONTROLS
This patch adds a cache register for VMX_PROC_VM_EXEC_CONTROLS
to avoid the frequent VMCS access.

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
2021-01-08 17:37:20 +08:00
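A sketch of the cached-control-register pattern in the entry above: keep a shadow copy of VMX_PROC_VM_EXEC_CONTROLS next to the vCPU and only touch the VMCS when the value actually changes. Helper names are placeholders:

    #include <stdint.h>
    #include <stdbool.h>

    #define VMX_PROC_VM_EXEC_CONTROLS 0x00004002U  /* primary proc-based controls */

    extern uint32_t vmcs_read32(uint32_t field);
    extern void vmcs_write32(uint32_t field, uint32_t v);

    struct proc_ctls_cache {
        uint32_t value;
        bool     valid;
    };

    static uint32_t proc_ctls_read(struct proc_ctls_cache *c)
    {
        if (!c->valid) {
            c->value = vmcs_read32(VMX_PROC_VM_EXEC_CONTROLS);  /* one VMREAD, then cached */
            c->valid = true;
        }
        return c->value;
    }

    static void proc_ctls_write(struct proc_ctls_cache *c, uint32_t v)
    {
        if (!c->valid || (c->value != v)) {
            vmcs_write32(VMX_PROC_VM_EXEC_CONTROLS, v);  /* write through to the VMCS */
            c->value = v;
            c->valid = true;
        }
    }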
Jie Deng
f291997811 hv: split-lock: using MTF instead of TF(#DB)
The TF is visible to the guest and may be modified by it, so it is not a
safe method for emulating split-lock. MTF, by contrast, is specifically
designed for single-stepping in Intel's VT-x hardware virtualization and
is invisible to the guest. Use MTF to single-step the vCPU during the
emulation of split lock.

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
2021-01-08 17:37:20 +08:00
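A sketch of MTF-based single stepping: set the "monitor trap flag" bit (bit 27 of the primary processor-based controls, per the SDM) for one instruction, then clear it when the MTF VM exit arrives. Accessor names are placeholders:

    #include <stdint.h>

    #define VMX_PROC_VM_EXEC_CONTROLS 0x00004002U
    #define VMX_PROCBASED_CTLS_MTF    (1U << 27)   /* monitor trap flag (SDM) */

    extern uint32_t vmcs_read32(uint32_t field);
    extern void vmcs_write32(uint32_t field, uint32_t v);

    static void mtf_single_step_begin(void)
    {
        uint32_t ctls = vmcs_read32(VMX_PROC_VM_EXEC_CONTROLS);
        vmcs_write32(VMX_PROC_VM_EXEC_CONTROLS, ctls | VMX_PROCBASED_CTLS_MTF);
    }

    /* called from the MTF VM-exit handler after the emulated instruction has run */
    static void mtf_single_step_end(void)
    {
        uint32_t ctls = vmcs_read32(VMX_PROC_VM_EXEC_CONTROLS);
        vmcs_write32(VMX_PROC_VM_EXEC_CONTROLS, ctls & ~VMX_PROCBASED_CTLS_MTF);
    }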
Jie Deng
47e193a7bb hv: Add split-lock emulation for LOCK prefix instruction
This patch adds the split-lock emulation.
If an #AC is caused by an instruction with a LOCK prefix, emulate it;
otherwise, inject it back as before.

 1. Kick other vcpus of the guest to stop execution
    and set the TF flag to have #DB if the guest has more
    than one vcpu.

 2. Skip over the LOCK prefix and resume the current
    vcpu back to guest for execution.

 3. Notify the other vcpus to resume execution at the end
    of handling the #DB, since the LOCK prefix
    instruction emulation has completed.

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-12-31 11:12:33 +08:00
Yonghua Huang
442fc30117 hv: refine virtualization flow for cr0 and cr4
- The current code to virtualize CR0/CR4 is not
   well designed and is hard to read.
   This patch reshuffles the logic to make it clear
   and classifies those bits into PASSTHRU,
   TRAP_AND_PASSTHRU, TRAP_AND_EMULATE and reserved bits.

Tracked-On: #5586
Signed-off-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2020-12-18 11:21:22 +08:00
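A sketch of the bit-classification idea: each CR0/CR4 bit falls into exactly one class, and the write handler dispatches on the class masks. The specific bit assignments and helpers below are illustrative, not ACRN's actual policy:

    #include <stdint.h>

    #define CR0_PE (1UL << 0)
    #define CR0_WP (1UL << 16)
    #define CR0_CD (1UL << 30)
    #define CR0_PG (1UL << 31)

    /* illustrative classification only */
    static const uint64_t cr0_passthru_bits      = CR0_WP;          /* guest-owned, never traps */
    static const uint64_t cr0_trap_passthru_bits = CR0_PE | CR0_PG; /* trap, then mirror to HW */
    static const uint64_t cr0_trap_emulate_bits  = CR0_CD;          /* trap and fully emulate */

    extern void vmcs_write_guest_cr0(uint64_t v);       /* assumed accessor */
    extern void emulate_cr0_bits(uint64_t new_bits);    /* assumed emulation hook */
    extern void inject_gp(void);

    static void write_cr0_sketch(uint64_t old_val, uint64_t new_val)
    {
        uint64_t changed = old_val ^ new_val;
        uint64_t known = cr0_passthru_bits | cr0_trap_passthru_bits | cr0_trap_emulate_bits;

        if ((changed & ~known) != 0UL) {
            inject_gp();                 /* a reserved/unsupported bit was flipped */
            return;
        }
        if ((changed & cr0_trap_emulate_bits) != 0UL) {
            emulate_cr0_bits(new_val & cr0_trap_emulate_bits);
        }
        /* PASSTHRU and TRAP_AND_PASSTHRU bits are reflected into the guest CR0 view */
        vmcs_write_guest_cr0(new_val);
    }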
Conghui Chen
906284eec8 hv: remove unnecessary debug symbols
remove unnecessary debug symbols.

Tracked-On: #4956
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
2020-06-18 14:05:56 +08:00
Conghui Chen
53f74f18ac hv: remove repeated assignment
remove repeated assignment for vmcs_pa.

Tracked-On: #4956
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
2020-06-18 14:05:56 +08:00
dongshen
14fa9c563c hv: define posted interrupt IRQs/vectors
This is a preparation patch for adding support for VT-d PI
related vCPU scheduling.

ACRN does not support vCPU migration; one vCPU always runs on
the same pCPU, so the PI's ndst is never changed after startup.

vCPUs of the same VM won’t share a pCPU, so the maximum possible number
of vCPUs that can run on a pCPU is CONFIG_MAX_VM_NUM.

Allocate unique Activation Notification Vectors (ANV) for each vCPU
that belongs to the same pCPU, the ANVs need only be unique within each
pCPU, not across all vCPUs. This reduces # of pre-allocated ANVs for
posted interrupts to CONFIG_MAX_VM_NUM, and enables ACRN to avoid
switching between active and wake-up vector values in the posted
interrupt descriptor on vCPU scheduling state changes.

A total of CONFIG_MAX_VM_NUM consecutive IRQs/vectors are reserved
for posted interrupts use.

The code first initializes vcpu->arch.pid.control.bits.nv dynamically
(to be added in a subsequent patch); other code shall use
vcpu->arch.pid.control.bits.nv instead of the hard-coded notification vectors.

Rename some functions:
  apicv_post_intr --> apicv_trigger_pi_anv
  posted_intr_notification --> handle_pi_notification
  setup_posted_intr_notification --> setup_pi_notification

Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
2020-04-15 13:47:22 +08:00
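A sketch of the per-pCPU ANV allocation scheme described above: each pCPU hands out vectors from a reserved block of CONFIG_MAX_VM_NUM consecutive vectors, so ANVs only need to be unique among the vCPUs sharing that pCPU. The constants and base vector are example values, not ACRN's:

    #include <stdint.h>

    #define CONFIG_MAX_VM_NUM     8U      /* example value */
    #define POSTED_INTR_VECTOR_0  0xE3U   /* illustrative base of the reserved block */
    #define MAX_PCPU_NUM          8U

    /* how many vCPUs on each pCPU have already been assigned an ANV */
    static uint32_t anv_used[MAX_PCPU_NUM];

    static uint32_t alloc_anv(uint32_t pcpu_id)
    {
        uint32_t idx = anv_used[pcpu_id]++;   /* at most CONFIG_MAX_VM_NUM vCPUs per pCPU */
        return POSTED_INTR_VECTOR_0 + idx;    /* unique within this pCPU only */
    }

    /* usage sketch: vcpu->arch.pid.control.bits.nv = alloc_anv(pcpu_id_of(vcpu)); */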
Li Fei1
74edf2e54b hv: vmcs: remove vmcs field check for a vcpu
The VMCS field is an embedded array for a vCPU. So there's no need to check for
NULL before use.

Tracked-On: #3813
Signed-off-by: Li Fei1 <fei1.li@intel.com>
2020-04-09 09:40:26 +08:00
dongshen
8f732f2809 hv: move pi_desc related code from vlapic.h/vlapic.c to vmx.h/vmx.c/vcpu.h
The posted interrupt descriptor is more of a vmx/vmcs concept than a vlapic
concept. struct acrn_vcpu_arch stores the vmx/vmcs info, so put struct pi_desc
in struct acrn_vcpu_arch.

Remove the function apicv_get_pir_desc_paddr()

A few coding style/typo fixes

Tracked-On: #4506
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@Intel.com>
2020-03-31 10:30:30 +08:00
Yonghua Huang
fd4775d044 hv: rename VECTOR_XXX and XXX_IRQ Macros
1. Align the coding style for these MACROs
2. Align the values of fixed VECTORs

Tracked-On: #4348
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2020-01-14 10:21:23 +08:00
Yonghua Huang
b90862921e hv: rename the ACRN_DBG_XXX
Rename the macro 'ACRN_DBG_XXX' to 'DBG_LEVEL_XXX'.

Tracked-On: #4348
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2020-01-14 10:21:23 +08:00
Shuo A Liu
b59e5a870a hv: Disable HLT and PAUSE-loop exiting emulation in lapic passthrough
In lapic passthrough mode, HLT/PAUSE execution should be passed through
too. This patch disables their emulation when switching to lapic passthrough mode.

Tracked-On: #4329
Tested-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-13 10:16:30 +08:00
Shuo A Liu
4303ccb1a0 hv: HLT emulation in hypervisor
HLT emulation is important for maximizing CPU resource utilization. A vcpu
executing HLT is idle and can give up its CPU proactively. Thus, we
pause the vcpu thread during HLT emulation and resume it when an event happens.

When a vcpu enters HLT, its vcpu thread sleeps, but the vcpu state is
still 'Running'.

VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE
=====    =======    =======    =========    ==========
  0         0          0       PRIMARY      Running
  0         1          1       SECONDARY    Running

Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-07 11:23:32 +08:00
Shuo A Liu
4115dd6241 hv: PAUSE-loop exiting support in hypervisor
With cpu sharing enabled, PAUSE-loop exiting helps a vcpu
release its pcpu proactively, which is good for performance.

VMX_PLE_GAP: upper bound on the amount of time between two successive
executions of PAUSE in a loop.
VMX_PLE_WINDOW: upper bound on the amount of time a guest is allowed to
execute in a PAUSE loop

Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-07 11:23:32 +08:00
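A sketch of programming the PAUSE-loop exiting knobs described above; the gap/window values are placeholders (real values are tuned per platform), while the VMCS field encodings are from the SDM:

    #include <stdint.h>

    #define VMX_PLE_GAP     0x00004020U   /* PLE_Gap VMCS field */
    #define VMX_PLE_WINDOW  0x00004022U   /* PLE_Window VMCS field */

    extern void vmcs_write32(uint32_t field, uint32_t v);

    static void setup_ple(void)
    {
        /* a PAUSE within PLE_Gap TSC ticks of the previous one counts as "in a loop";
         * once the loop has run for PLE_Window ticks, the guest VM-exits and yields. */
        vmcs_write32(VMX_PLE_GAP, 128U);       /* placeholder value */
        vmcs_write32(VMX_PLE_WINDOW, 4096U);   /* placeholder value */
    }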
Binbin Wu
4ae350a091 hv: vmcs: pass-through instruction INVPCID to VM
According to SDM Vol.3 Section 25.3, behavior of the INVPCID
instruction is determined first by the setting of the “enable
INVPCID” VM-execution control:
- If the “enable INVPCID” VM-execution control is 0, INVPCID
  causes an invalid-opcode exception (#UD).
- If the “enable INVPCID” VM-execution control is 1, treatment
  is based on the setting of the “INVLPG exiting” VM-execution
  control:
  * If the “INVLPG exiting” VM-execution control is 0, INVPCID
    operates normally.
  * If the “INVLPG exiting” VM-execution control is 1, INVPCID
    causes a VM exit.

In the current implementation, the hypervisor doesn't set the “INVLPG exiting”
VM-execution control, so this patch sets the “enable INVPCID” VM-execution
control to 1 when the instruction is supported by the physical CPU.
If INVPCID is supported by the physical CPU, INVPCID will not cause a VM
exit in the VM.
If INVPCID is not supported by the physical CPU, INVPCID causes a #UD
in the VM.
When INVPCID is passed through to a VM, according to SDM Vol.3 Section 28.3.3.1,
the INVPCID instruction invalidates linear mappings and combined mappings,
and it is required to do so only for the current VPID.
The HV assigns a unique VPID to each vCPU, so if a guest uses a wrong PCID,
it does not affect other vCPUs.

Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-02 10:47:34 +08:00
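A sketch of the control-bit decision described above: set "enable INVPCID" (bit 12 of the secondary controls, per the SDM) only when the physical CPU advertises INVPCID in CPUID.(EAX=7,ECX=0):EBX[10]. Helper names are placeholders:

    #include <stdint.h>
    #include <stdbool.h>

    #define VMX_PROCBASED_CTLS2     0x0000401EU
    #define VMX_PROCBASED2_INVPCID  (1U << 12)

    extern bool pcpu_has_invpcid(void);   /* e.g. cached CPUID leaf 7, EBX bit 10 */
    extern uint32_t vmcs_read32(uint32_t field);
    extern void vmcs_write32(uint32_t field, uint32_t v);

    static void setup_invpcid_passthrough(void)
    {
        uint32_t val = vmcs_read32(VMX_PROCBASED_CTLS2);

        if (pcpu_has_invpcid()) {
            val |= VMX_PROCBASED2_INVPCID;    /* INVPCID runs natively in the guest */
        } else {
            val &= ~VMX_PROCBASED2_INVPCID;   /* guest INVPCID raises #UD */
        }
        vmcs_write32(VMX_PROCBASED_CTLS2, val);
    }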
Binbin Wu
96331462b7 hv: vmcs: remove redundant check on vpid
ACRN relies on the capability of VPID to avoid EPT flushes during VMX transitions.
This capability is checked as a must-have hardware capability; otherwise, ACRN
refuses to boot.
Also, the current code already makes sure each vCPU's VPID is valid.

So there is no need to check the validity of a vCPU's VPID, and VPID is enabled for each vCPU by default.

Tracked-On: #4296
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-02 10:47:34 +08:00
Kaige Fu
6d1f63aef0 HV: Use NMI to replace INIT signal for lapic-pt VMs S5
We have implemented a new notification method using NMI.
So replace the INIT notification method with the NMI one.
Then we can remove INIT notification related code later.

Tracked-On: #3886
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
Kaige Fu
a13909cedc HV: Use NMI-window exiting to address req missing issue
There is a window where we may miss the current request during the
notification period, when the workflow is as follows:

      CPUx +                   + CPUr
           |                   |
           |                   +--+
           |                   |  | Handle pending req
           |                   <--+
           +--+                |
           |  | Set req flag   |
           <--+                |
           +------------------>---+
           |     Send NMI      |  | Handle NMI
           |                   <--+
           |                   |
           |                   |
           |                   +--> vCPU enter
           |                   |
           +                   +

So, this patch enables NMI-window exiting to trigger the next VM exit
once there is no "virtual-NMI blocking" after the vCPU enters VMX non-root
mode. Then we can process the pending request on time.

Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
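A sketch of closing the race described above: after queuing a request for the target vCPU, turn on "NMI-window exiting" (bit 22 of the primary controls, per the SDM) so the vCPU exits again as soon as virtual-NMI blocking ends; the exit handler turns it back off once pending requests are drained. Helpers are placeholders:

    #include <stdint.h>

    #define VMX_PROC_VM_EXEC_CONTROLS  0x00004002U
    #define VMX_PROCBASED_NMI_WINDOW   (1U << 22)

    extern uint32_t vmcs_read32(uint32_t field);
    extern void vmcs_write32(uint32_t field, uint32_t v);
    extern void handle_pending_requests(void);

    static void request_nmi_window_exit(void)
    {
        uint32_t ctls = vmcs_read32(VMX_PROC_VM_EXEC_CONTROLS);
        vmcs_write32(VMX_PROC_VM_EXEC_CONTROLS, ctls | VMX_PROCBASED_NMI_WINDOW);
    }

    static void nmi_window_vmexit_handler(void)
    {
        uint32_t ctls = vmcs_read32(VMX_PROC_VM_EXEC_CONTROLS);

        vmcs_write32(VMX_PROC_VM_EXEC_CONTROLS, ctls & ~VMX_PROCBASED_NMI_WINDOW);
        handle_pending_requests();   /* the request that arrived after the last check */
    }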
Kaige Fu
72f7f69c47 HV: Use NMI to kick lapic-pt vCPU's thread
The ACRN hypervisor needs to kick a vCPU out of VMX non-root mode to do some
operations in the hypervisor, such as interrupt/exception injection, EPT
flush, etc. For non-lapic-pt vCPUs, we can use an IPI to do so, but that
doesn't work for lapic-pt vCPUs because the IPI is injected into the VM
directly without a vmexit.

Without a way to kick the vCPU out of VMX non-root mode to handle pending
requests on time, fatal errors may be triggered.
1). Certain operations may not be carried out on time, which may further
    lead to fatal errors. Taking the EPT flush request as an example, if we
    don't flush the EPT on time and the guest accesses the out-of-date EPT,
    a fatal error happens.
2). ACRN currently sends an IPI with vector 0xF0 to the target vCPU to kick it
    out of VMX non-root mode when it wants to do some operations on that vCPU.
    However, this doesn't work for lapic-pt vCPUs. The IPI is delivered
    to the guest directly without a vmexit, and the guest receives an unexpected
    interrupt. Consequently, if the guest can't handle this interrupt properly,
    a fatal error may happen.

An NMI can be used as the notification signal to kick the vCPU out of VMX
non-root mode for lapic-pt vCPUs, so this patch uses an NMI as the notification
signal to address the above issues for lapic-pt vCPUs.

Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
Conghui Chen
d48da2af3a hv: bugfix for debug commands with smp_call
With cpu sharing enabled, there can be more than one vcpu on a pcpu, so the
smp_call handler should switch to the target vcpu's vmcs before
getting the info.

dump_vcpu_reg and dump_guest_mem must run on the correct vmcs; otherwise,
there will be a #GP error.

Renaming:
vcpu_dumpreg -> dump_vcpu_reg
switch_vmcs -> load_vmcs

Tracked-On: #4178
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-05 11:19:35 +08:00
Yonghua Huang
450d2cf2e9 hv: trap RDPMC instruction execution from any guest
The PMU is hidden from all guests; a #UD is expected when a guest
tries to execute the 'rdpmc' instruction.

This patch sets 'RDPMC exiting' in the processor-based
VM-execution controls.

Tracked-On: #3453
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-03 14:14:27 +08:00
Shuo A Liu
3cb32bb6e3 hv: make init_vmcs as a event of VCPU
After changing init_vmcs to an smp_call approach and doing it before
launch_vcpu, it works with the noop scheduler. On a real sharing
scheduler, however, it has a problem:

   pcpu0                  pcpu1            pcpu1
 vmBvcpu0                vmAvcpu1         vmBvcpu1
                         vmentry
init_vmcs(vmBvcpu1) vmexit->do_init_vmcs
                    corrupt current vmcs
                        vmentry fail
launch_vcpu(vmBvcpu1)

This patch marks an event flag when VMCS init is requested for a specific vcpu. When
that vcpu is running and checks its pending events, it does init_vmcs first.

Tracked-On: #4178
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-02 16:20:43 +08:00
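A sketch of the event-flag approach above: requesting VMCS init just sets a per-vCPU flag, and the vCPU thread itself performs init_vmcs when it next checks pending events, so no other pCPU ever clobbers the current VMCS. Types and atomic helpers are placeholders:

    #include <stdint.h>

    #define VCPU_EVENT_INIT_VMCS  (1UL << 0)

    struct vcpu_sketch {
        volatile uint64_t pending_events;   /* set by requesters, cleared by the owner thread */
    };

    extern void atomic_set_bits(volatile uint64_t *word, uint64_t bits);
    extern uint64_t atomic_fetch_and_clear(volatile uint64_t *word, uint64_t bits);
    extern void init_vmcs(struct vcpu_sketch *v);

    /* called from any pCPU that wants the target vCPU's VMCS (re)initialized */
    static void request_init_vmcs(struct vcpu_sketch *v)
    {
        atomic_set_bits(&v->pending_events, VCPU_EVENT_INIT_VMCS);
    }

    /* called by the vCPU thread on its own pCPU before VM entry */
    static void check_pending_events(struct vcpu_sketch *v)
    {
        uint64_t pending = atomic_fetch_and_clear(&v->pending_events, VCPU_EVENT_INIT_VMCS);

        if ((pending & VCPU_EVENT_INIT_VMCS) != 0UL) {
            init_vmcs(v);    /* runs first, on the owning pCPU, so nothing is corrupted */
        }
    }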
Yonghua Huang
e51386fe04 hv: refine 'uint64_t' string print format in x86 module
Use the "0x%lx" format string for 'uint64_t' type values,
instead of "0x%llx".

Tracked-On: #4020
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2019-11-09 11:42:38 +08:00
Kaige Fu
c22f899a5e HV: Fix poweroff issue of hard RTVM
We should use an INIT signal to notify the vcpu threads when
powering off the hard RTVM. To achieve this, we should set
vcpu->thread_obj.notify_mode to SCHED_NOTIFY_INIT.

Patch (27163df9 hv: sched: add sleep/wake for thread object)
tries to set the notify_mode according to `is_lapic_pt_enabled(vcpu)`
in function prepare_vcpu. But at that point, is_lapic_pt_enabled(vcpu)
always returns false. Consequently, it sets notify_mode
to SCHED_NOTIFY_IPI, which then leads to a failure to power off
the hard RTVM.

This patch fixes it by:
  - Initialize the notify_mode as SCHED_NOTIFY_IPI in prepare_vcpu.
  - Set notify_mode to SCHED_NOTIFY_INIT once the guest tries to
    enable x2apic mode of the passthrough lapic.

Tracked-On: #3975
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-11-04 10:28:16 +08:00
Shuo A Liu
dadcdcefa0 hv: sched: support vcpu context switch on one pcpu
To support cpu sharing, multiple vcpus can run on the same pcpu, so we need
to do the necessary vcpu context switch. This patch adds the actions below
to the context switch.
  1) fxsave/fxrstor;
  2) save/restore MSRs: MSR_IA32_STAR, MSR_IA32_LSTAR,
	MSR_IA32_FMASK, MSR_IA32_KERNEL_GS_BASE;
  3) switch vmcs.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
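A sketch of the three context-switch actions listed above, with assumed structures and wrappers (the fxsave/fxrstor/MSR/VMCS helpers are placeholders); the MSR numbers are the architectural ones:

    #include <stdint.h>

    #define MSR_IA32_STAR            0xC0000081U
    #define MSR_IA32_LSTAR           0xC0000082U
    #define MSR_IA32_FMASK           0xC0000084U
    #define MSR_IA32_KERNEL_GS_BASE  0xC0000102U

    struct vcpu_ctx_sketch {
        uint8_t  fxstore[512] __attribute__((aligned(16)));  /* FXSAVE area */
        uint64_t star, lstar, fmask, kernel_gs_base;
        void    *vmcs;
    };

    extern void asm_fxsave(void *area);
    extern void asm_fxrstor(const void *area);
    extern uint64_t msr_read(uint32_t msr);
    extern void msr_write(uint32_t msr, uint64_t v);
    extern void load_va_vmcs(const void *vmcs);

    static void vcpu_switch_out(struct vcpu_ctx_sketch *c)
    {
        asm_fxsave(c->fxstore);                              /* 1) FPU/SSE state */
        c->star  = msr_read(MSR_IA32_STAR);                  /* 2) syscall MSRs  */
        c->lstar = msr_read(MSR_IA32_LSTAR);
        c->fmask = msr_read(MSR_IA32_FMASK);
        c->kernel_gs_base = msr_read(MSR_IA32_KERNEL_GS_BASE);
    }

    static void vcpu_switch_in(struct vcpu_ctx_sketch *c)
    {
        asm_fxrstor(c->fxstore);
        msr_write(MSR_IA32_STAR, c->star);
        msr_write(MSR_IA32_LSTAR, c->lstar);
        msr_write(MSR_IA32_FMASK, c->fmask);
        msr_write(MSR_IA32_KERNEL_GS_BASE, c->kernel_gs_base);
        load_va_vmcs(c->vmcs);                               /* 3) make this vCPU's VMCS current */
    }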
Shuo A Liu
891e46453d hv: sched: move pcpu_id from acrn_vcpu to thread_object
With cpu sharing enabled, we map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, we'd better hide the
pcpu_id in acrn_vcpu and move it to thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
f85106d1ed hv: Do not reset vcpu thread's stack when reset_vcpu
The vcpu thread's stack shouldn't be reset as part of reset_vcpu.
There is also a bug here:
while vcpu B's thread sets vcpu->running to false, another vcpu A thread
will treat vcpu B as paused even though it has not been switched out
completely; reset_vcpu then resets vcpu B's thread stack and
corrupts its running context.

This patch removes the vcpu thread's stack reset from reset_vcpu.
With this change, we need to do init_vmcs after the vcpu startup address is
settled and before the vcpu is scheduled in. And switch_to_idle() is no longer
needed, as the S3 thread's stack will not be reset.

Tracked-On: #3813
Signed-off-by: Fengwei Yin <fengwei.yin@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-10-23 12:47:08 +08:00
Mingqiang Chi
de0a5a48d6 hv:remove some unnecessary includes
--remove unnecessary includes
--remove unnecessary forward-declaration of 'struct vhm_request'

Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-10-15 14:40:39 +08:00
Mingqiang Chi
60adef33d3 hv:move down structures run_context and ext_context
The structures (run_context & ext_context) are currently defined
in vcpu.h, and they are used in the lower-layer modules (wakeup.S).
This patch moves the structures down from vcpu.h to cpu.h
to avoid a reversed dependency.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 14:51:36 +08:00
Binbin Wu
cd1ae7a89e hv: cat: isolate hypervisor from rtvm
Currently, the CLOS id of the cpu cores in VMX root mode is the same as in non-root mode.
For an RTVM, if the hypervisor shares the same CLOS id with non-root mode, the cacheline may
be polluted by hypervisor code execution on vmexit.

This patch adds hv_clos in vm_configurations.c.
The hypervisor initializes the CLOS setting according to hv_clos during physical cpu core initialization.
For an RTVM, MSR auto load/store areas are used to switch between the different settings for VMX
root and non-root mode.

Tracked-On: #2462
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-09-05 09:59:13 +08:00
Shiqing Gao
062fe19800 hv: move vmx_rdmsr_pat/vmx_wrmsr_pat from vmcs.c to vmsr.c
This patch moves vmx_rdmsr_pat/vmx_wrmsr_pat from vmcs.c to vmsr.c,
so that these two functions would become internal functions inside
vmsr.c.
This approach improves the modularity.

v1 -> v2:
 * remove 'vmx_rdmsr_pat'
 * rename 'vmx_wrmsr_pat' with 'write_pat_msr'

Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-08-14 10:51:35 +08:00
Kaige Fu
accdadce98 HV: Enable vART support by intercepting TSC_ADJUST MSR
The policy of vART is that software running natively can also run in a
VM. On the native side, the relationship between the
ART hardware and the TSC is:

  pTSC = (pART * M) / N + pAdjust

The vART solution is:
  - Present the ART capability to guest through CPUID leaf
    15H for M/N which identical to the physical values.
  - PT devices see the pART (vART = pART).
  - Guest expect: vTSC = vART * M / N + vAdjust.
  - VMCS.OFFSET = vTSC - pTSC = vAdjust - pAdjust.

So to support vART, we should do the following:
  1. If vAdjust and vTSC are changed by the guest, we should change
     VMCS.OFFSET accordingly.
  2. Make the assumption that pAdjust is never touched by ACRN.

For #1, commit "a958fea hv: emulate IA32_TSC_ADJUST MSR" has implemented
it. For #2, ACRN never touches pAdjust.

--
 v2 -> v3:
   - Add comments when handling guest TSC_ADJUST and TSC accesses.
   - Initialize the VMCS.OFFSET = vAdjust - pAdjust.

 v1 -> v2
   Refine commit message to describe the whole vART solution.

Tracked-On: #3501
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-31 13:29:51 +08:00
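A small worked sketch of the vART arithmetic above: because vART == pART and both sides use the same M/N, keeping VMCS.OFFSET equal to (vAdjust - pAdjust) makes vTSC = vART * M / N + vAdjust hold automatically. The helpers are assumed; the MSR and VMCS field numbers are architectural:

    #include <stdint.h>

    #define MSR_IA32_TSC_ADJUST  0x0000003BU
    #define VMX_TSC_OFFSET_FULL  0x00002010U   /* VMCS TSC-offset field */

    extern uint64_t msr_read(uint32_t msr);
    extern void vmcs_write64(uint32_t field, uint64_t v);

    /* Guest wrote IA32_TSC_ADJUST: update the stored vAdjust and the TSC offset. */
    static void handle_guest_wrmsr_tsc_adjust(uint64_t *vadjust, uint64_t new_vadjust)
    {
        uint64_t padjust = msr_read(MSR_IA32_TSC_ADJUST);   /* assumption: never changed by the HV */

        *vadjust = new_vadjust;
        vmcs_write64(VMX_TSC_OFFSET_FULL, new_vadjust - padjust);
    }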
Sainath Grandhi
536c69b9ff hv: distinguish between LAPIC_PASSTHROUGH configured vs enabled
ACRN supports LAPIC emulation for guests using x86 APICv. When the guest OS/BIOS
switches from xAPIC to x2APIC mode of operation, ACRN also supports switching
from LAPIC emulation to LAPIC passthrough to the guest. The user/developer needs to
configure GUEST_FLAG_LAPIC_PASSTHROUGH in guest_flags in the corresponding
VM's config for ACRN to enable LAPIC passthrough.

This patch does the following

1) Fixes a bug in the abovementioned feature. For a guest that is
configured with GUEST_FLAG_LAPIC_PASSTHROUGH, during the period the guest is
using the xAPIC mode of the LAPIC, virtual interrupts are not delivered. This can
manifest as a guest hang when it does not receive virtual timer interrupts.

2) ACRN exposes the physical topology via CPUID leaf 0xb to LAPIC PT VMs. This patch
removes that condition and exposes the virtual topology via CPUID leaf 0xb.

Tracked-On: #3136
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-05-23 11:15:31 +08:00
Binbin Wu
a581f50600 hv: vmsr: enable msr ia32_misc_enable emulation
Add MSR_IA32_MISC_ENABLE to emulated_guest_msrs to enable the emulation.
Init MSR_IA32_MISC_ENABLE for guest.

Tracked-On: #2834
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-05-09 16:35:15 +08:00
Kaige Fu
a85d11ca7a HV: Add prefix 'p' before 'cpu' to physical cpu related functions
This patch adds prefix 'p' before 'cpu' to physical cpu related functions.
And there is no code logic change.

Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-04-24 10:50:28 +08:00
Li, Fei1
2c13ac7400 hv: vmcs: minor fix about APICv feature setting
1) Shouldn't try to set APIC-register virtualization if the physical platform doesn't
support APICv advanced mode.
2) Remove all APICv feature VMCS settings when the LAPIC is passed through to the guest.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-04-23 09:47:36 +08:00
Li, Fei1
f769f7457b hv: vlapic: add combined constraint for APICv
Add two functions to combine the constraints for APICv:
is_apicv_basic_feature_supported: checks whether the physical platform supports
"Use TPR shadow", "Virtualize APIC accesses" and "Virtualize x2APIC mode".
is_apicv_advanced_feature_supported: checks whether the physical platform supports
"APIC-register virtualization", "Virtual-interrupt delivery" and
"Process posted interrupts".

If the physical platform only supports the APICv basic features, enable "Use TPR shadow"
and "Virtualize APIC accesses" for xAPIC mode, and enable "Use TPR shadow" and
"Virtualize x2APIC mode" for x2APIC mode. Otherwise, if the physical platform supports
the APICv advanced features, enable them for both xAPIC mode and x2APIC mode.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-03-12 20:37:06 +08:00
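A sketch of the two combined checks named above; the bit positions follow the SDM, but the single capability-query helper is a simplification (real code reads the pin-based, primary and secondary capability MSRs separately), and all names are placeholders:

    #include <stdbool.h>
    #include <stdint.h>

    /* secondary processor-based VM-execution control bits (SDM) */
    #define CTL2_VIRT_APIC_ACCESSES  (1U << 0)
    #define CTL2_VIRT_X2APIC_MODE    (1U << 4)
    #define CTL2_APIC_REG_VIRT       (1U << 8)
    #define CTL2_VIRT_INTR_DELIVERY  (1U << 9)
    /* primary control bit 21 and pin-based bit 7, folded into one query here for brevity */
    #define CTL_USE_TPR_SHADOW       (1U << 21)
    #define PIN_POSTED_INTERRUPTS    (1U << 7)

    extern bool vmx_ctrl_supported(uint32_t ctl_bit);  /* assumed capability-MSR query */

    static bool is_apicv_basic_feature_supported(void)
    {
        return vmx_ctrl_supported(CTL_USE_TPR_SHADOW) &&
               vmx_ctrl_supported(CTL2_VIRT_APIC_ACCESSES) &&
               vmx_ctrl_supported(CTL2_VIRT_X2APIC_MODE);
    }

    static bool is_apicv_advanced_feature_supported(void)
    {
        return vmx_ctrl_supported(CTL2_APIC_REG_VIRT) &&
               vmx_ctrl_supported(CTL2_VIRT_INTR_DELIVERY) &&
               vmx_ctrl_supported(PIN_POSTED_INTERRUPTS);
    }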
Mingqiang Chi
b24a8a0f59 hv:cleanup header file for guest folder
Clean up arch/x86/guest: only include necessary
header files and do not include hypervisor.h.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
	modified:   arch/x86/guest/assign.c
	modified:   arch/x86/guest/ept.c
	modified:   arch/x86/guest/guest_memory.c
	modified:   arch/x86/guest/instr_emul.c
	modified:   arch/x86/guest/io_emul.c
	modified:   arch/x86/guest/pm.c
	modified:   arch/x86/guest/trusty.c
	modified:   arch/x86/guest/ucode.c
	modified:   arch/x86/guest/vcpu.c
	modified:   arch/x86/guest/vcpuid.c
	modified:   arch/x86/guest/virq.c
	modified:   arch/x86/guest/virtual_cr.c
	modified:   arch/x86/guest/vlapic.c
	modified:   arch/x86/guest/vm.c
	modified:   arch/x86/guest/vmcall.c
	modified:   arch/x86/guest/vmcs.c
	modified:   arch/x86/guest/vmexit.c
	modified:   arch/x86/guest/vmsr.c
	modified:   arch/x86/guest/vmtrr.c
	modified:   arch/x86/pm.c
	modified:   include/arch/x86/guest/assign.h
	modified:   include/arch/x86/guest/ept.h
	modified:   include/arch/x86/guest/guest_memory.h
	modified:   include/arch/x86/guest/instr_emul.h
	modified:   include/arch/x86/guest/io_emul.h
	modified:   include/arch/x86/guest/trusty.h
	modified:   include/arch/x86/guest/vcpu.h
	modified:   include/arch/x86/guest/vmcs.h
	modified:   include/arch/x86/io_req.h
	modified:   include/arch/x86/irq.h
	modified:   include/arch/x86/lapic.h
	modified:   include/arch/x86/mmu.h
	modified:   include/arch/x86/pgtable.h
	modified:   include/common/ptdev.h
	modified:   include/debug/console.h
2019-02-21 10:38:30 +08:00
Yonghua Huang
0f5c6e2c18 HV: fix address type violation for MSR_LOAD/STORE
According to SDM 24.7.2, these two MSRs should be
  configured with physical addresses.

Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-01-31 22:23:41 +08:00
Yan, Like
17f4cd18a3 hv: fix dest of IPI for CPU with lapic_pt
With lapic_pt based on vlapic, the guest always sees the virtual apic_id.
We need to convert the virtual apic_id from the guest to the physical apic_id
before writing the ICR.

SMP for VM with lapic_pt is supported with this fix.

Tracked-On: #2351
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-01-31 18:18:44 +08:00
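A sketch of the ICR fix above: before writing the physical ICR on behalf of a lapic-pt guest, translate the virtual APIC ID in the destination field (bits 63:32 of the x2APIC ICR) to the physical APIC ID of the pCPU backing that vCPU. The lookup helper is assumed:

    #include <stdint.h>

    #define MSR_IA32_EXT_APIC_ICR  0x00000830U   /* x2APIC ICR MSR */

    extern uint32_t vapicid_to_papicid(uint32_t vm_id, uint32_t vapic_id); /* assumed mapping */
    extern void msr_write(uint32_t msr, uint64_t v);

    static void passthru_icr_write(uint32_t vm_id, uint64_t guest_icr)
    {
        uint32_t vdest = (uint32_t)(guest_icr >> 32);   /* virtual destination APIC ID */
        uint64_t icr = (guest_icr & 0xFFFFFFFFUL) |
                       ((uint64_t)vapicid_to_papicid(vm_id, vdest) << 32);

        msr_write(MSR_IA32_EXT_APIC_ICR, icr);          /* deliver to the physical target */
    }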