Commit Graph

51 Commits

Author SHA1 Message Date
Yifan Liu
69fef2e685 hv: debug: Add hv console callback to VM-exit event
In some scenarios (e.g., nested) where lapic-pt is enabled for a vcpu
running on a pcpu hosting console timer, the hv console will be
inaccessible.

This patch adds the console callback to every VM-exit event so that the
console can still be somewhat functional under such circumstances.

Since this is VM-exit driven, the VM-exit rate can be low in certain
cases (e.g., when the guest is idle or running a stress workload). In
extreme cases where the guest panics or hangs, there will be no VM exits at all.

In most cases, the shell is laggy but functional (probably enough for
debugging purposes).
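
A minimal sketch of the idea, assuming hypothetical names (console_poll(), dispatch_vmexit()); ACRN's actual hook point and function names may differ:

    /* Sketch only: service the console from the VM-exit path. */
    struct acrn_vcpu;                           /* opaque vCPU type */
    extern void console_poll(void);             /* assumed console service hook */
    extern int dispatch_vmexit(struct acrn_vcpu *vcpu);

    int vmexit_handler(struct acrn_vcpu *vcpu)
    {
            /* Every VM exit gives the console a chance to run, keeping the
             * shell reachable even when the pCPU hosting the console timer
             * runs a lapic-pt vCPU and the timer interrupt never fires. */
            console_poll();
            return dispatch_vmexit(vcpu);
    }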

Tracked-On: #6312
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
2021-07-22 10:08:23 +08:00
Tao Yuhong
2aba7f31db HV: rename splitlock file name
Because the emulation code handles both split-lock and uc-lock,
rename splitlock.c/splitlock.h to lock_instr_emul.c/lock_instr_emul.h.

Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-07-21 11:25:47 +08:00
Tao Yuhong
7926504011 HV: rename split-lock emulation APIs
Because the emulation code handles both split-lock and uc-lock, the
following API names are changed:
vcpu_kick_splitlock_emulation() -> vcpu_kick_lock_instr_emulation()
vcpu_complete_splitlock_emulation() -> vcpu_complete_lock_instr_emulation()
emulate_splitlock() -> emulate_lock_instr()

Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-07-21 11:25:47 +08:00
Zide Chen
811e367ad9 hv: nested: implement nested VM exit handler
Nested VM exits happen when a vCPU is in guest mode (VMCS02 is current).
Initially we reflect all nested VM exits to the L1 hypervisor. To prepare
the environment to run the L1 guest:

- restore some VMCS fields to the values the L1 hypervisor programmed.
- VMCLEAR VMCS02, VMPTRLD VMCS01 and enable VMCS shadowing.
- load the non-shadowing host states from VMCS12 to VMCS01 guest states.
- VMRESUME to L1 guest with this modified VMCS01.
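
A rough sketch of the reflection path described above; these helper names are illustrative, not ACRN's actual APIs:

    struct acrn_vcpu;
    extern void sync_exit_fields_to_vmcs12(struct acrn_vcpu *vcpu); /* exit reason, qualification, ... */
    extern void clear_vmcs02(struct acrn_vcpu *vcpu);               /* VMCLEAR VMCS02 */
    extern void make_vmcs01_current(struct acrn_vcpu *vcpu);        /* VMPTRLD VMCS01, enable VMCS shadowing */
    extern void load_vmcs12_host_state(struct acrn_vcpu *vcpu);     /* VMCS12 host state -> VMCS01 guest state */

    /* Reflect a nested VM exit (taken while VMCS02 was current) to the L1 hypervisor. */
    void reflect_vmexit_to_l1(struct acrn_vcpu *vcpu)
    {
            sync_exit_fields_to_vmcs12(vcpu);
            clear_vmcs02(vcpu);
            make_vmcs01_current(vcpu);
            load_vmcs12_host_state(vcpu);
            /* The next VMRESUME with this modified VMCS01 enters L1 at its
             * own VM-exit handler, as if the exit came directly from L2. */
    }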

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Alexander Merritt <alex.merritt@intel.com>
2021-06-03 15:23:25 +08:00
Zide Chen
4acc65eacc hv: nested: support for INVEPT and INVVPID emulation
The invvpid and invept instructions cause VM exits unconditionally.
For initial support, we pass all the instruction operands as-is
to the pCPU.
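
A sketch of the pass-through idea; the descriptor layout follows the SDM, but the helpers (get_invept_type(), get_invept_desc(), asm_invept()) and the omitted operand decoding are assumptions:

    #include <stdint.h>

    /* INVEPT descriptor per the SDM: EPTP followed by a reserved quadword. */
    struct invept_desc {
            uint64_t eptp;
            uint64_t reserved;
    };

    struct acrn_vcpu;
    extern uint64_t get_invept_type(struct acrn_vcpu *vcpu);                      /* from instruction info */
    extern void get_invept_desc(struct acrn_vcpu *vcpu, struct invept_desc *d);   /* from guest memory */
    extern void asm_invept(uint64_t type, const struct invept_desc *d);           /* low-level wrapper */

    int invept_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            struct invept_desc desc;

            /* Initial support: forward the guest's operands unchanged to the pCPU. */
            get_invept_desc(vcpu, &desc);
            asm_invept(get_invept_type(vcpu), &desc);
            return 0;
    }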

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-06-03 15:23:25 +08:00
Zide Chen
4c29a0bb29 hv: nested: support for VMLAUNCH and VMRESUME emulation
Implement the VMLAUNCH and VMRESUME instructions, allowing an L1
hypervisor to run nested guests.

- merge VMCS control fields and VMCS guest fields to VMCS02
- clear shadow VMCS indicator on VMCS02 and load VMCS02 as current
- set VMCS12 launch state to "launched" in VMLAUNCH handler
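
A condensed sketch of that flow (operand checks omitted; helper names are illustrative only):

    #include <stdbool.h>

    struct acrn_vcpu;
    extern void merge_vmcs12_control_fields(struct acrn_vcpu *vcpu);  /* VMCS12 controls -> VMCS02 */
    extern void merge_vmcs12_guest_fields(struct acrn_vcpu *vcpu);    /* VMCS12 guest state -> VMCS02 */
    extern void make_vmcs02_current(struct acrn_vcpu *vcpu);          /* clear shadow indicator, VMPTRLD VMCS02 */
    extern void set_vmcs12_launched(struct acrn_vcpu *vcpu);

    int vmresume_vmlaunch_vmexit_handler(struct acrn_vcpu *vcpu, bool is_vmlaunch)
    {
            merge_vmcs12_control_fields(vcpu);
            merge_vmcs12_guest_fields(vcpu);
            make_vmcs02_current(vcpu);
            if (is_vmlaunch) {
                    set_vmcs12_launched(vcpu);   /* VMLAUNCH only */
            }
            /* Returning to the VM-entry path now runs the L2 guest on VMCS02. */
            return 0;
    }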

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Alex Merritt <alex.merritt@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-06-03 15:23:25 +08:00
Zide Chen
6d69058a9d hv: nested: support for VMREAD and VMWRITE emulation
This patch implements the VMREAD and VMWRITE instructions.

When the L1 guest is running with an active VMCS12, the “VMCS shadowing”
VM-execution control is always set to 1 in VMCS01. Thus the possible
behaviors of VMREAD or VMWRITE from L1 are:

- It causes a VM exit to L0 if the bit corresponding to the target VMCS
  field in the VMREAD bitmap or VMWRITE bitmap is set to 1.
- It accesses the VMCS referenced by the VMCS01 link pointer (VMCS02 in
  our case) if the above-mentioned bit is set to 0.

This patch handles the VMREAD and VMWRITE VM exits in this way:

- on VMWRITE, it writes the desired value to the respective field in the
  cached VMCS12. For VMCS fields that need to be synced to VMCS02, it
  sets the corresponding dirty flag.

- on VMREAD, it reads the desired VMCS value from the cached VMCS12.
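
The same handling in sketch form (operand decoding omitted; the field-lookup and dirty-flag helpers are assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    struct acrn_vcpu;
    extern uint64_t vmcs12_read_field(struct acrn_vcpu *vcpu, uint32_t encoding);
    extern void vmcs12_write_field(struct acrn_vcpu *vcpu, uint32_t encoding, uint64_t val);
    extern bool field_needs_vmcs02_sync(uint32_t encoding);
    extern void set_vmcs02_dirty_flag(struct acrn_vcpu *vcpu, uint32_t encoding);
    extern void set_guest_dest_operand(struct acrn_vcpu *vcpu, uint64_t val);

    int vmwrite_vmexit_handler(struct acrn_vcpu *vcpu, uint32_t encoding, uint64_t val)
    {
            vmcs12_write_field(vcpu, encoding, val);
            if (field_needs_vmcs02_sync(encoding)) {
                    set_vmcs02_dirty_flag(vcpu, encoding);   /* synced before the next L2 entry */
            }
            return 0;
    }

    int vmread_vmexit_handler(struct acrn_vcpu *vcpu, uint32_t encoding)
    {
            set_guest_dest_operand(vcpu, vmcs12_read_field(vcpu, encoding));
            return 0;
    }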

Tracked-On: #5923
Signed-off-by: Alex Merritt <alex.merritt@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
2021-05-24 10:34:01 +08:00
Zide Chen
2bd269c11c hv: nested: support for VMCLEAR emulation
This patch emulates the VMCLEAR instruction.

The L1 hypervisor may issue VMCLEAR on a VMCS12 whose state is any of
these: active and current, active but not current, or not yet VMPTRLDed.

To emulate the VMCLEAR instruction, ACRN sets the VMCS12 launch state to
"clear", and if L0 has already cached this VMCS12, it needs to be synced
back to guest memory:

- sync the shadow fields from the shadow VMCS to the cached VMCS12
- copy the cached VMCS12 to L1 guest memory
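
The write-back path in sketch form (illustrative names; operand decoding omitted):

    #include <stdint.h>
    #include <stdbool.h>

    struct acrn_vcpu;
    extern bool vmcs12_is_cached(struct acrn_vcpu *vcpu, uint64_t vmcs12_gpa);
    extern void sync_shadow_fields_to_cache(struct acrn_vcpu *vcpu);            /* shadow VMCS -> cached VMCS12 */
    extern void copy_cache_to_guest(struct acrn_vcpu *vcpu, uint64_t vmcs12_gpa);
    extern void set_vmcs12_launch_state_clear(struct acrn_vcpu *vcpu, uint64_t vmcs12_gpa);

    int vmclear_vmexit_handler(struct acrn_vcpu *vcpu, uint64_t vmcs12_gpa)
    {
            if (vmcs12_is_cached(vcpu, vmcs12_gpa)) {
                    /* L0 holds a cached copy: flush it back to L1 guest memory. */
                    sync_shadow_fields_to_cache(vcpu);
                    copy_cache_to_guest(vcpu, vmcs12_gpa);
            }
            set_vmcs12_launch_state_clear(vcpu, vmcs12_gpa);
            return 0;
    }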

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
2021-05-24 10:34:01 +08:00
Zide Chen
f5744174b5 hv: nested: support for VMPTRLD emulation
This patch emulates the VMPTRLD instruction. The L0 hypervisor (ACRN)
caches the VMCS12 that is passed down by the VMPTRLD instruction, and
merges it with VMCS01 to create VMCS02 for running the nested VM.

- Currently ACRN can't cache multiple VMCS12s on one vCPU, so it needs to
  flush an active-but-not-current VMCS12 back to L1 guest memory.
- ACRN creates VMCS02 to run the nested VM based on VMCS12:
  1) copy VMCS12 from guest memory to the per-vCPU cached VMCS12
  2) initialize the VMCS02 revision ID and host-state area
  3) load the shadow fields from the cached VMCS12 to VMCS02
  4) enable VMCS shadowing before L1 VM entry
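
The same flow as a sketch (illustrative names; the single-cache-slot limitation shows up as the flush of any previously cached VMCS12):

    #include <stdint.h>
    #include <stdbool.h>

    struct acrn_vcpu;
    extern bool another_vmcs12_is_cached(struct acrn_vcpu *vcpu, uint64_t gpa);
    extern void flush_cached_vmcs12_to_guest(struct acrn_vcpu *vcpu);   /* only one VMCS12 cached per vCPU */
    extern void cache_vmcs12_from_guest(struct acrn_vcpu *vcpu, uint64_t gpa);
    extern void init_vmcs02(struct acrn_vcpu *vcpu);                    /* revision ID + host-state area */
    extern void load_shadow_fields_to_vmcs02(struct acrn_vcpu *vcpu);
    extern void enable_vmcs_shadowing(struct acrn_vcpu *vcpu);

    int vmptrld_vmexit_handler(struct acrn_vcpu *vcpu, uint64_t vmcs12_gpa)
    {
            if (another_vmcs12_is_cached(vcpu, vmcs12_gpa)) {
                    flush_cached_vmcs12_to_guest(vcpu);
            }
            cache_vmcs12_from_guest(vcpu, vmcs12_gpa);
            init_vmcs02(vcpu);
            load_shadow_fields_to_vmcs02(vcpu);
            enable_vmcs_shadowing(vcpu);   /* before the next L1 VM entry */
            return 0;
    }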

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
2021-05-24 10:34:01 +08:00
Zide Chen
0a1ac2f4a0 hv: nested: support for VMXOFF emulation
This patch implements the VMXOFF instruction. By issuing VMXOFF, the
L1 guest leaves VMX operation.

- clean up the vCPU's nested virtualization context state in the VMXOFF handler.
- implement check_vmx_permission() to check permission for VMX operation
  for VMXOFF and other VMX instructions.
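
A sketch of the permission check plus the VMXOFF cleanup; the exact conditions and helper names are assumptions:

    #include <stdbool.h>

    struct acrn_vcpu;
    extern bool guest_cr4_vmxe(struct acrn_vcpu *vcpu);        /* CR4.VMXE set? */
    extern bool guest_in_vmx_operation(struct acrn_vcpu *vcpu);
    extern void vcpu_inject_ud(struct acrn_vcpu *vcpu);
    extern void reset_nested_ctx(struct acrn_vcpu *vcpu);      /* drop cached VMCS12 / VMCS02 state */

    /* Shared by VMXOFF and the other VMX instruction handlers. */
    bool check_vmx_permission(struct acrn_vcpu *vcpu)
    {
            if (!guest_cr4_vmxe(vcpu) || !guest_in_vmx_operation(vcpu)) {
                    vcpu_inject_ud(vcpu);
                    return false;
            }
            return true;
    }

    int vmxoff_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            if (check_vmx_permission(vcpu)) {
                    reset_nested_ctx(vcpu);   /* L1 leaves VMX operation */
            }
            return 0;
    }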

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
2021-05-24 10:34:01 +08:00
Zide Chen
fc8f07e740 hv: nested: support for VMXON emulation
This patch emulates the VMXON instruction. It checks some prerequisites
for enabling VMX operation in the L1 guest (next patch) and prepares
some of the virtual hardware environment in L0.
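
A sketch of what such prerequisite checks can look like; the exact conditions ACRN validates and the helper names are assumptions:

    #include <stdint.h>
    #include <stdbool.h>

    struct acrn_vcpu;
    extern bool guest_cr4_vmxe(struct acrn_vcpu *vcpu);
    extern bool vmx_allowed_by_guest_feature_control(struct acrn_vcpu *vcpu);  /* IA32_FEATURE_CONTROL */
    extern void vcpu_inject_ud(struct acrn_vcpu *vcpu);
    extern void vcpu_inject_gp(struct acrn_vcpu *vcpu, uint32_t err);
    extern void prepare_nested_ctx(struct acrn_vcpu *vcpu, uint64_t vmxon_region_gpa);

    int vmxon_vmexit_handler(struct acrn_vcpu *vcpu, uint64_t vmxon_region_gpa)
    {
            if (!guest_cr4_vmxe(vcpu)) {
                    vcpu_inject_ud(vcpu);        /* VMX not enabled in guest CR4 */
            } else if (!vmx_allowed_by_guest_feature_control(vcpu)) {
                    vcpu_inject_gp(vcpu, 0U);    /* locked out by IA32_FEATURE_CONTROL */
            } else {
                    prepare_nested_ctx(vcpu, vmxon_region_gpa);  /* virtual VMX operation is now on */
            }
            return 0;
    }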

Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
2021-05-24 10:34:01 +08:00
Li Fei1
30febed0e1 hv: cache: wrap common APIs
Wrap three common cache APIs:
- flush_invalidate_all_cache
- flush_cacheline
- flush_cache_range
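
A minimal sketch of what such wrappers typically look like (GCC inline assembly; not necessarily ACRN's exact implementation, and the 64-byte line size is an assumption):

    #include <stdint.h>

    #define CACHE_LINE_SIZE 64UL   /* assumed cache line size */

    static inline void flush_invalidate_all_cache(void)
    {
            asm volatile ("wbinvd" : : : "memory");
    }

    static inline void flush_cacheline(volatile void *p)
    {
            asm volatile ("clflush %0" : "+m" (*(volatile char *)p));
    }

    static inline void flush_cache_range(volatile void *p, uint64_t size)
    {
            uint64_t i;

            for (i = 0UL; i < size; i += CACHE_LINE_SIZE) {
                    flush_cacheline((volatile char *)p + i);
            }
            asm volatile ("mfence" : : : "memory");   /* order the flushes */
    }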

Tracked-On: #5830
Signed-off-by: Li Fei1 <fei1.li@intel.com>
2021-05-14 09:18:00 +08:00
Liang Yi
688a41c290 hv: mod: do not use explicit arch name when including headers
Instead of "#include <x86/foo.h>", use "#include <asm/foo.h>".

In other words, we are adopting the same practice as the Linux kernel.

Tracked-On: #5920
Signed-off-by: Liang Yi <yi.liang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2021-05-08 11:15:46 +08:00
Liang Yi
33ef656462 hv/mod-irq: use arch specific header files
Require an explicit arch path name in the include directives.

The config scripts were also updated to reflect this change.

Tracked-On: #5825
Signed-off-by: Peter Fang <peter.fang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2021-03-24 11:38:14 +08:00
Liang Yi
ff732cfb2a hv/mod_irq: move guest interrupt API out of x86/irq.h
A new x86/guest/virq.h header file now contains all guest-related
interrupt handling APIs.

Tracked-On: #5825
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
2021-03-24 11:38:14 +08:00
Yonghua Huang
ea44bb6c4d hv: wrap function to check software SRAM support
Changes in this patch:
 - add boolean function is_software_sram_enabled() to check
   whether the SW SRAM feature is enabled.
 - make the global variable 'is_sw_sram_initialized'
   file static.
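
A sketch of the resulting shape (variable and function names follow the commit message):

    #include <stdbool.h>

    /* Now file static; other modules must go through the accessor below. */
    static bool is_sw_sram_initialized = false;

    bool is_software_sram_enabled(void)
    {
            return is_sw_sram_initialized;
    }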

Tracked-On: #5649
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-03-11 09:42:44 +08:00
Shuo A Liu
38cd5b481d hv: keylocker: host keylocker iwkey context switch
Different vCPUs may have different IWKeys, so the hypervisor needs to do
an IWKey context switch.

This patch introduces a load_iwkey() function to do that. It switches the
host IWKey when the switched-in vCPU satisfies:
  1) the KeyLocker feature is enabled
  2) its IWKey differs from the currently loaded one

There are two opportunities to call load_iwkey():
  1) the guest enables the CR4.KL bit
  2) vCPU thread context switch

load_iwkey() costs ~600 cycles when it actually loads the IWKey.
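
A sketch of the switch-in check; the struct layout and helper names beyond what the commit states are assumptions:

    #include <stdbool.h>
    #include <string.h>

    struct iwkey { unsigned char data[48]; };         /* placeholder layout */

    struct acrn_vcpu {
            struct iwkey iwkey;                       /* per-vCPU IWKey */
            /* ... */
    };

    extern bool vcpu_keylocker_enabled(struct acrn_vcpu *vcpu);   /* guest CR4.KL set? */
    extern void asm_loadiwkey(const struct iwkey *key);           /* LOADIWKEY wrapper */

    static struct iwkey current_loaded_iwkey;                     /* per-pCPU in reality */

    /* Called when the guest enables CR4.KL and on vCPU thread context switch-in. */
    void load_iwkey(struct acrn_vcpu *vcpu)
    {
            if (vcpu_keylocker_enabled(vcpu) &&
                (memcmp(&vcpu->iwkey, &current_loaded_iwkey, sizeof(struct iwkey)) != 0)) {
                    asm_loadiwkey(&vcpu->iwkey);      /* the ~600-cycle load mentioned above */
                    current_loaded_iwkey = vcpu->iwkey;
            }
    }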

Tracked-On: #5695
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-02-03 13:54:45 +08:00
Shuo A Liu
c11c07e0fe hv: keylocker: Support Key Locker feature for guest VM
KeyLocker is a new security feature available in new Intel CPUs that
protects data-encryption keys for the Advanced Encryption Standard (AES)
algorithm. These keys are more valuable than the data they guard: if
stolen once, a key can be used repeatedly, even on another system and
even after the vulnerability is closed.

KeyLocker also introduces a CPU-internal wrapping key (IWKey), which is a
key-encryption key used to wrap AES keys into handles. While the IWKey is
inaccessible to software, randomizing its value during boot helps make it
unpredictable.

KeyLocker usage:
 - The new “ENCODEKEY” instruction takes the original key as input and
   returns a HANDLE encrypted by an internal wrapping key (IWKey,
   initialized by the “LOADIWKEY” instruction)
 - Software can then delete the original key from memory
 - Early in boot/software there is less likelihood of a vulnerability
   that allows stealing the original key
 - Later encrypt/decrypt operations can use the HANDLE through the new
   AES KeyLocker instructions
 - Note:
      * Software can use the original key without knowing it (via the HANDLE)
      * A HANDLE cannot be used on other systems or after a warm/cold reset
      * The IWKey cannot be read from the CPU after it is loaded (this is
        the nature of the feature), and only one copy of the IWKey exists
        inside the CPU.

The virtualization implementation of KeyLocker on ACRN is:
 - Each vCPU has a 'struct iwkey' in struct acrn_vcpu_arch to store its
   IWKey.
 - At initialization, every vCPU is created with a random IWKey.
 - The hypervisor traps the guest's execution of LOADIWKEY (via the
   'LOADIWKEY exiting' VM-execution control) to capture and save the
   IWKey if the guest sets a new one. Randomization of LOADIWKEY is not
   supported (CPUID is emulated to disable it) because the hypervisor
   cannot capture and save a random IWKey. From the KeyLocker spec:
   "Note that a VMM may wish to enumerate no support for HW random IWKeys
   to the guest (i.e. enumerate CPUID.19H:ECX[1] as 0) as such IWKeys
   cannot be easily context switched. A guest ENCODEKEY will return the
   type of IWKey used (IWKey.KeySource) and thus will notice if a VMM
   virtualized a HW random IWKey with a SW specified IWKey."
 - In context_switch_in() of each vCPU, the hypervisor loads that vCPU's
   IWKey into the pCPU with the LOADIWKEY instruction.
 - There is an assumption that the ACRN hypervisor will never use the
   KeyLocker feature itself.

This patch implements the vCPU's IWKey management; the next patch
implements the host context save/restore logic for the IWKey.
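
A sketch of the LOADIWKEY trap described above; the struct layout and the operand-read helper are assumptions:

    #include <stdint.h>

    struct iwkey { uint8_t data[48]; };                 /* placeholder layout */
    struct acrn_vcpu_arch { struct iwkey iwkey; /* ... */ };
    struct acrn_vcpu { struct acrn_vcpu_arch arch; /* ... */ };

    extern void read_loadiwkey_operands(struct acrn_vcpu *vcpu, struct iwkey *key);
    extern void load_iwkey(struct acrn_vcpu *vcpu);     /* reloads it on the pCPU */

    /* VM exit caused by the 'LOADIWKEY exiting' VM-execution control. */
    int loadiwkey_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            struct iwkey key;

            /* Capture the IWKey the guest tried to load, make it this vCPU's
             * IWKey, and load it; it is reloaded on every context switch-in. */
            read_loadiwkey_operands(vcpu, &key);
            vcpu->arch.iwkey = key;
            load_iwkey(vcpu);
            return 0;
    }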

Tracked-On: #5695
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2021-02-03 13:54:45 +08:00
Yonghua Huang
a6420e8cfa hv: cleanup legacy terminologies in RTCM module
This patch updates the terminology below according
 to the latest TCC spec:
  PTCT -> RTCT
  PTCM -> RTCM
  pSRAM -> Software SRAM

Tracked-On: #5649
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2021-01-28 11:29:25 +08:00
Yonghua Huang
806f479108 hv: rename RTCM source files
'ptcm' and 'ptct' are legacy names according
   to the latest TCC spec, hence rename the files below
   to avoid confusion:

  ptcm.c -> rtcm.c
  ptcm.h -> rtcm.h
  ptct.h -> rtct.h

Tracked-On: #5649
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2021-01-28 11:29:25 +08:00
Jie Deng
8aebf5526f hv: move split-lock logic into dedicated file
This patch moves the split-lock logic into a dedicated file
to reduce LOC. This should make the logic clearer.

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
2021-01-08 17:37:20 +08:00
Jie Deng
27d5711b62 hv: add a cache register for VMX_PROC_VM_EXEC_CONTROLS
This patch adds a cache register for VMX_PROC_VM_EXEC_CONTROLS
to avoid frequent VMCS accesses.
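
A sketch of the caching idea (exec_vmwrite32() stands for the VMCS write primitive; the struct layout is illustrative):

    #include <stdint.h>

    #define VMX_PROC_VM_EXEC_CONTROLS  0x4002U    /* VMCS field encoding per the SDM */

    struct acrn_vcpu_arch {
            uint32_t proc_vm_exec_ctrls;          /* cached copy of the VMCS field */
            /* ... */
    };
    struct acrn_vcpu { struct acrn_vcpu_arch arch; };

    extern void exec_vmwrite32(uint32_t field, uint32_t value);

    /* Only touch the VMCS when the value actually changes. */
    static void vcpu_set_proc_ctrls(struct acrn_vcpu *vcpu, uint32_t value)
    {
            if (vcpu->arch.proc_vm_exec_ctrls != value) {
                    exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS, value);
                    vcpu->arch.proc_vm_exec_ctrls = value;
            }
    }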

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
2021-01-08 17:37:20 +08:00
Jie Deng
f291997811 hv: split-lock: using MTF instead of TF(#DB)
The TF flag is visible to the guest and may be modified by it, so it is
not a safe method for emulating split-lock. MTF, in contrast, is
specifically designed for single-stepping in Intel's VT-x hardware
virtualization technology and is invisible to the guest. Use MTF to
single-step the vCPU during split-lock emulation.
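
A sketch of the MTF toggle (MTF is bit 27 of the primary processor-based VM-execution controls per the SDM; the read/write helpers stand for the VMCS access primitives):

    #include <stdint.h>

    #define VMX_PROC_VM_EXEC_CONTROLS    0x4002U
    #define VMX_PROCBASED_CTLS_MON_TRAP  (1U << 27)    /* monitor trap flag */

    extern uint32_t exec_vmread32(uint32_t field);
    extern void exec_vmwrite32(uint32_t field, uint32_t value);

    /* Arm single-stepping before emulating the locked access. */
    static void enable_mtf(void)
    {
            exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS,
                           exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS) | VMX_PROCBASED_CTLS_MON_TRAP);
    }

    /* Called from the MTF VM-exit handler once the instruction has completed. */
    static void disable_mtf(void)
    {
            exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS,
                           exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS) & ~VMX_PROCBASED_CTLS_MON_TRAP);
    }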

Tracked-On: #5605
Signed-off-by: Jie Deng <jie.deng@intel.com>
2021-01-08 17:37:20 +08:00
Junming Liu
1cd932e568 hv: refine code style
refine code style

Tracked-On: #4020

Signed-off-by: Junming Liu <junming.liu@intel.com>
2020-11-26 12:56:28 +08:00
Junming Liu
56eb859ea4 hv: vmexit: refine xsetbv_vmexit_handler API
Per the XSETBV instruction description in SDM Vol. 2C,
if CR4.OSXSAVE[bit 18] = 0,
executing the XSETBV instruction generates a #UD exception.

Per SDM Vol. 3C 25.1.1, the #UD exception has priority over VM exits,
so if a vCPU executes XSETBV while CR4.OSXSAVE[bit 18] = 0,
no VM exit happens.

However, the hypervisor injected #GP when a vCPU executed XSETBV
with CR4.OSXSAVE[bit 18] = 0.
That is wrong behavior; this patch fixes the bug.

Tracked-On: #4020

Signed-off-by: Junming Liu <junming.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-11-26 12:56:28 +08:00
Qian Wang
a557105e71 hv: ept: set EPT cache attribute to WB for pSRAM
pSRAM memory should be cacheable. However, it is neither RAM nor normal
MMIO, so we can't use an existing API to create the EPT mapping and set
its cache attribute to WB. We assume that the SOS assigns the pSRAM area
as a whole, as a separate memory region whose base address is
PSRAM_BASE_HPA. If the HPA of an EPT mapping region equals
PSRAM_BASE_HPA, we treat the mapping as pSRAM and change its EPT cache
attribute to WB.

Also fix a minor bug in the SOS trap-out path that emulates WBINVD when
pSRAM is enabled.
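
A sketch of the attribute selection (PSRAM_BASE_HPA comes from the commit; the memory-type encodings follow the EPT leaf-entry definition in the SDM, and the helper is an assumption):

    #include <stdint.h>
    #include <stdbool.h>

    #define EPT_MT_WB   0x6UL        /* EPT memory type: write-back (bits 5:3 of a leaf entry) */
    #define EPT_MT_UC   0x0UL        /* EPT memory type: uncacheable */

    extern uint64_t psram_base_hpa;                    /* stands in for PSRAM_BASE_HPA */
    extern bool hpa_is_normal_ram(uint64_t hpa);

    /* Pick the EPT cache attribute for an SOS mapping request. */
    uint64_t ept_memory_type_for(uint64_t base_hpa)
    {
            if (base_hpa == psram_base_hpa) {
                    /* By assumption the SOS maps the whole pSRAM as one region,
                     * so this mapping is pSRAM and must be cacheable. */
                    return EPT_MT_WB;
            }
            return hpa_is_normal_ram(base_hpa) ? EPT_MT_WB : EPT_MT_UC;
    }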

Tracked-On: #5330
Signed-off-by: Qian Wang <qian1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-11-02 15:56:30 +08:00
Qian Wang
ca2aee225c hv: skip pSRAM for guest WBINVD emulation
Use ept_flush_leaf_page to emulate guest WBINVD when PTCM is enabled, and
skip the pSRAM range in ept_flush_leaf_page.
TODO: decide whether WBINVD needs to be emulated on the HV side.

Tracked-On: #5330
Signed-off-by: Qian Wang <qian1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-11-02 10:29:43 +08:00
Shuo A Liu
9a15ea82ee hv: pause all other vCPUs in same VM when do wbinvd emulation
Invalidating the cache by scanning and flushing the whole guest memory is
inefficient and may lead to a long execution time for WBINVD emulation.
A long execution in the hypervisor may make a vCPU appear stuck, which
impacts Windows guest booting.

This patch introduces a workaround: pause all other vCPUs in the same VM
while doing WBINVD emulation.
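
A sketch of the workaround; the iteration and pause/resume helpers are illustrative, not ACRN's actual APIs:

    #include <stddef.h>

    struct acrn_vm;
    struct acrn_vcpu;
    extern struct acrn_vm *vcpu_vm(struct acrn_vcpu *vcpu);
    extern struct acrn_vcpu *first_vcpu(struct acrn_vm *vm);
    extern struct acrn_vcpu *next_vcpu(struct acrn_vm *vm, struct acrn_vcpu *v);
    extern void pause_vcpu(struct acrn_vcpu *v);
    extern void resume_vcpu(struct acrn_vcpu *v);
    extern void flush_guest_memory(struct acrn_vm *vm);     /* the slow part */

    void emulate_wbinvd(struct acrn_vcpu *self)
    {
            struct acrn_vm *vm = vcpu_vm(self);
            struct acrn_vcpu *v;

            /* Pause the sibling vCPUs for the duration of the flush, per the
             * workaround described above. */
            for (v = first_vcpu(vm); v != NULL; v = next_vcpu(vm, v)) {
                    if (v != self) {
                            pause_vcpu(v);
                    }
            }
            flush_guest_memory(vm);
            for (v = first_vcpu(vm); v != NULL; v = next_vcpu(vm, v)) {
                    if (v != self) {
                            resume_vcpu(v);
                    }
            }
    }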

Tracked-On: #4703
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-05-21 15:21:29 +08:00
Shuo A Liu
4303ccb1a0 hv: HLT emulation in hypervisor
HLT emulation is important for maximizing CPU resource utilization. A
vCPU executing HLT is idle and can give up its CPU proactively. Thus, we
pause the vCPU thread during HLT emulation and resume it when an event
happens.

When a vCPU enters HLT, its vCPU thread sleeps, but the vCPU state is
still 'Running'.

VM ID    PCPU ID    VCPU ID    VCPU ROLE    VCPU STATE
=====    =======    =======    =========    ==========
  0         0          0       PRIMARY      Running
  0         1          1       SECONDARY    Running
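
A sketch of the sleep/wake idea; the wait/signal primitives are placeholders for ACRN's scheduler events:

    #include <stdbool.h>

    struct acrn_vcpu;
    extern bool vcpu_has_pending_events(struct acrn_vcpu *vcpu);
    extern void wait_event(struct acrn_vcpu *vcpu);      /* blocks the vCPU thread */
    extern void signal_event(struct acrn_vcpu *vcpu);    /* called wherever events are queued */

    /* HLT VM exit: the guest is idle, give the pCPU back to the scheduler. */
    int hlt_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            if (!vcpu_has_pending_events(vcpu)) {
                    /* The thread sleeps here; the vCPU stays in the 'Running'
                     * state and is woken by signal_event() when an interrupt
                     * or request arrives. */
                    wait_event(vcpu);
            }
            return 0;
    }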

Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-07 11:23:32 +08:00
Shuo A Liu
4115dd6241 hv: PAUSE-loop exiting support in hypervisor
With CPU sharing enabled, PAUSE-loop exiting helps a vCPU release its
pCPU proactively, which is good for performance.

VMX_PLE_GAP: upper bound on the amount of time between two successive
executions of PAUSE in a loop.
VMX_PLE_WINDOW: upper bound on the amount of time a guest is allowed to
execute in a PAUSE loop.
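
A sketch of how the two controls are typically programmed (the field encodings and the secondary-control bit follow the SDM; the gap/window values are placeholders, not ACRN's tuning):

    #include <stdint.h>

    #define VMX_PLE_GAP                     0x4020U    /* VMCS field: PLE_Gap */
    #define VMX_PLE_WINDOW                  0x4022U    /* VMCS field: PLE_Window */
    #define VMX_PROC_VM_EXEC_CONTROLS2      0x401EU    /* secondary processor-based controls */
    #define VMX_PROCBASED_CTLS2_PAUSE_LOOP  (1U << 10) /* "PAUSE-loop exiting" */

    extern uint32_t exec_vmread32(uint32_t field);
    extern void exec_vmwrite32(uint32_t field, uint32_t value);

    static void enable_pause_loop_exiting(void)
    {
            exec_vmwrite32(VMX_PLE_GAP, 128U);        /* gap threshold between successive PAUSEs */
            exec_vmwrite32(VMX_PLE_WINDOW, 4096U);    /* total spin window before a VM exit */
            exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS2,
                           exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS2) | VMX_PROCBASED_CTLS2_PAUSE_LOOP);
    }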

Tracked-On: #4329
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2020-01-07 11:23:32 +08:00
Kaige Fu
5f9d1379bc HV: Remove INIT signal notification related code
We no longer use the INIT signal notification method. This patch
removes the related code.

Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
Kaige Fu
a13909cedc HV: Use NMI-window exiting to address req missing issue
There is a window in the notification period where we may miss the
current request when the workflow is as follows:

      CPUx +                   + CPUr
           |                   |
           |                   +--+
           |                   |  | Handle pending req
           |                   <--+
           +--+                |
           |  | Set req flag   |
           <--+                |
           +------------------>---+
           |     Send NMI      |  | Handle NMI
           |                   <--+
           |                   |
           |                   |
           |                   +--> vCPU enter
           |                   |
           +                   +

So, this patch enables NMI-window exiting to trigger the next VM exit
once there is no virtual-NMI blocking after the vCPU enters VMX non-root
mode. Then we can process the pending request in time.
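
A sketch of the mechanism (NMI-window exiting is bit 22 of the primary processor-based controls per the SDM; the handler shape is illustrative):

    #include <stdint.h>

    #define VMX_PROC_VM_EXEC_CONTROLS      0x4002U
    #define VMX_PROCBASED_CTLS_NMI_WINDOW  (1U << 22)

    struct acrn_vcpu;
    extern uint32_t exec_vmread32(uint32_t field);
    extern void exec_vmwrite32(uint32_t field, uint32_t value);
    extern void handle_pending_requests(struct acrn_vcpu *vcpu);

    /* Armed right after the notification NMI is sent. */
    static void enable_nmi_window_exiting(void)
    {
            exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS,
                           exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS) | VMX_PROCBASED_CTLS_NMI_WINDOW);
    }

    /* Fires as soon as there is no virtual-NMI blocking after VM entry,
     * closing the window in which a request could otherwise be missed. */
    int nmi_window_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            handle_pending_requests(vcpu);
            exec_vmwrite32(VMX_PROC_VM_EXEC_CONTROLS,
                           exec_vmread32(VMX_PROC_VM_EXEC_CONTROLS) & ~VMX_PROCBASED_CTLS_NMI_WINDOW);
            return 0;
    }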

Tracked-On: #3886
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
Kaige Fu
40ba7e8686 HV: Don't make NMI injection req when notifying vCPU
The NMI used for notification should not be injected into the guest. So,
this patch drops the NMI injection request when we use an NMI to notify
vCPUs. Meanwhile, ACRN doesn't support vNMI well and there is currently
no well-designed way to check whether an NMI is for notification or for
the guest. So, we temporarily treat all NMIs as notification NMIs for the
hard RTVM. This means the hard RTVM will never receive an NMI with this
patch applied.

TODO: vNMI support is not ready yet; we will add it later.

Tracked-On: #3886
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-17 09:45:52 +08:00
Kaige Fu
aae974b473 HV: trace leaf and subleaf of cpuid
We care more about the leaf and subleaf of CPUID than about vcpu_id.
So, this patch changes the CPUID trace entry to record the leaf
and subleaf of the CPUID VM exit.

Tracked-On: #4175
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
2019-12-03 16:34:14 +08:00
Yonghua Huang
e51386fe04 hv: refine 'uint64_t' string print format in x86 module
Use the "0x%lx" format string for 'uint64_t' values,
 instead of "0x%llx".

Tracked-On: #4020
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2019-11-09 11:42:38 +08:00
Shuo A Liu
891e46453d hv: sched: move pcpu_id from acrn_vcpu to thread_object
With CPU sharing enabled, we map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, it is better to remove
pcpu_id from acrn_vcpu and move it to thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Yonghua Huang
49e60ae151 hv: refine handler to 'rdpmc' vmexit
The PMC is hidden from the guest, and the hypervisor should
 inject #UD into the guest on a 'rdpmc' VM exit.

Tracked-On: #3453
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-07-24 15:05:46 +08:00
Yonghua Huang
dde20bdb03 HV: refine the handler for 'invept' vmexit
'invept' is not expected in the guest, and the hypervisor should
inject #UD when an 'invept' VM exit happens.

Tracked-On: #3444
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
2019-07-22 13:23:47 +08:00
Yin, Fengwei
11e67f1c4a softirq: move softirq from hv_main to interrupt context
softirq shouldn't be bounded to vcpu thread. One issue for this
is shell (based on timer) can't work if we don't start any guest.

This change also is trying best to make softirq handler running
with irq enabled.

Also update the irq disable/enabel in vmexit handler to align
with the usage in vcpu_thread.

Tracked-On: #3387
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-07-22 09:55:06 +08:00
Binbin Wu
f3ffce4be1 hv: vmexit: ecx should be checked instead of rcx when xsetbv
According to the SDM, XSETBV writes the contents of registers EDX:EAX into
the 64-bit extended control register (XCR) specified in the ECX register.
(On processors that support the Intel 64 architecture, the high-order 32
bits of RCX are ignored.)

In the current code, RCX is checked; the high-order 32 bits should be
ignored.
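
A sketch of the fix: truncate RCX to its low 32 bits before using it as the XCR index (the register accessors are stand-ins):

    #include <stdint.h>

    struct acrn_vcpu;
    extern uint64_t vcpu_get_rax(struct acrn_vcpu *vcpu);
    extern uint64_t vcpu_get_rcx(struct acrn_vcpu *vcpu);
    extern uint64_t vcpu_get_rdx(struct acrn_vcpu *vcpu);
    extern int write_xcr(struct acrn_vcpu *vcpu, uint32_t idx, uint64_t val);

    int xsetbv_vmexit_handler(struct acrn_vcpu *vcpu)
    {
            /* Only ECX selects the XCR; the upper 32 bits of RCX are ignored. */
            uint32_t idx = (uint32_t)vcpu_get_rcx(vcpu);
            uint64_t val = ((vcpu_get_rdx(vcpu) & 0xffffffffUL) << 32U) |
                           (vcpu_get_rax(vcpu) & 0xffffffffUL);

            return write_xcr(vcpu, idx, val);
    }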

Tracked-On: #3360
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Yonghua Huang <yonghua.huang@intel.com>
2019-07-05 09:48:53 +08:00
Li, Fei1
09a63560f4 hv: vm_manage: minor fix about triple_fault_shutdown_vm
The current implementation triggers the shutdown-VM request on the BSP vCPU
of the VM, not on the vCPU that trapped out because of the triple fault.
However, if the BSP vCPU of the VM is handling another I/O emulation, it may
overwrite the triple-fault I/O request in the vhm_request_buffer in
acrn_insert_request. The atomic operation in get_vhm_req_state can't
guarantee that the vhm_request_buffer won't be accessed by another I/O
request if it is not running on the corresponding vCPU. So the triple-fault
shutdown-VM I/O request should be triggered on the vCPU that trapped out
because of the triple-fault exception.
Besides, rt_vm_pm1a_io_write already does the right thing, so we shouldn't
do it in triple_fault_shutdown_vm.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-03 17:44:45 +08:00
Yuan Liu
f8934df355 HV: implement wbinvd instruction emulation
WBINVD writes back all modified cache lines in the processor's internal
caches to main memory and invalidates (flushes) the internal caches.

Use CLFLUSHOPT instructions to emulate WBINVD by flushing each guest VM's
memory; if CLFLUSHOPT is not supported, boot will fail.

Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-06-20 09:32:55 +08:00
Yin Fengwei
6b7233446f xsave: inject GP when guest tries to write 1 to XCR0 reserved bit
According to SDM Vol. 1, 13.3:
writing 1 to a reserved bit of XCR0 triggers #GP.

This patch makes ACRN's behavior align with the SDM definition.
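
A sketch of the check; xcr0_supported_mask() stands in for however ACRN derives the valid XCR0 bits (e.g., from CPUID.0DH):

    #include <stdint.h>

    struct acrn_vcpu;
    extern uint64_t xcr0_supported_mask(struct acrn_vcpu *vcpu);
    extern void vcpu_inject_gp(struct acrn_vcpu *vcpu, uint32_t err);
    extern void set_guest_xcr0(struct acrn_vcpu *vcpu, uint64_t val);

    void handle_xcr0_write(struct acrn_vcpu *vcpu, uint64_t val)
    {
            if ((val & ~xcr0_supported_mask(vcpu)) != 0UL) {
                    vcpu_inject_gp(vcpu, 0U);    /* writing 1 to a reserved bit -> #GP */
            } else {
                    set_guest_xcr0(vcpu, val);
            }
    }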

Tracked-On: #3239
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-06-12 08:28:53 +08:00
Zide Chen
26f08680eb hv: shutdown guest VM upon triple fault exceptions
This patch implements the triple-fault vmexit handler, based on VM type:

- post-launched VMs: shutdown_target_vm() injects an S5 PIO write to request
  the DM to shut down the target VM.
- pre-launched VMs: shut down the guest.
- SOS: similarly, but shut down all non-real-time post-launched VMs that
  depend on the SOS before shutting down the SOS.

Tracked-On: #2700
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-05-15 11:20:12 +08:00
Kaige Fu
a85d11ca7a HV: Add prefix 'p' before 'cpu' to physical cpu related functions
This patch adds the prefix 'p' before 'cpu' in physical-CPU-related
function names. There is no code logic change.

Tracked-On: #2991
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-04-24 10:50:28 +08:00
Mingqiang Chi
69627ad7b6 hv: rename io_emul.c to vmx_io.c
renamed:  arch/x86/guest/io_emul.c -> arch/x86/guest/vmx_io.c
renamed:  include/arch/x86/guest/io_emul.h
	   -> include/arch/x86/guest/vmx_io.h

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-04-12 10:09:26 +08:00
Kaige Fu
382acfaf28 HV: Using INIT to kick vCPUs off when RTVM poweroff by itself
When an RTVM is trying to power off by itself, we use INIT to
kick its vCPUs out of non-root mode.

For an RTVM, we only pause the vCPUs with an INIT signal if the VM state
equals VM_POWERING_OFF. Otherwise, we reject the pause request.

Tracked-On: #2865
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-03-29 16:17:44 +08:00
Binbin Wu
f32b59d73d hv: disable mpx capability for guest
This patch hides the Memory Protection Extensions (MPX) capability from the guest.

- vCPUID change:
  Clear cpuid.07H.0.ebx[14]
  Clear cpuid.0DH.0.eax[4:3]
- vMSR change:
  Add MSR_IA32_BNDCFGS to un-supported MSR array.
- XCR0[4:3] is not allowed to be set by the guest.
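
A sketch of the vCPUID adjustment (the leaf/bit positions follow the commit message; the entry structure is illustrative):

    #include <stdint.h>

    struct vcpuid_entry {
            uint32_t leaf, subleaf;
            uint32_t eax, ebx, ecx, edx;
    };

    void hide_mpx_from_guest(struct vcpuid_entry *entry)
    {
            if ((entry->leaf == 0x7U) && (entry->subleaf == 0U)) {
                    entry->ebx &= ~(1U << 14);     /* CPUID.07H.0:EBX[14] = MPX */
            } else if ((entry->leaf == 0xDU) && (entry->subleaf == 0U)) {
                    entry->eax &= ~(3U << 3);      /* CPUID.0DH.0:EAX[4:3] = BNDREG/BNDCSR state */
            }
    }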

Tracked-On: #2821
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-03-20 13:07:31 +08:00
Mingqiang Chi
b24a8a0f59 hv:cleanup header file for guest folder
Clean up arch/x86/guest: only include the necessary
header files and don't include hypervisor.h.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
	modified:   arch/x86/guest/assign.c
	modified:   arch/x86/guest/ept.c
	modified:   arch/x86/guest/guest_memory.c
	modified:   arch/x86/guest/instr_emul.c
	modified:   arch/x86/guest/io_emul.c
	modified:   arch/x86/guest/pm.c
	modified:   arch/x86/guest/trusty.c
	modified:   arch/x86/guest/ucode.c
	modified:   arch/x86/guest/vcpu.c
	modified:   arch/x86/guest/vcpuid.c
	modified:   arch/x86/guest/virq.c
	modified:   arch/x86/guest/virtual_cr.c
	modified:   arch/x86/guest/vlapic.c
	modified:   arch/x86/guest/vm.c
	modified:   arch/x86/guest/vmcall.c
	modified:   arch/x86/guest/vmcs.c
	modified:   arch/x86/guest/vmexit.c
	modified:   arch/x86/guest/vmsr.c
	modified:   arch/x86/guest/vmtrr.c
	modified:   arch/x86/pm.c
	modified:   include/arch/x86/guest/assign.h
	modified:   include/arch/x86/guest/ept.h
	modified:   include/arch/x86/guest/guest_memory.h
	modified:   include/arch/x86/guest/instr_emul.h
	modified:   include/arch/x86/guest/io_emul.h
	modified:   include/arch/x86/guest/trusty.h
	modified:   include/arch/x86/guest/vcpu.h
	modified:   include/arch/x86/guest/vmcs.h
	modified:   include/arch/x86/io_req.h
	modified:   include/arch/x86/irq.h
	modified:   include/arch/x86/lapic.h
	modified:   include/arch/x86/mmu.h
	modified:   include/arch/x86/pgtable.h
	modified:   include/common/ptdev.h
	modified:   include/debug/console.h
2019-02-21 10:38:30 +08:00
Arindam Roy
de8d85753e HV: Modularize vtd.c to remove acrn_vm usage
This is a revised patch. It removes the usage
of the acrn_vm struct from inside vtd.c.
It also moves struct iommu_domain from vtd.c
into vtd.h.
It modifies the signature of init_iommu_domain
in order to remove the dependency on acrn_vm from
inside vtd.c.
Incorporated comments from Jason and Eddie:
changed the name of sos_vm_domain to
fallback_iommu_domain, and
removed any reference to sos_vm from the vtd.[c|h]
files, including comments.

Tracked-On: #2496
Signed-off-by: Arindam Roy <arindam.roy@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-02-06 08:53:46 +08:00