Commit Graph

954 Commits

Vijay Dhanraj
6e8b413689 HV: Add support to assign non-contiguous HPA regions for pre-launched VM
On some platforms, the HPA regions for a virtual machine cannot be
contiguous because of E820 reserved regions or PCI holes. In such
cases, pre-launched VMs need to be assigned non-contiguous memory
regions, and this patch addresses that.

To keep things simple, the current design makes the following assumptions:
    1. HPA2 is always placed after HPA1.
    2. HPA1 and HPA2 don't share a single ve820 entry
    (create multiple entries if needed, but never share one).
    3. Only 2 non-contiguous HPA regions are supported (this can be
    extended later to support more non-contiguous HPA regions).

Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Tracked-On: #4195
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-12-09 11:28:38 +08:00
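
A minimal sketch of what the guest ve820 could look like under the assumptions above; the entry layout mirrors the standard E820 format, and the addresses, sizes and names are purely illustrative, not taken from the patch:

    #include <stdint.h>

    #define E820_TYPE_RAM 1U

    struct e820_entry {
        uint64_t baseaddr;
        uint64_t length;
        uint32_t type;
    };

    /* Two separate RAM entries: HPA2 placed after HPA1, never sharing one entry. */
    static const struct e820_entry ve820_example[2] = {
        { .baseaddr = 0x00100000UL, .length = 0x20000000UL, .type = E820_TYPE_RAM }, /* HPA1 */
        { .baseaddr = 0x40000000UL, .length = 0x10000000UL, .type = E820_TYPE_RAM }, /* HPA2 */
    };
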
Conghui Chen
d48da2af3a hv: bugfix for debug commands with smp_call
With cpu sharing enabled, there can be more than one vcpu on one pcpu, so the
smp_call handler should switch to the target vcpu's vmcs first and then
get the info.

dump_vcpu_reg and dump_guest_mem must run on the right vmcs; otherwise
there will be a #GP error.

Renaming:
vcpu_dumpreg -> dump_vcpu_reg
switch_vmcs -> load_vmcs

Tracked-On: #4178
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-05 11:19:35 +08:00
Binbin Wu
3d412266bc hv: ept: build 4KB page mapping in EPT for RTVM for MCE on PSC
Determinism is important for an RTVM. The mitigation for MCE on
Page Size Change converts a large page into 4KB pages at runtime during
the vmexit triggered by an instruction fetch in the large page.
These vmexits increase non-determinism, which should be avoided for RTVMs.
This patch builds 4KB page mappings in EPT for RTVMs to avoid these vmexits.

Tracked-On: #4101
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-03 09:17:04 +08:00
Binbin Wu
192859ee02 hv: ept: apply MCE on page size change mitigation conditionally
Only apply the software workaround on the models that might be
affected by MCE on page size change. For models that are
known to be immune to the issue, the mitigation is turned off.

Atom processors are not affected by the issue.
Also check CPUID and the MSR to determine whether the model is immune to the issue:
the CPU is not vulnerable when both CPUID.(EAX=07H,ECX=0H).EDX[29] and
IA32_ARCH_CAPABILITIES[IF_PSCHANGE_MC_NO] are 1.

In other cases not listed above, the CPU may be vulnerable.

This patch also changes the macros for the IA32_ARCH_CAPABILITIES MSR bits to UL instead of U,
since the MSR is 64-bit.

Tracked-On: #4101
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-03 09:17:04 +08:00
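
A minimal C sketch of the immunity check described in the commit above, assuming ring-0 context; the function names are illustrative, not the ACRN implementation. The CPU is treated as not vulnerable when CPUID.(EAX=07H,ECX=0H).EDX[29] reports IA32_ARCH_CAPABILITIES support and the IF_PSCHANGE_MC_NO bit (bit 6) of that MSR is set:

    #include <stdbool.h>
    #include <stdint.h>

    #define MSR_IA32_ARCH_CAPABILITIES       0x0000010AU
    #define IA32_ARCH_CAP_IF_PSCHANGE_MC_NO  (1UL << 6U)
    #define CPUID_07H_EDX_ARCH_CAP           (1U << 29U)

    static inline void cpuid_subleaf(uint32_t leaf, uint32_t subleaf, uint32_t *eax,
                                     uint32_t *ebx, uint32_t *ecx, uint32_t *edx)
    {
        asm volatile("cpuid"
                     : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
                     : "a"(leaf), "c"(subleaf));
    }

    static inline uint64_t msr_read(uint32_t msr)
    {
        uint32_t lo, hi;

        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return (((uint64_t)hi << 32U) | lo);
    }

    static bool is_pschange_mc_vulnerable(void)
    {
        uint32_t eax, ebx, ecx, edx;

        cpuid_subleaf(7U, 0U, &eax, &ebx, &ecx, &edx);
        if ((edx & CPUID_07H_EDX_ARCH_CAP) != 0U) {
            if ((msr_read(MSR_IA32_ARCH_CAPABILITIES) & IA32_ARCH_CAP_IF_PSCHANGE_MC_NO) != 0UL) {
                return false;  /* architecturally immune */
            }
        }
        return true;  /* otherwise assume vulnerable (Atom excepted, per the commit) */
    }
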
Shuo A Liu
3cb32bb6e3 hv: make init_vmcs an event of VCPU
After changing init_vmcs to the smp_call approach and doing it before
launch_vcpu, it works with the noop scheduler. With a real sharing
scheduler, there is a problem:

   pcpu0                  pcpu1            pcpu1
 vmBvcpu0                vmAvcpu1         vmBvcpu1
                         vmentry
init_vmcs(vmBvcpu1) vmexit->do_init_vmcs
                    corrupt current vmcs
                        vmentry fail
launch_vcpu(vmBvcpu1)

This patch marks an event flag when vmcs init is requested for a specific vcpu.
When that vcpu is running and checks its pending events, it does init_vmcs first.

Tracked-On: #4178
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-02 16:20:43 +08:00
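
A small sketch of the event-flag idea described above, assuming a hypothetical per-vcpu pending-events word; the requester sets the flag, and the vcpu consumes it on its own pcpu right before vmentry, so no other vmcs gets clobbered:

    #include <stdbool.h>
    #include <stdint.h>

    #define VCPU_EVENT_INIT_VMCS (1U << 0U)

    struct vcpu_events {
        uint32_t pending;  /* set by the requester, consumed on the owner pcpu */
    };

    /* requester side, e.g. called from launch_vcpu on another pcpu */
    static void request_init_vmcs(struct vcpu_events *ev)
    {
        (void)__atomic_fetch_or(&ev->pending, VCPU_EVENT_INIT_VMCS, __ATOMIC_SEQ_CST);
    }

    /* owner side: runs on the vcpu's own pcpu while checking pending events */
    static bool need_init_vmcs(struct vcpu_events *ev)
    {
        uint32_t old = __atomic_fetch_and(&ev->pending, ~VCPU_EVENT_INIT_VMCS, __ATOMIC_SEQ_CST);

        return ((old & VCPU_EVENT_INIT_VMCS) != 0U);  /* if true, do init_vmcs here */
    }
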
Conghui Chen
e61412981d hv: support xsave in context switch
xsave area:
    legacy region: 512 bytes
    xsave header: 64 bytes
    extended region: < 3k bytes

So, pre-allocate a 4k area for xsave. Use the appropriate instruction to save or
restore the area according to the hardware xsave feature set.

Tracked-On: #4166
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-12-02 09:31:12 +08:00
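
A sketch of the save/restore step under the 4k pre-allocation above; the structure and wrappers are illustrative, not the ACRN code, and assume XSAVES/XRSTORS are available (the requested-feature bitmap goes in EDX:EAX and the area must be 64-byte aligned):

    #include <stdint.h>

    struct xsave_area {
        uint8_t data[4096];  /* legacy region (512) + header (64) + extended region */
    } __attribute__((aligned(64)));

    static inline void save_xsave_area(struct xsave_area *area, uint64_t mask)
    {
        asm volatile("xsaves %0"
                     : "+m"(*area)
                     : "a"((uint32_t)mask), "d"((uint32_t)(mask >> 32U)));
    }

    static inline void restore_xsave_area(struct xsave_area *area, uint64_t mask)
    {
        asm volatile("xrstors %0"
                     :
                     : "m"(*area), "a"((uint32_t)mask), "d"((uint32_t)(mask >> 32U)));
    }
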
Li Fei1
6ee076f7df hv: assign: rename ptirq_msix_remap to ptirq_prepare_msix_remap
ptirq_msix_remap doesn't do the real remap; that is done by vmsi_remap and
vmsix_remap_entry. ptirq_msix_remap only does the preparation.

Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
2019-11-29 08:53:07 +08:00
Alexander Merritt
ea131eea41 HV: add DRHD index to pci_pdev
We add a new member, pci_pdev.drhd_idx, associating the DRHD
(IOMMU) with this pdev, and a method to convert a device's pbdf to
this index by searching the pdev list.

Partial patch: drhd_idx initialization is handled in a subsequent patch.

Tracked-On: #4134
Signed-off-by: Alexander Merritt <alex.merritt@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-11-27 09:49:32 +08:00
Mingqiang Chi
bd0dbd274d hv:add dump_guest_mem
add a shell command to support dumping guest memory
e.g.
dump_guest_mem vm_id, gva, length

Tracked-On: #4144
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-11-26 10:58:19 +08:00
Sainath Grandhi
22a1bd6948 hv: Fix the definition of struct representing interrupt hw frame
In 64-bit mode, the processor pushes SS and RSP onto the stack unconditionally.
Also, when dumping the exception info, it makes more sense to dump
the RSP at the point of interrupt rather than the RSP after the
context (including GPRs) has been pushed.

Tracked-On: #4102
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-11-13 16:06:35 +08:00
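
A sketch of the 64-bit exception frame the commit refers to, for illustration only (field order as laid out in memory after a typical vector stub; not the ACRN definition). In 64-bit mode the CPU unconditionally pushes SS and RSP in addition to RFLAGS, CS and RIP:

    #include <stdint.h>

    struct intr_excp_frame {
        uint64_t vector;      /* pushed by the stub, not by hardware */
        uint64_t error_code;  /* pushed by hardware for some exceptions, else by the stub */
        uint64_t rip;
        uint64_t cs;
        uint64_t rflags;
        uint64_t rsp;         /* RSP at the point of interrupt, the value worth dumping */
        uint64_t ss;
    };
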
Binbin Wu
fa3888c12a hv: ept: disable execute right on large pages
Issue description:
-----------------
Machine Check Error on Page Size Change
An instruction fetch may cause a machine check error if the page size
and memory type were changed without invalidation on some
processors [1][2]. A malicious guest kernel could trigger this issue.

This issue applies to both the primary page tables and the extended page
tables (EPT); however, the primary page tables are controlled by the
hypervisor only. This patch mitigates the situation in EPT.

Mitigation details:
------------------
Implement non-execute huge pages in EPT.
This patch series clears the execute permission (bit 2) in the
EPT entries for large pages. When an EPT violation is triggered by
a guest instruction fetch, the hypervisor converts the large page into
4 KB pages, restores the execute permission, and then
re-executes the guest instruction.

The current patch turns on the mitigation by default.
Follow-up patches will conditionally turn the feature on/off
per processor model.

[1] Refer to erratum KBL002 in "7th Generation Intel Processor
Family and 8th Generation Intel Processor Family for U Quad Core
Platforms Specification Update"
https://www.intel.com/content/dam/www/public/us/en/documents/specification-updates/7th-gen-core-family-spec-update.pdf
[2] Refer to erratum SKL002 in "6th Generation Intel Processor
Family Specification Update"
https://www.intel.com/content/www/us/en/products/docs/processors/core/desktop-6th-gen-core-family-spec-update.html

Tracked-On: #4101
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-11-13 08:00:36 +08:00
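
A minimal sketch of the permission handling described above, with hypothetical helpers (EPT read/write/execute are bits 0/1/2); installing a large-page mapping drops execute, and the EPT-violation path would split the page into 4 KB entries with execute restored:

    #include <stdint.h>

    #define EPT_RD  (1UL << 0U)
    #define EPT_WR  (1UL << 1U)
    #define EPT_EXE (1UL << 2U)

    /* large-page leaf: strip execute so an instruction fetch faults into the hypervisor */
    static inline uint64_t large_page_prot(uint64_t prot)
    {
        return (prot & ~EPT_EXE);
    }

    /* 4 KB leaf created while handling the violation: execute permission restored */
    static inline uint64_t small_page_prot(uint64_t prot)
    {
        return (prot | EPT_EXE);
    }
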
Victor Sun
3411f00b5b HV: fix misra violation on platform clos array
MISRA C requires specified bounds for array declarations; the previous declaration
of platform_clos_array in board.h did not meet the requirement.

Tracked-On: #3987

Signed-off-by: Victor Sun <victor.sun@intel.com>
2019-11-08 16:40:14 +08:00
Victor Sun
9e92f3cdf5 HV: move dmar info definition to board.c
The DMAR info is board specific, so move the structure definition to board.c.
As a configuration file, the whole of board.c can be generated by the acrn-config
tool for each board.

Please note that we only provide DMAR info macros for the nuc7i7dnb board. For other
boards, ACPI_PARSE_ENABLED must be set to y in Kconfig to let the hypervisor parse
the DMAR info, or use the acrn-config tool to generate the DMAR info macros if the
user won't enable the ACPI parsing code for FuSa considerations.

The patch also moves the get_dmar_info() function to vtd.c, so dmar_info.c
can be removed.

Tracked-On: #3977

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-11-08 16:40:14 +08:00
Victor Sun
589be88cf6 HV: link CONFIG_MAX_IOMMU_NUM and MAX_DRHDS to DRHD_COUNT
The values of CONFIG_MAX_IOMMU_NUM and MAX_DRHDS are identical to DRHD_COUNT,
which is defined in the platform ACPI table, so remove CONFIG_MAX_IOMMU_NUM
from Kconfig and link these three macros together.

Tracked-On: #3977

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-11-08 16:40:14 +08:00
Li Fei1
620a1c5215 hv: mmu: rename e820 to hv_e820
Now the e820 structure stores the ACRN HV memory layout, not the physical memory layout.
Rename e820 to hv_e820 to show this explicitly.

Tracked-On: #4007
Signed-off-by: Li Fei1 <fei1.li@intel.com>
2019-11-07 08:47:02 +08:00
Li, Fei1
9d26dab6d6 hv: mmio: add a lock to protect mmio_node access
After adding PCI BAR remap support, an mmio_node may be unregistered while others
are accessing it. This patch adds a lock to protect mmio_node access.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-11-01 14:44:11 +08:00
Li, Fei1
2c158d5ad4 hv: io: add unregister_mmio_emulation_handler API
Since the guest can re-program a PCI device's MSI-X table BAR, we should add MMIO
emulation handler unregistration.
However, after adding the unregister_mmio_emulation_handler API, emul_mmio_regions
is no longer accurate. Replace it with max_emul_mmio_regions, which records
the max index of the emul_mmio_node.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-10-29 14:49:55 +08:00
Li, Fei1
dc1e2adaec hv: vpci: add PCI BAR re-program address check
In theory, the guest could re-program a PCI BAR to any address. However, the ACRN
hypervisor only supports EPT memory mapping in [0, top_address_space). So we need to
check whether the re-programmed PCI BAR address is within this range.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-29 14:49:55 +08:00
Shuo A Liu
5f8e7a6cb7 hv: sched: add kick_thread to support notification
kick means to notify one thread_object. If the target thread_object is
running, send an IPI to notify it; if the target thread_object is
runnable, trigger a reschedule on it.

Also add a kick_vcpu API in the vcpu layer to notify a vcpu.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-25 13:00:21 +08:00
Shuo A Liu
f04c491259 hv: sched: decouple scheduler from schedule framework
This patch decouples some scheduling logic and abstracts it into a scheduler.
Then we have a scheduler and a schedule framework. From a modularization
perspective, the schedule framework provides APIs for other layers to
use, and interacts with the scheduler through the scheduler interfaces.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-25 13:00:21 +08:00
Yonghua Huang
2e62ad9574 hv[v2]: remove registration of default port IO and MMIO handlers
- The default behaviors of the PIO & MMIO handlers are the same
   for all VMs, so there is no need to expose dedicated APIs to register
   default handlers for the SOS and pre-launched VMs.

Tracked-On: #3904
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2019-10-24 13:21:19 +08:00
Mingqiang Chi
d81872ba18 hv:Change the function parameter for init_ept_mem_ops
Currently the parameter of init_ept_mem_ops is
'struct acrn_vm *vm'; change it to
'struct memory_ops *mem_ops' and 'vm_id' to avoid
a reversed dependency: page.c is the hardware layer and the vm structure
belongs to its upper layer.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:48:30 +08:00
Shuo A Liu
dadcdcefa0 hv: sched: support vcpu context switch on one pcpu
To support cpu sharing, multiple vcpus can run on the same pcpu, so we need
to do the necessary vcpu context switch. This patch adds the following actions
to the context switch:
  1) fxsave/fxrstor;
  2) save/restore MSRs: MSR_IA32_STAR, MSR_IA32_LSTAR,
	MSR_IA32_FMASK, MSR_IA32_KERNEL_GS_BASE;
  3) switch vmcs.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
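
A sketch of actions 1) and 2) from the list above, with an illustrative context structure and MSR helpers (step 3, the VMCLEAR/VMPTRLD of the next vcpu's VMCS, is only noted in a comment); this is not the ACRN implementation:

    #include <stdint.h>

    #define MSR_IA32_STAR           0xC0000081U
    #define MSR_IA32_LSTAR          0xC0000082U
    #define MSR_IA32_FMASK          0xC0000084U
    #define MSR_IA32_KERNEL_GS_BASE 0xC0000102U

    struct fxsave_area {
        uint8_t data[512];
    } __attribute__((aligned(16)));

    struct vcpu_switch_ctx {
        struct fxsave_area fx;
        uint64_t star, lstar, fmask, kernel_gs_base;
    };

    static inline uint64_t msr_read(uint32_t msr)
    {
        uint32_t lo, hi;

        asm volatile("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return (((uint64_t)hi << 32U) | lo);
    }

    static inline void msr_write(uint32_t msr, uint64_t v)
    {
        asm volatile("wrmsr" : : "c"(msr), "a"((uint32_t)v), "d"((uint32_t)(v >> 32U)));
    }

    static void vcpu_switch_out(struct vcpu_switch_ctx *ctx)
    {
        asm volatile("fxsave %0" : "=m"(ctx->fx));          /* 1) fxsave */
        ctx->star = msr_read(MSR_IA32_STAR);                /* 2) save MSRs */
        ctx->lstar = msr_read(MSR_IA32_LSTAR);
        ctx->fmask = msr_read(MSR_IA32_FMASK);
        ctx->kernel_gs_base = msr_read(MSR_IA32_KERNEL_GS_BASE);
    }

    static void vcpu_switch_in(const struct vcpu_switch_ctx *ctx)
    {
        asm volatile("fxrstor %0" : : "m"(ctx->fx));        /* 1) fxrstor */
        msr_write(MSR_IA32_STAR, ctx->star);                /* 2) restore MSRs */
        msr_write(MSR_IA32_LSTAR, ctx->lstar);
        msr_write(MSR_IA32_FMASK, ctx->fmask);
        msr_write(MSR_IA32_KERNEL_GS_BASE, ctx->kernel_gs_base);
        /* 3) switch vmcs: VMCLEAR/VMPTRLD the next vcpu's VMCS (omitted here) */
    }
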
Shuo A Liu
7e66c0d4fa hv: sched: use get_running_vcpu to replace per_cpu vcpu with cpu sharing
With cpu sharing enabled, the per_cpu vcpu cannot work properly, as we might
have multiple vcpus running on one pcpu.
Add a schedule API, sched_get_current, to get the current thread_object on a
specific pcpu, and a vcpu API, get_running_vcpu, to get the corresponding
vcpu of a thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
891e46453d hv: sched: move pcpu_id from acrn_vcpu to thread_object
With cpu sharing enabled, we map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, we'd better hide
pcpu_id from acrn_vcpu and move it to thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Jian Jun Chen
1d194ede61 hv: support reference time enlightenment
Two time-related synthetic MSRs are implemented in this patch. Both of
them are partition-wide MSRs.
- HV_X64_MSR_TIME_REF_COUNT is read-only and is used to return the
  partition's reference counter value in 100ns units.
- HV_X64_MSR_REFERENCE_TSC is used to set/get the reference TSC page;
  a sequence number, an offset, and a multiplier are defined in this
  page by the hypervisor, and the guest OS can use them to calculate the
  normalized reference time since partition creation, in 100ns units.

Tracked-On: #3831
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-10-22 10:09:16 +08:00
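
A sketch of the reference TSC page layout and the guest-side formula defined by the Hyper-V TLFS (the type names here are illustrative): reference_time = ((tsc * tsc_scale) >> 64) + tsc_offset, in 100ns units, with the sequence number used to detect concurrent updates:

    #include <stdint.h>

    struct ref_tsc_page {
        uint32_t tsc_sequence;   /* guest rereads the page if this changes mid-calculation */
        uint32_t reserved;
        uint64_t tsc_scale;
        int64_t  tsc_offset;
    };

    static uint64_t ref_time_100ns(const struct ref_tsc_page *p, uint64_t tsc)
    {
        /* 64x64 -> 128-bit multiply, keep the high 64 bits */
        unsigned __int128 prod = (unsigned __int128)tsc * p->tsc_scale;

        return ((uint64_t)(prod >> 64) + (uint64_t)p->tsc_offset);
    }
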
wenwumax
048155d3d6 hv: support minimum set of TLFS
This patch implements the minimum set of TLFS functionality. It
includes 6 vCPUID leaves and 3 vMSRs.

- 0x40000001 Hypervisor Vendor-Neutral Interface Identification
- 0x40000002 Hypervisor System Identity
- 0x40000003 Hypervisor Feature Identification
- 0x40000004 Implementation Recommendations
- 0x40000005 Hypervisor Implementation Limits
- 0x40000006 Implementation Hardware Features

- HV_X64_MSR_GUEST_OS_ID Reporting the guest OS identity
- HV_X64_MSR_HYPERCALL Establishing the hypercall interface
- HV_X64_MSR_VP_INDEX Retrieve the vCPU ID from hypervisor

Tracked-On: #3832
Signed-off-by: wenwumax <wenwux.ma@intel.com>
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-10-22 10:09:16 +08:00
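
For reference, the three synthetic MSR numbers involved, as defined by the Hyper-V TLFS (the handler wiring itself is not shown here):

    #define HV_X64_MSR_GUEST_OS_ID 0x40000000U  /* guest OS identity */
    #define HV_X64_MSR_HYPERCALL   0x40000001U  /* hypercall page setup */
    #define HV_X64_MSR_VP_INDEX    0x40000002U  /* virtual processor index */
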
Mingqiang Chi
292d1a15f9 hv:Wrap some APIs related with guest pm
-- change some APIs to static
-- combine two APIs into init_guest_pm

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-10-21 10:13:02 +08:00
Shuo A Liu
837e4d8788 hv: sched: rename schedule related structs and vars
prepare_switch_out -> switch_out
prepare_switch_in -> switch_in
prepare_switch -> do_switch
run_thread_t -> thread_entry_t
sched_object -> thread_object
sched_object.thread -> thread_object.thread_entry
sched_obj -> thread_obj
sched_context -> sched_control
sched_ctx -> sched_ctl

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-16 10:25:53 +08:00
Binbin Wu
d19592a33e hv: vmsr: disable prmrr related msrs in vm
PRMRR related MSRs need to be configured by the platform BIOS / bootloader.
These settings are not allowed to be changed by the guest.
VMs currently have no requirement to access these MSRs, even when vSGX is enabled.
So, this patch disables the PRMRR related MSRs in VMs.

Tracked-On: #3739
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-10-15 15:13:11 +08:00
Mingqiang Chi
de0a5a48d6 hv:remove some unnecessary includes
--remove unnecessary includes
--remove unnecessary forward-declaration of 'struct vhm_request'

Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-10-15 14:40:39 +08:00
Shuo A Liu
1c526e6d16 hv: use vcpu_affinity[] in vm_config to support vcpu assignment
Add vcpu_affinity[] for each VM to indicate the assignment policy.
With it, pcpu_bitmap is not needed, so remove it from vm_config.
Instead, vcpu_affinity is a must for each VM.

This patch also adds some sanity checks of vcpu_affinity[]. Here are
the rules:
  1) only one bit can be set in each vcpu's vcpu_affinity.
  2) two vcpus in the same VM cannot be set with the same vcpu_affinity.
  3) vcpu_affinity cannot be set to a pcpu used by a pre-launched VM.

v4: config SDC with CONFIG_MAX_KATA_VM_NUM
v5: config SDC with CONFIG_MAX_PCPU_NUM

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-09-24 11:58:45 +08:00
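
A compact sketch of the three rules above, using a hypothetical helper over per-vcpu affinity bitmaps (not the actual configuration-sanitizing code):

    #include <stdbool.h>
    #include <stdint.h>

    static bool sanitize_vcpu_affinity(const uint64_t *affinity, uint32_t vcpu_num,
                                       uint64_t prelaunched_pcpu_bitmap)
    {
        uint64_t used = 0UL;

        for (uint32_t i = 0U; i < vcpu_num; i++) {
            if (__builtin_popcountll(affinity[i]) != 1) {
                return false;  /* rule 1: exactly one pcpu per vcpu */
            }
            if ((used & affinity[i]) != 0UL) {
                return false;  /* rule 2: no two vcpus of one VM on the same pcpu */
            }
            if ((affinity[i] & prelaunched_pcpu_bitmap) != 0UL) {
                return false;  /* rule 3: keep off pcpus owned by pre-launched VMs */
            }
            used |= affinity[i];
        }
        return true;
    }
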
Shuo A Liu
ca2540fe8c hv: return pre-defined vcpu_num from HV to upper layer
There is a plan to define each VM configuration statically in the HV and let
the DM just do VM creation and destruction. So the DM needs to get the vcpu_num
information at VM creation.

This patch returns the vcpu_num via the API parameter, and also initializes the
VMs' cpu_num for the existing scenarios.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-24 11:58:45 +08:00
Shuo A Liu
d588703976 hv: Add a helper to account bitmap weight
Sometimes we need to know the number of 1 bits in a bitmap. This patch provides
an inline function, bitmap_weight, for this.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-24 11:58:45 +08:00
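
A minimal sketch of such a helper (the actual ACRN version may rely on a popcount builtin or instruction instead):

    #include <stdint.h>

    static inline uint32_t bitmap_weight(uint64_t bits)
    {
        uint32_t weight = 0U;

        while (bits != 0UL) {
            bits &= (bits - 1UL);  /* clear the lowest set bit */
            weight++;
        }
        return weight;
    }
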
Mingqiang Chi
489937f7b8 hv:check pcpu numbers during init_pcpu_pre
It will panic if phys_cpu_num > CONFIG_MAX_PCPU_NUM
during init_pcpu_pre; after that there is no need to check it again.

Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-24 09:02:05 +08:00
Qi Yadong
3ebeecf060 hv: save/restore TSC in host's suspend/resume path
The TSC is reset to 0 when entering the suspend state on some platforms.
This breaks the secure timer check in the secure world, because the
secure world leverages the TSC as the source of its secure timer, which
must increase monotonically.

This patch saves/restores the TSC in the host suspend/resume path to
guarantee a monotonically increasing TSC.

Note: no timer should be set up before the TSC is resumed.

Tracked-On: #3697
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-19 13:50:50 +08:00
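
A sketch of the suspend/resume hooks described above, assuming ring-0 context and hypothetical function names; the saved value is written back through IA32_TIME_STAMP_COUNTER (MSR 0x10) on resume so the TSC keeps increasing monotonically:

    #include <stdint.h>

    #define MSR_IA32_TIME_STAMP_COUNTER 0x00000010U

    static uint64_t saved_tsc;

    static inline uint64_t rdtsc(void)
    {
        uint32_t lo, hi;

        asm volatile("rdtsc" : "=a"(lo), "=d"(hi));
        return (((uint64_t)hi << 32U) | lo);
    }

    static void host_suspend_save_tsc(void)
    {
        saved_tsc = rdtsc();
    }

    static void host_resume_restore_tsc(void)
    {
        /* write the saved value back; no timers should be armed before this point */
        asm volatile("wrmsr" : : "c"(MSR_IA32_TIME_STAMP_COUNTER),
                     "a"((uint32_t)saved_tsc), "d"((uint32_t)(saved_tsc >> 32U)));
    }
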
Mingqiang Chi
60adef33d3 hv:move down structures run_context and ext_context
Currently the structures (run_context & ext_context) are defined
in vcpu.h, but they are used in lower-layer modules (wakeup.S).
This patch moves the structures down from vcpu.h to cpu.h
to avoid a reversed dependency.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 14:51:36 +08:00
Mingqiang Chi
4f98cb03a7 hv:move down the structure intr_source
Currently the structures (union source & struct intr_source) are defined
in ptdev.h, but they are used in vtd.c and assign.c.
From the modularization perspective, vtd is the hardware layer and ptdev
is an upper-layer module, so this patch moves these structures down
to avoid a reversed dependency.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 14:51:36 +08:00
Shuo A Liu
4742d1c747 hv: ptdev: move softirq_dev_entry_list from vm structure to per_cpu region
Using a per_cpu list to record ptdev interrupts is more reasonable than
recording them per-VM. It makes dispatching such interrupts easier,
as we now do it in softirq, which runs following the interrupt context of
each pcpu.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 09:36:52 +08:00
Shuo A Liu
2cc45534d6 hv: move pcpu offline request and vm shutdown request from schedule
From a modularization perspective, it's not suitable to put pcpu- and vm-related
request operations in the schedule module. So move them to the pcpu and vm
modules respectively. Also change the need_offline return value to bool.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-09-16 09:36:52 +08:00
Yin Fengwei
6b6aa80600 hv: pm: fix coding style issue
This patch fixes the coding style issues introduced by the previous two
patches.

Tracked-On: #3564
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
2019-09-11 17:30:24 +08:00
Yin Fengwei
f039d75998 hv: pm: enhance platform S5 entering operation
Currently we assume that the SOS controls whether the platform
should enter S5 or not. So when the SOS tries to enter S5, we just
forward the S5 request to the native port, which makes sure platform
S5 is fully aligned with SOS S5.

With higher-severity guests introduced, this assumption no longer
holds. We need to extend the platform S5 process to handle
higher-severity guests:
  - For DM-launched RTVMs, we need to make sure these guests
    are off before putting the whole platform into S5.

  - For pre-launched VMs, there are two cases:
    * if the OS running in it supports S5, we wait for the guest to power off.
    * if the OS running in it doesn't support S5, we expect it
      to invoke a hypercall to notify the HV to shut it down.
      NOTE: this case is not supported yet and will be added in the
      future.

Tracked-On: #3564
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
2019-09-11 17:30:24 +08:00
Mingqiang Chi
c691c5bd3c hv:add volatile keyword for some variables
pcpu_active_bitmap is read continuously in wait_pcpus_offline(), and
acrn_vcpu->running is read continuously in pause_vcpu();
add the volatile keyword to ensure that such accesses are not
optimised away by the compiler.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-10 11:26:35 +08:00
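
A simplified illustration of why volatile matters here (not the ACRN code): without volatile, the compiler may hoist the load out of the loop and spin on a stale value.

    #include <stdint.h>

    static volatile uint64_t pcpu_active_bitmap;

    static void wait_pcpus_offline_example(uint64_t mask)
    {
        /* each iteration re-reads the bitmap from memory */
        while ((pcpu_active_bitmap & mask) != 0UL) {
            asm volatile("pause");
        }
    }
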
Mingqiang Chi
cd40980d5f hv:change function parameter for invept
Change the input parameter from vcpu to eptp in order to make this API more
generic, with no need to care whether it is the normal world or the secure world.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-09-05 16:32:30 +08:00
Binbin Wu
cd1ae7a89e hv: cat: isolate hypervisor from rtvm
Currently, the clos id of the cpu cores in VMX root mode is the same as in non-root mode.
For an RTVM, if the hypervisor shares the same clos id with non-root mode, cachelines may
be polluted by hypervisor code execution on vmexit.

The patch adds hv_clos in vm_configurations.c.
The hypervisor initializes the clos setting according to hv_clos during physical cpu core
initialization. For RTVMs, the MSR auto load/store areas are used to switch between the
different settings for VMX root/non-root mode.

Tracked-On: #2462
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-09-05 09:59:13 +08:00
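
For context, a sketch of how a clos value is encoded into IA32_PQR_ASSOC (MSR 0xC8F): the CLOS occupies bits 63:32 and the RMID the low bits. The commit switches this MSR between hv_clos and the guest's clos through the VMX MSR auto load/store areas; only the encoding is shown here:

    #include <stdint.h>

    #define MSR_IA32_PQR_ASSOC 0x00000C8FU

    static inline uint64_t pqr_assoc_value(uint32_t clos, uint32_t rmid)
    {
        return (((uint64_t)clos << 32U) | (uint64_t)rmid);
    }
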
Mingqiang Chi
38ca8db19f hv:tiny cleanup
-- remove some unnecessary includes
-- fix a typo
-- remove unnecessary void before launch_vms

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-05 09:58:47 +08:00
Yan, Like
3f84acda09 hv: add "invariant TSC" cap detection
The ACRN HV is designed/implemented assuming the "invariant TSC" capability, which wasn't checked at boot time.
This commit adds "invariant TSC" detection; ACRN fails to boot if the "invariant TSC" capability is not present.

Tracked-On: #3636
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-05 09:58:16 +08:00
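
A minimal sketch of the capability check: invariant TSC is reported by CPUID.80000007H:EDX[8] (the function name is illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    static bool has_invariant_tsc(void)
    {
        uint32_t eax, ebx, ecx, edx;

        asm volatile("cpuid"
                     : "=a"(eax), "=b"(ebx), "=c"(ecx), "=d"(edx)
                     : "a"(0x80000007U), "c"(0U));
        return ((edx & (1U << 8U)) != 0U);
    }
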
dongshen
295701cc55 hv: remove mptable code for pre-launched VMs
Now that ACPI is enabled for pre-launched VMs, we can remove all mptable code.

Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-08-29 10:12:25 +08:00
dongshen
b447ce3d86 hv: add ACPI support for pre-launched VMs
Statically define the per-VM RSDP/XSDT/MADT ACPI template tables in vacpi.c.
The RSDP/XSDT tables are copied to guest physical memory after their checksums
are calculated. For the MADT table, first fix up the processor id/lapic id in
its lapic subtable, then calculate the MADT table's checksum before it is
copied to guest physical memory.

Add an 8-bit checksum function in util.h.

Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-08-29 10:12:25 +08:00
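
A minimal sketch of an 8-bit ACPI-style checksum helper: it returns the byte that makes the sum of all table bytes zero modulo 256 (illustrative, not necessarily the util.h signature):

    #include <stddef.h>
    #include <stdint.h>

    static uint8_t calculate_checksum8(const void *buf, size_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint8_t sum = 0U;

        for (size_t i = 0U; i < len; i++) {
            sum += p[i];
        }
        return (uint8_t)(0x100U - sum);
    }
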
Binbin Wu
5c81659713 hv: ept: flush cache for modified ept entries
EPT tables are shared by the MMU and the IOMMU.
Some IOMMUs don't support page-walk coherency, so the cpu cache of EPT entries
should be flushed to memory after modifications, so that the modifications
are visible to the IOMMUs.

This patch adds a new interface to flush the cache of modified EPT entries.
There are different implementations for EPT/PPT entries:
- For PPT, there is no need to flush the cpu cache after an update.
- For EPT, iommu_flush_cache needs to be called to make the modifications visible
to the IOMMUs.

Tracked-On: #3607
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
2019-08-26 10:47:17 +08:00
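
A simplified sketch of flushing the cache lines covering modified EPT entries for IOMMUs without page-walk coherency (the real interface and cache-line-size detection live in the vtd/iommu code):

    #include <stddef.h>
    #include <stdint.h>

    #define CACHE_LINE_SIZE 64UL

    static void flush_cache_range(const void *base, size_t size)
    {
        const uint8_t *p = (const uint8_t *)base;

        for (size_t off = 0UL; off < size; off += CACHE_LINE_SIZE) {
            asm volatile("clflush (%0)" : : "r"(p + off) : "memory");
        }
        asm volatile("mfence" ::: "memory");
    }
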