Commit Graph

1277 Commits

Author SHA1 Message Date
Yonghua Huang
2e62ad9574 hv[v2]: remove registration of default port IO and MMIO handlers
- The default behaviors of PIO & MMIO handlers are the same
   for all VMs, so there is no need to expose dedicated APIs to register
   default handlers for the SOS and pre-launched VMs.

Tracked-On: #3904
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2019-10-24 13:21:19 +08:00
Mingqiang Chi
d81872ba18 hv:Change the function parameter for init_ept_mem_ops
Currently the parameter of init_ept_mem_ops is
'struct acrn_vm *vm'; change it to
'struct memory_ops *mem_ops' and 'vm_id' to avoid
a reversed dependency: page.c is the hardware layer and the vm structure
is upper-layer stuff.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:48:30 +08:00
Shuo A Liu
0f70a5ca3a hv: sched: decouple idle stuff from schedule module
Let the init thread end with run_idle_thread(), then the idle thread takes over
and starts to do scheduling.
Change enter_guest_mode() to init_guest_mode() since run_idle_thread() is moved
out of it. Also add run_thread() in the schedule module to run a
thread_object's thread loop directly.

rename: switch_to_idle -> run_idle_thread

Tracked-On: #3813
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
27163df9b1 hv: sched: add sleep/wake for thread object
Sleeping a thread_object prevents it from being scheduled;
waking a thread_object is the opposite operation.
This patch also adds notify_mode in thread_object to indicate how to
deliver the request.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
9b8c6e6a90 hv: sched: add status for thread_object
Now, we have three valid statuses for a thread_object:
	THREAD_STS_RUNNING,
	THREAD_STS_RUNNABLE,
	THREAD_STS_BLOCKED.
This patch also provides several helpers to check the thread's status and
a status-set wrapper function.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
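
Illustrative sketch (not ACRN's actual code) of the three-state thread status and the helpers described in the commit above; the struct layout and helper names beyond the status constants are assumptions:

    #include <stdbool.h>

    enum thread_object_state {
        THREAD_STS_RUNNING,
        THREAD_STS_RUNNABLE,
        THREAD_STS_BLOCKED,
    };

    struct thread_object {
        enum thread_object_state status;
        /* other scheduling fields omitted */
    };

    /* status-set wrapper */
    static inline void set_thread_status(struct thread_object *obj,
                                         enum thread_object_state sts)
    {
        obj->status = sts;
    }

    /* status check helpers */
    static inline bool is_running(const struct thread_object *obj)
    {
        return obj->status == THREAD_STS_RUNNING;
    }

    static inline bool is_runnable(const struct thread_object *obj)
    {
        return obj->status == THREAD_STS_RUNNABLE;
    }

    static inline bool is_blocked(const struct thread_object *obj)
    {
        return obj->status == THREAD_STS_BLOCKED;
    }
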
Shuo A Liu
fafd5cf063 hv: sched: move schedule initialization to each pcpu init
The schedule infrastructure is per-pcpu, so move its initialization into each
pcpu's initialization.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
dadcdcefa0 hv: sched: support vcpu context switch on one pcpu
To support cpu sharing, multiple vcpus can run on the same pcpu, so we need to do
the necessary vcpu context switch. This patch adds the below actions to the context
switch:
  1) fxsave/fxrstor;
  2) save/restore MSRs: MSR_IA32_STAR, MSR_IA32_LSTAR,
	MSR_IA32_FMASK, MSR_IA32_KERNEL_GS_BASE;
  3) switch vmcs.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
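
A minimal sketch of the MSR and FPU save/restore part of such a context switch; the MSR indices are the architectural ones named in the commit above, but the struct and function names are assumptions and the VMCS switch is only indicated by a comment:

    #include <stdint.h>

    #define MSR_IA32_STAR            0xC0000081U
    #define MSR_IA32_LSTAR           0xC0000082U
    #define MSR_IA32_FMASK           0xC0000084U
    #define MSR_IA32_KERNEL_GS_BASE  0xC0000102U

    static inline uint64_t msr_read(uint32_t msr)
    {
        uint32_t lo, hi;
        asm volatile ("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
        return ((uint64_t)hi << 32U) | lo;
    }

    static inline void msr_write(uint32_t msr, uint64_t val)
    {
        asm volatile ("wrmsr" : : "c"(msr), "a"((uint32_t)val),
                      "d"((uint32_t)(val >> 32U)));
    }

    /* per-vCPU context that must survive being scheduled out */
    struct vcpu_ctx {
        uint8_t  fxstore[512] __attribute__((aligned(16))); /* FXSAVE area */
        uint64_t star, lstar, fmask, kernel_gs_base;
    };

    static void ctx_switch_out(struct vcpu_ctx *c)
    {
        asm volatile ("fxsave %0" : "=m"(c->fxstore));
        c->star           = msr_read(MSR_IA32_STAR);
        c->lstar          = msr_read(MSR_IA32_LSTAR);
        c->fmask          = msr_read(MSR_IA32_FMASK);
        c->kernel_gs_base = msr_read(MSR_IA32_KERNEL_GS_BASE);
        /* the VMCS of the outgoing vcpu would be vmclear'ed here */
    }

    static void ctx_switch_in(struct vcpu_ctx *c)
    {
        asm volatile ("fxrstor %0" : : "m"(c->fxstore));
        msr_write(MSR_IA32_STAR, c->star);
        msr_write(MSR_IA32_LSTAR, c->lstar);
        msr_write(MSR_IA32_FMASK, c->fmask);
        msr_write(MSR_IA32_KERNEL_GS_BASE, c->kernel_gs_base);
        /* the VMCS of the incoming vcpu would be vmptrld'ed here */
    }
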
Shuo A Liu
7e66c0d4fa hv: sched: use get_running_vcpu to replace per_cpu vcpu with cpu sharing
With cpu sharing enabled, the per_cpu vcpu cannot work properly as we might
have multiple vcpus running on one pcpu.
Add a schedule API sched_get_current to get the current thread_object on a
specific pcpu, and a vcpu API get_running_vcpu to get the corresponding
vcpu of a thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Shuo A Liu
891e46453d hv: sched: move pcpu_id from acrn_vcpu to thread_object
With cpu sharing enabled, we map acrn_vcpu to thread_object
in scheduling. From a modularization perspective, we'd better hide the
pcpu_id in acrn_vcpu and move it to thread_object.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-23 12:47:08 +08:00
Jian Jun Chen
1d194ede61 hv: support reference time enlightenment
Two time-related synthetic MSRs are implemented in this patch. Both of
them are partition-wide MSRs.
- HV_X64_MSR_TIME_REF_COUNT is read only and returns the
  partition's reference counter value in 100ns units.
- HV_X64_MSR_REFERENCE_TSC is used to set/get the reference TSC page;
  a sequence number, an offset and a multiplier are defined in this
  page by the hypervisor, and the guest OS can use them to calculate the
  normalized reference time since partition creation, in 100ns units.

Tracked-On: #3831
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-10-22 10:09:16 +08:00
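
A guest-side sketch of the reference-TSC-page calculation the commit above refers to (reference time = high 64 bits of tsc * scale, plus offset, in 100ns units); the struct name and field layout follow the TLFS description but are written here as assumptions, not ACRN code:

    #include <stdint.h>

    /* layout of the reference TSC page as described by the TLFS (sketch) */
    struct ref_tsc_page {
        volatile uint32_t tsc_sequence;   /* 0 means the page is not valid yet */
        uint32_t reserved;
        volatile uint64_t tsc_scale;      /* multiplier */
        volatile int64_t  tsc_offset;     /* offset */
    };

    /* reference time in 100ns units since partition creation */
    static uint64_t read_ref_time(const struct ref_tsc_page *p)
    {
        uint32_t seq;
        uint64_t tsc, scale;
        int64_t  off;

        /* retry if the hypervisor updates the page while we read it;
         * a real guest falls back to HV_X64_MSR_TIME_REF_COUNT when
         * tsc_sequence stays 0 */
        do {
            seq   = p->tsc_sequence;
            tsc   = __builtin_ia32_rdtsc();
            scale = p->tsc_scale;
            off   = p->tsc_offset;
        } while (seq != p->tsc_sequence);

        /* 64x64 -> take the high 64 bits of the 128-bit product, add offset */
        return (uint64_t)(((unsigned __int128)tsc * scale) >> 64) + (uint64_t)off;
    }
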
wenwumax
048155d3d6 hv: support minimum set of TLFS
This patch implements the minimum set of TLFS functionality. It
includes 6 vCPUID leaves and 3 vMSRs.

- 0x40000001 Hypervisor Vendor-Neutral Interface Identification
- 0x40000002 Hypervisor System Identity
- 0x40000003 Hypervisor Feature Identification
- 0x40000004 Implementation Recommendations
- 0x40000005 Hypervisor Implementation Limits
- 0x40000006 Implementation Hardware Features

- HV_X64_MSR_GUEST_OS_ID Reporting the guest OS identity
- HV_X64_MSR_HYPERCALL Establishing the hypercall interface
- HV_X64_MSR_VP_INDEX Retrieve the vCPU ID from hypervisor

Tracked-On: #3832
Signed-off-by: wenwumax <wenwux.ma@intel.com>
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-10-22 10:09:16 +08:00
Mingqiang Chi
292d1a15f9 hv:Wrap some APIs related with guest pm
-- change some APIs to static
-- combine two APIs into init_guest_pm

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-10-21 10:13:02 +08:00
Shuo A Liu
de157ab96c hv: sched: remove runqueue from current schedule logic
Currently we are using a 1:1 mapping for pcpu:vcpu, so we don't need
a runqueue. Remove it as preparation work for abstracting the scheduler
framework.

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-16 10:25:53 +08:00
Shuo A Liu
837e4d8788 hv: sched: rename schedule related structs and vars
prepare_switch_out -> switch_out
prepare_switch_in -> switch_in
prepare_switch -> do_switch
run_thread_t -> thread_entry_t
sched_object -> thread_object
sched_object.thread -> thread_object.thread_entry
sched_obj -> thread_obj
sched_context -> sched_control
sched_ctx -> sched_ctl

Tracked-On: #3813
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-10-16 10:25:53 +08:00
Binbin Wu
d19592a33e hv: vmsr: disable prmrr related msrs in vm
PRMRR-related MSRs need to be configured by the platform BIOS / bootloader.
These settings are not allowed to be changed by the guest.
VMs currently have no requirement to access these MSRs even when vSGX is enabled.
So, this patch disables PRMRR-related MSRs in the VM.

Tracked-On: #3739
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-10-15 15:13:11 +08:00
Mingqiang Chi
de0a5a48d6 hv:remove some unnecessary includes
--remove unnecessary includes
--remove unnecessary forward-declaration for 'struct vhm_request'

Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-10-15 14:40:39 +08:00
Li, Fei1
0dac373d93 hv: vpci: remove pci_msi_cap in pci_pdev
The MSI Message Address and Message Data have no valid data after power-on, so
there's no need to initialize them by reading the data from the physical PCI configuration
space.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-10-14 15:09:03 +08:00
Shiqing Gao
c8bcab9006 hv: pci: update function "bdf_is_equal"
- update the function argument type to union
  Declaring the argument as a pointer is not necessary since the function
  only does a comparison.

Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
2019-09-25 13:45:39 +08:00
Shiqing Gao
658fff27b4 hv: pci: update "union pci_bdf"
- add one more field in "union pci_bdf"
- remove following interfaces:
  * pci_bus
  * pci_slot
  * pci_func
  * pci_devfn

Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
2019-09-25 13:45:39 +08:00
Shuo A Liu
9a23ec6b5a hv: remove unused pcpu assignment functions
As we introduced vcpu_affinity[] to assign vcpus to different pcpus, the
old policy and functions are not needed. Remove them.

Tracked-On: #3663
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-24 11:58:45 +08:00
Shuo A Liu
1c526e6d16 hv: use vcpu_affinity[] in vm_config to support vcpu assignment
Add vcpu_affinity[] for each VM to indicate the assignment policy.
With it, pcpu_bitmap is not needed, so remove it from vm_config.
Instead, vcpu_affinity is a must for each VM.

This patch also adds some sanity checks of vcpu_affinity[] (see the sketch
after this entry). Here are the rules:
  1) only one bit can be set in each vcpu's vcpu_affinity.
  2) two vcpus in the same VM cannot be set with the same vcpu_affinity.
  3) vcpu_affinity cannot be set to a pcpu used by a pre-launched VM.

v4: config SDC with CONFIG_MAX_KATA_VM_NUM
v5: config SDC with CONFIG_MAX_PCPU_NUM

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-09-24 11:58:45 +08:00
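
A sketch of how the three rules above could be checked; the function and parameter names are invented for illustration and are not the commit's actual code:

    #include <stdbool.h>
    #include <stdint.h>

    /* affinity[i] is the pcpu bitmap for vcpu i; pre_launched_pcpus is the
     * bitmap of pcpus already owned by pre-launched VMs. */
    static bool check_vcpu_affinity(const uint64_t affinity[], uint32_t vcpu_num,
                                    uint64_t pre_launched_pcpus)
    {
        uint64_t used = 0UL;

        for (uint32_t i = 0U; i < vcpu_num; i++) {
            /* rule 1: exactly one bit per vcpu */
            if (__builtin_popcountll(affinity[i]) != 1) {
                return false;
            }
            /* rule 2: no two vcpus of the same VM share a pcpu */
            if ((used & affinity[i]) != 0UL) {
                return false;
            }
            /* rule 3: cannot land on a pcpu used by a pre-launched VM */
            if ((affinity[i] & pre_launched_pcpus) != 0UL) {
                return false;
            }
            used |= affinity[i];
        }
        return true;
    }
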
Shuo A Liu
ca2540fe8c hv: return pre-defined vcpu_num from HV to upper layer
There is a plan to define each VM configuration statically in the HV and let
the DM just do VM creation and destruction. So the DM needs to get the vcpu_num
information when creating a VM.

This patch returns the vcpu_num via the API param, and also initializes the
VMs' cpu_num for existing scenarios.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-24 11:58:45 +08:00
Shuo A Liu
d588703976 hv: Add a helper to account bitmap weight
Sometimes we need to know the number of set bits in a bitmap. This patch provides
an inline function bitmap_weight for it.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-24 11:58:45 +08:00
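
The helper described above is essentially a population count; a minimal sketch using the compiler builtin (the real implementation may differ):

    #include <stdint.h>

    /* number of bits set in a 64-bit bitmap */
    static inline uint32_t bitmap_weight(uint64_t bits)
    {
        return (uint32_t)__builtin_popcountll(bits);
    }

    /* e.g. bitmap_weight(vcpu_affinity) tells how many pcpus a VM may use */
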
Shuo A Liu
f4ce9cc4a2 hv: make hypercall HC_CREATE_VCPU empty
Now, we create vcpus while the VM is being created in the hypervisor. The
create-vcpu hypercall will not be used any more. For compatibility,
keep the hypercall HC_CREATE_VCPU and make it do nothing.

v4: Don't remove HC_CREATE_VCPU hypercall, let it do nothing.

Tracked-On: #3663
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-09-24 11:58:45 +08:00
Mingqiang Chi
489937f7b8 hv:check pcpu numbers during init_pcpu_pre
It will panic if phys_cpu_num > CONFIG_MAX_PCPU_NUM
during init_pcpu_pre; after that there is no need to check it again.

Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-24 09:02:05 +08:00
Li, Fei1
535a83e24b hv: vpci: refine vPCI BAR initialization
Initialize the vBAR configuration space when doing vPCI BAR initialization. At this time,
we access the physical device as needed, so there is no need to cache the physical PCI device
BAR information beforehand.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-09-23 11:16:48 +08:00
Qi Yadong
3ebeecf060 hv: save/restore TSC in host's suspend/resume path
The TSC is reset to 0 when entering the suspend state on some platforms.
This will fail the secure timer checking in the secure world, because the
secure world leverages the TSC as the source of its secure timer, which should
increase monotonically.

This patch saves/restores the TSC in the host suspend/resume path to guarantee
a monotonically increasing TSC.

Note: no timer should be set up before the TSC is resumed.

Tracked-On: #3697
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-19 13:50:50 +08:00
Manisha
489a0e645d SEP/SOCWATCH change variable names
To follow the ACRN coding guideline that an identifier cannot be reused,
the following member names in the profiling_info_wrapper struct are modified:
pmu_sample -> p_sample
sep_state -> s_state
sw_msr_op_info -> sw_msr_info
vm_switch_trace -> vm_trace

Tracked-On: #3598
Acked-by: min.yeol.lim@intel.com
Signed-off-by: Manisha <manisha.chinthapally@intel.com>
2019-09-16 15:54:34 +08:00
Mingqiang Chi
60adef33d3 hv:move down structures run_context and ext_context
Now the structures (run_context & ext_context) are defined
in vcpu.h and they are used in a lower-layer module (wakeup.S).
This patch moves the structures down from vcpu.h to cpu.h
to avoid a reversed dependency.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 14:51:36 +08:00
Mingqiang Chi
4f98cb03a7 hv:move down the structure intr_source
Now the structures (union source & struct intr_source) are defined
in ptdev.h and they are used in vtd.c and assign.c.
vtd is the hardware layer and ptdev is the upper-layer module
from the modularization perspective, so this patch moves these
structures down to avoid a reversed dependency.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 14:51:36 +08:00
Shuo A Liu
4742d1c747 hv: ptdev: move softirq_dev_entry_list from vm structure to per_cpu region
Using a per_cpu list to record ptdev interrupts is more reasonable than
recording them per-vm. It makes dispatching such interrupts easier,
as we now do it in the softirq that follows the interrupt context of
each pcpu.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-16 09:36:52 +08:00
Shuo A Liu
2cc45534d6 hv: move pcpu offline request and vm shutdown request from schedule
From a modularization perspective, it's not suitable to put pcpu- and vm-related
request operations in the schedule module. So move them to the pcpu and vm
modules respectively. Also change the need_offline return value to bool.

Tracked-On: #3663
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Yu Wang <yu1.wang@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
2019-09-16 09:36:52 +08:00
Yin Fengwei
6b6aa80600 hv: pm: fix coding style issue
This patch fixes the coding style issues introduced by the previous two
patches.

Tracked-On: #3564
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
2019-09-11 17:30:24 +08:00
Yin Fengwei
f039d75998 hv: pm: enhance platform S5 entering operation
Now, we have the assumption that the SOS controls whether the platform
should enter S5 or not. So when the SOS tries to enter S5, we just
forward the S5 request to the native port, which makes sure platform
S5 is totally aligned with SOS S5.

With higher-severity guests introduced, this assumption is not
true any more. We need to extend the platform S5 process to
handle higher-severity guests:
  - For DM-launched RTVMs, we need to make sure these guests
    are off before putting the whole platform into S5.

  - For pre-launched VMs, there are two cases:
    * if the OS running in it supports S5, we wait for the guest to be off.
    * if the OS running in it doesn't support S5, we expect it
      to invoke a hypercall to notify the HV to shut it down.
      NOTE: this case is not supported yet. Will add it in the
      future.

Tracked-On: #3564
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
2019-09-11 17:30:24 +08:00
Li, Fei1
6ebc22210b hv: vPCI: cache PCI BAR physical base address
The PCI BAR physical base address will never change. Cache it to avoid calculating
it every time we access it.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
2019-09-11 13:17:42 +08:00
Mingqiang Chi
c691c5bd3c hv:add volatile keyword for some variables
pcpu_active_bitmap is read continuously in wait_pcpus_offline() and
acrn_vcpu->running is read continuously in pause_vcpu();
add the volatile keyword to ensure that such accesses are not
optimised away by the compiler.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-10 11:26:35 +08:00
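
A small illustration of why volatile matters for this kind of busy-wait; the variable and function names mirror the commit above but the bodies are a sketch, not the actual code:

    #include <stdint.h>

    /* without 'volatile' the compiler may hoist the load out of the loop
     * and spin forever on a stale copy of the bitmap */
    static volatile uint64_t pcpu_active_bitmap;

    static void wait_pcpus_offline(uint64_t mask)
    {
        while ((pcpu_active_bitmap & mask) != 0UL) {
            asm volatile ("pause");   /* relax the spin */
        }
    }
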
Mingqiang Chi
cd40980d5f hv:change function parameter for invept
Change the input parameter from vcpu to eptp in order to make this API
more generic; it no longer needs to care whether it is the normal world or the secure world.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-09-05 16:32:30 +08:00
Binbin Wu
cd1ae7a89e hv: cat: isolate hypervisor from rtvm
Currently, the clos id of the cpu cores in VMX root mode is the same as in non-root mode.
For an RTVM, if the hypervisor shares the same clos id with non-root mode, the cacheline may
be polluted by hypervisor code execution on vmexit.

The patch adds hv_clos in vm_configurations.c.
The hypervisor initializes the clos setting according to hv_clos during physical cpu core initialization.
For RTVMs, the MSR auto load/store areas are used to switch between the different settings for
VMX root/non-root mode.

Tracked-On: #2462
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-09-05 09:59:13 +08:00
Mingqiang Chi
38ca8db19f hv:tiny cleanup
-- remove some unnecessary includes
-- fix a typo
-- remove unnecessary void before launch_vms

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
2019-09-05 09:58:47 +08:00
Yan, Like
3f84acda09 hv: add "invariant TSC" cap detection
ACRN HV is designed/implemented assuming the "invariant TSC" capability, which wasn't checked at boot time.
This commit adds the "invariant TSC" detection; ACRN fails to boot if the "invariant TSC" capability is absent.

Tracked-On: #3636
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-09-05 09:58:16 +08:00
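
Invariant TSC support is reported by CPUID leaf 0x80000007, EDX bit 8; a minimal boot-time check could look like the sketch below (the wrapper names are assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    static inline void cpuid_raw(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                                 uint32_t *ecx, uint32_t *edx)
    {
        asm volatile ("cpuid"
                      : "=a"(*eax), "=b"(*ebx), "=c"(*ecx), "=d"(*edx)
                      : "a"(leaf), "c"(0U));
    }

    static bool has_invariant_tsc(void)
    {
        uint32_t eax, ebx, ecx, edx;

        cpuid_raw(0x80000007U, &eax, &ebx, &ecx, &edx);
        return (edx & (1U << 8)) != 0U;   /* EDX[8]: invariant TSC */
    }
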
dongshen
295701cc55 hv: remove mptable code for pre-launched VMs
Now that ACPI is enabled for pre-launched VMs, we can remove all mptable code.

Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-08-29 10:12:25 +08:00
dongshen
b447ce3d86 hv: add ACPI support for pre-launched VMs
Statically define the per-VM RSDP/XSDT/MADT ACPI template tables in vacpi.c.
The RSDP/XSDT tables are copied to guest physical memory after their checksums are
calculated. For the MADT table, first fix up the processor id/lapic id in its lapic
subtable, then calculate the MADT table's checksum before it is copied to
guest physical memory.

Add an 8-bit checksum function in util.h.

Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-08-29 10:12:25 +08:00
dongshen
96b422ce9d hv: create 8-bit sum function
Move 8-bit sum code to a separate new function calculate_sum8() in
util.h and replace the old code with a call to calculate_sum8()

Minor code cleanup in found_rsdp() to make it more readable. Both break and
continue statements are used in a single for loop, changed to only use break
statement to make the logic simpler.

Fixed some coding style issues reported by checkpatch.pl for file
hypervisor/boot/acpi_base.c

Tracked-On: #3601
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-08-29 10:12:25 +08:00
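
An ACPI-style 8-bit checksum makes all bytes of a table sum to zero modulo 256. A sketch of the calculate_sum8() helper named in the commit above and of how a checksum field could be filled (the second helper is invented for illustration):

    #include <stdint.h>
    #include <stddef.h>

    /* sum of all bytes in the buffer, truncated to 8 bits */
    static uint8_t calculate_sum8(const void *buf, uint32_t len)
    {
        const uint8_t *p = (const uint8_t *)buf;
        uint8_t sum = 0U;

        for (uint32_t i = 0U; i < len; i++) {
            sum += p[i];
        }
        return sum;
    }

    /* an ACPI table is valid when all bytes, checksum field included, sum to 0 */
    static void set_acpi_checksum(uint8_t *table, uint32_t len, size_t cksum_off)
    {
        table[cksum_off] = 0U;
        table[cksum_off] = (uint8_t)(0x100U - calculate_sum8(table, len));
    }
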
Binbin Wu
5c81659713 hv: ept: flush cache for modified ept entries
EPT tables are shared by the MMU and the IOMMU.
Some IOMMUs don't support page-walk coherency, so the cpu cache of EPT entries
should be flushed to memory after modifications, so that the modifications
are visible to the IOMMUs.

This patch adds a new interface to flush the cache of modified EPT entries.
There are different implementations for EPT/PPT entries:
- For PPT, there is no need to flush the cpu cache after an update.
- For EPT, call iommu_flush_cache to make the modifications visible
to the IOMMUs.

Tracked-On: #3607
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
2019-08-26 10:47:17 +08:00
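
A sketch of the idea: flush the cache line holding a modified EPT entry only when at least one active IOMMU lacks page-walk coherency. Names and structure are assumptions, not the actual interface:

    #include <stdbool.h>
    #include <stdint.h>

    /* set at IOMMU init: true if any active IOMMU lacks page-walk coherency */
    static bool iommu_need_cache_flush;

    static inline void clflush(const volatile void *p)
    {
        asm volatile ("clflush %0" : : "m"(*(const volatile char *)p));
    }

    /* flush the cache line holding a modified EPT entry so the IOMMU sees it */
    static void flush_ept_entry(uint64_t *pte)
    {
        if (iommu_need_cache_flush) {
            clflush(pte);
        }
        /* PPT entries are only walked by the CPU, so no flush is needed there */
    }
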
Binbin Wu
2abd8b34ef hv: vtd: export iommu_flush_cache
VT-d shares the EPT tables as its second-level translation tables.
For IOMMUs that don't support page-walk coherency, the cpu cache should
be flushed for IOMMU EPT entries that are modified.

In the current implementation, the EPT tables translating from GPA to HPA
for EPT/IOMMU are not modified after the VM is created, so cpu cache invalidation is
done once per VM before starting execution of the VM.
However, this may change; runtime EPT modification is possible.

When the cpu cache of an EPT entry is invalidated at modification time, there is no need
to invalidate the cpu cache globally per VM.

This patch exports iommu_flush_cache for EPT entry cache invalidation operations.
- IOMMUs share the same copy of the EPT table; the cpu cache should be flushed if any
  active IOMMU doesn't support page-walk coherency.
- In the context of ACRN, the GPA to HPA mapping relationship is not changed after the
  VM is created, so skip flushing the iotlb to avoid a potential performance penalty.

Tracked-On: #3607
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
2019-08-26 10:47:17 +08:00
Mingqiang Chi
2310d99ebf hv: cleanup vmcs.h
-- move 'RFLAGS_AC' to cpu.h
-- move 'VMX_SUPPORT_UNRESTRICTED_GUEST' to msr.h
   and rename it to 'MSR_IA32_MISC_UNRESTRICTED_GUEST'
-- move 'get_vcpu_mode' to vcpu.h
-- remove deadcode 'vmx_eoi_exit()'

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-08-22 14:13:15 +08:00
Mingqiang Chi
bd09f471a6 hv:move some APIs related host reset to pm.c
Move some data structures and APIs related to host reset
from vm_reset.c to pm.c; these are not related to the guest.

Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-08-22 14:09:18 +08:00
Yin Fengwei
6beb34c3cb vm_load: update init gdt preparation
Now, we use the native gdt saved in the boot context for the guest and assume
it can be put at the same address in the guest. But that may not be true
after pre-launched VMs are introduced: the gdt for the guest could
be overwritten by guest images.

This patch makes the 32-bit protected mode boot not use the saved boot context.
Instead, we use predefined vcpu_regs values for the protected-mode guest to
initialize the guest BSP registers, and copy the pre-defined gdt table
to a safe place in guest memory to avoid the gdt table being overwritten by
guest images.

Tracked-On: #3532

Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-08-20 09:22:20 +08:00
Yonghua Huang
700a37856f hv: remove 'flags' field in struct vm_io_range
Currently, 'flags' is defined and set but never used
  in the flow of handling an I/O request afterwards.

Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-08-19 10:19:54 +08:00
Yonghua Huang
f791574f0e hv: refine the function pointer type of port I/O request handlers
In the definition of the port I/O handler, the struct acrn_vm * pointer
 is redundant as input: the acrn_vm context is already linked
 into struct acrn_vcpu * via vcpu->vm, so 'vm' is not required as input.

 This patch removes the '*vm' argument from 'io_read_fn_t' &
 'io_write_fn_t' and uses '*vcpu' for them instead.

Tracked-On: #861
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-08-16 11:44:27 +08:00
huihuang.shi
f147c388a5 hv: fix Violations touched ACRN Coding Guidelines
Fix violations of the following rules:
1. Cast operation on a constant value.
2. Signed/unsigned implicit conversion.
3. Return value unused.

V1->V2:
1. The bitmap API returns a boolean type, so there is no need to check "!= 0"; deleted.
2. The behaviors ~(uint32_t)X and (uint32_t)~X are not defined in the ACRN hypervisor Coding Guidelines,
so that change was removed.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2019-08-15 09:47:11 +08:00
Shiqing Gao
062fe19800 hv: move vmx_rdmsr_pat/vmx_wrmsr_pat from vmcs.c to vmsr.c
This patch moves vmx_rdmsr_pat/vmx_wrmsr_pat from vmcs.c to vmsr.c,
so that these two functions would become internal functions inside
vmsr.c.
This approach improves the modularity.

v1 -> v2:
 * remove 'vmx_rdmsr_pat'
 * rename 'vmx_wrmsr_pat' with 'write_pat_msr'

Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-08-14 10:51:35 +08:00
Li, Fei1
90480db553 hv: vpci: split vPCI device from SOS for post-launched VM
When assigning a PCI PTDev from the SOS to a post-launched VM, use a pointer to
the real struct pci_vdev. When the post-launched VM accesses its PTDev configuration space in
the SOS address space, use this real struct pci_vdev.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-08-12 10:00:44 +08:00
Li, Fei1
4c8e60f1d0 hv: vpci: add each vdev_ops for each emulated PCI device
Add a field (vdev_ops) in struct acrn_vm_pci_dev_config to configure the PCI CFG
operations for an emulated PCI device. Use pci_pt_dev_ops for PCI_DEV_TYPE_PTDEV
by default if there's no such configuration.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-08-09 14:19:49 +08:00
Li, Fei1
ff54fa2325 hv: vpci: add emulated PCI device configure for SOS
Add the emulated PCI device configuration for the SOS, in preparation for adding support
for customizing special PCI operations for each emulated PCI device.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-08-09 14:19:49 +08:00
Li, Fei1
599a058403 hv: vpci: refine init_vdevs
Now almost all the vPCI device information can be obtained from the PCI device configuration
in the VM configuration, so init_vdevs can make things easier.
Also rename init_vdevs to vpci_init_vdevs and init_vdev to vpci_init_vdevs to avoid
MISRA-C violations.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
2019-08-06 11:51:02 +08:00
Li, Fei1
eb21f205e4 hv: vm_config: build pci device configure for SOS
Align the SOS PCI device configuration with the pre-launched VM and filter pre-launched VMs'
PCI PT devices out of the SOS PCI device configuration.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-08-06 11:51:02 +08:00
Li, Fei1
adbaaaf6cb hv: vpci: rename ptdev_config to pci_dev_config
pci_dev_config in the VM configuration stores all the PCI devices for a VM. Besides PT
devices, there are other types of devices, like the virtual host bridge. So rename ptdev
to pci_dev for this configuration.

Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-08-06 11:51:02 +08:00
Victor Sun
363daf6aa2 HV: return extended info in vCPUID leaf 0x40000001
In some cases, a guest needs to get more information in a virtual environment,
like guest capabilities. Basically this could be done by hypercalls, but
hypercalls are designed for trusted VMs/the SOS VM, so we need a mechanism to report
this information for normal VMs. In this patch, vCPUID leaf 0x40000001 is
used to satisfy this need by reporting some extended information for the guest
via CPUID.

Tracked-On: #3498

Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-31 14:13:39 +08:00
Victor Sun
555a03db99 HV: add board specific cpu state table to support Px Cx
Currently the set of Px/Cx-supported SoCs listed in cpu_state_tbl.c is limited,
and it is not a wise option to build a huge state table database to support
Px/Cx for other SoCs. This patch gives an alternative solution: build a board-specific
cpu state table in board.c, which can be auto-generated by an offline
tool, so that CPU Px/Cx can be enabled on a customer board.

The hypervisor will search the cpu state table in cpu_state_tbl[] first; if nothing is
found it then checks board_cpu_state_tbl. If no matching cpu state table is found,
Px/Cx will not be supported.

Tracked-On: #3477

Signed-off-by: Victor Sun <victor.sun@intel.com>
2019-07-29 20:25:16 +08:00
Li, Fei1
4a27d08360 hv: schedule: schedule to idle after SOS resumes from S3
After "commit f0e1c5e init vcpu host stack when reset vcpu", SOS resume from S3
wants to schedule to vcpu_thread, not to the point where the SOS entered S3. So we should
schedule to idle first and then reschedule to execute vcpu_thread.

Tracked-On: #3387
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-29 09:53:18 +08:00
Zhao Yakui
baf7d90fdf HV: Refine the usage of monitor/mwait to avoid the possible lockup
Based on SDM Vol.2, monitor uses the RAX register to set up the address
monitored by HW, while mwait uses rax/rcx as hints for the state the processor
will enter. It is incorrect to use the same value for monitor and mwait;
the ecx in mwait specifies the optional extensions.

At the same time it needs to check whether the value at the monitored address
is already the expected one before entering mwait. Otherwise a lockup is possible.

V1->V2: Add asm wrappers for monitor/mwait to avoid the mixed usage of
inline assembly in wait_sync_change.

v2->v3: Remove the unnecessary line break in asm_monitor/asm_mwait.
       Follow Fei's comment to remove the mwait ecx hint setting that
treats interrupts as break events. It only needs to check whether the
value of psync_change is already the expected one.

Tracked-On: #3442
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-07-26 10:55:58 +08:00
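
A sketch of the asm wrappers and the re-check-before-mwait pattern described in the commit above; the wrapper names match the commit text, the wait function body and hint values are assumptions:

    #include <stdint.h>

    static inline void asm_monitor(volatile const uint64_t *addr,
                                   uint64_t ecx, uint64_t edx)
    {
        asm volatile ("monitor" : : "a"(addr), "c"(ecx), "d"(edx));
    }

    static inline void asm_mwait(uint64_t eax, uint64_t ecx)
    {
        asm volatile ("mwait" : : "a"(eax), "c"(ecx));
    }

    /* wait until *sync reaches 'wanted'; re-check after monitor to close the
     * window where the value changed before mwait, which could otherwise
     * lock up waiting for a write that already happened */
    static void wait_sync_change(volatile const uint64_t *sync, uint64_t wanted)
    {
        while (*sync != wanted) {
            asm_monitor(sync, 0UL, 0UL);
            if (*sync != wanted) {
                asm_mwait(0UL, 0UL);
            }
        }
    }
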
Li, Fei1
11cf9a4a8a hv: mmu: add hpa2hva_early API for early boot
When hpa/hva translation is needed before init_paging, we need hpa2hva_early and
hva2hpa_early, since init_paging may modify hva2hpa so that it is no longer an identity mapping.

Tracked-On: #2987
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-26 09:10:06 +08:00
Li, Fei1
cc47dbe769 hv: uart: enable early boot uart
Enable uart as early as possible to make things easier for debugging.
After this we could use printf to output information to the uart. As for
pr_xxx APIs, they start to work when init_logmsg is called.

Tracked-On: #2987
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-26 09:10:06 +08:00
Yin Fengwei
ce4d71e038 vpci: fix coding style issue
This is a followup patch to fix the coding style issue introduced
by commit "c2d25aafb889ade954af8795df2405a94024d860":
  The unmodified pointer should be defined as const

Also addressed one comment from Fei to use reversed function calls
in vpci_init_pt_dev and vpci_deinit_pt_dev.

Tracked-On: #3241
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-22 16:21:40 +08:00
Yin, Fengwei
11e67f1c4a softirq: move softirq from hv_main to interrupt context
softirq shouldn't be bound to the vcpu thread. One issue with that
is the shell (based on a timer) can't work if we don't start any guest.

This change also tries its best to make the softirq handler run
with irq enabled.

Also update the irq disable/enable in the vmexit handler to align
with the usage in vcpu_thread.

Tracked-On: #3387
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
2019-07-22 09:55:06 +08:00
Yan, Like
97f6097f04 hv: add ops to vlapic structure
This commit adds ops to the vlapic structure, and adds an *ops parameter to vlapic_reset().
At vlapic reset, the ops is set to the global apicv_ops, and may be assigned
to other ops later.

Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-19 16:47:06 +08:00
Victor Sun
600aa8ea5a HV: change param type of init_pcpu_pre
When initializing a secondary pcpu, passing INVALID_CPU_ID as the param of init_pcpu_pre()
looks weird, so change the param type to bool to represent whether the pcpu is
the BSP or an AP.

Tracked-On: #3420

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-17 13:48:00 +08:00
Li, Fei1
e352553e1a hv: atomic: remove atomic load/store and set/clear
On the x86 architecture, a word/doubleword/quadword read/write aligned on its boundary
is atomic, so we may remove atomic load/store.
As for atomic set/clear, using bitmap_set/clear seems more reasonable. After replacing
them all, we can remove them too.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-17 09:20:54 +08:00
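
A sketch of the replacement pattern: lock-prefixed bitmap set/clear for the read-modify-write cases, while plain aligned loads and stores need no wrapper. The helper names follow the commit text; the implementation below uses compiler builtins and is only illustrative:

    #include <stdint.h>

    /* atomically set / clear one bit in a 64-bit bitmap; aligned 64-bit
     * loads and stores are already atomic on x86, which is why dedicated
     * atomic_load/atomic_store wrappers can be dropped */
    static inline void bitmap_set(uint32_t nr, volatile uint64_t *addr)
    {
        __atomic_fetch_or(addr, 1UL << nr, __ATOMIC_SEQ_CST);
    }

    static inline void bitmap_clear(uint32_t nr, volatile uint64_t *addr)
    {
        __atomic_fetch_and(addr, ~(1UL << nr), __ATOMIC_SEQ_CST);
    }
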
Li, Fei1
b39526f759 hv: schedule: vCPU schedule state setting don't need to be atomic
vCPU schedule state change is under schedule lock protection. So there's no need
to be atomic.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-17 09:20:54 +08:00
Li, Fei1
0eb0854858 hv: schedule: minor fix about the return type of need_offline
The ACRN Coding Guidelines require type conversions to be explicit. However,
there's no need for that in this case since we can return bool directly.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-17 09:20:54 +08:00
Li, Fei1
e69b3dcf67 hv: schedule: remove runqueue_lock in sched_context
Now sched_object and sched_context are protected by scheduler_lock. There's no
need to use runqueue_lock to protect the schedule runqueue if we have no plan to
support schedule migration.

Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
2019-07-17 09:20:54 +08:00
Yin Fengwei
009a16bd7d vhostbridge: update vhostbridge to use vdev_ops
And use vhostbridge for both SOS and pre-launched VM.

Tracked-On: #3241
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-15 10:35:47 +08:00
Yin Fengwei
c2d25aafb8 pci_vdev: add pci_vdev_ops to pci_vdev
It will be used to define the pci vdev's own ops. High-level APIs
will call these ops instead of invoking device-specific functions
directly.

Tracked-On: #3241
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-15 10:35:47 +08:00
Victor Sun
5b1852e482 HV: add kata support on sdc scenario
In the current design, the devicemodel passes a VM UUID to create a VM, and the hypervisor
checks whether the UUID matches the one in the VM configurations.

A Kata container maintains a few UUIDs to let ACRN launch the VM, so the
hypervisor needs to add these UUIDs to the VM configurations for Kata to run.
In the hypercall hcall_get_platform_info(), the hypervisor reports the
maximum Kata container number it supports. This patch adds a Kconfig option
to indicate the maximum Kata container number that the SOS can support.

At the current stage, only one Kata container is supported by the SOS on the SDC scenario,
so add one UUID for the Kata container in the SDC VM configuration. If we want to
support Kata on other scenarios in the future, we can follow the example
of this patch.

Tracked-On: #3402

Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-12 16:34:31 +08:00
Huihuang Shi
2ec1694901 HV: fix sbuf "Casting operation to a pointer"
The ACRN Coding Guidelines require that pointers of two different types not
be converted to each other, except via void *.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-11 13:57:21 +08:00
Huihuang Shi
9063504bc8 HV: ve820 fix "Casting operation to a pointer"
The ACRN Coding Guidelines require that pointers of two different types not
be converted to each other, except via void *.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
2019-07-11 13:57:21 +08:00
Huihuang Shi
714162fb8b HV: fix violations touched type conversion
The ACRN Coding Guidelines require type conversions to be explicit.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-11 09:16:09 +08:00
Li, Fei1
5d6c9c33ca hv: vlapic: clear up where needs atomic operation in vLAPIC
In almost all cases, a vLAPIC will only be accessed by its related vCPU, and there's no
synchronization issue then. However, other vCPUs can deliver interrupts
to the current vCPU; in that case the IRR (for the APICv basic situation) or the PIR
(for the APICv advanced situation), and the TMR in both cases, can be accessed by more
than one vCPU simultaneously. So operations on the IRR or PIR should be atomic
and visible to other vCPUs immediately. In another case, the vLAPIC can be accessed
by another vCPU when creating or resetting the vCPU, which is supposed to happen
sequentially.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-11 09:15:47 +08:00
Li, Fei1
5930e96d12 hv: io_req: refine vhm_req status setting
Although the vhm_req status can be updated by the HV and the DM on different CPUs, each
only changes the vhm_req status when it detects that the status has been updated by
the other. So the vhm_req status will not be misconfigured. However, before the HV
sets the vhm_req status to REQ_STATE_PENDING, the vhm_req buffer filling should be visible
to the DM. Add a write memory barrier to guarantee this.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-11 09:15:47 +08:00
Yonghua Huang
1ea3052f80 HV: check security mitigation support for SSBD
The hypervisor exposes the mitigation technique for Speculative
Store Bypass (SSB) to guests and allows a guest to determine
whether to enable SSBD mitigation by providing direct guest
access to IA32_SPEC_CTRL.

Before that, the hypervisor should check SSB mitigation support
on the underlying processor; this patch adds that capability check.

Tracked-On: #3385
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-07-10 10:55:34 +08:00
Huihuang Shi
564a60120a HV: fix vuart.c "Parameter needs to add const"
The ACRN Coding Guidelines require that a parameter be declared const when it
is not modified in its function or in recursive function calls.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-09 10:36:44 +08:00
Shuo A Liu
4129b72b2e hv: remove unnecessary cancel_event_injection related stuff
cancel_event_injection is not needed any more if we do 'schedule' prior to
acrn_handle_pending_request. Commit "921288a6672: hv: fix interrupt
lost when do acrn_handle_pending_request twice" brought 'schedule'
forward, so remove the cancel_event_injection related stuff.

Tracked-On: #3374
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
2019-07-09 09:23:12 +08:00
Huihuang Shi
9a7043e83f HV: remove instr_emul.c dead code
ACRN Coding guidelines requires no dead code.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
2019-07-09 09:22:53 +08:00
Yonghua Huang
3164f3976a hv: Mitigation for CPU MDS vulnerabilities.
Microarchitectural Data Sampling (MDS) is a hardware vulnerability
 which allows unprivileged speculative access to data which is available
 in various CPU internal buffers.

 1. Mitigation on ACRN:
    1) Microcode update is required.
    2) Clear CPU internal buffers (store buffer, load buffer and
       load port) on VM entry if the current CPU is affected by MDS,
       to avoid any information leakage to the guest through the above buffers.
    3) Mitigation is not needed if the ARCH_CAP_MDS_NO bit (bit 5)
       is set in the IA32_ARCH_CAPABILITIES MSR (10AH); in this case the
       current processor is not affected by the MDS vulnerability. In other
       cases mitigation for MDS is required.

 2. Methods to clear CPU buffers (microcode update is required):
    1) L1D cache flush
    2) VERW instruction
    Either of the above operations will trigger clearing of all
    CPU internal buffers if this CPU is affected by MDS.
    The above mechanism is enumerated by:
    CPUID.(EAX=7H, ECX=0):EDX[MD_CLEAR=10].

 3. Mitigation details on ACRN:
    if (processor is affected by MDS)
	    if (processor is not affected by L1TF OR
		  L1D flush is not launched on VM Entry)
		    execute VERW instruction when VM entry.
	    endif
    endif

 4. Reference:
    Deep Dive: Intel Analysis of Microarchitectural Data Sampling
    https://software.intel.com/security-software-guidance/insights/
    deep-dive-intel-analysis-microarchitectural-data-sampling

    Deep Dive: CPUID Enumeration and Architectural MSRs
    https://software.intel.com/security-software-guidance/insights/
    deep-dive-cpuid-enumeration-and-architectural-msrs

Tracked-On: #3317
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Reviewed-by: Jason CJ Chen <jason.cj.chen@intel.com>
2019-07-05 15:17:27 +08:00
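
A sketch of the VERW-based buffer clear on VM entry described in the commit above; the decision flag and the 0x10 selector value are placeholders, not ACRN's actual code:

    #include <stdbool.h>
    #include <stdint.h>

    /* decided once at boot from CPUID.(EAX=7,ECX=0):EDX[10] (MD_CLEAR) and
     * IA32_ARCH_CAPABILITIES (MDS_NO, plus whether an L1D flush already
     * runs on VM entry) */
    static bool need_verw_on_vmentry;

    static void cpu_buffers_clear_on_vmentry(void)
    {
        /* VERW with any valid, writable data-segment selector triggers the
         * microcode-assisted clearing of store/load buffers and load ports;
         * 0x10 is only a placeholder for the hypervisor's DS selector */
        static const uint16_t ds_sel = 0x10U;

        if (need_verw_on_vmentry) {
            asm volatile ("verw %0" : : "m"(ds_sel) : "cc");
        }
    }
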
Yonghua Huang
076a30b555 hv: refine security capability detection function.
The ACRN hypervisor always prints the CPU microcode update
 warning message on the KBL NUC platform, even after the
 BIOS was updated to the latest version.

 'check_cpu_security_cap()' returns false if there is
 no ARCH_CAPABILITIES MSR support on the current platform,
 but this MSR may not be available on some platforms.
 This patch removes that pre-condition.

Tracked-On: #3317
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Jason CJ Chen <jason.cj.chen@intel.com>
2019-07-05 15:17:27 +08:00
dongshen
012ec75163 HV: rename vbdf in struct pci_vdev to bdf
Rename vbdf to bdf for the following reasons:
Use the same coding style as struct pci_pdev, as pci_pdev uses bdf instead of pbdf.

pci_vdev already implies its bdf is virtual, so there is no need to prefix bdf with the v
prefix (redundant).

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-04 11:25:01 +08:00
dongshen
af163d579f HV: add support for 64-bit bar emulation
Enable 64-bit bar emulation: if the pbar is of type PCIBAR_MEM64, the vbar will also be
of type PCIBAR_MEM64 instead of PCIBAR_MEM32.

With the 64-bit bar emulation code in place, we can remove enum pci_bar_type from
struct pci_bar, as the bar type can be derived from struct pci_bar's reg member
by using the pci_get_bar_type function.

Rename functions:
  pci_base_from_size_mask --> git_size_masked_bar_base

Remove unused functions

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-07-04 11:25:01 +08:00
Li, Fei1
09a63560f4 hv: vm_manage: minor fix about triple_fault_shutdown_vm
The current implementation triggers the shutdown-vm request on the BSP VCPU of the VM,
not on the VCPU that traps out because of the triple fault. However, if the BSP VCPU of the VM
is handling another IO emulation, it may overwrite the triple fault IO request in
the vhm_request_buffer in function acrn_insert_request. The atomic operation of
get_vhm_req_state can't guarantee that the vhm_request_buffer will not be accessed by another
IO request if it is not running on the corresponding VCPU. So the triple fault
shutdown-VM IO request should be triggered on the VCPU that traps out because of the triple
fault exception.
Besides, rt_vm_pm1a_io_write already does the right thing, so we shouldn't do it in
triple_fault_shutdown_vm.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-03 17:44:45 +08:00
Li, Fei1
647797ff1a hv: ptdev: refine ptdev active flag
Since the spinlock ptdev_lock is used to protect the ptdev entry, there's no need to
use an atomic operation to protect the ptdev active flag, in spite of it being wrongly used
to protect the ptdev entry. Also refine the active flag data type to bool.

Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
2019-07-03 17:44:45 +08:00
Binbin Wu
4a22801dd1 hv: ept: mask EPT leaf entry bit 52 to bit 63 in gpa2hpa
According to the SDM, bit N (physical address width) to bit 63 should be masked when calculating
the host page frame number.
Currently, the hypervisor doesn't set any of these bits, so gpa2hpa works as expected.
However, if any of these bits is set, gpa2hpa returns a wrong value.

The hypervisor never sets bit N to bit 51 (reserved bits); for simplicity, just mask bit 52 to bit 63.

Tracked-On: #3352
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-07-03 09:39:41 +08:00
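
A sketch of the masking described above when extracting the host address from an EPT leaf entry; the mask keeps bits 12..51 (page frame number) and drops bits 0..11 and 52..63. The function name is invented for illustration:

    #include <stdint.h>

    #define PAGE_SHIFT    12U
    /* EPT leaf entry bits 52..63 are ignored/software-available; bits 12..51
     * hold the host page frame number */
    #define EPT_PFN_MASK  0x000FFFFFFFFFF000UL

    static uint64_t ept_entry_to_hpa(uint64_t leaf_entry, uint64_t gpa)
    {
        uint64_t page_off = gpa & ((1UL << PAGE_SHIFT) - 1UL);

        return (leaf_entry & EPT_PFN_MASK) | page_off;
    }
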
dongshen
ae996250c1 HV: extract functions from code to improve code reuse and readability
Create 2 functions from code:
 pci_base_from_size_mask
 vdev_pt_remap_mem_vbar

Use vbar in place of vdev->bar[idx] by setting vbar to &vdev->bar[idx]

Change base to uint64_t to accommodate 64-bit MMIO bar size masking in
subsequent commits

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-07-01 09:57:05 +08:00
dongshen
84e09a2246 HV: remove uint64_t base from struct pci_bar
At this point, uint64_t base in struct pci_bar is not used by any code, so we
can remove it.

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-01 09:57:05 +08:00
dongshen
0247c0b942 Hv: minor cosmetic fix
Define/Use variable in place of code to improve readability:

Define new local variable struct pci_bar *vbar, and use vbar-> in place of vdev->bar[idx].

Define new local variable uint64_t vbar_base in init_vdev_pt

Rename uint64_t vbar[PCI_BAR_COUNT] of struct acrn_vm_pci_ptdev_config to uint64_t vbar_base[PCI_BAR_COUNT]

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-07-01 09:57:05 +08:00
dongshen
ed1bdcbbdf HV: add uint64_t bar_base_mapped[PCI_BAR_COUNT] to struct pci_vdev
To remember the previously mapped/registered vbar base.

For the following reasons:
 register_mmio_emulation_handler() will throw an error if the same addr_lo is
 already registered

 We are going to remove the base member from struct pci_bar, so we cannot use vdev->bar[idx].base
 in the code any more

 In subsequent commits, we will assume vdev_pt_remap_generic_mem_vbar() is called after a new
 vbar base is set, mainly because of 64-bit mmio bar handling, so we need a
 separate bar_base_mapped[] array to track the previously mapped vbar bases.

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-07-01 09:57:05 +08:00
Huihuang Shi
198e01716a HV:fix vcpu violations
vcpu.c was never scanned because the scan tool crashed on it.
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
    1. No brackets for then/else.
    2. Function return value not checked.
    3. Signed/unsigned conversion without cast.

V1->V2:
    change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
    add "void *" prefix on the 1st parameter of memset.

Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-06-28 13:28:26 +08:00
Conghui Chen
4d88b2bb65 hv: bugfix for sbuf reset
The sbuf is allocated for each pcpu by a hypercall from the SOS. Before launching
a Guest OS, the script offlines cpus, which triggers a vcpu reset and
then resets the sbuf pointer. But the sbuf is only initialized once by the SOS, so the
cpus for the Guest OS have no sbuf to use. Thus, when running 'acrntrace' on the SOS,
there is no trace data for the Guest OS.
To fix the issue, only reset the sbuf for the SOS.

Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
2019-06-27 15:40:19 +08:00
dongshen
f7c4d2b5c4 HV: add uint32_t nr_bars to struct pci_vdev
Use nr_bars instead of PCI_BAR_COUNT to check the bar access offset.
While a normal pci device has at most 6 bars, a pci bridge only has 2 bars,
so for a normal pci device, pci cfg offsets 0x10-0x24 are for bar access,
but for a pci bridge, only 0x10-0x14 are for bar access (0x18-0x24 are
for other accesses).

Rename function:
 pci_bar_access --> is_bar_offset

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-06-27 15:35:16 +08:00
dongshen
208b1d3664 HV: add uint32_t nr_bars to struct struct pci_pdev to track # of bars
nr_bars in struct pci_pdev is used to store the actual # of bars
(6 for a normal pci device and 2 for a pci bridge); nr_bars will be used in subsequent
patches.

Use uint32_t for bar related variables (bar index, etc) to unify the bar
related code (no casting between uint32_t and uint8_t)

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
2019-06-27 15:35:16 +08:00
dongshen
95d10d8921 HV: add union pci_bar and is_64bit_high to struct pci_bar
union pci_bar uses bit fields and follows the PCI bar spec definition to define the
bar flags portion and base address; this keeps the same hardware format for the vbar
register. The base/type members of union pci_bar are still kept to minimize code changes
in one patch; they will be removed in subsequent patches.

Define a pci_pdev_get_bar_base() function to extract the bar base address given a 32-bit raw
bar value.

Define a utility function pci_get_bar_type() to extract the bar type
from a raw bar value to simplify the code, as this function will be used in multiple
places later on: it can be called on the reg->value stored in struct
pci_bar to derive the bar type.

Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
2019-06-27 15:35:16 +08:00
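
A sketch of a bar register kept in hardware format with bit fields, plus type and base extraction helpers as described in the commit above; the exact field and enum names are assumptions, not the commit's actual definitions:

    #include <stdint.h>

    enum pci_bar_type {
        PCIBAR_NONE,
        PCIBAR_IO_SPACE,
        PCIBAR_MEM32,
        PCIBAR_MEM64,
    };

    /* one BAR register, kept in the same format the hardware uses */
    union pci_bar_reg {
        uint32_t value;
        struct {
            uint32_t is_io    : 1;   /* 0 = memory, 1 = I/O */
            uint32_t type     : 2;   /* 00b = 32-bit, 10b = 64-bit */
            uint32_t prefetch : 1;
            uint32_t base     : 28;  /* upper bits of the base address */
        } mem;
    };

    static enum pci_bar_type pci_get_bar_type(uint32_t val)
    {
        union pci_bar_reg reg = { .value = val };

        if (reg.mem.is_io != 0U) {
            return PCIBAR_IO_SPACE;
        }
        return (reg.mem.type == 2U) ? PCIBAR_MEM64 : PCIBAR_MEM32;
    }

    static uint32_t pci_pdev_get_bar_base(uint32_t raw)
    {
        /* for a memory BAR the base address lives in bits 31..4 */
        return raw & 0xFFFFFFF0U;
    }
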