-- move vm_state_lock to another place in the vm structure to avoid
memory waste caused by page alignment.
-- remove the memset from create_vm
-- explicitly set max_emul_mmio_regions and vcpuid_entry_nr to 0
inside create_vm to avoid use without initialization.
-- rename max_emul_mmio_regions to nr_emul_mmio_regions
v1->v2:
add deinit_emul_io in shutdown_vm
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Grandhi, Sainath <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Previously the CPU affinity of the SOS VM was initialized at runtime
during the sanitize_vm_config() stage, following the policy that all
physical CPUs except those occupied by pre-launched VMs belong to the
SOS_VM. Now the SOS CPU affinity is initialized at build time, with the
assumption that its validity is guaranteed before runtime.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Previously we had a complicated check mechanism for platform_acpi_info.h,
which is supposed to be generated by the acrn-config tool. Given that all
configurations should be generated by acrn-config before building the acrn
hypervisor, this check is not needed anymore.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The SDC scenario configurations will no longer be validated, so remove
them from the build makefile.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the MSI Capability Structure, bit 7 (64-bit address capable) of
MSICTRL is RO.
Tracked-On: #5125
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Li Fei <fei1.li@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the BLOCKED status is only set in switch_out during a context
switch, which means only a running thread can be changed to BLOCKED; a
runnable thread cannot. This leads to a deadloop in sleep_thread_sync.
To solve the problem, sleep_thread now sets the status to BLOCKED
directly when the thread's original status is RUNNABLE.
Tracked-On: #5115
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
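A minimal sketch of the idea described above, under the assumption of
simplified names (thread_object, the status values and the be_blocking
flag are illustrative, not the exact ACRN definitions):

#include <stdbool.h>

/* Illustrative sketch only, not the actual ACRN implementation. */
enum thread_object_state {
	THREAD_STS_RUNNING,
	THREAD_STS_RUNNABLE,
	THREAD_STS_BLOCKED,
};

struct thread_object {
	enum thread_object_state status;
	bool be_blocking;	/* set while waiting for switch_out to block */
};

static void sleep_thread(struct thread_object *obj)
{
	if (obj->status == THREAD_STS_RUNNABLE) {
		/* A runnable thread is not on any pCPU, so it would never
		 * reach switch_out; block it directly to avoid the deadloop
		 * in sleep_thread_sync. */
		obj->status = THREAD_STS_BLOCKED;
	} else {
		/* A running thread is blocked later, in switch_out. */
		obj->be_blocking = true;
	}
}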
When a VM reads the pre-SRIOV header in the extended capability space of
a passthrough device, only emulate the read if SRIOV is hidden.
Writes to the pre-SRIOV header are ignored, so the write path needs no fix.
Tracked-On: #5085
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
The old-layout configuration source located in
hypervisor/arch/x86/configs/ is abandoned; remove it.
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
The make command is the same as with the old configs layout:
under the acrn-hypervisor folder:
make hypervisor BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
under the hypervisor folder:
make BOARD=xxx SCENARIO=xxx [TARGET_DIR=xxx] [RELEASE=x]
If the BOARD/SCENARIO parameters are not specified, the defaults are:
BOARD=nuc7i7dnb SCENARIO=industry
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are 3 kinds of configurations in the ACRN hypervisor source code:
the hypervisor overall settings, per-board settings and scenario-specific
per-VM settings. Currently Kconfig acts as the hypervisor overall settings
and its source is located at "hypervisor/arch/x86/configs/$(BOARD).config";
per-board configs are located in the "hypervisor/arch/x86/configs/$(BOARD)"
folder; scenario-specific per-VM configs are located in the
"hypervisor/scenarios/$(SCENARIO)" folder.
This layout couples board configs and VM configs tightly: the board-specific
Kconfig file and misc_cfg.h are shared by all scenarios, and the
scenario-specific pci_dev.c is shared by all boards. So users have no way to
build hypervisor binaries for different scenarios on different boards from
one source code repo.
This patch sets up a new VM configurations layout as below:
misc/vm_configs
├── boards --> folder of supported boards
│ ├── <board_1> --> scenario-irrelevant board configs
│ │ ├── board.c --> C file of board configs
│ │ ├── board_info.h --> H file of board info
│ │ ├── pci_devices.h --> pBDF of PCI devices
│ │ └── platform_acpi_info.h --> native ACPI info
│ ├── <board_2>
│ ├── <board_3>
│ └── <board...>
└── scenarios --> folder of supported scenarios
├── <scenario_1> --> scenario specific VM configs
│ ├── <board_1> --> board specific VM configs for <scenario_1>
│ │ ├── <board_1>.config --> Kconfig for specific scenario on specific board
│ │ ├── misc_cfg.h --> H file of board specific VM configs
│ │ ├── pci_dev.c --> board specific VM pci devices list
│ │ └── vbar_base.h --> vBAR base info of VM PT pci devices
│ ├── <board_2>
│ ├── <board_3>
│ ├── <board...>
│ ├── vm_configurations.c --> C file of scenario specific VM configs
│ └── vm_configurations.h --> H file of scenario specific VM configs
├── <scenario_2>
├── <scenario_3>
└── <scenario...>
The new layout decouples board configs and VM configs completely:
The boards folder stores the supported boards; each board folder stores
only scenario-irrelevant board configs, which can be obtained entirely
from the physical platform and work for all scenarios.
The scenarios folder stores the VM configs of the supported working
scenarios. In each scenario folder, besides the generic scenario-specific
VM configs, the board-specific VM configs are put in an embedded board
folder.
In the new layout, all config files are moved out of the hypervisor
folder into a separate folder. This makes the hypervisor LoC calculation
more precise with the formula below:
typical LoC = LoC(hypervisor) + LoC(one vm_configs)
where
LoC(one vm_configs) = LoC(misc/vm_configs/boards/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/<board>)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.c)
                    + LoC(misc/vm_configs/scenarios/<scenario>/vm_configurations.h)
Tracked-On: #5077
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
hv: vpci: inject the physical PCIEXBAR into the SOS vhostbridge in
order to emulate a full host bridge following the HW spec
The vhostbridge we currently emulate is a "Celeron N3350/
Pentium N4200/Atom E3900 Series Host Bridge", which belongs to the
Apollo Lake SoC, but the emulation is incomplete and we need to
implement a full vhostbridge following the HW spec.
This is a step-by-step process; in this patch we fix the emulation of
the PCIEXBAR register (0x60) and thus resolve bug #6464.
-------#6464: SOS cannot make use of ECAM---------------
Generally, SOS checks the MMIO base address in the ACPI MCFG table to
confirm it is a reserved memory area. There are 3 methods to check:
1. Via the E820 table
2. Via the EFI runtime service
3. Against the value in PCIEXBAR (0x60) of the host bridge
For SOS, method 2 is not feasible since no EFI runtime service is
available to it. On newer platforms like EHL/TGL, the BIOS does not
reserve the region in the native E820, so SOS falls back to method 3
to verify. Therefore we should inject the physical ECAM base into the
vhostbridge; otherwise all 3 methods fail, SOS will not make use of
ECAM, and as a result it cannot use PCIe Extended Capabilities such as
SR-IOV.
-------------------------------------------------------
TODO:
1. In the future, we may add one or more virtual host bridges for CPUs
whose layout is incompatible with the current one, according to the HW
specs.
2. Besides PCIEXBAR (0x60), some other registers also need to be emulated
more precisely rather than treated as read-only and hard-coded; this will
be fixed in future patches.
Tracked-On: #5056
Signed-off-by: Qian Wang <qian1.wang@intel.com>
Reviewed-by: Jason Chen <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
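A minimal sketch of the injection idea, assuming hypothetical helper
names (the pdev_read_cfg callback and the dword-indexed vcfg array are
illustrative, not ACRN's real vpci API):

#include <stdint.h>

#define HOSTBRIDGE_BDF	0x0000U	/* bus 0, device 0, function 0 */
#define PCIR_PCIEXBAR	0x60U	/* 64-bit ECAM base register */

/* Copy the physical PCIEXBAR of the host bridge into the virtual
 * host bridge's config space so the SOS reads the real ECAM base. */
static void inject_physical_pciexbar(uint32_t *vcfg,
	uint32_t (*pdev_read_cfg)(uint16_t bdf, uint32_t off, uint32_t bytes))
{
	vcfg[PCIR_PCIEXBAR / 4U] =
		pdev_read_cfg(HOSTBRIDGE_BDF, PCIR_PCIEXBAR, 4U);
	vcfg[(PCIR_PCIEXBAR / 4U) + 1U] =
		pdev_read_cfg(HOSTBRIDGE_BDF, PCIR_PCIEXBAR + 4U, 4U);
}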
hv: vpci: refine init_vhostbridge to be dword-aligned
Refine the hard-coded non-dword-aligned statements in init_vhostbridge
to be dword-aligned to simplify the initialization.
Tracked-On: #5056
Signed-off-by: Qian Wang <qian1.wang@intel.com>
Reviewed-by: Jason Chen <jason.cj.chen@intel.com>
Reviewed-by: Li Fei <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
To hide the CET feature from the guest VM completely, the MSR
IA32_MSR_XSS also needs to be intercepted because it contains the CET_U
and CET_S feature bits used by xsaves/xrstors operations. Mask these two
bits on IA32_MSR_XSS writes.
With IA32_MSR_XSS interception, the member 'xss' of 'struct ext_context'
can be removed because it duplicates the MSR store array
'vcpu->arch.guest_msrs[]'.
Tracked-On: #5074
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
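A minimal sketch of the masking step, assuming the architectural bit
positions from the SDM (IA32_XSS bit 11 = CET_U, bit 12 = CET_S); the
helper name is illustrative:

#include <stdint.h>

#define XSS_CET_U	(1UL << 11U)	/* CET user state */
#define XSS_CET_S	(1UL << 12U)	/* CET supervisor state */

/* Strip the CET state bits from a guest WRMSR to IA32_XSS so that
 * guest xsaves/xrstors never operate on hidden CET state. */
static uint64_t sanitize_guest_xss(uint64_t guest_val)
{
	return guest_val & ~(XSS_CET_U | XSS_CET_S);
}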
Return-oriented programming (ROP), and similarly CALL/JMP-oriented
programming (COP/JOP), have been the prevalent attack methodologies for
stealth exploit writers targeting vulnerabilities in programs.
CET (Control-flow Enforcement Technology) provides the following
capabilities to defend against ROP/COP/JOP style control-flow subversion
attacks:
* Shadow stack: Return address protection to defend against ROP.
* Indirect branch tracking: Free branch protection to defend against
COP/JOP
Full CET support for the Linux kernel has not been merged yet. As a
first stage, hide CET from the guest VM.
Tracked-On: #5074
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
On the WHL platform, we need to pass through the TPM to the secure
pre-launched VM. In order to do this, we need to add the TPM2 ACPI table
and a TPM DSDT ACPI table that includes the _CRS.
Currently only TPM 2.0 devices are supported (TPM 1.2 devices are not).
Besides, the TPM must use Start Method 7 (the Command Response Buffer
Interface) to notify the TPM 2.0 device that a command is available for
processing.
Tracked-On: #5053
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Use the ACPI_TABLE_HEADER macro to initialize the ACPI table header.
Tracked-On: #5053
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Add MMIO device passthrough support for pre-launched VMs.
When we pass through an MMIO device to a pre-launched VM, we remove its
resources from the SOS. For now these resources only include the MMIO
regions.
Tracked-On: #5053
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Add two hypercalls to support MMIO device passthrough for post-launched
VMs. When we support MMIO passthrough for pre-launched VMs, the code in
mmio_dev.c can be reused.
Tracked-On: #5053
Signed-off-by: Li Fei1 <fei1.li@intel.com>
During a context switch in the hypervisor, xsave/xrstor are used to
save/restore the XSAVE area according to XCR0 and XSS. The legacy
region in the XSAVE area includes the FPU and SSE state, and we must
make sure the legacy region is saved during the context switch. FPU in
XCR0 is always enabled according to the SDM.
For SSE, we enable it in XCR0 during the context switch.
Tracked-On: #5062
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
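A minimal sketch of the idea on the context-switch save path; the bit
positions are architectural (XCR0 bit 0 = x87, bit 1 = SSE), while
read_xcr0(), write_xcr0() and do_xsaves() are hypothetical stand-ins
for the hypervisor's low-level helpers:

#include <stdint.h>

#define XCR0_FPU	(1UL << 0U)	/* x87 state, architecturally always 1 */
#define XCR0_SSE	(1UL << 1U)	/* SSE state (XMM registers, MXCSR) */

static void save_legacy_region(void *xsave_area)
{
	/* Make sure the legacy region (FPU + SSE) is covered before
	 * xsaves runs; FPU is always enabled, SSE is enabled here. */
	write_xcr0(read_xcr0() | XCR0_FPU | XCR0_SSE);
	do_xsaves(xsave_area, XCR0_FPU | XCR0_SSE);
}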
The kick_thread function is only used by kick_vcpu to kick a vcpu out of
non-root mode. Its implementation sends an IPI to the target CPU if the
target thread object is running and the target pCPU is not the current
one, while for a runnable object it just makes a reschedule request. So
kick_thread does not actually belong to the scheduler module; we can
drop it and do the CPU notification in kick_vcpu instead.
Tracked-On: #5057
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vcpu->running duplicates the THREAD_STS_RUNNING status of the thread
object. Introduce an API sleep_thread_sync(), which utilizes the inner
status of the thread object, to do the synchronous sleep for
zombie_vcpu().
Tracked-On: #5057
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
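A minimal sketch of the synchronous variant, reusing the illustrative
thread_object from the earlier sketch; the busy-wait condition is an
assumption about how "no longer running" is detected:

static void sleep_thread_sync(struct thread_object *obj)
{
	sleep_thread(obj);
	/* Spin until the target thread has actually been switched out,
	 * i.e. it is no longer THREAD_STS_RUNNING on any pCPU. */
	while (obj->status == THREAD_STS_RUNNING) {
		__asm__ volatile ("pause" : : : "memory");
	}
}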
1. Update thread status after switch_in/switch_out.
2. Add 'be_blocking' to represent the intermediate state between
sleep_thread and switch_out. After switch_out, the thread status is
updated to THREAD_STS_BLOCKED.
Tracked-On: #5057
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
-- replace the global hypercall lock with a per-VM lock
-- add spinlock protection for vm & vcpu state changes
(a sketch of the resulting locking pattern follows this message)
v1-->v2:
change the get_vm_lock/put_vm_lock parameter from vm_id to vm
obtain the lock before the vm state check
move all locks from vmcall.c to hypercall.c
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
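A minimal sketch of the dispatch pattern after this change;
get_vm_lock()/put_vm_lock() are the functions named above, while
vm_state_allows() and do_hypercall() are hypothetical placeholders:

/* Per-VM locking in the hypercall path: take the target VM's lock
 * first, then check the VM state, then handle the call. */
static int32_t dispatch_vm_hypercall(struct acrn_vm *target_vm,
		uint64_t hcall_id)
{
	int32_t ret = -EINVAL;

	get_vm_lock(target_vm);
	if (vm_state_allows(target_vm, hcall_id)) {
		ret = do_hypercall(target_vm, hcall_id);
	}
	put_vm_lock(target_vm);

	return ret;
}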
Hide the SRIOV capability of passthrough devices for VMs in
init_vdev_pt(). For post-launched VMs, allow assigning a PF.
Tracked-On: #5041
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Support hiding the SRIOV extended capability for passthrough devices.
Tracked-On: #5041
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Some devices (like the Samsung NVMe SSD SM981/PM981, which has 33 MSI-X
table entries) have more than 16 MSI-X table entries. Extend the default
value to 64 to handle them.
Tracked-On: #4994
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Some OSes assume the platform must have an IOAPIC. For example, the
Linux kernel on x86 allocates IRQs starting from the GSI base (0 if
there is no PIC and IOAPIC), and it treats IRQ 0 as an
architecture-special IRQ that is not for device drivers. As a result, a
device driver may go wrong if the IRQ allocated for the RTVM is 0.
This patch exposes the vIOAPIC to the RTVM with LAPIC passthrough. Even
though the RTVM can't use the IOAPIC, it serves as a placeholder to
fulfill the guest's assumption.
After the vIOAPIC is exposed to the guest unconditionally, the 'ready'
field can be removed since we do vIOAPIC initialization for every guest.
Tracked-On: #4691
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Replace spinlock_obtain/spinlock_release with spinlock_irqsave_obtain
and spinlock_irqrestore_release to avoid a deadlock in the uart module.
The uart lock may be accessed in ISR context along a path like:
dispatch_interrupt -> pr_err/pr_xxx or printf
-> console_write -> uart16550_puts
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
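A minimal sketch of the locking pattern this change adopts;
spinlock_irqsave_obtain()/spinlock_irqrestore_release() are the APIs
named above, and uart_putc_hw() is a hypothetical output helper:

static spinlock_t uart_lock = {.head = 0U, .tail = 0U};

static void uart_puts_locked(const char *s)
{
	uint64_t rflags;

	/* Disable local interrupts while holding the lock so an ISR
	 * (e.g. dispatch_interrupt -> pr_err -> console_write) cannot
	 * re-enter and deadlock on the same lock. */
	spinlock_irqsave_obtain(&uart_lock, &rflags);
	while (*s != '\0') {
		uart_putc_hw(*s);
		s++;
	}
	spinlock_irqrestore_release(&uart_lock, rflags);
}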
In the MSI/MSI-X capability, some fields never change once they have
been initialized, so there is no need to reset them while the vdev
instance is still in use. What needs to be reset are the fields that can
be changed by the guest at runtime.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
We will follow this convention for spinlock initialization (see the
sketch below):
-- for simple global variable locks, use this style:
static spinlock_t xxx_spinlock = {.head = 0U, .tail = 0U,}
-- for locks inside a data structure, call spinlock_init to initialize
them.
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
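A minimal sketch of the two styles (xxx_spinlock and struct foo are
placeholders):

/* 1) simple global variable lock: static initializer */
static spinlock_t xxx_spinlock = {.head = 0U, .tail = 0U,};

/* 2) lock embedded in a data structure: runtime initialization */
struct foo {
	spinlock_t lock;
	/* ... other members ... */
};

static void foo_init(struct foo *f)
{
	spinlock_init(&f->lock);
}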
Replace spinlock_obtain/spinlock_release with spinlock_irqsave_obtain
and spinlock_irqrestore_release to avoid a deadlock in the vpic module.
The vpic lock may be accessed in ISR context along a path like:
dispatch_interrupt -> do_softirq -> softirq_handlers
-> ptirq_softirq -> ptirq_handle_intx -> vpic_set_irqline
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
hv: hypercall: restrict the conditions for assigning/de-assigning a PCI
device to a post-launched VM for safety
For the safety of post-launched VMs, PCI device assignment should occur
only while the VM is being created (VM_CREATED status), and PCI device
de-assignment should occur only while the VM is being created or shut
down/reset (VM_CREATED or VM_PAUSED status).
Tracked-On: #4995
Acked-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Li Fei <Fei1.Li@intel.com>
Signed-off-by: Wang Qian <qian1.wang@intel.com>
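A minimal sketch of the state checks described above; the acrn_vm fields
and helper names are illustrative:

/* Assignment is allowed only while the post-launched VM is being
 * created; de-assignment also while it is paused for shutdown/reset. */
static bool can_assign_pcidev(const struct acrn_vm *vm)
{
	return (vm->state == VM_CREATED);
}

static bool can_deassign_pcidev(const struct acrn_vm *vm)
{
	return (vm->state == VM_CREATED) || (vm->state == VM_PAUSED);
}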
From the VT-d spec 8.3:
If a DRHD structure with INCLUDE_PCI_ALL flag Set is reported for a
Segment, it must be enumerated by BIOS after all other DRHD structures
for the same Segment.
However, some broken BIOSes violate this rule. To bring up ACRN on them,
change the ASSERT to a permissive check to work around the BIOS
limitation. Also, scan the DRHD list to find the one that has the
INCLUDE_PCI_ALL flag.
Tracked-On: #4937
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Replace dmar_iterate_tbl() with a direct for loop. Handle dmar_unit_cnt
and call handle_one_drhd() for each DRHD in that loop.
Also tune some function definitions to save LoC.
Tracked-On: #4937
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
According to SDM 10.12.11, this register is dedicated to sending
self-IPIs, with the intent of enabling a highly optimized path for doing
so. Sending the IPI via the SELF IPI register also ensures that the
interrupt is delivered to the processor core; specifically, completion
of the WRMSR instruction to the SELF IPI register implies that the
interrupt has been logged into the IRR.
Tracked-On: #4937
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
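A minimal sketch of sending a self-IPI through the x2APIC SELF IPI
register (MSR 0x83F per the SDM); msr_write() is a placeholder for the
hypervisor's WRMSR helper:

#define MSR_X2APIC_SELF_IPI	0x83FU

static void send_self_ipi(uint32_t vector)
{
	/* Only the vector field is written; completion of the WRMSR
	 * implies the interrupt has already been logged into the IRR. */
	msr_write(MSR_X2APIC_SELF_IPI, (uint64_t)(vector & 0xFFU));
}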
Currently, not all platforms support posted interrupt processing in both
VT-x and VT-d. On EHL, VT-d doesn't support posted interrupt processing.
In such a scenario, is_pi_capable() in vcpu_handle_pi_notification()
bypasses the PIR pending-bits check, which might cause a self-NV-IPI to
be lost.
With commit "bf1ff8c98 (hv: Offload syncing PIR to vIRR to processor
hardware)", syncing PIR to vIRR is postponed and handled by a
self-NV-IPI at the following VM entry. The process looks like:
a) vcpu A accepts a virtual interrupt ->
1) ACRN_REQUEST_EVENT is set
2) corresponding bit in PIR is set
3) Posted Interrupt ON bit is set
b) vcpu A does virtual interrupt injection on resume path due to
the pending ACRN_REQUEST_EVENT ->
1) hypervisor disables host interrupt
2) ACRN_REQUEST_EVENT is cleared
3) a self-NV-IPI is sent via ICR of LAPIC.
4) IRR bit of the self-NV-IPI is set
c) (VM-ENTRY) vcpu A returns to non-root mode
1) host interrupts enabled (by HW)
2) posted interrupt processing clears the ON bit and syncs PIR to vIRR
3) the virtual interrupt is delivered if guest rflags.IF=1
d) (VM-EXIT) vcpu A traps due to an instruction execution (e.g. HLT)
1) host interrupts disabled (by HW)
2) hypervisor enables host interrupts
The above illustrates the normal process of virtual interrupt injection
with CPU PI support. However, a failing case is observed: the
self-NV-IPI from b-3 is not accepted by the core until some point
between d-1 and d-2 (b-4 happening between d-1 and d-2 is seen in the
debug trace). The self-NV-IPI is therefore handled in root mode, which
cannot do the PIR-to-vIRR syncing. Due to the bug described in the first
paragraph, vcpu_handle_pi_notification() then cannot complete the
virtual interrupt injection request. This patch fixes it by removing the
wrong check in vcpu_handle_pi_notification(), because
vcpu_handle_pi_notification() only happens on platforms with CPU PI
support.
Here are some cost data for sending an IPI via the LAPIC ICR register.
Normally, the interval between the ICR write and the IRR bit getting set
is 140~260 cycles, which is not precise due to the MSR read overhead.
From b-3 to c is about 560 cycles, so b-4 normally happens during this
period. But in the bad case, b-4 does not happen even when c is
triggered; the worst case I captured is that the ICR-write-to-IRR-set
interval costs more than 1900 cycles. Currently, the best guess for the
huge cost of an IPI via ICR is the APIC bus arbitration (refer to SDM
10.6.3, 10.7 and Figure 10-17).
Tracked-On: #4937
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We hit the following build error when using gcc10:
arch/x86/page.c:240:48: error: array subscript is outside
array bounds of 'struct page[0][1]' [-Werror=array-bounds]
It happens with gcc10 on different Linux distributions. Since ACRN
depends on zero-length arrays in several places, disable the
zero-length-array warning via a gcc option.
Tracked-On: #4810
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Wrap a function to do the guest EPT flush. This function doesn't do the
real EPT flush; it just makes an EPT flush request, and the real flush
is done just before the vcpu's VM entry.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
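A minimal sketch of the deferred-flush wrapper; the request bitmap,
VCPU_REQ_EPT_FLUSH, the atomic bit helpers and invept_on() are
illustrative names, not the actual ACRN API:

/* Record the flush request; the vcpu notices it on its way back
 * into non-root mode. */
static void vcpu_request_ept_flush(struct acrn_vcpu *vcpu)
{
	atomic_set_bit(VCPU_REQ_EPT_FLUSH, &vcpu->pending_req);
}

/* Called on the VM-entry path, just before resuming the guest. */
static void flush_ept_if_requested(struct acrn_vcpu *vcpu)
{
	if (atomic_test_and_clear_bit(VCPU_REQ_EPT_FLUSH, &vcpu->pending_req)) {
		invept_on(vcpu);	/* the real, hardware-level flush */
	}
}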
-- remove the unnecessary lock in pci_mmcfg_read_cfg and
pci_mmcfg_write_cfg since the MMIO operation is atomic if the offset is
1/2/4-byte aligned.
-- move pci_is_valid_access to pci.h
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
remove the spinlock for the microcode update since the guest operating
system takes its own lock
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
The commit 'HV: Config Splitlock Detection to be disable' allows
using CONFIG_ENFORCE_TURNOFF_AC to turn off the split-lock #AC. If
CONFIG_ENFORCE_TURNOFF_AC is not set, the split-lock #AC should be
turned on.
Tracked-On: #4962
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
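A minimal sketch of the resulting policy; MSR_TEST_CTL (0x33) and its
split-lock #AC enable bit (bit 29) are taken from public Intel
documentation, and msr_read()/msr_write() are placeholders for the
hypervisor's MSR helpers:

#define MSR_TEST_CTL		0x33U
#define TEST_CTL_AC_SPLITLOCK	(1UL << 29U)

static void config_splitlock_detection(void)
{
#ifdef CONFIG_ENFORCE_TURNOFF_AC
	/* Build option selected: turn split-lock #AC off. */
	msr_write(MSR_TEST_CTL,
		  msr_read(MSR_TEST_CTL) & ~TEST_CTL_AC_SPLITLOCK);
#else
	/* Default: split-lock #AC stays on. */
	msr_write(MSR_TEST_CTL,
		  msr_read(MSR_TEST_CTL) | TEST_CTL_AC_SPLITLOCK);
#endif
}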