The current implementation triggers the shutdown-VM request on the BSP
vCPU of the VM, not on the vCPU that traps out because of the triple
fault. However, if the BSP vCPU of the VM is handling another I/O
emulation, it may overwrite the triple fault I/O request in the
vhm_request_buffer in function acrn_insert_request. The atomic operation
in get_vhm_req_state can't guarantee that the vhm_request_buffer will not
be accessed by another I/O request if it is not running on the
corresponding vCPU. So the triple fault shutdown-VM I/O request should be
triggered on the vCPU that traps out because of the triple fault
exception.
Besides, rt_vm_pm1a_io_write will do the right thing, which we shouldn't
duplicate in triple_fault_shutdown_vm.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Since the spinlock ptdev_lock is used to protect the ptdev entry, there's
no need to use an atomic operation to protect the ptdev active flag; the
atomic operation was in any case wrongly used to protect the ptdev entry.
Also refine the active flag data type to bool.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
According to the SDM, bit N (physical address width) to bit 63 should be
masked when calculating the host page frame number.
Currently the hypervisor doesn't set any of these bits, so gpa2hpa works
as expected. However, if any of these bits were set, gpa2hpa would return
a wrong value.
The hypervisor never sets bit N to bit 51 (reserved bits), so for
simplicity just mask bit 52 to bit 63.
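Illustrative sketch of the masking (PAGE_SHIFT and the helper name are
assumptions; the real gpa2hpa walks the EPT to find the leaf entry first):

    #include <stdint.h>

    #define PAGE_SHIFT          12U
    #define PAGE_MASK           ((1UL << PAGE_SHIFT) - 1UL)
    #define EPT_HIGH_IGN_MASK   0xFFF0000000000000UL   /* bits 63:52 */

    /* Derive the HPA from an EPT leaf entry, dropping bits 52-63 which
     * may hold software-defined flags rather than address bits. */
    static inline uint64_t ept_entry_to_hpa(uint64_t leaf_entry, uint64_t gpa)
    {
        uint64_t base = (leaf_entry & ~EPT_HIGH_IGN_MASK) & ~PAGE_MASK;

        return base | (gpa & PAGE_MASK);
    }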
Tracked-On: #3352
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Factor out 2 functions from the existing code:
pci_base_from_size_mask
vdev_pt_remap_mem_vbar
Use vbar in place of vdev->bar[idx] by setting vbar to &vdev->bar[idx]
Change base to uint64_t to accommodate 64-bit MMIO bar size masking in
subsequent commits
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
At this point, uint64_t base in struct pci_bar is not used by any code, so we
can remove it.
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Define/use variables in place of repeated expressions to improve
readability:
Define a new local variable struct pci_bar *vbar, and use vbar-> in place
of vdev->bar[idx].
Define a new local variable uint64_t vbar_base in init_vdev_pt.
Rename uint64_t vbar[PCI_BAR_COUNT] in struct acrn_vm_pci_ptdev_config to
uint64_t vbar_base[PCI_BAR_COUNT].
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remember the previously mapped/registered vbar base, for the following
reasons:
register_mmio_emulation_handler() will throw an error if the same addr_lo
has already been registered before.
We are going to remove the base member from struct pci_bar, so we cannot
use vdev->bar[idx].base in the code any more.
In subsequent commits, we will assume vdev_pt_remap_generic_mem_vbar() is
called after a new vbar base is set, mainly because of 64-bit MMIO bar
handling, so we need a separate bar_base_mapped[] array to track the
previously mapped vbar bases.
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
vcpu.c was never scanned because the scan tool crashed on it.
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
1. Missing brackets on then/else.
2. Function return value not checked.
3. Signed/unsigned conversion without cast.
V1->V2:
change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
add "void *" prefix on the 1st parameter of memset.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The sbuf is allocated for each pcpu by a hypercall from the SOS. Before
launching a Guest OS, the script offlines cpus, which triggers a vcpu
reset and then resets the sbuf pointer. But the sbuf is initialized only
once by the SOS, so the cpus used by the Guest OS have no sbuf to use.
Thus, when running 'acrntrace' on the SOS, there is no trace data for the
Guest OS.
To fix the issue, only reset the sbuf for the SOS.
Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Use nr_bars instead of PCI_BAR_COUNT to check the bar access offset.
While a normal pci device has at most 6 bars, a pci bridge has only 2
bars, so for a normal pci device, pci cfg offsets 0x10-0x24 are for bar
access, but for a pci bridge, only 0x10-0x14 are for bar access
(0x18-0x24 are for other accesses).
Rename function:
pci_bar_access --> is_bar_offset
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
nr_bars in struct pci_pdev is used to store the actual number of bars
(6 for a normal pci device and 2 for a pci bridge); nr_bars will be used
in subsequent patches.
Use uint32_t for bar-related variables (bar index, etc.) to unify the
bar-related code (no casting between uint32_t and uint8_t).
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
union pci_bar uses bit fields and follows the PCI bar spec to define the
bar flags portion and the base address; this keeps the same hardware
format for the vbar register. The base/type members of union pci_bar are
still kept to minimize the code changes in one patch; they will be
removed in subsequent patches.
Define a pci_pdev_get_bar_base() function to extract the bar base address
from a 32-bit raw bar value.
Define a utility function pci_get_bar_type() to extract the bar type from
a raw bar value to simplify code, as this function will be used in
multiple places later on: it can be called on reg->value stored in struct
pci_bar to derive the bar type. See the sketch below.
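A hedged sketch of the two helpers (the bit layout follows the PCI spec;
the enum names and exact masks in ACRN may differ):

    #include <stdint.h>

    enum pci_bar_type {
        PCIBAR_NONE,
        PCIBAR_IO_SPACE,
        PCIBAR_MEM32,
        PCIBAR_MEM64,
    };

    static enum pci_bar_type pci_get_bar_type(uint32_t val)
    {
        enum pci_bar_type type = PCIBAR_NONE;

        if ((val & 0x1U) != 0U) {
            type = PCIBAR_IO_SPACE;      /* bit 0: I/O space indicator */
        } else {
            switch (val & 0x6U) {        /* bits 2:1: memory BAR type */
            case 0x0U:
                type = PCIBAR_MEM32;
                break;
            case 0x4U:
                type = PCIBAR_MEM64;
                break;
            default:
                type = PCIBAR_NONE;
                break;
            }
        }
        return type;
    }

    static uint64_t pci_pdev_get_bar_base(uint32_t val)
    {
        /* memory BARs keep the base in bits 31:4, I/O BARs in bits 31:2 */
        return (pci_get_bar_type(val) == PCIBAR_IO_SPACE) ?
            (uint64_t)(val & ~0x3U) : (uint64_t)(val & ~0xFU);
    }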
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Add a get_offset_of_caplist() function to return the capability list
offset based on the header type:
For a normal pci device and a bridge, the capability pointer is at offset
0x34.
For cardbus, it is at offset 0x14.
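A sketch of the lookup (the register offsets come from the PCI spec; the
macro names are illustrative):

    #include <stdint.h>

    #define PCIM_HDRTYPE_NORMAL    0x00U
    #define PCIM_HDRTYPE_BRIDGE    0x01U
    #define PCIM_HDRTYPE_CARDBUS   0x02U

    #define PCIR_CAP_PTR           0x34U   /* normal device / PCI-PCI bridge */
    #define PCIR_CAP_PTR_CARDBUS   0x14U   /* cardbus bridge */

    static uint32_t get_offset_of_caplist(uint8_t hdr_type)
    {
        uint32_t offset;

        switch (hdr_type & 0x7FU) {   /* mask off the multi-function bit */
        case PCIM_HDRTYPE_NORMAL:
        case PCIM_HDRTYPE_BRIDGE:
            offset = PCIR_CAP_PTR;
            break;
        case PCIM_HDRTYPE_CARDBUS:
            offset = PCIR_CAP_PTR_CARDBUS;
            break;
        default:
            offset = 0U;
            break;
        }
        return offset;
    }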
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
find_pci_pdev is not used any more, remove it.
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation caches each ISR vector in an ISR vector stack
and checks the ISR vector stack when updating the PPR. However, there is
no need to do this because:
1) We never touch vlapic->isrvec_stk[0] except in vlapic_reset:
So we don't need to check vlapic->isrvec_stk[0].
2) We only deliver a higher priority interrupt from IRR to ISR:
So we don't need to check whether the vectors in vlapic->isrvec_stk are
always increasing.
3) There are only 15 different interrupt priorities, so it can never
happen that more than 15 interrupts are delivered to the ISR:
So we don't need to check whether vlapic->isrvec_stk_top exceeds
ISRVEC_STK_SIZE, which is 16.
This patch removes the ISR vector stack and uses isrv to cache the vector
number of the highest priority bit that is set in the ISR (see the sketch
below).
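A rough sketch of deriving isrv from the vISR bitmap (the vISR is
modelled as 8 x 32-bit registers; vlapic layout details are simplified
assumptions):

    #include <stdint.h>

    struct vlapic_isr {
        uint32_t reg[8];   /* ISR[255:0], one bit per in-service vector */
    };

    /* Return the highest-priority vector set in the ISR, or 0 if none. */
    static uint32_t vlapic_find_isrv(const struct vlapic_isr *isr)
    {
        for (int32_t i = 7; i >= 0; i--) {
            uint32_t bits = isr->reg[i];

            if (bits != 0U) {
                /* 31 - clz gives the most significant set bit */
                uint32_t msb = 31U - (uint32_t)__builtin_clz(bits);
                return ((uint32_t)i * 32U) + msb;
            }
        }
        return 0U;
    }

    /* PPR can then be computed from max(isrv & 0xF0, TPR) with no
     * per-entry stack bookkeeping. */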
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Remove a couple of run-time ASSERTs in the ioapic module by checking the
number of interrupt pins per IO-APIC against the configured
MAX_IOAPIC_LINES in the initialization flow.
Also remove the need for the two macros specifying the max number of
interrupt lines per IO-APIC and add a config item MAX_IOAPIC_LINES for
the same.
Tracked-On: #3299
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
has_rt_vm walks through all VMs to check the RT VM flag; if there is no
RT VM it returns false, otherwise it returns true.
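A minimal sketch of the walk (CONFIG_MAX_VM_NUM, the flag name and the
config array are simplified assumptions):

    #include <stdbool.h>
    #include <stdint.h>

    #define CONFIG_MAX_VM_NUM   8U           /* illustrative */
    #define GUEST_FLAG_RT       (1UL << 1U)  /* illustrative bit position */

    struct acrn_vm_config {
        uint64_t guest_flags;
    };

    extern struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM];

    static bool has_rt_vm(void)
    {
        uint16_t vm_id;
        bool ret = false;

        /* return true as soon as any VM config carries the RT flag */
        for (vm_id = 0U; vm_id < CONFIG_MAX_VM_NUM; vm_id++) {
            if ((vm_configs[vm_id].guest_flags & GUEST_FLAG_RT) != 0UL) {
                ret = true;
                break;
            }
        }
        return ret;
    }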
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
The ept_flush_leaf_page API is used to flush the address space described
by an EPT page entry; users can combine it with walk_ept_mr to flush a
VM's address space.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The walk_ept_table API is used to walk through the EPT table to get all
present pages; users get each page entry and its size from the
walk_ept_table callback.
get_ept_entry is used to get the EPT pointer of the vm: if the current
context of the vm is the secure world, it returns the secure world EPT
pointer, otherwise it returns the normal world EPT pointer.
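A simplified sketch of such a walk (paging constants are standard x86;
the entry-decoding details and the hpa2hva() helper are assumptions for
this illustration):

    #include <stdint.h>

    #define PTRS_PER_PTE        512UL
    #define EPT_PRESENT_MASK    0x7UL         /* R/W/X permission bits */
    #define EPT_LARGE_PAGE      (1UL << 7U)   /* PS bit at PD/PDPT level */
    #define LEVEL_SHIFT(lvl)    (12UL + (9UL * (uint64_t)(lvl)))

    typedef void (*pge_handler)(uint64_t *entry, uint64_t size);

    void *hpa2hva(uint64_t hpa);   /* assumed helper: table HPA -> pointer */

    static void walk_ept_level(uint64_t *table, uint32_t level, pge_handler cb)
    {
        for (uint64_t i = 0UL; i < PTRS_PER_PTE; i++) {
            uint64_t *entry = &table[i];

            if ((*entry & EPT_PRESENT_MASK) == 0UL) {
                continue;
            }
            if ((level == 0U) || ((*entry & EPT_LARGE_PAGE) != 0UL)) {
                cb(entry, 1UL << LEVEL_SHIFT(level));   /* present leaf page */
            } else {
                walk_ept_level(hpa2hva(*entry & ~0xFFFUL), level - 1U, cb);
            }
        }
    }

    /* start from the PML4; level counts down from 3 to the 4 KiB level 0 */
    static void walk_ept_table(uint64_t *pml4_page, pge_handler cb)
    {
        walk_ept_level(pml4_page, 3U, cb);
    }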
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
flush_address_space is used to flush an address space with the clflushopt
instruction.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
CLFLUSHOPT invalidates, from every level of the cache hierarchy in the
cache coherence domain, the cache line that contains the linear address
specified with the memory operand. If that cache line contains modified
data at any level of the cache hierarchy, that data is written back to
memory.
If the platform does not support the CLFLUSHOPT instruction, boot will
fail.
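A sketch of flushing an address range with clflushopt (the cache-line
size is assumed; the real code would read it from CPUID and check
CLFLUSHOPT support first, and the assembler must know the mnemonic):

    #include <stdint.h>

    #define CACHE_LINE_SIZE   64UL   /* assumed */

    static inline void clflushopt(const volatile void *p)
    {
        /* flush the cache line containing p from every cache level */
        asm volatile ("clflushopt %0"
                      : : "m" (*(const volatile char *)p) : "memory");
    }

    static void flush_address_space(void *base, uint64_t size)
    {
        for (uint64_t offset = 0UL; offset < size; offset += CACHE_LINE_SIZE) {
            clflushopt((char *)base + offset);
        }
    }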
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation doesn't make clear which access types we
support for the APIC-Access VM Exit:
1) linear access for an instruction fetch
-- the APIC-access page is mapped as UC, which doesn't support
instruction fetch
2) linear access (read or write) during event delivery
-- this doesn't happen in the normal case unless the guest goes wrong,
e.g. puts its IDT in the APIC-access page; in that case we don't need to
support it
3) guest-physical access during event delivery;
guest-physical access for an instruction fetch or during instruction
execution
-- do we plan to support enabling the APIC in real mode? I don't think
so.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Rename MSI-X struct, pci_msix, member from tables to table_entries
Tracked-On: #3265
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch introduces the check_vm_vlapic_state API instead of
is_lapic_pt_enabled to check whether all the vCPUs of a VM are using
x2APIC mode and LAPIC pass-through is enabled on all of them.
When the VM is in the VM_VLAPIC_TRANSITION or VM_VLAPIC_DISABLED state,
the following conditions apply:
1) For pass-thru MSI interrupts, the interrupt source is not programmed.
2) For DM emulated device MSI interrupts, the interrupt is not delivered.
3) IPIs will work only if the sender and destination are both in x2APIC
mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch introduces vLAPIC state for a VM. The VM vLAPIC state can
be one of the following
* VM_VLAPIC_X2APIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use x2APIC mode
* VM_VLAPIC_XAPIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use xAPIC mode
* VM_VLAPIC_DISABLED - All the vCPUs/vLAPICs of this VM are in Disabled mode
* VM_VLAPIC_TRANSITION - Some of the vCPUs/vLAPICs of this VM (Except for those in Disabled mode)
are in xAPIC and the others in x2APIC
Upon a vCPU updating the IA32_APIC_BASE MSR to switch LAPIC mode, this
API is called to sync the vLAPIC state of the VM. Upon VM creation and
reset, the vLAPIC state is set to VM_VLAPIC_XAPIC, as ACRN starts the
vCPUs' vLAPICs in xAPIC mode.
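A sketch of how the VM-level state could be re-derived from per-vCPU mode
counts after an IA32_APIC_BASE write (the counting helper and its
parameters are illustrative; only the four state names come from this
patch):

    #include <stdint.h>

    enum vm_vlapic_state {
        VM_VLAPIC_XAPIC,        /* all enabled vLAPICs in xAPIC mode */
        VM_VLAPIC_X2APIC,       /* all enabled vLAPICs in x2APIC mode */
        VM_VLAPIC_DISABLED,     /* all vLAPICs disabled */
        VM_VLAPIC_TRANSITION,   /* mix of xAPIC and x2APIC */
    };

    /* counts exclude vCPUs whose vLAPIC is in Disabled mode */
    static enum vm_vlapic_state derive_vlapic_state(uint16_t x2apic_cnt,
                                                    uint16_t xapic_cnt)
    {
        enum vm_vlapic_state state;

        if ((x2apic_cnt == 0U) && (xapic_cnt == 0U)) {
            state = VM_VLAPIC_DISABLED;
        } else if (xapic_cnt == 0U) {
            state = VM_VLAPIC_X2APIC;
        } else if (x2apic_cnt == 0U) {
            state = VM_VLAPIC_XAPIC;
        } else {
            state = VM_VLAPIC_TRANSITION;
        }
        return state;
    }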
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
is_xapic_enabled API returns true if vLAPIC is in xAPIC mode. In
all other cases, it returns false.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the static and inline attributes from the API is_x2apic_enabled
and declare a prototype in vlapic.h. Also fix the check performed on the
guest APICBASE_MSR value to query the vLAPIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When the SOS kernel/pre-launched OS accesses 0xCF8/0xCFC, it causes a
vm-exit and the hypervisor then tries to emulate the PCI cfg access.
0xCF8 write: the bdf/reg is captured. cached_reg = value & 0xFF;
0xCFC-0xCFF read/write: offset = address - 0xCFC. Then cached_reg +
offset is used as the offset to access the pci cfg space.
If an aligned reg is passed in the 0xCF8 register, this works well. But
when an unaligned reg is passed in the 0xCF8 register, cached_reg +
offset causes an incorrect pci cfg offset to be accessed. For example:
cached_reg = 0x02 (Device_ID offset) based on the value passed from
0xCF8;
offset = 2 based on the 0xCFC-0xCFF address;
then cached_reg + offset is used as the offset (PCI_CMD_REG).
In fact an unaligned reg works well on real HW.
So cached_reg should be aligned to handle an unaligned reg passed in the
0xCF8 reg (see the sketch below).
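A sketch of the fix: keep the register captured from 0xCF8 dword-aligned
and let the low two bits of the 0xCFC-0xCFF port select the byte lane
(struct and function names are illustrative):

    #include <stdint.h>

    struct pio_cf8_state {
        uint32_t cached_bdf;
        uint32_t cached_reg;   /* always kept dword-aligned */
    };

    static void handle_cf8_write(struct pio_cf8_state *st, uint32_t val)
    {
        st->cached_bdf = (val >> 8U) & 0xFFFFU;  /* bus/dev/func in bits 23:8 */
        st->cached_reg = val & 0xFCU;            /* align: drop bits 1:0 */
    }

    static uint32_t cfg_offset(const struct pio_cf8_state *st, uint16_t port)
    {
        /* port is in 0xCFC..0xCFF; its low two bits select the byte/word */
        return st->cached_reg + (uint32_t)(port - 0xCFCU);
    }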
Tracked-On: #3249
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
V1: Initial patch.
Modularize vpic. The current patch reduces the usage of acrn_vm inside
the vpic.c file.
Due to the global nature of register_pio_handler, where acrn_vm is being
passed, some usage remains.
This needs to be moved to a separate "interface" file. That will come in
a smaller follow-up patch, provided this patch is accepted.
V2:
Incorporated comments from Jason.
V3:
Fixed some MISRA-C Violations.
Tracked-On: #1842
Signed-off-by: Arindam Roy <arindam.roy@intel.com>
Reviewed-by: Xu, Anthony <anthony.xu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
When the lapic is passed through, the vpic and vioapic cannot be used
anymore. In the current code, users can still inject vpic interrupts
into the Guest OS, which is not allowed.
This patch removes the vpic and vioapic initialization when creating a
VM with lapic pass-through. But the APIs in vpic and vioapic are called
in many places; for these APIs, follow the principles below:
1. For the APIs which access uninitialized variables and may cause a
hypervisor hang, add @pre to make sure callers only use them after the
vpic or vioapic is initialized.
2. For the APIs which only return some static value, do nothing with
them.
3. For the APIs that callers use to inject interrupts, such as
vioapic_set_irqline_lock or vpic_set_irqline, add a condition so that
they only inject an interrupt when the vpic or vioapic is initialized
(see the sketch below). This way the vuart or hypercall code need not
care whether the lapic is passed through or whether the vpic and
vioapic are initialized.
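A sketch of principle 3 (the ready flag and register placeholder are
illustrative):

    #include <stdbool.h>
    #include <stdint.h>

    struct acrn_vioapic {
        bool ready;               /* set once at vioapic_init() time */
        uint32_t rte_shadow[24];  /* placeholder for the RTE state */
    };

    static void vioapic_set_irqline_lock(struct acrn_vioapic *vioapic,
                                         uint32_t irqline, uint32_t operation)
    {
        if (vioapic->ready && (irqline < 24U)) {
            /* take the vioapic lock, update the RTE, deliver the interrupt */
            vioapic->rte_shadow[irqline] = operation;
        }
        /* else: silently ignore, so vuart/hypercall callers need not know
         * whether the lapic is passed through */
    }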
Tracked-On: #3227
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
In the SDC scenario, the SOS VM id is fixed to 0, so some hypercalls
from the guest use a hardcoded "0" to represent the SOS VM. This brings
issues for the HYBRID scenario, in which the SOS VM id is non-zero.
Now introduce a new VM id concept for the DM/VHM hypercall APIs: when
creating post-launched VMs, return a relative VM id as seen from the
SOS. The DM/VHM can always treat their own vm id as "0". When they make
hypercalls, the hypervisor converts the relative VM id to the absolute
id when dispatching them (see the sketch below).
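A sketch of the id rebasing in the hypercall dispatcher (the conversion
helpers are illustrative; relative id 0 means the SOS itself):

    #include <stdint.h>

    static inline uint16_t rel_vmid_2_vmid(uint16_t sos_vmid, uint16_t rel_vmid)
    {
        return sos_vmid + rel_vmid;   /* relative id, as seen from the SOS */
    }

    static inline uint16_t vmid_2_rel_vmid(uint16_t sos_vmid, uint16_t vmid)
    {
        return vmid - sos_vmid;
    }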
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
According to SDM vol1 13.3:
Writing 1 to a reserved bit of XCR0 will trigger a #GP.
This patch makes ACRN's behavior align with the SDM definition.
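A partial sketch of the validation (only the reserved-bit and x87-bit
rules; the supported mask and helper name are placeholders, and the real
check also covers inter-bit dependencies):

    #include <stdbool.h>
    #include <stdint.h>

    #define XCR0_SUPPORTED_MASK   0x7UL   /* illustrative: x87 | SSE | AVX */

    static bool xcr0_write_is_valid(uint64_t val)
    {
        bool valid = true;

        if ((val & ~XCR0_SUPPORTED_MASK) != 0UL) {
            valid = false;   /* reserved bit set -> caller injects #GP */
        } else if ((val & 0x1UL) == 0UL) {
            valid = false;   /* the x87 bit must stay set per the SDM */
        }
        return valid;
    }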
Tracked-On: #3239
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
VM IDs higher than or equal to CONFIG_MAX_VM_NUM are all invalid VM IDs;
the macro has never been referenced in code, so remove it;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The vcpu number can be calculated from the pcpu_bitmap once
prepare_vcpu() is done, so remove this redundant configuration item;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The Zephyr kernel is a stripped RAM image; its entry and load addresses
are explicitly defined in the vm configurations, and the hypervisor will
load Zephyr directly based on these configurations.
Currently we only support booting Zephyr from protected mode.
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Previously multiboot mods[0] was designed to hold the kernel module for
all pre-launched VMs including the SOS VM, and mods[0].mm_string was
used to store the kernel cmdline. This design cannot satisfy the
requirement of hybrid mode scenarios, where each VM might use its own
kernel image and also its own ramdisk image. To resolve this, we use a
tag in the mods mm_string field to specify the module type. If the tag
matches the os_config of a VM configuration, the corresponding module is
loaded;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Different kernels have different load methods; this should be
configurable in the vm configurations;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The guest OS of ACRN will not be limited to Linux, so refine the struct
sw_linux into a more generic sw_module_info. Currently bootargs and
ramdisk are the only supported modules, but we can include more modules
in the future;
Tracked-On: #3214
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
is_msix, part of ptirq_msi_info, is not used in the code.
Tracked-On: #3205
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Present SGX related MSRs to guest if SGX is supported.
- MSR_IA32_SGXLEPUBKEYHASH0 ~ MSR_IA32_SGXLEPUBKEYHASH3:
SGX Launch Control is not supported, so these MSRs are read only.
- MSR_IA32_SGX_SVN_STATUS:
read only
- MSR_IA32_FEATURE_CONTROL:
If SGX is supported in the VM, opt in to SGX in this MSR.
- MSR_SGXOWNEREPOCH0 ~ MSR_SGXOWNEREPOCH1:
These two MSRs are package-scoped, so the guest is not allowed to change
them. Still leave them in the unsupported_msrs array.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If SGX is supported in the guest, present SGX capabilities to the
guest. There will be only one EPC section presented to the guest, even
if the EPC memory for a guest comes from multiple physical EPC sections.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add EPC information in vm configuration structure.
EPC information contains the EPC base and size allocated to a VM.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Get the platform EPC resource and partition the EPC resource for VMs
according to the VM configurations.
The SGX capability is not supported in the SOS VM.
init_sgx() is called during platform bsp initialization.
If init_sgx() fails, consider it a configuration error and panic the
system. init_sgx() fails when at least one VM requests EPC resources but
there is not enough EPC resource for all VMs.
If SGX is not supported by the platform or not opted in in the BIOS,
there is no further check; just disable SGX support for the VMs.
Tracked-On: #3179
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The VM name length is restricted to 32 characters. kata creates a VM
name with a GUID added as part of the name, making it around 80
characters. So increase this size to 128.
v1->v2:
It turns out that the MAX_VM_OS_NAME_LEN usage in the DM and the HV is
for different use cases. So remove the macro from acrn_common.h.
Define the macro MAX_VMNAME_LEN for DM purposes in dm.h. Retain the
original macro name MAX_VM_OS_NAME_LEN for HV purposes, but define it in
vm_config.h.
Tracked-On: #3138
Signed-off-by: Vijay Dhanraj <vijay.dhanraj@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Define a static mptable array so that each VM can index its vmptable by
vm id; then the mptable is not needed in the vm configurations;
Tracked-On: #2291
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
At the time the guest is switching to X2APIC mode, different VCPUs in the
same VM could expect the setting of the per VM msr_bitmap differently,
which could cause problems.
Considering different approaches to address this issue:
1) only guest BSP can update msr_bitmap when switching to X2APIC.
2) carefully re-write the update_msr_bitmap_x2apic_xxx() functions to
make sure any bit in the bitmap won't be toggled by the VCPUs.
3) make msr_bitmap per VCPU.
We chose option 3) because it's simple and clean, though it takes more
memory than the other options.
BTW, the const modifier needs to be removed from the
update_msr_bitmap_x2apic_xxx() functions to get this to compile.
Tracked-On: #3166
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
- add the GUEST_FLAG_HIGHEST_SEVERITY flag to indicate that the guest has
privilege to reboot the host system.
- this flag is statically assigned to guest(s) in vm_configurations.c in
different scenarios.
- implement reset_host() function to reset the host. First try the ACPI
reset register if available, then try the 0xcf9 PIO.
Tracked-On: #3145
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
And make other related changes accordingly:
Remove pci_pdev_enumeration_cb define
Create init_vdevs() to iterate through the pdev list and create vdev for each pdev
Export num_pci_pdev and pci_pdev_array as globals in header file
Minor cosmetic fix:
Remove trailing whitespace
Tracked-On: #3022
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
ACRN supports LAPIC emulation for guests using x86 APICv. When the guest
OS/BIOS switches from xAPIC to x2APIC mode of operation, ACRN also
supports switching from LAPIC emulation to LAPIC passthrough to the
guest. The user/developer needs to configure GUEST_FLAG_LAPIC_PASSTHROUGH
in guest_flags in the corresponding VM's config for ACRN to enable LAPIC
passthrough.
This patch does the following:
1) Fixes a bug in the above-mentioned feature. For a guest configured
with GUEST_FLAG_LAPIC_PASSTHROUGH, during the time period the guest is
using the xAPIC mode of the LAPIC, virtual interrupts are not delivered.
This can manifest as a guest hang when it does not receive virtual timer
interrupts.
2) ACRN used to expose the physical topology via CPUID leaf 0xb to LAPIC
PT VMs. This patch removes that condition and exposes the virtual
topology via CPUID leaf 0xb.
Tracked-On: #3136
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>