Replace new_base with vbar_base in vdev_pt_remap_generic_mem_vbar().
We will call vdev_pt_remap_generic_mem_vbar() after a new vbar base
is set, so there is no need to pass new_base to vdev_pt_remap_generic_mem_vbar():
the new vbar base (vbar_base) can be obtained by calling get_vbar_base().
The reason we call vdev_pt_remap_generic_mem_vbar() after a new vbar base
is set is 64-bit MMIO handling: when the lower 32 bits of a 64-bit MMIO vbar are
set, we defer calling vdev_pt_remap_generic_mem_vbar() until its upper 32-bit
vbar base is set.
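As an illustration only, a minimal sketch of this deferred remap flow, assuming a
simplified write path and reusing the helper names mentioned above (the real
vdev_pt code differs in detail):

    #include <stdint.h>
    #include <stdbool.h>

    struct pci_vdev;                               /* opaque for this sketch */

    uint64_t get_vbar_base(const struct pci_vdev *vdev, uint32_t idx);
    void vdev_pt_remap_generic_mem_vbar(struct pci_vdev *vdev, uint32_t idx);

    /* hypothetical write path for one 32-bit half of a 64-bit MMIO vbar */
    static void write_vbar_half(struct pci_vdev *vdev, uint32_t idx, bool is_upper_half)
    {
        /* ... the new 32-bit value has already been stored in the vbar ... */
        if (is_upper_half) {
            /* both halves are now set: remap; the helper fetches the
             * full 64-bit base itself via get_vbar_base(vdev, idx) */
            vdev_pt_remap_generic_mem_vbar(vdev, idx);
        }
        /* lower half: defer the remap until the upper half is written */
    }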
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Add a bar_base_mapped[] array to remember the previously mapped/registered vbar base,
for the following reasons:
register_mmio_emulation_handler() will throw an error if the same addr_lo is
already registered before.
We are going to remove the base member from struct pci_bar, so we cannot use vdev->bar[idx].base
in the code any more
In subsequent commits, we will assume vdev_pt_remap_generic_mem_vbar() is called after a new
vbar base is set, mainly because of 64-bit mmio bar handling, so we need a
separate bar_base_mapped[] array to track the previously mapped vbar bases.
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
vbar base can be built by using the base address fields stored in
struct pci_bar's reg member.
get_vbar_base: returns the vbar's base address as a 64-bit value. For a 64-bit MMIO bar, its lower 32-bit
base address and upper 32-bit base address are combined into one 64-bit base address.
Related code is changed to use get_vbar_base() to get the vbar base address in
64-bit.
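For illustration, a sketch of how the two 32-bit halves can be combined into a
single 64-bit base (the field names here are simplified assumptions, not the exact
struct pci_bar layout):

    #include <stdint.h>

    /* simplified stand-in for the lower/upper 32-bit bar registers */
    struct bar_pair {
        uint32_t lo;    /* lower bar register: flags live in the low bits */
        uint32_t hi;    /* upper bar register of a 64-bit MMIO bar */
    };

    static uint64_t combine_mem_bar_base(const struct bar_pair *bar)
    {
        /* mask off the memory bar flag bits (bits 3:0) before combining */
        uint64_t lo_base = (uint64_t)(bar->lo & 0xFFFFFFF0U);

        return ((uint64_t)bar->hi << 32U) | lo_base;
    }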
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
We added "union pci_bar_reg reg" to struct pci_bar in previous commit,
but only pci_pdev uses it and pci_vdev does not use it. Starting from
this commit, pci_vdev will use it:
In init_vdev_pt(), copy pbar's reg's flags portion to corresponding vbar's
reg.
When guest updates the vbar base address, the corresponding vbar reg's base
address will also be updated, so that in subsequent commits, we can eventually
remove the base member in struct pci_bar.
Rename local variable new_bar to base in vdev_pt_write_vbar
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
pbar base can be built by using the base address fields stored in
struct pci_bar's reg member.
get_pbar_base: returns the pbar's base address as a 64-bit value. For a 64-bit MMIO bar, its lower 32-bit
base address and upper 32-bit base address are combined into one 64-bit base address.
pci_bar_2_bar_base: helper function that is called by get_pbar_base
Related code is changed to use get_pbar_base() to get the pbar base address in 64-bit.
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
There is no need to call get_bar_base(), as new_bar is set to val & mask,
where mask is the bar size mask, so new_bar has already been set to the
bar base address before get_bar_base() is called on it.
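For example (numbers are made up for illustration): for a 4 KiB memory bar,
mask = 0xFFFFF000; if the guest writes val = 0xDF20100C, then
new_bar = val & mask = 0xDF201000, which is already the aligned bar base, so
get_bar_base() would just return the same value.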
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
The ACRN coding guideline requires that a function shall have only one return entry.
Fix it.
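A trivial illustration of the single-return style (a hypothetical function, not the
one being fixed):

    #include <stdint.h>
    #include <stdbool.h>

    /* compute the result first, then return once at the end */
    static bool is_in_range(uint32_t val, uint32_t low, uint32_t high)
    {
        bool ret = false;

        if ((val >= low) && (val < high)) {
            ret = true;
        }

        return ret;
    }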
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vcpu.c was never scanned because the scan tool would crash on it.
After modularization, vcpu.c can be scanned by the scan tool.
Clean up the violations in vcpu.c.
Fix the violations:
1. No brackets to then/else.
2. Function return value not checked.
3. Signed/unsigned conversion without cast.
V1->V2:
change the type of "vcpu->arch.irq_window_enabled" to bool.
V2->V3:
add "void *" prefix on the 1st parameter of memset.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The sbuf is allocated for each pcpu by a hypercall from the SOS. Before launching
a Guest OS, the script offlines CPUs, which triggers vcpu reset and
then resets the sbuf pointer. But the sbuf is only initialized once by the SOS, so the
CPUs used for the Guest OS have no sbuf to use. Thus, when running 'acrntrace' on the SOS,
there is no trace data for the Guest OS.
To fix the issue, only reset the sbuf for SOS.
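A minimal sketch of the intended guard, using made-up type and field names purely
for illustration:

    #include <stddef.h>
    #include <stdbool.h>

    #define SBUF_ID_MAX 4U                 /* illustrative value */

    struct vm   { bool is_sos; };
    struct vcpu { struct vm *vm; void *sbuf[SBUF_ID_MAX]; };

    /* only the SOS sets up the shared buffers, so only reset them for SOS vcpus */
    static void reset_vcpu_sbuf(struct vcpu *vcpu)
    {
        size_t i;

        if (vcpu->vm->is_sos) {
            for (i = 0U; i < SBUF_ID_MAX; i++) {
                vcpu->sbuf[i] = NULL;
            }
        }
    }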
Tracked-On: #3335
Signed-off-by: Conghui Chen <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Use nr_bars instead of PCI_BAR_COUNT to check bar access offset.
While a normal PCI device has at most 6 bars, a PCI bridge only has 2 bars,
so for a normal PCI device, PCI cfg offsets 0x10-0x24 are for bar access,
but for a PCI bridge, only 0x10-0x14 are for bar access (0x18-0x24 are
for other accesses).
Rename function:
pci_bar_access --> is_bar_offset
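A hedged sketch of what the nr_bars-based check can look like (the register offset
macro is the standard PCI one; the code is illustrative, not the exact hypervisor
source):

    #include <stdint.h>
    #include <stdbool.h>

    #define PCI_BASE_ADDRESS_0 0x10U       /* offset of the first bar register */

    /* nr_bars is 6 for a normal PCI device, 2 for a PCI bridge */
    static bool is_bar_offset(uint32_t nr_bars, uint32_t offset)
    {
        /* each bar register is 4 bytes wide */
        return (offset >= PCI_BASE_ADDRESS_0) &&
               (offset < (PCI_BASE_ADDRESS_0 + (nr_bars << 2U)));
    }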
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
And put the checking in vdev_pt_write_cfg instead, to have less nesting in
vdev_pt_write_vbar and improve code readability.
Rename function:
vdev_pt_remap_generic_bar --> vdev_pt_remap_generic_mem_vbar
vdev_pt_read_cfg's function declaration is merged into one line instead of 2
lines
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
nr_bars in struct pci_pdev is used to store the actual number of bars
(6 for a normal PCI device and 2 for a PCI bridge); nr_bars will be used in subsequent
patches.
Use uint32_t for bar-related variables (bar index, etc.) to unify the bar-related
code (no casting between uint32_t and uint8_t).
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
union pci_bar uses bit fields and follows the PCI bar spec definition to define the
bar flags portion and base address; this is to keep the same hardware format for the vbar
register. The base/type members of union pci_bar are still kept to minimize code changes
in one patch; they will be removed in subsequent patches.
Define the pci_pdev_get_bar_base() function to extract the bar base address from a 32-bit raw
bar value.
Define a utility function pci_get_bar_type() to extract the bar type
from a raw bar value to simplify code, as this function will be used in multiple
places later on: it can be called on the reg->value stored in struct
pci_bar to derive the bar type.
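A hedged sketch of such a type helper (the bit layout follows the PCI spec; the enum
and names are assumptions, not necessarily the ACRN definitions):

    #include <stdint.h>

    enum pci_bar_type {
        PCIBAR_IO_SPACE,
        PCIBAR_MEM32,
        PCIBAR_MEM64,
    };

    static enum pci_bar_type pci_get_bar_type(uint32_t val)
    {
        enum pci_bar_type type;

        if ((val & 0x1U) != 0U) {
            type = PCIBAR_IO_SPACE;        /* bit 0 set: I/O space bar */
        } else if ((val & 0x6U) == 0x4U) {
            type = PCIBAR_MEM64;           /* bits 2:1 == 10b: 64-bit memory bar */
        } else {
            type = PCIBAR_MEM32;           /* otherwise treat it as a 32-bit memory bar */
        }

        return type;
    }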
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Add a get_offset_of_caplist() function to return the capability list offset based on header type:
For a normal PCI device and a bridge, the capability offset is at offset 0x34.
For cardbus, the capability offset is at offset 0x14.
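A hedged sketch of such a helper (the header-type values follow the PCI spec; the
macro names are assumptions):

    #include <stdint.h>

    #define PCIR_CAP_PTR          0x34U    /* normal device / bridge capability pointer */
    #define PCIR_CAP_PTR_CARDBUS  0x14U    /* cardbus capability pointer */
    #define PCIM_HDRTYPE          0x7FU
    #define PCIM_HDRTYPE_NORMAL   0x00U
    #define PCIM_HDRTYPE_BRIDGE   0x01U

    static uint32_t get_offset_of_caplist(uint8_t hdr_type)
    {
        uint32_t offset = PCIR_CAP_PTR_CARDBUS;

        if (((hdr_type & PCIM_HDRTYPE) == PCIM_HDRTYPE_NORMAL) ||
            ((hdr_type & PCIM_HDRTYPE) == PCIM_HDRTYPE_BRIDGE)) {
            offset = PCIR_CAP_PTR;
        }

        return offset;
    }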
Tracked-On: #3241
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
The current implementation caches each ISR vector in an ISR vector stack and checks
the ISR vector stack when updating the PPR. However, there is no need to do this
because:
1) We never touch vlapic->isrvec_stk[0] except when doing vlapic_reset:
so we don't need to check vlapic->isrvec_stk[0].
2) We only deliver a higher priority interrupt from IRR to ISR:
so we don't need to check whether the vlapic->isrvec_stk entries are always increasing.
3) There are only 15 different interrupt priority classes, so it can never happen that more than
15 interrupts are delivered to the ISR:
so we don't need to check whether vlapic->isrvec_stk_top grows larger than
ISRVEC_STK_SIZE, which is 16.
This patch removes the ISR vector stack and uses isrv to cache the vector number of
the highest priority bit that is set in the ISR.
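For illustration only, a sketch of how a single cached isrv can be used when
updating the PPR (field names are simplified assumptions; the logic follows the SDM
definition of PPR):

    #include <stdint.h>

    struct vlapic_regs { uint32_t tpr; uint32_t ppr; };

    /* isrv caches the vector of the highest-priority bit set in the ISR;
     * PPR = TPR if TPR[7:4] >= ISRV[7:4], otherwise ISRV[7:4] with low bits 0 */
    static void update_ppr(struct vlapic_regs *regs, uint32_t isrv)
    {
        uint32_t isrv_prio = isrv & 0xF0U;     /* priority class = vector[7:4] */

        regs->ppr = ((regs->tpr & 0xF0U) >= isrv_prio) ? regs->tpr : isrv_prio;
    }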
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
There is a build error if we only build the debug/release library,
because the build/modules folder is missing.
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
modified: Makefile
modified: debug/Makefile
modified: release/Makefile
Fix the violations without touching the logic:
1. Function return value not checked.
2. Logical conjunctions need brackets.
3. No brackets to then/else.
4. Type conversion without cast.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
When debugging the HV, we may want to check the HV version information
frequently. The current HV shell has no command to check this
information; we can only scroll up the HV console log to
get it. If there is a huge number of log lines, it
will be very time-consuming to find the HV version information.
So, this patch adds a 'version' command to get the HV version information
conveniently.
Tracked-On: #3310
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Remove a couple of run-time ASSERTs in the ioapic module by checking the
number of interrupt pins per IO-APIC against the configured MAX_IOAPIC_LINES
in the initialization flow.
Also remove the need for two MACROs specifying the max. number of
interrupt lines per IO-APIC and add a config item MAX_IOAPIC_LINES for the
same.
Tracked-On: #3299
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
MISRA-C requires that inline functions be declared static; since
these APIs are external interfaces, remove inline.
Tracked-On: #861
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
modified: arch/x86/guest/vcpu.c
The MISRA-C standard requires that the result of an expression in an if/while condition shall be boolean.
Tracked-On: #861
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
There is a lot of work to do between create_vm (HV marks the vm's state
as VM_CREATED at this stage) and vm_run (HV marks the vm's state as VM_STARTED),
like building the mptable/acpi table and initializing mevent and vdevs. If
something goes wrong between create_vm and vm_run, the device model jumps
to the deinit process and tries to destroy the vm. For example, if
vm_init_vdevs fails, the device model jumps to dev_fail and then destroys
the vm.
For a normal vm in the above situation, it is fine to destroy the vm, and we can create and
start it next time. But for an RTVM, we can't destroy the vm, as the vm's state is
VM_CREATED and we can only destroy a vm when its state is VM_POWERING_OFF. So, the
vm will stay in the VM_CREATED state and we will never have a chance to destroy it.
Consequently, we can't create and start the vm next time.
This patch fixes it by allowing an RTVM to be paused and then destroyed when its state is VM_CREATED.
Tracked-On: #3069
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
wbinvd is used to write back all modified cache lines in the processor's
internal cache to main memory and invalidate (flush) the internal caches.
Use clflushopt instructions to emulate wbinvd by flushing each
guest VM's memory; if CLFLUSHOPT is not supported, boot will fail.
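A hedged sketch of such a flush loop (compiler intrinsics, requires -mclflushopt;
the range arguments are illustrative, and the real hypervisor walks guest memory
regions instead):

    #include <stddef.h>
    #include <immintrin.h>

    #define CACHE_LINE_SIZE 64UL

    /* write back one address range cache line by cache line */
    static void flush_range_clflushopt(void *base, size_t size)
    {
        char *p = (char *)base;
        size_t i;

        for (i = 0UL; i < size; i += CACHE_LINE_SIZE) {
            _mm_clflushopt(p + i);
        }
        _mm_sfence();   /* order the flushes before continuing */
    }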
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The has_rt_vm API walks through all VMs to check the RT VM flag; if
there is no RT VM it returns false, otherwise it returns true.
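A minimal sketch of the walk (the array, its size, and the flag bit are made-up
placeholders for illustration):

    #include <stdint.h>
    #include <stdbool.h>

    #define MAX_VM_NUM    8U                /* placeholder */
    #define GUEST_FLAG_RT (1UL << 1U)       /* placeholder RT flag bit */

    struct acrn_vm { uint64_t guest_flags; };
    extern struct acrn_vm vm_array[MAX_VM_NUM];

    static bool has_rt_vm(void)
    {
        bool ret = false;
        uint16_t i;

        for (i = 0U; i < MAX_VM_NUM; i++) {
            if ((vm_array[i].guest_flags & GUEST_FLAG_RT) != 0UL) {
                ret = true;
                break;
            }
        }

        return ret;
    }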
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
The ept_flush_leaf_page API is used to flush the address space covered
by an EPT page entry; the user can use it together with walk_ept_mr to
flush a VM's address space.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The walk_ept_table API is used to walk through the EPT table to get
all present pages; the user can get each page entry and its size
from the walk_ept_table callback.
The get_ept_entry API is used to get the EPT pointer of the vm: if the current
context of the vm is the secure world, it returns the secure world EPT pointer, otherwise
it returns the normal world EPT pointer.
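An illustrative sketch of the callback-based interface described above (the
prototypes are assumptions, not the exact ACRN signatures):

    #include <stdint.h>

    struct acrn_vm;   /* opaque here */

    /* invoked for each present leaf entry, together with the size it maps */
    typedef void (*pge_handler)(uint64_t *pgentry, uint64_t size);

    /* walk every present page of the VM's current EPT and call cb on it */
    void walk_ept_table(struct acrn_vm *vm, pge_handler cb);

    /* return the secure-world EPT pointer when the VM's current context is the
     * secure world, otherwise the normal-world EPT pointer */
    void *get_ept_entry(struct acrn_vm *vm);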
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
CLFLUSHOPT invalidates, from every level of the cache hierarchy
in the cache coherence domain, the cache line that contains the linear
address specified with the memory operand. If that cache line contains
modified data at any level of the cache hierarchy, that data is written
back to memory.
If the platform does not support CLFLUSHOPT instruction, boot will fail.
Signed-off-by: Jack Ren <jack.ren@intel.com>
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation doesn't make clear which access types we support for
APIC-Access VM Exit:
1) linear access for an instruction fetch
-- APIC-access page is mapped as UC which doesn't support fetch
2) linear access (read or write) during event delivery
-- This does not happen in the normal case unless the guest goes wrong, such as
setting the IDT table in the APIC-access page. In this case, we don't need to support it.
3) guest-physical access during event delivery;
guest-physical access for an instruction fetch or during instruction execution
-- Do we plan to support enabling the APIC in real mode? I don't think so.
Tracked-On: #1842
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
In hcall_inject_msi, we check the vlapic state of the SOS by mistake.
If the SOS's vlapic state doesn't equal the target_vm's, OVMF will
hang at boot. Instead, we should check the target_vm's
vlapic state.
Tracked-On: #3069
Acked-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@intel.com>
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
support compiling virtual platform hypercall to vp_hcall_mod.a
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
modified: Makefile
support compiling virtual platform trusty to vp_trusty_mod.a
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
modified: Makefile
support compiling virtual platform device model layer
to vp_dm_mod.a
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
modified: Makefile
support compiling virtual platform base layer to
vp_base_mod.a
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
modified: Makefile
HV has been divided into the following layers
according to Jason's modularization documentation
high: 70 -- system initialization
60 -- virtual platform hypercall
50 -- virtual platform trusty
40 -- virtual platform device model
30 -- virtual platform base
20 -- hardware management
10 -- platform boot
low: 00 -- library
this patch is only for the library layer; it supports
compiling the library layer to lib_mod.a
Tracked-On: #1842
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
modified: Makefile
This patch introduces check_vm_vlapic_state API instead of is_lapic_pt_enabled
to check if all the vCPUs of a VM are using x2APIC mode and LAPIC
pass-through is enabled on all of them.
When the VM is in the VM_VLAPIC_TRANSITION or VM_VLAPIC_DISABLED state,
the following conditions apply.
1) For pass-thru MSI interrupts, interrupt source is not programmed.
2) For DM emulated device MSI interrupts, interrupt is not delivered.
3) For IPIs, it will work only if the sender and destination are both in x2APIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch introduces vLAPIC state for a VM. The VM vLAPIC state can
be one of the following
* VM_VLAPIC_X2APIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use x2APIC mode
* VM_VLAPIC_XAPIC - All the vCPUs/vLAPICs (Except for those in Disabled mode) of this
VM use xAPIC mode
* VM_VLAPIC_DISABLED - All the vCPUs/vLAPICs of this VM are in Disabled mode
* VM_VLAPIC_TRANSITION - Some of the vCPUs/vLAPICs of this VM (Except for those in Disabled mode)
are in xAPIC and the others in x2APIC
When a vCPU updates the IA32_APIC_BASE MSR to switch LAPIC mode, this
API is called to sync the vLAPIC state of the VM. Upon VM creation and reset,
the vLAPIC state is set to VM_VLAPIC_XAPIC, as ACRN starts the vCPUs' vLAPICs in
xAPIC mode.
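The four states could be captured by an enum along these lines (a sketch; the exact
ACRN definition may differ):

    enum vm_vlapic_state {
        VM_VLAPIC_XAPIC,        /* all enabled vLAPICs are in xAPIC mode (reset default) */
        VM_VLAPIC_X2APIC,       /* all enabled vLAPICs are in x2APIC mode */
        VM_VLAPIC_DISABLED,     /* all vLAPICs are disabled */
        VM_VLAPIC_TRANSITION,   /* enabled vLAPICs are a mix of xAPIC and x2APIC */
    };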
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
is_xapic_enabled API returns true if vLAPIC is in xAPIC mode. In
all other cases, it returns false.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the static and inline attributes from the API is_x2apic_enabled
and declare a prototype in vlapic.h. Also fix the check performed on the guest
APICBASE_MSR value to query the vLAPIC mode.
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch changes the code in vlapic_set_apicbase for the following reasons
1) Better readability as it first checks if the new value programmed into
MSR is any different from the existing value cached in guest structures
2) Check if both bits 11:10 are set before enabling x2APIC mode for guest.
Current code does not check if Bit 11 is set.
3) Add a TODO in the comments to detail the current gaps in
IA32_APIC_BASE MSR emulation.
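A hedged sketch of the bits 11:10 check (the bit positions follow the SDM layout of
IA32_APIC_BASE; the names are assumptions):

    #include <stdint.h>
    #include <stdbool.h>

    #define APICBASE_XAPIC_EN  (1UL << 11U)   /* bit 11: APIC global enable */
    #define APICBASE_X2APIC_EN (1UL << 10U)   /* bit 10: x2APIC enable */

    /* x2APIC mode is requested only when both bit 11 and bit 10 are set */
    static bool requests_x2apic_mode(uint64_t new_apicbase)
    {
        uint64_t both = APICBASE_XAPIC_EN | APICBASE_X2APIC_EN;

        return (new_apicbase & both) == both;
    }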
Tracked-On: #3253
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When the SOS kernel/pre-launched OS accesses 0xCF8/0xCFC, it causes
a VM exit and the hypervisor then tries to emulate the PCI cfg access.
0xCF8 write: the bdf/reg is captured. cached_reg = value & 0xFF;
0xCFC-0xCFF read/write: offset = address - 0xCFC. Then cached_reg + offset is
used as the offset to access the pci_cfg.
If an aligned reg is passed in the 0xCF8 register, it works well. But when
an unaligned reg is passed in the 0xCF8 register, cached_reg + offset causes
an incorrect pci_cfg offset to be accessed. For example:
cached_reg = 0x02 (Device_ID offset) based on the value passed through 0xCF8;
offset = 2 based on the 0xCFC-0xCFF address;
then cached_reg + offset is used as the offset (PCI_CMD_REG).
In fact the unaligned reg works well on real HW.
So the cached_reg should be aligned to handle an unaligned reg passed in
the 0xCF8 reg.
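A minimal sketch of the aligned handling (variable names mirror the description
above; the code is illustrative, not the exact hypervisor source):

    #include <stdint.h>

    static uint32_t cached_reg;

    /* 0xCF8 write: keep only the dword-aligned register number, so an
     * unaligned reg written to 0xCF8 still works like it does on real HW */
    static void pci_cfg_addr_write(uint32_t val)
    {
        cached_reg = val & 0xFCU;
    }

    /* 0xCFC-0xCFF access: the byte offset within the dword comes from the port */
    static uint32_t pci_cfg_offset(uint16_t port)
    {
        return cached_reg + ((uint32_t)port - 0xCFCU);
    }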
Tracked-On: #3249
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
V1: Initial patch.
Modularize vpic. The current patch reduces the usage
of acrn_vm inside the vpic.c file.
Due to the global nature of register_pio_handler, where
acrn_vm is being passed, some usage remains.
This needs to be moved to a separate "interface" file.
That will come in a smaller newer patch, provided
this patch is accepted.
V2:
Incorporated comments from Jason.
V3:
Fixed some MISRA-C Violations.
Tracked-On: #1842
Signed-off-by: Arindam Roy <arindam.roy@intel.com>
Reviewed-by: Xu, Anthony <anthony.xu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
This commit extracts the PM IO handler registration code into register_pm_io_handler()
to reduce the cyclomatic complexity of create_vm(), in order to comply with
MISRA-C rules.
Tracked-On: #3227
Signed-off-by: Yan, Like <like.yan@intel.com>