The passthrough device's max payload size (MPS) setting must match the setting
on the native device, otherwise the passthrough device may not work. So the
vrp's max payload capability has to mirror that of the native root port;
otherwise we may accidentally change the passthrough device's max payload size,
because during the guest OS's PCI enumeration the passthrough device
renegotiates its MPS setting against the vrp.
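A minimal sketch of the idea (helper names and the vrp config-space access
below are assumptions, not the actual ACRN code): copy the physical root
port's Max Payload Size Supported field (Device Capabilities register,
bits 2:0) into the vrp's Device Capabilities, so the guest's MPS negotiation
against the vrp cannot lower the passthrough device's setting.

  #include <stdint.h>

  #define PCIE_DEVCAP_OFF      0x04U  /* Device Capabilities, relative to the PCIe capability */
  #define PCIE_DEVCAP_MPS_MASK 0x07U  /* Max Payload Size Supported, bits 2:0 */

  /* Assumed helpers, not the real ACRN APIs. */
  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);
  extern uint32_t vrp_cfg_read(uint32_t off, uint32_t bytes);
  extern void     vrp_cfg_write(uint32_t off, uint32_t bytes, uint32_t val);

  /* Copy the native root port's Max Payload Size Supported field into the
   * vrp's Device Capabilities so the guest negotiates the same MPS. */
  static void vrp_sync_mps(uint16_t phys_rp_bdf, uint32_t p_cap, uint32_t v_cap)
  {
      uint32_t p_devcap = pci_pdev_read_cfg(phys_rp_bdf, p_cap + PCIE_DEVCAP_OFF, 4U);
      uint32_t v_devcap = vrp_cfg_read(v_cap + PCIE_DEVCAP_OFF, 4U);

      v_devcap &= ~PCIE_DEVCAP_MPS_MASK;
      v_devcap |= (p_devcap & PCIE_DEVCAP_MPS_MASK);
      vrp_cfg_write(v_cap + PCIE_DEVCAP_OFF, 4U, v_devcap);
  }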
Tracked-On: #5915
Signed-off-by: Rong Liu <rong.l.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
For post-launched VMs, ACRN supports PTM under these conditions:
1. The hardware implements a simple PTM hierarchy: the PTM requestor device
(endpoint) is directly connected to a PTM-root-capable root port; or
2. The PTM requestor itself is a root complex integrated endpoint.
Currently ACRN does not support emulation of any other type of PTM hierarchy,
such as one with an intermediate PTM node (for example, a switch) between the
PTM requestor and the PTM root.
To keep the VM from touching physical hardware, the ACRN hypervisor ensures
PTM is always enabled in the hardware. During the HV's PCI init, if a root
port is PTM capable, the HV enables PTM on that root port. In addition, it
logs an error (and does not enable PTM) if the PTM root capability is on an
intermediate node other than a root port.
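Roughly, the enable path in the HV's PCI init could look like the sketch below
(register offsets follow the PCIe PTM extended capability definition; the
helper names and the commented-out log are placeholders, not the actual
pci_enable_ptm_root() implementation):

  #include <stdint.h>
  #include <stdbool.h>

  #define PTM_CAP_OFF       0x04U       /* PTM Capability register, rel. to the PTM ext cap */
  #define PTM_CTRL_OFF      0x08U       /* PTM Control register */
  #define PTM_CAP_ROOT      (1U << 2)   /* PTM Root Capable */
  #define PTM_CTRL_ENABLE   (1U << 0)
  #define PTM_CTRL_ROOT_SEL (1U << 1)

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);
  extern void     pci_pdev_write_cfg(uint16_t bdf, uint32_t off, uint32_t bytes, uint32_t val);
  extern bool     is_root_port(uint16_t bdf);

  /* Enable PTM on a PTM-root-capable root port; refuse (and log) anywhere else. */
  static void enable_ptm_root(uint16_t bdf, uint32_t ptm_ecap_off)
  {
      uint32_t cap = pci_pdev_read_cfg(bdf, ptm_ecap_off + PTM_CAP_OFF, 4U);

      if (((cap & PTM_CAP_ROOT) != 0U) && is_root_port(bdf)) {
          uint32_t ctrl = pci_pdev_read_cfg(bdf, ptm_ecap_off + PTM_CTRL_OFF, 4U);
          pci_pdev_write_cfg(bdf, ptm_ecap_off + PTM_CTRL_OFF, 4U,
                             ctrl | PTM_CTRL_ENABLE | PTM_CTRL_ROOT_SEL);
      } else {
          /* PTM root capability on an intermediate node: unsupported hierarchy */
          /* pr_err("PTM root capability not on a root port, not enabled"); */
      }
  }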
V2:
- Modify the commit message to clarify the limitations of the current PTM
implementation.
- Fix code that may fail FUSA
- Remove pci_ptm_info() and put info log inside pci_enable_ptm_root().
Tracked-On: #5915
Signed-off-by: Rong Liu <rong.l.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Create the virtual root port through the add_vdev hypercall. add_vdev
identifies the virtual device to add by its vendor ID and device ID, then
calls the corresponding function to create the virtual device.
- create_vrp(): find the right virtual root port to create by its secondary
bus number, then initialize the virtual root port and finally initialize the
PTM-related configuration (see the sketch below).
- destroy_vrp(): nothing to destroy.
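As an illustration only (the vrp table, its field names and the init helpers
are assumptions): create_vrp() could scan the configured virtual root ports
for the one whose secondary bus number matches the request, then run the
generic init plus the PTM setup.

  #include <stdint.h>
  #include <stddef.h>
  #include <stdbool.h>

  struct vrp_config {                 /* hypothetical per-vrp configuration */
      uint8_t secondary_bus;
      bool    ptm_capable;
  };

  extern struct vrp_config vrp_table[];
  extern size_t vrp_table_size;
  extern void init_vroot_port(struct vrp_config *vrp);
  extern void init_vrp_ptm(struct vrp_config *vrp);

  /* Find the virtual root port owning 'sec_bus' and bring it up. */
  static int32_t create_vrp(uint8_t sec_bus)
  {
      for (size_t i = 0U; i < vrp_table_size; i++) {
          if (vrp_table[i].secondary_bus == sec_bus) {
              init_vroot_port(&vrp_table[i]);
              if (vrp_table[i].ptm_capable) {
                  init_vrp_ptm(&vrp_table[i]);
              }
              return 0;
          }
      }
      return -1;   /* no virtual root port with this secondary bus number */
  }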
Tracked-On: #5915
Signed-off-by: Rong Liu <rong.l.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Jason Chen <jason.cj.chen@intel.com>
Acked-by: Yu Wang <yu1.wang@intel.com>
Add a virtual root port that supports the most basic PCIe bridge and root port operations.
- init_vroot_port(): initialize vroot_port's basic registers.
- deinit_vroot_port(): reset vroot_port.
- read_vroot_port_cfg(): read from vroot_port's virtual config space.
- write_vroot_port_cfg(): write to vroot_port's virtual config space.
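For orientation, these four handlers typically plug into a per-vdev ops table;
the sketch below uses an assumed shape for pci_vdev_ops rather than the exact
ACRN definition.

  #include <stdint.h>

  struct pci_vdev;                       /* opaque here */

  struct pci_vdev_ops {                  /* assumed shape of the ops table */
      void    (*init_vdev)(struct pci_vdev *vdev);
      void    (*deinit_vdev)(struct pci_vdev *vdev);
      int32_t (*read_vdev_cfg)(struct pci_vdev *vdev, uint32_t off,
                               uint32_t bytes, uint32_t *val);
      int32_t (*write_vdev_cfg)(struct pci_vdev *vdev, uint32_t off,
                                uint32_t bytes, uint32_t val);
  };

  extern void    init_vroot_port(struct pci_vdev *vdev);
  extern void    deinit_vroot_port(struct pci_vdev *vdev);
  extern int32_t read_vroot_port_cfg(struct pci_vdev *vdev, uint32_t off,
                                     uint32_t bytes, uint32_t *val);
  extern int32_t write_vroot_port_cfg(struct pci_vdev *vdev, uint32_t off,
                                      uint32_t bytes, uint32_t val);

  const struct pci_vdev_ops vroot_port_ops = {
      .init_vdev      = init_vroot_port,
      .deinit_vdev    = deinit_vroot_port,
      .read_vdev_cfg  = read_vroot_port_cfg,
      .write_vdev_cfg = write_vroot_port_cfg,
  };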
Tracked-On: #5915
Signed-off-by: Rong Liu <rong.l.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Jason Chen <jason.cj.chen@intel.com>
Acked-by: Yu Wang <yu1.wang@intel.com>
This patch denies the Service VM access permission to device resources owned
by the hypervisor.
The HV may own these devices: (1) the debug UART PCI device, in the debug
build; (2) type 1 PCI devices, if there are pre-launched VMs.
The current implementation exposes the MMIO/PIO resources of HV-owned devices
to the SOS; they should be removed from the SOS.
Tracked-On: #5615
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
This patch denies the Service VM access permission to device resources owned
by pre-launched VMs.
Rationale:
* Pre-launched VMs in ACRN are independent of the Service VM and should be
immune to attacks from it. However, the current implementation exposes the BAR
resources of passthrough devices to the Service VM, which makes it possible
for the Service VM to crash or attack pre-launched VMs.
* The same applies to hypervisor-owned devices.
NOTE:
* The MMIO spaces pre-allocated to VFs are still presented to the Service VM.
The SR-IOV capable devices assigned to pre-launched VMs don't have the SR-IOV
capability, so the MMIO address spaces pre-allocated by the BIOS for VFs are
not decoded by hardware and can't be enabled by the guest. The SOS can live
with seeing the address space or not. We will revisit this later.
Tracked-On: #5615
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The commit 'Fix: HV: VM OS failed to assign new address to pci-vuart
BARs' needs more reshuffling.
Tracked-On: #5491
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Signed-off-by: Eddie Dong <eddie.dong@intel.com>
When a wrong BAR address is set for a pci-vuart, the OS may assign a new BAR
address to it. The pci-vuart BAR can't be reprogrammed because of its wrong
fixed value. This happens because pci_vbar.fixed and pci_vbar.type overlap in
abstraction: pci_vbar.fixed has a confusing name, and pci_vbar.type has
PCIBAR_MEM64HI, which is not really a type of PCI BAR.
So replace pci_vbar.type with pci_vbar.is_mem64hi, and change pci_vbar.fixed
into a union with the new name pci_vbar.bar_type.
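A rough sketch of the reshuffled structure (only bar_type and is_mem64hi come
from this commit; the other field names and the union layout are illustrative
guesses):

  #include <stdint.h>
  #include <stdbool.h>

  union pci_bar_type {
      uint32_t bits;               /* the low BAR bits as defined by hardware */
      struct {
          uint32_t is_io    : 1;   /* 1 = I/O space BAR, 0 = memory BAR */
          uint32_t mem_type : 2;   /* 00b = 32-bit memory, 10b = 64-bit memory */
          uint32_t prefetch : 1;
      } fields;
  };

  struct pci_vbar {
      union pci_bar_type bar_type; /* replaces the confusingly named 'fixed' */
      bool     is_mem64hi;         /* upper half of a 64-bit BAR; replaces the
                                    * pseudo-type PCIBAR_MEM64HI */
      uint64_t size;
      uint64_t base_gpa;
  };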
Tracked-On: #5491
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Per PCI Firmware Specification Revision 3.0, section 4.1.2, MCFG Table
Description: the Memory Mapped Enhanced Configuration Space Base Address
Allocation Structure assigns the Start Bus Number and the End Bus Number that
can be decoded by the Host Bridge. We should not access a PCI device whose bus
number is outside the range [Start Bus Number, End Bus Number].
For ACRN, we should:
1. Not detect PCI devices whose bus number is outside the range
[Start Bus Number, End Bus Number] of the MCFG ACPI table.
2. Only trap the ECAM MMIO range [MMCFG_BASE_ADDRESS, MMCFG_BASE_ADDRESS +
(End Bus Number - Start Bus Number + 1) * 0x100000) for the SOS.
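As a worked example (assuming the spec's 1 MB of ECAM per bus: 32 devices x 8
functions x 4 KiB), an MCFG entry covering buses 0x00 through 0xFF yields a
trapped window of 0x10000000 bytes (256 MiB) starting at MMCFG_BASE_ADDRESS:

  #include <stdint.h>

  #define ECAM_BYTES_PER_BUS 0x100000UL   /* 32 devices * 8 functions * 4 KiB */

  /* Size of the ECAM window to trap for the SOS, derived from the MCFG entry. */
  static inline uint64_t ecam_trap_size(uint8_t start_bus, uint8_t end_bus)
  {
      return ((uint64_t)end_bus - (uint64_t)start_bus + 1UL) * ECAM_BYTES_PER_BUS;
  }

  /* e.g. start_bus = 0x00, end_bus = 0xFF  ->  0x10000000 (256 MiB) */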
Tracked-On: #5233
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Support hiding the SR-IOV extended capability for passthrough devices.
Tracked-On: #5041
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
-- Remove the unnecessary lock in pci_mmcfg_read_cfg and pci_mmcfg_write_cfg,
since the MMIO operation is atomic if the offset is aligned to 1/2/4 bytes.
-- Move pci_is_valid_access to pci.h.
Tracked-On: #4958
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
hv: pci: refine pci_lookup_drhd_for_pbdf with a hash
1. Add an auxiliary function pci_find_pdev that uses a hash to find the pdev
for a pbdf, so that pci_lookup_drhd_for_pbdf performs better.
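A simplified sketch of the idea (the bucket count, hash function and pdev
fields are illustrative, not the actual ACRN hash implementation):

  #include <stdint.h>
  #include <stddef.h>

  #define PDEV_HASH_BUCKETS 64U                 /* illustrative bucket count */
  #define PDEV_HASH(bdf)    ((bdf) & (PDEV_HASH_BUCKETS - 1U))

  struct pci_pdev {
      uint16_t bdf;
      uint32_t drhd_index;
      struct pci_pdev *hash_next;               /* chained per-bucket list */
  };

  static struct pci_pdev *pdev_hash[PDEV_HASH_BUCKETS];

  /* O(1) average lookup instead of a linear scan of all pdevs. */
  static struct pci_pdev *pci_find_pdev(uint16_t pbdf)
  {
      struct pci_pdev *p = pdev_hash[PDEV_HASH(pbdf)];

      while ((p != NULL) && (p->bdf != pbdf)) {
          p = p->hash_next;
      }
      return p;
  }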
Tracked-On: #4857
Signed-off-by: Wang Qian <qian1.wang@intel.com>
Reviewed-by: Li Fei <Fei1.Li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
hv: pci: rename pci_pdev_array to pci_pdevs to make it clearer
Tracked-On: #4857
Signed-off-by: Wang Qian <qian1.wang@intel.com>
Reviewed-by: Li Fei <Fei1.Li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Some passthrough devices require multiple MSI vectors, but don't support
MSI-X. Meanwhile, the Linux kernel doesn't support allocating a contiguous
block of vectors.
On a native platform this issue can be mitigated by the IOMMU via interrupt
remapping. However, on ACRN there is no vIOMMU.
vMSI-X emulation on top of MSI is one way to mitigate this problem on ACRN.
This patch adds MSI-X emulation on the MSI capability. For a device that needs
MSI-X emulation, the HV hides the MSI capability and presents an MSI-X
capability to the guest.
The guest driver may need to be modified to request MSI-X vectors.
For example:
ret = pci_alloc_irq_vectors(pdev, 1, STMMAC_MSI_VEC_MAX,
- PCI_IRQ_MSI);
+ PCI_IRQ_MSI | PCI_IRQ_MSIX);
To enable MSI-X emulation, the device must:
1. Be listed in the vmsix_on_msi_devs array.
2. Support MSI, but not MSI-X.
3. Have an MSI capability that supports per-vector masking.
4. Have an unused BAR.
5. Have a driver that does not rely on the PBA for functionality.
Tracked-On: #4831
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We defined some functions to read fields of the CFG header registers. We can
remove them since they are unnecessary: calling pci_pdev_read_cfg directly is
simple enough.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
According to the PCI Code and ID Assignment Specification Revision 1.11, a PCI
device whose Base Class is 06h and Sub-Class is 00h is a Host Bridge.
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We should check whether a PCI device is a host bridge by its Base Class (06h)
and Sub-Class (00h).
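A minimal helper along these lines (offsets per the PCI header layout; the
config read helper is an assumed API):

  #include <stdint.h>
  #include <stdbool.h>

  #define PCIR_REVID_CLASS 0x08U   /* [31:24] base class, [23:16] sub-class */

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);

  /* Host bridge: Base Class 06h, Sub-Class 00h. */
  static bool is_host_bridge(uint16_t bdf)
  {
      uint32_t class_word = pci_pdev_read_cfg(bdf, PCIR_REVID_CLASS, 4U) >> 16U;

      return (class_word == 0x0600U);
  }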
Tracked-On: #4550
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Some PCI devices need a special handler for vendor-specific features or
capability CFG access; the Intel GPU is one of them. In order to keep the
ACRN-HV clean, we want to push the quirk part of PCI CFG access to the DM.
To achieve this, we implement a per-device policy for each VM based on whether
the device needs a quirk handler: each device can be configured as a "quirk
pass through device" or not. For a "quirk pass through device", we handle the
general part in the HV and the quirk part in the DM. For a non "quirk pass
through device", we handle everything in the HV.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
In order to add GVT-D support, we need to pass through the stolen memory and
opregion memory to the post-launched VM. To implement this, we first reserve
GPA for the stolen memory and opregion memory through the post-launched VM's
e820 table. Then we build EPT mappings between these GPAs and the real HPAs of
the stolen memory and opregion memory. Finally, we return the GPA to the
post-launched VM when it reads the stolen memory and opregion memory address
registers, and prevent the post-launched VM from writing these registers for
now.
The GPA reservation and GPA-to-HPA EPT mapping are done in ACRN-DM, while the
stolen memory and opregion memory CFG space register access emulation is done
in ACRN-HV.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Emulate the Device ID, Vendor ID and MSE (Memory Space Enable) bit in
configuration space for an assigned VF, and initialize the assigned VF's BARs.
The Device ID comes from the PF's SR-IOV capability.
The Vendor ID comes from the PF's Vendor ID.
The MSE bit is always set when the VM reads it from an assigned VF.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add cfg_header_read_cfg and cfg_header_write_cfg to handle the first 64 bytes
of the PCI configuration space header.
Only the Command and Status registers are passed through; only the Command,
Status and Base Address registers are writable.
In order to implement this, we add two bit masks per 4-byte register: a
pass-through mask and a read-only mask. When a pass-through mask bit is set,
that bit of the 4-byte register is passed through, otherwise it is
virtualized. When a read-only mask bit is set, that bit of the 4-byte register
is read-only, otherwise it is writable. A write goes to the physical CFG space
or the virtual CFG space depending on whether the pass-through mask bit is set.
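Roughly, a config-space write could be split according to the two masks as in
the sketch below (the mask bookkeeping, helper names and the per-vdev virtual
config accessors are assumptions, not the actual cfg_header_write_cfg code):

  #include <stdint.h>

  /* Per-4B-register policy: which bits pass through and which are read-only. */
  struct cfg_hdr_policy {
      uint32_t pt_mask;   /* 1 = bit lives in physical config space */
      uint32_t ro_mask;   /* 1 = bit is read-only for the guest */
  };

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);
  extern void     pci_pdev_write_cfg(uint16_t bdf, uint32_t off, uint32_t bytes, uint32_t val);
  extern uint32_t vdev_cfg_read(uint32_t off);
  extern void     vdev_cfg_write(uint32_t off, uint32_t val);

  static void cfg_header_write_cfg(uint16_t bdf, const struct cfg_hdr_policy *pol,
                                   uint32_t off, uint32_t val)
  {
      uint32_t writable = val & ~pol->ro_mask;

      /* pass-through bits go to the physical register, preserving its RO bits */
      uint32_t pval = pci_pdev_read_cfg(bdf, off, 4U);
      pval = (pval & (pol->ro_mask | ~pol->pt_mask)) | (writable & pol->pt_mask);
      pci_pdev_write_cfg(bdf, off, 4U, pval);

      /* virtualized bits are kept in the vdev's virtual config space */
      uint32_t vval = vdev_cfg_read(off);
      vval = (vval & (pol->ro_mask | pol->pt_mask)) | (writable & ~pol->pt_mask);
      vdev_cfg_write(off, vval);
  }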
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
SR-IOV needs ARI support, so enable it in the HV if the PCI bridge supports it.
TODO:
We need to check that all the PCI devices under this bridge support ARI; if
not, it is better not to enable it, per the PCIe spec. That check will be done
when scanning PCI devices.
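For reference, enabling ARI forwarding is a single bit in the bridge's Device
Control 2 register; a hedged sketch (offsets per the PCIe spec, config access
helpers assumed):

  #include <stdint.h>

  #define PCIE_DEVCAP2_OFF 0x24U        /* Device Capabilities 2, rel. to PCIe cap */
  #define PCIE_DEVCTL2_OFF 0x28U        /* Device Control 2 */
  #define PCIE_ARI_FWD_BIT (1U << 5)    /* ARI Forwarding Supported / Enable */

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);
  extern void     pci_pdev_write_cfg(uint16_t bdf, uint32_t off, uint32_t bytes, uint32_t val);

  /* Enable ARI forwarding on a downstream port that advertises support for it. */
  static void enable_ari_forwarding(uint16_t bridge_bdf, uint32_t pcie_cap_off)
  {
      uint32_t cap2 = pci_pdev_read_cfg(bridge_bdf, pcie_cap_off + PCIE_DEVCAP2_OFF, 4U);

      if ((cap2 & PCIE_ARI_FWD_BIT) != 0U) {
          uint32_t ctl2 = pci_pdev_read_cfg(bridge_bdf, pcie_cap_off + PCIE_DEVCTL2_OFF, 2U);
          pci_pdev_write_cfg(bridge_bdf, pcie_cap_off + PCIE_DEVCTL2_OFF, 2U,
                             ctl2 | PCIE_ARI_FWD_BIT);
      }
  }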
Tracked-On: #3381
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
An SR-IOV VF physical device needs to be initialized when VF_ENABLE is set,
and its initialization is the same as for a standard PCIe physical device, so
expose init_pdev for SR-IOV VF physical device initialization.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
SR-IOV VF physical devices don't have BARs in their configuration space; the
BARs come from the VF_BAR registers in the SR-IOV capability of the associated
PF.
Add a vbars data structure to pci_cap_sriov to store the VF_BAR information,
so that each VF's BARs can be initialized directly from vbars instead of
accessing the PF's VF_BAR registers multiple times.
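Schematically (the array size, the stand-in pci_vbar fields and the
surrounding members are assumptions): the cached copy lets each VF's vdev init
read its BAR layout locally instead of re-reading the PF's VF_BARx registers.

  #include <stdint.h>

  #define PCI_BAR_COUNT 6U

  struct pci_vbar {                    /* minimal stand-in for the vBAR bookkeeping */
      uint64_t size;
      uint64_t base_hpa;
      uint32_t bar_type_bits;
  };

  struct pci_cap_sriov {
      uint32_t capoff;                 /* offset of the SR-IOV capability in the PF */
      uint32_t caplen;
      /* VF_BAR0..VF_BAR5, captured once from the PF's SR-IOV capability and
       * reused to initialize every VF's vBARs without touching the PF again. */
      struct pci_vbar vbars[PCI_BAR_COUNT];
  };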
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
VF_ENABLE is a field of the SR-IOV capability that is used to create or remove
VF physical devices. If VF_ENABLE is set, the HV can detect whether the VF
physical devices are ready after waiting 100 ms.
v2: Add a sanity check for writing the NumVFs register, add a precondition and
application constraints for when VF_ENABLE is set, and refine the code style.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Make an SR-IOV-capable device invisible to the SOS if there is no room for all
of its virtual functions.
v2: fix an issue where, once a PF had been dropped, subsequent PFs would be
dropped too even if there was room for their VFs.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If the device has the PCIe capability, walk all PCIe extended capabilities for
SR-IOV discovery.
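The discovery loop is the standard extended-capability walk; a hedged sketch
(capability IDs and header layout per the PCIe spec, the config read helper
assumed):

  #include <stdint.h>

  #define PCI_ECAP_START    0x100U     /* extended capabilities begin at 0x100 */
  #define PCI_ECAP_ID_SRIOV 0x0010U

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);

  /* Return the offset of the SR-IOV extended capability, or 0 if absent. */
  static uint32_t find_sriov_ecap(uint16_t bdf)
  {
      uint32_t off = PCI_ECAP_START;

      while (off != 0U) {
          uint32_t hdr = pci_pdev_read_cfg(bdf, off, 4U);

          if ((hdr == 0U) || (hdr == 0xFFFFFFFFU)) {
              break;                         /* no (more) extended capabilities */
          }
          if ((hdr & 0xFFFFU) == PCI_ECAP_ID_SRIOV) {
              return off;
          }
          off = (hdr >> 20U) & 0xFFCU;       /* next capability offset, [31:20] */
      }
      return 0U;
  }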
v2: avoid type casting and refine naming.
Tracked-On: #4433
Signed-off-by: Yuan Liu <yuan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add vPCI bridge operations in the hypervisor, to prevent SOS mis-operations
from affecting other VMs' PCI devices.
Assumption: before the hypervisor boots, the physical PCI bridges have been
configured correctly by the BIOS or another bootloader; the ACS (Access
Control Services) capability is configured by the BIOS so that the devices
under a bridge can be isolated and allocated to different VMs.
To simplify the emulation of the vPCI bridge, set the following limitations:
1. Expose all configuration space registers, but read-only.
2. BIST is not supported; it reads as 0.
3. Interrupts are not supported, including INTx and MSI.
TODO:
1. The configuration tool can select whether a PCI bridge is emulated or
passed through.
Open:
1. How does the SOS reset a PCI device under the PCI bridge?
Tracked-On: #3381
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Add assign/deassign PCI device hypercall APIs to assign a PCI device from the
SOS to a post-launched VM, or deassign a PCI device from a post-launched VM
back to the SOS. This patch prepares for splitting passthrough PCI device
handling from the DM into the HV.
The old assign/deassign ptdev APIs will be discarded.
Tracked-On: #4371
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The SOS will use PCIe ECAM to access the PCIe extended configuration space.
The HV should trap this access for security. (Pre-launched VMs don't support
PCI ECAM for now; for post-launched VMs, ECAM access is trapped in the DM.)
Besides, update the PCIe MMCONFIG region to be owned by the hypervisor, and
expose and pass through to the SOS the platform PCI devices hidden by the BIOS.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Use the Enhanced Configuration Access Mechanism (MMIO) instead of the
PCI-compatible Configuration Mechanism (I/O port) to access the PCIe
configuration space.
PCI-compatible Configuration Mechanism (I/O port) access is still used for the
UART in the debug build.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
Since we restore the BAR values when the Command Register is written, if
necessary, we no longer need to trap FLR and do the BAR restore there.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
When a PCIe device goes through a Conventional Reset or FLR, almost all PCIe
configuration and state is lost, so we should save the configuration and state
before the reset and restore it afterwards. This is handled well by the BIOS
or the guest today. However, ACRN traps these accesses and handles them for
security. Almost all of this configuration and state ends up written to the
physical configuration space anyway, except for the BAR values. So we need to
restore the BAR values ourselves. One way is to restore after a reset is
detected, but that would be too complex. Another way is to restore when the
BIOS or guest writes the Command Register. This works because:
1. The I/O Space Enable and Memory Space Enable bits in the Command Register
reset to zero.
2. Before the BIOS or guest enables these bits, the BARs can't be accessed.
3. So we can restore the BAR values before these bits are enabled, if a reset
is detected.
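In code, the check on a Command Register write might look roughly like the
sketch below (bit positions per the PCI spec; the is-BAR-lost test, the
restore helper and the write accessor are assumptions):

  #include <stdint.h>
  #include <stdbool.h>

  #define PCIR_COMMAND    0x04U
  #define PCIM_CMD_PORTEN 0x01U   /* I/O Space Enable */
  #define PCIM_CMD_MEMEN  0x02U   /* Memory Space Enable */

  struct pci_vdev;                /* opaque here */

  extern void pci_pdev_write_cfg_of(const struct pci_vdev *vdev, uint32_t off,
                                    uint32_t bytes, uint32_t val);
  extern bool bars_need_restore(const struct pci_vdev *vdev);  /* e.g. BAR reads back as reset value */
  extern void restore_vbars(struct pci_vdev *vdev);

  static void write_command_reg(struct pci_vdev *vdev, uint32_t val)
  {
      /* A reset cleared MSE/IOSE; if the guest is turning them back on and the
       * physical BARs no longer hold the programmed values, restore them first. */
      if (((val & (PCIM_CMD_MEMEN | PCIM_CMD_PORTEN)) != 0U) && bars_need_restore(vdev)) {
          restore_vbars(vdev);
      }
      pci_pdev_write_cfg_of(vdev, PCIR_COMMAND, 2U, val);
  }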
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The ACRN hypervisor should trap the guest doing a PCI AF FLR. Besides, it
should save some state before the FLR and restore it afterwards; only the BAR
values for now.
This patch traps guest writes to the Conventional PCI Advanced Features
Control Register if the device supports the Conventional PCI Advanced Features
Capability, and checks whether the guest wants to do a device AF FLR. If it
does, pdev_do_flr is called to do the job.
Tracked-On: #3465
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The ACRN hypervisor should trap the guest doing a PCIe FLR. Besides, it should
save some state before the FLR and restore it afterwards; only the BAR values
for now.
This patch traps guest writes to the Device Control Register if the device
supports the PCI Express Capability, and checks whether the guest wants to do
a device FLR. If it does, pdev_do_flr is called to do the job.
Tracked-On: #3465
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The default PCI MMCFG base is stored in the ACPI MCFG table. When
CONFIG_ACPI_PARSE_ENABLED is set, the acpi_fixup() function parses and fixes
up the platform MMCFG base during ACRN's boot stage; when it is not set, the
platform MMCFG base is initialized to DEFAULT_PCI_MMCFG_BASE, which is
generated by the acrn-config tool.
Please note that we do not support platforms with multiple PCI segment groups.
Tracked-On: #4157
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Major changes:
1. Correct handling of the device multi-function capability.
We only check function zero for this feature. If it is set, we continue
looking at all remaining functions, ignoring those with invalid vendor IDs.
The PCI spec says we must not probe beyond function zero if it does not exist
or does not indicate a multi-function device. (See the sketch after this
description.)
2a. Walk *ALL* buses in the PCI space. However, before walking the PCI
hierarchy, the post-processed ACPI DMAR info is parsed and a map is created
between all device scopes across all DRHDs and the corresponding IOMMU index.
This map is used while walking the PCI hierarchy. If the BDF that ACRN is
currently working on is found in the above-mentioned map, the BDF device is
mapped to the corresponding DRHD in the map.
If the BDF is a bridge type, as indicated by the "Header Type" field in config
space, the BDF device along with all of its downstream devices is mapped to
the corresponding DRHD in the map.
To avoid walking previously visited buses, we maintain a bitmap that records
which buses were walked when handling bridge-type devices.
Once ACPI information about the PCI Express Root Complexes / PCI Host Bridges
is included in ACRN, we can avoid the final loop that probes all remaining
buses and instead jump to the next Host Bridge bus.
From prior patches, init_pdev returns the pdev structure it created to
the caller. This allows us to complete initialization by updating its
drhd_idx to the correct DRHD.
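To illustrate point 1 above, the function-zero gate might look like this
(offsets and the multi-function bit per the PCI spec; the config read helper
and the BDF encoding are assumptions):

  #include <stdint.h>
  #include <stdbool.h>

  #define PCIR_HDRTYPE      0x0EU
  #define PCIM_MFDEV        0x80U    /* multi-function device bit */
  #define VENDOR_ID_INVALID 0xFFFFU

  extern uint32_t pci_pdev_read_cfg(uint16_t bdf, uint32_t off, uint32_t bytes);

  /* Probe functions 1..7 only if function 0 exists and claims multi-function. */
  static bool probe_beyond_func0(uint8_t bus, uint8_t dev)
  {
      uint16_t bdf0 = (uint16_t)(((uint16_t)bus << 8U) | ((uint16_t)dev << 3U));  /* function 0 */

      if ((pci_pdev_read_cfg(bdf0, 0x0U, 2U) & 0xFFFFU) == VENDOR_ID_INVALID) {
          return false;                              /* function 0 absent */
      }
      return (pci_pdev_read_cfg(bdf0, PCIR_HDRTYPE, 1U) & PCIM_MFDEV) != 0U;
  }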
Tracked-On: #4134
Signed-off-by: Alexander Merritt <alex.merritt@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Add a new member, pci_pdev.drhd_idx, associating the DRHD (IOMMU) with this
pdev, and a method to convert a device's pbdf to this index by searching the
pdev list.
Partial patch: drhd_idx initialization is handled in a subsequent patch.
Tracked-On: #4134
Signed-off-by: Alexander Merritt <alex.merritt@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Encapsulate the utilities that read PCI header space behind wrapper functions.
Also move verification of the PCI vendor ID into its own function, rather than
having hard-coded integer values exposed among other code.
Tracked-On: #4134
Signed-off-by: Alexander Merritt <alex.merritt@intel.com>
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
The current code declares the pci_bar structure following the PCI BAR spec.
However, we cannot tell from the current pci_bar structure whether the value
in the virtual BAR configuration space is a valid base address. We need to add
more fields, which are duplicated instances of the vBAR information. Besides
the fields to be added, bar_base_mapped is another duplicated instance of the
vBAR information.
This patch reshuffles the pci_bar structure so that it is declared for the
benefit of the software implementation rather than following the PCI BAR spec.
Tracked-On: #3475
Signed-off-by: Li Fei1 <fei1.li@intel.com>
The MSI Message Address and Message Data registers have no valid data after
power-on, so there's no need to initialize them by reading the data from the
physical PCI configuration space.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
- Update the function argument type to a union.
Declaring the argument as a pointer is not necessary since the function only
does a comparison.
Tracked-On: #1842
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Initialize the vBAR configuration space when doing vPCI BAR initialization. At
this time we access the physical device as needed; there is no need to cache
physical PCI device BAR information beforehand.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
The PCI BAR physical base address will never change. Cache it to avoid
recalculating it every time we access it.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Now almost all of the vPCI device information can be obtained from the PCI
device configuration in the VM configuration. init_vdevs can make things
easier.
Also rename init_vdevs to vpci_init_vdevs and init_vdev to vpci_init_vdev to
avoid MISRA-C violations.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Dongsheng Zhang <dongsheng.x.zhang@intel.com>
Align the SOS PCI device configuration with the pre-launched VMs' and filter
pre-launched VMs' PCI passthrough devices out of the SOS PCI device
configuration.
Tracked-On: #3475
Signed-off-by: Li, Fei1 <fei1.li@intel.com>