This patch adds read-only RTC support for guests run by ACRN in partition
mode. The CMOS address port 0x70 is read-write (RW) and the CMOS data port
0x71 is read-only (RO). Reads from CMOS RAM offsets are satisfied by reading
the CMOS h/w directly, and writes to CMOS offsets are discarded.
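A minimal sketch of the intended behavior (the handler names, the cached_addr
argument and the pio_read8()/pio_write8() helpers are illustrative stand-ins,
not the actual ACRN APIs):

#include <stdint.h>

#define CMOS_ADDR_PORT	0x70U
#define CMOS_DATA_PORT	0x71U

/* Illustrative physical port-I/O helpers standing in for the hypervisor's accessors. */
static inline void pio_write8(uint8_t val, uint16_t port) { (void)val; (void)port; }
static inline uint8_t pio_read8(uint16_t port) { (void)port; return 0U; }

/* The guest may write the CMOS offset to port 0x70 (RW); writes to the data
 * port 0x71 are simply dropped (RO). */
static void vrtc_write(uint16_t port, uint8_t val, uint8_t *cached_addr)
{
	if (port == CMOS_ADDR_PORT) {
		*cached_addr = val;
	}
	/* data port 0x71 is read-only for the guest: discard the write */
}

/* Guest reads of the data port are satisfied by reading CMOS h/w directly. */
static uint8_t vrtc_read(uint16_t port, uint8_t cached_addr)
{
	if (port == CMOS_ADDR_PORT) {
		return cached_addr;
	}
	pio_write8(cached_addr, CMOS_ADDR_PORT);	/* select the offset in physical CMOS */
	return pio_read8(CMOS_DATA_PORT);		/* fetch the value from hardware */
}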
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
The guest may now change the "Destination Field", "Trigger Mode",
"Interrupt Input Pin Polarity" and even the "Interrupt Vector" while the
"Interrupt Mask" bit is not masked. In this situation we should update the
pass-through device's interrupt pin RTE. The old logic updated it only when
"Interrupt Mask", "Trigger Mode" or "Interrupt Input Pin Polarity" changed.
Update the ptdev's native ioapic RTE when (a) something changed and
(b) either the old RTE's interrupt mask or the new RTE's interrupt mask is
not masked.
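A minimal sketch of the new update condition (the IOAPIC_RTE_INTMASK define
and the helper name are illustrative):

#include <stdint.h>
#include <stdbool.h>

#define IOAPIC_RTE_INTMASK	0x00010000UL	/* "Interrupt Mask" bit of an RTE */

/* Update the pass-through device's native ioapic RTE only when (a) the virtual
 * RTE changed and (b) either the old or the new RTE is not masked. */
static bool need_ptdev_rte_update(uint64_t old_rte, uint64_t new_rte)
{
	bool changed = (old_rte != new_rte);
	bool old_unmasked = ((old_rte & IOAPIC_RTE_INTMASK) == 0UL);
	bool new_unmasked = ((new_rte & IOAPIC_RTE_INTMASK) == 0UL);

	return (changed && (old_unmasked || new_unmasked));
}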
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
-- Change these APIs to void type, add pre-conditions, and move the
parameter checks to upper-layer functions (a minimal sketch appears below):
handle_vpic_irqline
vpic_set_irqstate
vpic_assert_irq
vpic_deassert_irq
vpic_pulse_irq
vpic_get_irq_trigger
handle_vioapic_irqline
vioapic_assert_irq
vioapic_deassert_irq
vioapic_pulse_irq
-- Remove dead code
vpic_set_irq_trigger
v1-->v2:
add vpic cleanup
change some APIs to void type, add pre-conditions,
and move the parameter-check to upper-layer functions.
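A minimal sketch of the void-return pattern, assuming simplified signatures
and an illustrative NR_VPIC_PINS bound (the real structures differ):

#include <stdint.h>
#include <stddef.h>

#define NR_VPIC_PINS	16U

struct acrn_vpic {
	uint8_t irq_state[NR_VPIC_PINS];	/* simplified stand-in */
};

/*
 * @pre vpic != NULL
 * @pre irq < NR_VPIC_PINS
 *
 * Void return: the parameter check has moved up to the caller, so this
 * function no longer reports an error code.
 */
static void vpic_set_irqstate(struct acrn_vpic *vpic, uint32_t irq, uint8_t state)
{
	vpic->irq_state[irq] = state;
}

/* The upper-layer function validates the parameters before calling down. */
static void handle_vpic_irqline(struct acrn_vpic *vpic, uint32_t irq, uint8_t state)
{
	if ((vpic != NULL) && (irq < NR_VPIC_PINS)) {
		vpic_set_irqstate(vpic, irq, state);
	}
}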
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Now the DM has adopted the new VHM request state transitions and
REQ_STATE_FAILED is obsolete since neither VHM nor kernel mediators will set the
state to FAILED.
This patch drops the definition of REQ_STATE_FAILED in the hypervisor, makes
''processed'' unsigned to keep the compiler happy about typing, and simplifies
error handling in the following ways.
* (dm_)emulate_(pio|mmio)_post no longer return an error code, by introducing a
constraint that these functions must be called after an I/O request
completes (which is the case in the current design) and assuming
handlers/VHM/DM will always give a value for reads (typically all 1's if the
requested address is invalid).
* emulate_io() now returns a positive value, IOREQ_PENDING, to indicate that
the request has been sent to VHM (see the sketch after this list). This
mitigates a potential race between dm_emulate_pio() and
pio_instr_vmexit_handler() which could cause emulate_pio_post() to be called
twice for the same request.
* Remove the ''processed'' member in io_request. Previously this mirrored the
state of the VHM request, which terminates at either COMPLETE or FAILED. After
the FAILED state is removed, the terminal state is always COMPLETE, so the
mirrored ''processed'' member is no longer useful.
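A minimal sketch of the return convention described above (the types and
helper bodies are simplified stand-ins, not the actual ACRN code):

#include <stdint.h>

#define IOREQ_PENDING	1	/* positive: request handed to VHM/DM, completion deferred */

struct io_request {
	int handled_in_hv;	/* simplified stand-in for the real request layout */
};

static int hv_emulate_pio(struct io_request *req)
{
	return (req->handled_in_hv != 0) ? 0 : -1;	/* 0: done in HV, <0: no handler */
}

static int dm_emulate_pio(struct io_request *req)
{
	(void)req;		/* forward the request to VHM/DM */
	return IOREQ_PENDING;	/* do NOT run the post-work now */
}

static int emulate_io(struct io_request *req)
{
	int ret = hv_emulate_pio(req);

	if (ret < 0) {
		ret = dm_emulate_pio(req);
	}
	return ret;
}

static void emulate_pio_post(struct io_request *req) { (void)req; }

/* The vm-exit handler finishes the request only when it completed
 * synchronously, so emulate_pio_post() cannot run twice for the same request. */
static void pio_instr_vmexit_handler(struct io_request *req)
{
	if (emulate_io(req) == 0) {
		emulate_pio_post(req);
	}
}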
Note that emulate_instruction() will always succeed after a reshuffle, and this
patch takes that assumption in advance. This does not hurt as the return value
is not currently handled.
This patch makes it explicit that I/O emulation is not expected to fail. One
issue remains, though, which occurs when a non-aligned cross-boundary access
happens. Currently the hypervisor, VHM and DM adopt different policies:
* Hypervisor: inject #GP if it detects that the access crossed a boundary
* VHM: deliver to DM if the access does not fall completely within the range of
a client
* DM: a handler covering part of the to-be-accessed region is picked and an
assertion failure can be triggered.
A high-level design covering all these components (in addition to instruction
emulation) is needed for this. Thus this patch does not yet cover the issue.
Tracked-On: #875
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Brackets are required when the precedence level of the operators is less
than 13. Add brackets to logical conjunctions. This commit applies the rule
to the files under hypervisor/arch/x86/guest/*
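An illustrative example of the rule (the function is hypothetical):

#include <stdint.h>
#include <stdbool.h>

/* Before: relies on operator precedence inside the logical conjunction. */
static bool is_runnable_old(uint32_t state, uint32_t pending)
{
	return state == 1U && pending != 0U;
}

/* After: each operand of && is explicitly bracketed. */
static bool is_runnable_new(uint32_t state, uint32_t pending)
{
	return ((state == 1U) && (pending != 0U));
}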
Signed-off-by: Yang, Yu-chu <yu-chu.yang@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
V4:
- Moved error checking to vdev_hostbridge_cfgwrite/vdev_hostbridge_cfgread
V3:
- Unified the ops calling and implemented the deinit/cfgwrite/cfgread ops;
previously only the init op was implemented
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
V4:
- Renamed members for struct pcibar and changed code accordingly
V3:
- Do not use ASSERT
- Use the EPT_XX defines when calling ept_mr_add
- Report a 64-bit MMIO physical bar to the UOS as a 32-bit virtual bar
(assuming the bar size is always less than 4GB), which removes quite a bit of
the 64-bit bar handling code (see the sketch below)
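A minimal sketch of reporting a 64-bit physical memory BAR as a 32-bit virtual
BAR, valid only under the stated assumption that the BAR size is below 4GB
(the defines and the helper name are illustrative):

#include <stdint.h>

#define PCIM_BAR_MEM_TYPE_MASK	0x06U	/* BAR type bits: 00 = 32-bit, 10 = 64-bit */
#define PCIM_BAR_MEM_TYPE_64	0x04U

static uint32_t build_vbar_lo(uint64_t vbar_base, uint32_t phys_bar_lo)
{
	uint32_t flag_bits = phys_bar_lo & 0x0FU;	/* mem/io, type, prefetchable bits */

	if ((flag_bits & PCIM_BAR_MEM_TYPE_MASK) == PCIM_BAR_MEM_TYPE_64) {
		flag_bits &= ~PCIM_BAR_MEM_TYPE_MASK;	/* expose it as a 32-bit memory BAR */
	}
	return (((uint32_t)vbar_base & 0xFFFFFFF0U) | flag_bits);
}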
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
V4:
- If the pci device is not found in the PCI mapping table, just return without
printing any error message
- Added NULL pointer checking when calling the cfgread/cfgwrite ops (see the
sketch below)
- Moved the error checking code into the cfgread/cfgwrite ops
V3:
- Do not use ASSERT
- Call the cfg read/write ops defined in the vm description
V2:
- Fixed MISRA violations
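A minimal sketch of the NULL-checked ops dispatch (the structures and the
helper name are simplified stand-ins for the vm description entries):

#include <stdint.h>
#include <stddef.h>

struct pci_vdev;

struct pci_vdev_ops {
	int (*cfgread)(struct pci_vdev *vdev, uint32_t offset, uint32_t bytes, uint32_t *val);
	int (*cfgwrite)(struct pci_vdev *vdev, uint32_t offset, uint32_t bytes, uint32_t val);
};

struct pci_vdev {
	const struct pci_vdev_ops *ops;
};

/* Dispatch a config-space read through the vdev's ops; a missing vdev or a
 * missing callback is skipped silently, without an error message. */
static void vdev_cfgread(struct pci_vdev *vdev, uint32_t offset, uint32_t bytes, uint32_t *val)
{
	if ((vdev != NULL) && (vdev->ops != NULL) && (vdev->ops->cfgread != NULL)) {
		(void)vdev->ops->cfgread(vdev, offset, bytes, val);
	}
}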
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
V4:
- Clear the cached address info after a full cf8/cfc access (see the sketch below)
- Add NULL pointer checking when calling init/deinit ops
V3:
- Do not use ASSERT
- Loop through the vdev list defined in the vm_description table to call the vdev init/deinit functions
- Make the cached vbdf info struct per vm instead of per pcpu
V2:
- Fixed MISRA violations
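A minimal sketch of the per-vm cached cf8 address being cleared after a full
cf8/cfc access (the structure layout and helper names are illustrative):

#include <stdint.h>
#include <stdbool.h>

struct pci_addr_cache {
	uint32_t cached_bdf_off;	/* bus/dev/func/offset last written to 0xCF8 */
	bool     cached_enabled;	/* bit 31 of the 0xCF8 write */
};

static void pci_cfgaddr_write(struct pci_addr_cache *cache, uint32_t val)
{
	cache->cached_bdf_off = (val & 0x7FFFFFFFU);
	cache->cached_enabled = ((val & 0x80000000U) != 0U);
}

static uint32_t pci_cfgdata_read(struct pci_addr_cache *cache)
{
	uint32_t val = 0xFFFFFFFFU;	/* default for an unclaimed access */

	if (cache->cached_enabled) {
		/* look up the vdev for the cached BDF and read its config space here */
	}
	cache->cached_enabled = false;	/* clear the cached info after the full access */
	return val;
}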
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>