When assigning a physical interrupt to a Post-launched VM, if it has
already been assigned to the Service VM, we should remove that mapping
first to reset the IOAPIC pin state and RTE, and then build a new
mapping for the Post-launched VM.
Tracked-On: #8370
Signed-off-by: Qiang Zhang <qiang4.zhang@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When the CPU supports MONITOR/MWAIT, the OS prefers to use them to enter
deeper C-states.
Currently ACRN passes MONITOR/MWAIT through to the guest.
For vCPUs (e.g. vCPU A and vCPU B) sharing a pCPU, if vCPU A uses MWAIT to
enter a C-state, vCPU B can run only after vCPU A's time slice expires, so
that time slice is wasted.
For local APIC pass-through (used for RTVMs), the guest cares more about
timeliness than power saving.
So this patch hides MONITOR/MWAIT as follows (a small CPUID-filtering
sketch follows the list):
1. Clear vCPUID.05H, vCPUID.01H:ECX.[bit 3] and
   MSR_IA32_MISC_ENABLE_MONITOR_ENA to tell the guest that its vCPUs
   do not support MONITOR/MWAIT.
2. Setting the MSR_IA32_MISC_ENABLE_MONITOR_ENA bit of
   MSR_IA32_MISC_ENABLE injects #GP.
3. Trap the 'MONITOR' and 'MWAIT' instructions and inject #UD.
4. Clear vCPUID.07H:ECX.[bit 5] to hide 'UMONITOR/UMWAIT'.
5. Clear the "enable user wait and pause" VM-execution control, so
   UMONITOR/UMWAIT causes #UD.
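For illustration, a minimal, self-contained sketch of the CPUID filtering
described in the list above. The struct and helper names here (vcpuid_entry,
hide_monitor_mwait) are hypothetical, not the actual ACRN symbols; the leaf
and bit layout follows the SDM.

#include <stdint.h>

#define CPUID_01H_ECX_MONITOR   (1U << 3)   /* CPUID.01H:ECX[3]  MONITOR/MWAIT */
#define CPUID_07H_ECX_WAITPKG   (1U << 5)   /* CPUID.07H:ECX[5]  UMONITOR/UMWAIT */

struct vcpuid_entry {
    uint32_t leaf;
    uint32_t eax, ebx, ecx, edx;
};

/* Filter the virtual CPUID leaves so the guest believes the feature is absent. */
static void hide_monitor_mwait(struct vcpuid_entry *e)
{
    if (e->leaf == 0x01U) {
        e->ecx &= ~CPUID_01H_ECX_MONITOR;
    } else if (e->leaf == 0x05U) {
        e->eax = e->ebx = e->ecx = e->edx = 0U;   /* MONITOR/MWAIT leaf reads as empty */
    } else if (e->leaf == 0x07U) {
        e->ecx &= ~CPUID_07H_ECX_WAITPKG;
    }
}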
Tracked-On: #8253
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Exclude "GUEST_FLAG_PMU_PASSTHROUGH" from DM_OWNED_GUEST_FLAG_MASK
in case device model rewrite the value in release mode, reserve it
in debug mode.
Signed-off-by: hangliu1 <hang1.liu@linux.intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Tracked-On: #6690
The current code writes 'BAR address & size_mask' into the PCIe virtual
BAR before updating the virtual BAR's base address when the guest writes a
PCIe device's BAR. If the size of a PCIe device's BAR is larger than 4G,
the low 32-bit size_mask for this 64-bit BAR is zero. When ACRN updates
the virtual BAR's base address, the low 32-bit sizing information is lost.
This patch records whether a BAR write is a sizing access before updating
the virtual BAR's base address, as sketched below.
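A minimal sketch of the idea (illustrative names such as vbar and sizing,
not the actual ACRN code): remember whether the last write to a virtual BAR
dword was the all-ones sizing pattern, so the information is not lost when
the low 32-bit size mask of a >4G 64-bit BAR is zero.

#include <stdint.h>
#include <stdbool.h>

struct vbar {
    uint64_t size;       /* BAR size, a power of two */
    uint64_t base;       /* current guest-programmed base address */
    bool     sizing;     /* last write was the all-ones sizing pattern */
};

static void vbar_write_low(struct vbar *bar, uint32_t val)
{
    bar->sizing = (val == 0xFFFFFFFFU);          /* record before masking */
    if (!bar->sizing) {
        uint64_t mask = ~(bar->size - 1UL);
        bar->base = ((bar->base & ~0xFFFFFFFFUL) | val) & mask;
    }
}

static uint32_t vbar_read_low(const struct vbar *bar)
{
    /* During sizing, return the size mask (the low dword of ~(size - 1),
     * which is 0 for a BAR larger than 4G); otherwise return the base. */
    return bar->sizing ? (uint32_t)~(bar->size - 1UL) : (uint32_t)bar->base;
}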
Tracked-On: #8267
Signed-off-by: Fei Li <fei1.li@intel.com>
When a guest access to an MMIO region traps, the ACRN hypervisor
doesn't know the MMIO access size the guest wants until the trap
happens. In this case, ACRN has to switch on the MMIO size and then call
mmio_read8/16/32/64 or mmio_write8/16/32/64 at each trap site.
This patch wraps the MMIO read/write with a parameter specifying the MMIO
access size, as sketched below.
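A minimal sketch of what such a size-parameterized wrapper could look like
(the names mmio_read/mmio_write with a size argument are illustrative, not
necessarily the exact signatures introduced by this patch):

#include <stdint.h>

static inline uint64_t mmio_read(const void *addr, uint8_t size)
{
    uint64_t val;

    switch (size) {
    case 1U: val = *(volatile const uint8_t  *)addr; break;
    case 2U: val = *(volatile const uint16_t *)addr; break;
    case 4U: val = *(volatile const uint32_t *)addr; break;
    default: val = *(volatile const uint64_t *)addr; break;
    }
    return val;
}

static inline void mmio_write(void *addr, uint64_t val, uint8_t size)
{
    switch (size) {
    case 1U: *(volatile uint8_t  *)addr = (uint8_t)val;  break;
    case 2U: *(volatile uint16_t *)addr = (uint16_t)val; break;
    case 4U: *(volatile uint32_t *)addr = (uint32_t)val; break;
    default: *(volatile uint64_t *)addr = val;           break;
    }
}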
Tracked-On: #8255
Signed-off-by: Fei Li <fei1.li@intel.com>
The design of ACRN CPU performance management is to let the hardware
do autonomous frequency selection (or set a fixed value),
and remove the guest's ability to control CPU frequency.
This patch implements the CPU frequency initializer, which sets up the
CPU frequency based on the performance policy type.
Two performance policy types are provided for the user to choose from:
- 'Performance': CPU runs at its maximum frequency.
  Enable hardware autonomous frequency selection if HWP is present.
- 'Nominal': CPU runs at its guaranteed frequency.
The policy type is passed to the hypervisor through a boot parameter, as
either 'cpu_perf_policy=Nominal' or 'cpu_perf_policy=Performance'.
The default type is 'Performance'.
Both HWP and ACPI P-states are supported. HWP is the first choice, for
it provides hardware autonomous frequency selection while keeping the
frequency transition time low.
Two functions are added to the hypervisor:
- init_frequency_policy(): called by the BSP at startup. It processes
  the boot parameters, and enables HWP if it is present.
- apply_frequency_policy(): called after init_frequency_policy().
  It applies the initial CPU frequency policy setting for each core. It
  uses a set of frequency-limit data structures to quickly decide what the
  highest/nominal frequency is. The frequency limits are generated by
  config-tools. A minimal sketch of this step follows below.
The hypervisor does not govern CPU frequency after the initial policy
is applied.
Cores running RTVMs are fixed to the nominal/guaranteed frequency, to get
more certainty in latency. This is done by setting the core's frequency
limits to highest=lowest=nominal in config-tools.
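A minimal sketch of how the HWP path of such an initializer could look,
assuming HWP is available. The names cpufreq_limits, apply_hwp_policy and
msr_write are illustrative placeholders, not the actual ACRN symbols; the
MSR addresses and HWP_REQUEST field layout follow the SDM.

#include <stdint.h>

#define MSR_IA32_PM_ENABLE     0x770U
#define MSR_IA32_HWP_REQUEST   0x774U

/* Per-CPU frequency limits, generated by config-tools (illustrative layout). */
struct cpufreq_limits {
    uint8_t highest_hwp_lvl;     /* maximum performance level */
    uint8_t guaranteed_hwp_lvl;  /* nominal/guaranteed performance level */
    uint8_t lowest_hwp_lvl;      /* minimum performance level */
};

enum cpufreq_policy { CPUFREQ_POLICY_PERFORMANCE, CPUFREQ_POLICY_NOMINAL };

extern void msr_write(uint32_t msr, uint64_t val);   /* assumed helper */

/* Apply the initial policy on one core. */
static void apply_hwp_policy(const struct cpufreq_limits *lim, enum cpufreq_policy pol)
{
    uint8_t min_lvl, max_lvl;

    if (pol == CPUFREQ_POLICY_NOMINAL) {
        min_lvl = max_lvl = lim->guaranteed_hwp_lvl;   /* pin to guaranteed level */
    } else {
        min_lvl = lim->lowest_hwp_lvl;                 /* let HW pick up to highest */
        max_lvl = lim->highest_hwp_lvl;
    }

    msr_write(MSR_IA32_PM_ENABLE, 1UL);                /* enable HWP */
    msr_write(MSR_IA32_HWP_REQUEST,
              ((uint64_t)max_lvl << 8U) | (uint64_t)min_lvl);
}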
Tracked-On: #8168
Signed-off-by: Wu Zhou <wu.zhou@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For an I/O request, the hypervisor checks whether it was registered in the
asyncio descriptor list. If yes, it puts the corresponding fd into the
shared buffer; if the shared buffer is full, it yields the vCPU and retries
later (see the sketch below).
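A minimal, self-contained sketch of this fast path. All names here
(asyncio_desc, try_asyncio) and the helper signatures (sbuf_put,
yield_current_vcpu) are assumptions for illustration, not the real ACRN API.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct shared_buf;                      /* opaque here */

/* Assumed helpers, not the real ACRN signatures. */
extern uint32_t sbuf_put(struct shared_buf *sbuf, const uint8_t *data, uint32_t size);
extern void yield_current_vcpu(void);

struct asyncio_desc {
    uint64_t addr;                      /* I/O address registered via hypercall */
    uint64_t fd;                        /* eventfd to queue into the shared buffer */
    struct asyncio_desc *next;
};

/* If the trapped I/O address is registered, queue its fd and return true;
 * yield and retry when the shared buffer is full. */
static bool try_asyncio(const struct asyncio_desc *head, uint64_t io_addr,
                        struct shared_buf *sbuf)
{
    for (const struct asyncio_desc *d = head; d != NULL; d = d->next) {
        if (d->addr == io_addr) {
            while (sbuf_put(sbuf, (const uint8_t *)&d->fd, sizeof(d->fd)) == 0U) {
                yield_current_vcpu();   /* buffer full: let the Service VM drain it */
            }
            return true;
        }
    }
    return false;                       /* not registered: fall back to synchronous ioreq */
}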
Tracked-On: #8209
Signed-off-by: Conghui <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add hypercalls to add/remove asyncio request info. The hypervisor records
the info in a list, and when a new ioreq comes in, it checks whether the
request is in the asyncio list; if yes, it queues the fd into the asyncio
buffer.
Tracked-On: #8209
Signed-off-by: Conghui <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Current I/O emulation is synchronous: the User VM needs to wait for the
completion of the I/O request before returning. But the Virtio spec
introduces asynchronous I/O with a new register in MMIO/PIO space named
NOTIFY, used by the FE driver to notify the BE driver. The ACRN hypervisor
can emulate this register by sending a notification to a vCPU on the
Service VM side. This way, the FE side can resume work without waiting
for the full completion of the BE side's response.
Tracked-On: #8209
Signed-off-by: Conghui <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Extend the sbuf hypercall to support other kinds of shared buffers.
Tracked-On: #8209
Signed-off-by: Conghui <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
sbuf is currently only used for debug purposes, but later it will be used as a
common interface. So, move the sbuf-related code out of the debug directory.
Tracked-On: #8209
Signed-off-by: Conghui <conghui.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Improve the SMP call to let the ACRN shell operate on RTVMs.
Before, an RTVM CPU couldn't be kicked by the notification IPI,
so some shell commands, like rdmsr/wrmsr and memory/register dump,
couldn't support it. So INIT will be used for RTVMs, whose
LAPIC is passed through.
Tracked-On: #8207
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
By default, the notification IPI is used to kick a sharing pCPU and INIT
is used to kick a partitioned pCPU. If the USE_INIT_IPI flag is passed to
the hypervisor, only INIT will be used to kick pCPUs.
Tracked-On: #8207
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The INIT signal is used to kick a partitioned pCPU, like an RTVM's,
whose LAPIC is passed through. The notification IPI is used to kick a
sharing pCPU.
Add mode_to_kick_pcpu in the per-CPU region to control the way of kicking
a pCPU.
Tracked-On: #8207
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The access to PCI config space is handled in the HV for passthrough PCI
devices. The HV also provides a mechanism to forward config accesses to
some registers to the DM, for example the opregion register for the GPU
device.
This patch adds support for an emulated PCI ROM BAR for such a device.
It doesn't handle the physical PCI ROM BAR of physical PCI devices.
The ROM firmware is provided by the DM and the PCI ROM bar_reg is also
emulated in the DM; this leverages the quirk mechanism so that accesses
to the PCI ROM bar_reg are forwarded to the DM.
Tracked-On: #8175
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Wang Yu <yu1.wang@intel.com>
The design of ACRN CPU performance management is to let the hardware
do autonomous frequency selection (or set a fixed value),
and remove the guest's ability to control CPU frequency.
This patch removes the guest's ability to control CPU frequency by
removing the guests' HWP/EIST CPUIDs and blocking the related MSR
accesses, including:
- Remove CPUID.06H:EAX[7..11] (HWP)
- Remove CPUID.01H:ECX[7] (EIST)
- Inject #GP(0) upon accesses to MSR_IA32_PM_ENABLE,
  MSR_IA32_HWP_CAPABILITIES, MSR_IA32_HWP_REQUEST,
  MSR_IA32_HWP_STATUS, MSR_IA32_HWP_INTERRUPT,
  MSR_IA32_HWP_REQUEST_PKG
- Emulate MSR_IA32_PERF_CTL. The value written to MSR_IA32_PERF_CTL
  is just stored for reading. This is how the native
  environment would behave when EIST is disabled in the BIOS.
- Emulate MSR_IA32_PERF_STATUS by filling it with the base frequency
  state. This is consistent with Windows, which displays the current
  frequency as the base frequency when running in a VM.
- Hide IA32_MISC_ENABLE bit 16 (EIST enable) from guests.
  This bit is dependent on CPUID.01H:ECX[7] according to the SDM.
- Remove CPUID.06H:ECX[0] (hardware coordination feedback)
- Inject #GP(0) upon accesses to IA32_MPERF, IA32_APERF
Also, the DM no longer needs to generate _PSS/_PPC for post-launched VMs.
This is done by letting the hypercall HC_PM_GET_CPU_STATE sub-commands
ACRN_PMCMD_GET_PX_CNT and ACRN_PMCMD_GET_PX_DATA return (-1).
Tracked-On: #8168
Signed-off-by: Wu Zhou <wu.zhou@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
BVT scheduling rule:
When a new thread wakes up and is added to the runqueue, it gets the
smallest avt (svt) from the runqueue to initialize its avt. If the svt is
smaller than its avt, it keeps the original avt. With the svt, a thread
is prevented from claiming an excessive share of CPU after sleeping for a
long time.
For the reboot issue: when the VM is rebooted, a new vcpu thread wakes up,
but at this time the Service VM's vcpu thread is blocked and removed from
the runqueue, and the runqueue is empty, so the svt is 0. The new vcpu
thread gets avt=0. avt=0 means very high priority, and the thread can run
for a very long time until it catches up with the other threads' avt in
the runqueue.
At this time, when the Service VM's vcpu thread wakes up, it checks the
svt, but the svt is very small, so it will not update its avt according to
the rule; thus it has a very low priority and cannot be scheduled.
To fix it, update the svt in the pick_next handler to make sure the svt is
aligned with the avt of the first object in the runqueue, as sketched
below.
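A minimal sketch of the two rules described above (the struct and function
names bvt_runqueue, bvt_pick_next and bvt_wakeup are illustrative, not the
actual ACRN scheduler symbols):

#include <stdint.h>
#include <stddef.h>

struct bvt_thread {
    int64_t avt;                     /* actual virtual time */
    struct bvt_thread *next;
};

struct bvt_runqueue {
    struct bvt_thread *head;         /* sorted by avt, smallest first */
    int64_t svt;                     /* scheduler virtual time */
};

/* pick_next: keep svt aligned with the head of the runqueue, so a thread
 * that wakes up later (e.g. a rebooted VM's vcpu) cannot start with an
 * artificially small avt and monopolize the pCPU. */
static struct bvt_thread *bvt_pick_next(struct bvt_runqueue *rq)
{
    if (rq->head != NULL) {
        rq->svt = rq->head->avt;     /* svt tracks the smallest avt in the queue */
    }
    return rq->head;
}

/* On wakeup, a thread never starts behind svt. */
static void bvt_wakeup(struct bvt_runqueue *rq, struct bvt_thread *t)
{
    if (t->avt < rq->svt) {
        t->avt = rq->svt;
    }
    /* ... insert t into rq sorted by avt ... */
}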
Tracked-On: #7944
Signed-off-by: Conghui <conghui.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
1. Make memcpy_erms a public API; add a new one,
memcpy_erms_backwards, which copies data from tail to head.
2. Improve the shell to use the right/left/home/end keys to move the
cursor, and support the delete/backspace keys to modify the current input
command.
Tracked-On: #7931
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For platform that supports RRSBA (Restricted Return Stack Buffer
Alternate), using retpoline may not be sufficient to guard against branch
history injection or intra-mode branch target injection. RRSBA must
be disabled to prevent CPUs from using alternate predictors for RETs.
Quoting Intel CVE-2022-0001/CVE-2022-0002:
Where software is using retpoline as a mitigation for BHI or intra-mode BTI,
and the processor both enumerates RRSBA and enumerates RRSBA_DIS controls,
it should disable this behavior.
...
Software using retpoline as a mitigation for BHI or intra-mode BTI should use
these new indirect predictor controls to disable alternate predictors for RETs.
See: https://www.intel.com/content/www/us/en/developer/articles/technical/
software-security-guidance/technical-documentation/branch-history-injection.html
Tracked-On: #7907
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
TLFS defines 2 vMSRs which can be used by a Windows guest to get the
TSC/APIC frequencies from the hypervisor. This patch adds support for
the HV_X64_MSR_TSC_FREQUENCY/HV_X64_MSR_APIC_FREQUENCY vMSRs, whose
availability is exposed by CPUID.0x40000003:EAX[bit11] and EDX[bit8].
v1->v2:
- revise commit message to highlight that the changes are for WaaG
Tracked-On: #7876
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
On some platforms CPUID.0x15:ECX is zero and CPUID.0x16 can
only return the TSC frequency in MHz, which is not accurate.
For example, the TSC frequency obtained by CPUID.0x16 is 2300
MHz while the TSC frequency calibrated by HPET is 2303.998 MHz,
which is much closer to the actual TSC frequency of 2304.000 MHz.
This patch adds support for using HPET to calibrate the TSC
when HPET is available and CPUID.0x15:ECX is zero (see the sketch below).
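A minimal sketch of the calibration idea: sample both counters over the
same interval and scale. The helper names (rdtsc, hpet_read_counter,
hpet_period_fs, udelay) and the 20 ms window are assumptions for
illustration; the real code also has to handle the 32-bit HPET counter
wrapping.

#include <stdint.h>

extern uint64_t rdtsc(void);              /* assumed helpers */
extern uint64_t hpet_read_counter(void);
extern uint64_t hpet_period_fs(void);     /* HPET tick period in femtoseconds */
extern void udelay(uint32_t us);

static uint64_t calibrate_tsc_by_hpet(void)
{
    uint64_t hpet0 = hpet_read_counter();
    uint64_t tsc0  = rdtsc();
    udelay(20000U);                        /* ~20 ms measurement window */
    uint64_t hpet1 = hpet_read_counter();
    uint64_t tsc1  = rdtsc();

    /* elapsed time in ns = hpet ticks * period(fs) / 1e6 */
    uint64_t elapsed_ns = ((hpet1 - hpet0) * hpet_period_fs()) / 1000000ULL;

    return ((tsc1 - tsc0) * 1000000000ULL) / elapsed_ns;   /* TSC frequency in Hz */
}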
v3->v4:
- move calc_tsc_by_hpet into hpet_calibrate_tsc
v2->v3:
- remove the NULL check in hpet_init
- remove ""& 0xFFFFFFFFU" in tsc_read_hpet
- add comment for the counter wrap in the low 32 bits in
calc_tsc_by_hpet
- use a dedicated function for hpet_calibrate_tsc
v1->v2:
- change native_calibrate_tsc_cpuid_0x15/0x16 to
native_calculate_tsc_cpuid_0x15/0x16
- move hpet_init to BSP init
- encapsulate both HPET and PIT calibration to one function
- revise the commit message with an example
Tracked-On: #7876
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Modified the copyright year range in code, and corrected "int32_tel"
into "Intel" in two files: "hypervisor/include/debug/profiling.h" and
"hypervisor/include/debug/profiling_internal.h".
Tracked-On: #7559
Signed-off-by: Ziheng Li <ziheng.li@intel.com>
As mentioned in the previous patch, wbinvd emulation utilizes the
vcpu_make_request and signal_event call pair to stall other vcpus. Because
these two calls are not thread-safe, we need to avoid concurrent calls to
this API pair.
This patch adds wbinvd lock to serialize wbinvd emulation.
Tracked-On: #7887
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Commit d575edf79a changes the internal
implementation of wait_event and signal_event to use a counter instead
of a boolean value.
The background was:
ACRN utilizes vcpu_make_request and signal_event pair to shoot down
other vcpus and let them wait for signals. vcpu_make_request eventually
leads to target vcpu calling wait_event.
However, the vcpu_make_request/signal_event pair was not thread-safe,
and concurrent calls of this API pair could lead to problems.
One such example is the concurrent wbinvd emulation, where vcpus may
concurrently issue vcpu_make_request/signal_event to synchronize wbinvd
emulation.
d575edf commit uses a counter in internal implementation of
wait_event/signal_event to avoid data races.
However, by using a counter, the wait/signal pair now carries the semantics
of semaphores instead of events. Semaphores require callers to carefully
plan their calls instead of signaling the same event any number of times,
which deviates from the original "event" semantics.
This patch changes the API implementation back to boolean-based, and
re-resolves the issue of concurrent wbinvd in the next patch.
This also partially reverts commit 10963b04d1,
which was introduced because of d575edf.
Tracked-On: #7887
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There is an issue with calculating the 2^n round-up of
CONFIG_MAX_PT_IRQ_ENTRIES, and the code style is very ugly when we use a
macro to fix it.
So this patch moves MAX_IR_ENTRIES to the offline tool, which can do the
alignment check and calculate it automatically.
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
The current code limits the maximum vUART number to 8, which is not enough
for the Service VM, which should configure an S5 UART for each User VM.
We can count how many vUARTs we need with the offline tool, so moving the
definition of MAX_VUART_NUM_PER_VM to the offline tool is a simple and
accurate way to allocate vUARTs.
Tracked-On: #8782
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Since the physical RTC grows monotonically, ensure vRTC monotonicity.
Periodic calibration and physical RTC modification may have
an impact, so check it before reading.
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
The vRTC in the HV used to ignore writes to vRTC registers.
This patch adds time modification to the vRTC.
Add the base RTC time and the TSC offset to get the current time. Convert
the current time to vRTC register values (`struct rtcdev`). Then
modify a register and calculate a new time. Get the RTC offset by
subtracting the new time from the current time.
Thereafter, also add the RTC offset when getting the current time, so the
user can get the modified time.
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
During setting of the RTC time, the driver halts RTC updates. So support
bit 7 of reg_b: when it is set to 0, the time is updated, and
when it is set to 1, the RTC keeps the second at which it was set.
In the process of getting the RTC time, the driver sets the alarm
interrupt, waits 1s, and gets the alarm interrupt flag. So support alarm
interrupt flag updates: if the alarm interrupt is enabled (bit 5 of reg_b
set to 1), the interrupt flag register (bit 7 & bit 5 of reg_c) is set at
the appropriate time.
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Add support for the hour format.
Bit 1 of register B indicates the hour byte format. When it's 0,
twelve-hour mode is selected. In that mode the seventh bit of the hour
register represents AM as 0 and PM as 1.
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
Add a check of the vRTC data mode. If BCD data mode is in use,
convert the data accordingly (see the sketch below).
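For reference, RTC registers pack two decimal digits per byte in BCD mode;
a minimal conversion pair (illustrative helper names, not necessarily the
ones added by this patch):

#include <stdint.h>

static inline uint8_t bin_to_bcd(uint8_t bin)
{
    return (uint8_t)(((bin / 10U) << 4U) | (bin % 10U));
}

static inline uint8_t bcd_to_bin(uint8_t bcd)
{
    return (uint8_t)(((bcd >> 4U) * 10U) + (bcd & 0x0FU));
}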
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
The current code reads the physical RTC registers and returns them directly
to the guest.
This patch reads a base physical RTC time and a base physical TSC value
at the initialization stage. Then, when the guest tries to read the vRTC
time, the ACRN HV reads the real TSC time and uses the TSC offset to
calculate the real RTC time (see the sketch below).
This patch only supports BIN data mode and 24-hour mode.
BCD data mode and 12-hour mode will be added in another patch.
The accuracy of the clock provided by this patch is limited by the TSC,
and will also be improved in a following patch.
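A minimal sketch of the scheme: the vRTC time is the base RTC time plus the
elapsed TSC converted to seconds. The struct and helper names (vrtc,
vrtc_current_time, rdtsc, tsc_hz) are assumptions for illustration.

#include <stdint.h>

extern uint64_t rdtsc(void);        /* assumed helpers */
extern uint64_t tsc_hz(void);

struct vrtc {
    uint64_t base_rtctime;          /* physical RTC time (seconds) read at init */
    uint64_t base_tsc;              /* TSC value captured at the same moment */
};

/* Current vRTC time = base RTC time + elapsed TSC converted to seconds. */
static uint64_t vrtc_current_time(const struct vrtc *vrtc)
{
    uint64_t elapsed_sec = (rdtsc() - vrtc->base_tsc) / tsc_hz();

    return vrtc->base_rtctime + elapsed_sec;
}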
Tracked-On: #7440
Signed-off-by: Yuanyuan Zhao <yuanyuan.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
The movdqu instruction moves an unaligned double quadword (128 bits)
contained in an XMM register.
This patch uses pointers as input parameters of the function
write_xmm_0_2() to get a 128-bit value from a 64-bit array for each XMM
register.
Tracked-On: #7380
Reviewed-by: Fei Li <fei1.li@intel.com>
Signed-off-by: Jiang, Yanting <yanting.jiang@intel.com>
Although the Table Size field in the MSI-X Message Control Register
[bits 10:0] is marked RO (register bits are read-only and cannot be altered
by software), the PCIe 6.0 spec, Chapter 6.1.4.2 "MSI-X Configuration",
says: "Depending upon system software policy, system software, device
driver software, or each at different times or environments may configure
a Function's MSI-X Capability and table structures with suitable vectors."
This patch just passes the MSI-X Message Control Register field through to
the guest.
Tracked-On: #7275
Signed-off-by: Fei Li <fei1.li@intel.com>
Since CAT support for hybrid platforms has landed, let's remove some old
declarations which are no longer used.
Tracked-On: #6690
Signed-off-by: Tw <wei.tan@intel.com>
The current code only supports 2 HPA regions per VM.
This patch extends ACRN to support more than 2 HPA regions per VM, to make
better use of host memory when it is scattered among multiple regions.
This patch uses an array to describe the HPA regions for the VM, and
changes the ve820 logic to support multiple regions.
This patch depends on the config tool and GPA SSRAM changes.
Tracked-On: #6690
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
The page table entry present check is page-table-type
specific and static: we just need to check bit 0
of a page entry for MMU page tables and
bits 2~0 for the EPT page table case. Hence there is no need to
check it through a callback function every time.
This patch removes the 'pgentry_present' callback field and
adds a new bitmask field for this page entry present check (see the
sketch below).
This gives better performance, especially since this
check is executed frequently.
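A minimal sketch of replacing the callback with a static mask (the struct
and field names here, pgtable and pgentry_present_mask, are illustrative):

#include <stdint.h>
#include <stdbool.h>

/* Per-table-type, static present masks: bit 0 for MMU entries,
 * bits 2:0 (R/W/X) for EPT entries. */
struct pgtable {
    uint64_t pgentry_present_mask;      /* replaces the old callback */
};

static const struct pgtable mmu_pgtable = { .pgentry_present_mask = 0x1UL };
static const struct pgtable ept_pgtable = { .pgentry_present_mask = 0x7UL };

static inline bool pgentry_present(const struct pgtable *table, uint64_t pte)
{
    return (pte & table->pgentry_present_mask) != 0UL;
}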
Tracked-On: #7327
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Fei Li <fei1.li@intel.com>
Reviewed-by: Yu Wang <yu1.wang@intel.com>
Using the SSRAM area size extracted by config-tools, the patch changes
the hard-coded GPA SSRAM area size to its actual size, so that
pre-launched VMs can support large (>8MB) SSRAM areas.
When booting the Service VM, the SSRAM area has to be removed from the
Service VM's memory space, because it is passed through to the pre-RT VM.
The code was buggy since it was using the SSRAM area's GPA in the pre-RT
VM; changed it to the GPA in the Service VM.
Tracked-On: #7212
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
On hybrid platforms (e.g. ADL), there may be multiple instances of same-level caches for different types of processors,
but the current design only supports one global `rdt_info` for each RDT resource type.
In order to support hybrid platforms, this patch introduces `rdt_ins` to represent such an "instance".
Also, the number of `rdt_info` entries is dynamically generated by config-tool to match the physical board.
Tracked-On: projectacrn#6690
Signed-off-by: Tw <wei.tan@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Since RDT-related information will be offered by config-tool dynamically
and the HV is just a consumer of it, there's no need to do this detection
at startup anymore.
Tracked-On: projectacrn#6690
Signed-off-by: Tw <wei.tan@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Many of the license and Intel copyright headers include the "All rights
reserved" string. It is not relevant in the context of the BSD-3-Clause
license that the code is released under. This patch removes those strings
throughout the code (hypervisor, devicemodel and misc).
Tracked-On: #7254
Signed-off-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Unify the handling of the host/guest MSR areas in VMCS. Remove the enum
values used as element indexes when there are a few MSRs in the host/guest
area, because an index could change if one element is not used. So, use a
variable to save the index which will be used.
Tracked-On: #6966
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Requirement: in a CPU-partitioned VM (RTVM), VTune or perf can be used to
sample hotspot code paths to tune RT performance. This needs PMU/PEBS
(Processor Event Based Sampling) support. Intel TCC asks for it, too.
This patch exposes PEBS-related capabilities/features and MSRs to
CPU-partitioned VMs, like RTVMs. PEBS is a part of the PMU. PEBS also
needs the DS (Debug Store) feature, so DS is exposed too.
Limitation: currently it just supports the PEBS feature at VM level; when
the CPU traps into the HV, the performance counters stop. The perf global
control MSR is used to do this work. So the counters should be close to
native.
Tracked-On: #6966
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Add a flag, GUEST_FLAG_PMU_PASSTHROUGH, to indicate whether the
PMU (Performance Monitor Unit) is passed through to the guest VM.
Tracked-On: #6966
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
NMI is used to notify a LAPIC-PT RTVM, to kick its CPU into the hypervisor.
But NMI could be used by system devices, like the PMU (Performance Monitor
Unit). So use the INIT signal as the partitioned-CPU notification
mechanism, to replace injecting NMI.
Also remove the unused NMI-as-notification related code.
Tracked-On: #6966
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Now the vept module uses a mixture of nept and vept; it's better to
refine it.
So this patch renames nept to vept and simplifies the interface of the
vept init module.
Tracked-On: #6690
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
This patch adds ENOTTY and ENOSYS to indicate undefined and obsolete
request hypercalls respectively, and uses ENOTTY instead of EINVAL as the
error code for undefined hypercalls, to be consistent with the ACRN
kernel's return value.
Tracked-On: #7029
Signed-off-by: Wen Qian <qian.wen@intel.com>
Signed-off-by: Li Fei <fei1.li@intel.com>
Acked-by: Wang, Yu1 <yu1.wang@intel.com>
Now the vept table is allocated dynamically, but the vept table size was
calculated from CONFIG_PLATFORM_RAM_SIZE, which was predefined by the
config tool.
That is not a complete change and can't support a single binary for
different boards/platforms.
So this patch replaces CONFIG_PLATFORM_RAM_SIZE and gets the
top RAM size from the hv_E820 interface for vept.
Tracked-On: #6690
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
CONFIG_PLATFORM_RAM_SIZE is predefined by the config tool, and the MMU uses
it to calculate the table size and predefine the ppt tables.
This patch changes the ppt to be allocated dynamically and gets the table
size through the hv_e820_ram_size interface, which can get the RAM
info at runtime, replacing CONFIG_PLATFORM_RAM_SIZE.
Tracked-On: #6690
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
The e820 module can get the RAM info at runtime, but the RAM size
and max address were limited by CONFIG_PLATFORM_RAM_SIZE, which was
predefined by the config tool.
The current solution can't support a single binary for different boards or
platforms, and CONFIG_PLATFORM_RAM_SIZE can't match the RAM size
if the user has not updated the config tool settings after the device
changed.
So this patch removes CONFIG_PLATFORM_RAM_SIZE and calculates the RAM
size at runtime.
Tracked-On: #6690
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
The HC_GET_PLATFORM_INFO hypercall is not supported anymore,
hence remove the related function and data structure definitions.
Tracked-On: #6690
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
The coding guideline rule C-FN-16 requires that 'Mixed-use of
C code and assembly code in a single function shall not be allowed',
this patch wraps inline assembly to inline functions.
Tracked-On: #6776
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
v1-->v2:
use inline functions for read/write XMM registers
This patch adds an option CONFIG_KEEP_IRQ_DISABLED to the hv (default n)
and config-tool so that when this option is 'y', all interrupts in hv root
mode will be permanently disabled.
With this option set to 'y', all interrupts received in root mode will be
handled in the external interrupt vmexit after the next VM entry. The
postponement latency is negligible. This new configuration is a requirement
of the x86 TEE secure/non-secure interrupt flow support. Many race
conditions can be avoided by keeping IRQs off.
v5:
Rename CONFIG_ACRN_KEEP_IRQ_DISABLED to CONFIG_KEEP_IRQ_DISABLED
v4:
Change CPU_IRQ_ENABLE/DISABLE to
CPU_IRQ_ENABLE_ON_CONFIG/DISABLE_ON_CONFIG and guard them using
CONFIG_ACRN_KEEP_IRQ_DISABLED
v3:
CONFIG_ACRN_DISABLE_INTERRUPT -> CONFIG_ACRN_KEEP_IRQ_DISABLED
Add more comment in commit message
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
"idle=halt " should be avoided in REE since we have to
keep the interrupt always masked in root mode.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
The TEE_NOTIFICATION_VECTOR can sometimes be confused with TEE's PI
notification vector. So rename it to TEE_FIXED_NONSECURE_VECTOR for
better readability.
No logic change.
v3:
Add more comments in commit message.
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Sometimes the HV would like to know whether specific interrupts are
pending in the vIRR, and clear them if necessary (such as in the x86_tee
case).
This patch adds two APIs: get_next_pending_intr and clear_pending_intr.
This patch also moves the inline API prio() from
vlapic.c to vlapic.h.
v3:
Remove apicv_get_next_pending_intr and apicv_clear_pending_intr
and use vlapic_get_next_pending_intr and vlapic_clear_pending_intr
directly.
v2:
get_pending_intr -> get_next_pending_intr
apicv_basic/advanced_clear_pending_intr -> apicv_clear_pending_intr
apicv_basic/advanced_get_pending_intr -> apicv_get_next_pending_intr
has_pending_intr kept
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This patch wraps the check of GUEST_FLAG_TEE/REE into functions
is_tee_vm/is_ree_vm for readability. No logic changes.
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
The CONFIG_LOG_DESTINATION parameter selects where the logging messages
are sent to: the serial console, memory, or the npk device MMIO region.
Now we want to remove it and check the loglevel of each channel, closing
the output when the loglevel is ZERO.
Tracked-On: #6934
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This patch introduces the stateful VM, which represents a VM that has its
own internal state such as a file cache, and adds a check before system
shutdown to make sure that a stateless VM does not block system shutdown.
Tracked-On: #6571
Signed-off-by: Wang Yu <yu1.wang@intel.com>
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently the macro DM_OWNED_GUEST_FLAG_MASK is generated
by config-tool. It's unnecessary to generate it by tool since
it is fixed; the config-tool will remove this macro and it is
moved to vm_config.h.
Tracked-On: #6366
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Variables should not have a '_' prefix per the MISRA C standard. This patch
removes the prefix for _ld_ram_start and _ld_ram_end.
Tracked-On: #6885
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
In the previous commit df7ffab441,
CONFIG_HV_RAM_SIZE was removed and hv_ram_size was calculated in the
linker script by the following formula:
ld_ram_size = _ld_ram_end - _ld_ram_start ;
but _ld_ram_start is a relative address in the boot section whereas
_ld_ram_end is an absolute address in global scope; mixing them makes
hv_ram_size incorrect when the HV binary is relocated.
This patch fixes the issue by fetching _ld_ram_start and _ld_ram_end
separately and calculating the size at runtime.
Tracked-On: #6885
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
When a secure interrupt (an interrupt belonging to the TEE) arrives
while the TEE vcpu is running, the interrupt is injected
into the TEE directly. But when the REE vcpu is running
at that time, we need to switch to the TEE for handling.
When a non-secure interrupt (an interrupt belonging to the REE) arrives
while the REE vcpu is running, the interrupt is injected
into the REE directly. But when the TEE vcpu is running at that time,
we need to inject a predefined vector into the TEE for notification
and keep running in the TEE.
To sum up: when a secure interrupt comes, switch to the TEE
immediately regardless of whether the REE is running or not;
when a non-secure interrupt comes and the TEE is running,
just notify the TEE and keep it running; the TEE will switch
to the REE on its own initiative after completing its work.
Tracked-On: projectacrn#6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
This patch implements the following x86_tee hypercalls,
- HC_TEE_VCPU_BOOT_DONE
- HC_SWITCH_EE
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
This patch adds the x86_tee hypercall interfaces.
- HC_TEE_VCPU_BOOT_DONE
This hypercall is used to notify the hypervisor that the TEE vCPU boot
is done, so that we can put the corresponding TEE vCPU to sleep. The REE
will be started the last time this hypercall is called by the TEE.
- HC_SWITCH_EE
For the REE VM, it uses this hypercall to request TEE services.
For the TEE VM, it uses this hypercall to switch back to the REE
when it completes the REE service.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
The TEE is a secure VM which has its own partitioned resources, while
the REE is a normal VM which owns the rest of the platform resources.
The TEE, as the secure world, can see the memory of the REE
VM (the normal world), but not the other way around.
But please note, the TEE and REE can only see their own devices.
So this patch does the following things:
1. Go through the physical e820 table, and add all system memory entries
   to the EPT.
2. Remove hv-owned memory.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Given an e820, this API creates an identical memmap for the specified
e820 memory type, EPT memory cache type and access rights.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Add a configuration to support companion VM.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Add two VM flags for x86_tee. GUEST_FLAG_TEE for TEE VM,
GUEST_FLAG_REE for normal rich VM.
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Since the UUID is not a *must* set parameter for the standard post-launched
VM, which doesn't depend on any static VM configuration, we can remove
the KATA-related code from the hypervisor as KATA belongs to such a VM
type.
v2-->v3:
separate the struct acrn_platform_info change of the devicemodel
v1-->v2:
update the subject and commit msg
Tracked-On: #6685
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
With the current arch design, UUIDs are used to identify ACRN VMs,
and all VM configurations must be deployed with given UUIDs at build time.
For post-launched VMs, the end user must use the UUID as an acrn-dm
parameter to launch the specified User VM. This is not friendly for end
users: they have to look up the pre-configured UUID before launching a VM,
and can only launch VMs whose UUIDs are in the pre-configured UUID list,
otherwise the launch will fail. On the other hand, a VM name is much more
straightforward for end users to identify VMs, whereas the VM name defined
in the launch script has not been passed to the hypervisor VM configuration,
so it is not consistent with the VM name shown when the user lists VMs
in the hypervisor shell; this confuses users a lot.
This patch resolves these issues by removing the UUID as the VM identifier
and using the VM name instead:
1. The hypervisor checks for VM name duplication at VM creation time
to make sure the VM name is unique.
2. If the VM name passed from acrn-dm matches one of the pre-configured
VM configurations, the corresponding VM is launched;
we call it a statically configured VM.
If no match is found, the hypervisor tries to allocate one
unused VM configuration slot for this VM with the given VM name and runs
it if the VM number does not reach CONFIG_MAX_VM_NUM;
we call it a dynamically configured VM.
3. For dynamically configured VMs, we need a guest flag to identify them
because the VM configuration needs to be destroyed
when the VM is shut down or its creation fails.
v7->v8:
-- rename is_static_vm_configured to is_static_configured_vm
-- only set DM owned guest_flags in hcall_create_vm
-- add check dynamic flag in get_unused_vmid
v6->v7:
-- refine get_vmid_by_name, return the first matching vm_id
-- the GUEST_FLAG_STATIC_VM is added to identify the static or
dynamic VM, the offline tool will set this flag for
all the pre-defined VMs.
-- only clear name field for dynamic VM instead of clear entire
vm_config
Tracked-On: #6685
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Victor Sun<victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
CONFIG_MAX_IR_ENTRIES and CONFIG_MAX_PT_IRQ_ENTRIES are separate
configuration items, and they can be configured through the configuration
tool.
When the number of PT IRQ entries is larger than the number of IR entries,
some passthrough devices' IRQs may fail to be protected by interrupt
remapping or to be automatically injected by the posted-interrupt
mechanism. And it wastes memory if CONFIG_MAX_IR_ENTRIES is larger.
This patch replaces CONFIG_MAX_IR_ENTRIES with MAX_IR_ENTRIES and
enforces that it aligns to CONFIG_MAX_PT_IRQ_ENTRIES and rounds up to a
power of two (2^n), as the IRTA_REG spec requires (see the sketch below).
This way all PT IRQs are guaranteed to work with the IR or PI mechanism.
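For reference, the power-of-two round-up required by IRTA_REG is a small
calculation; a sketch of the check the offline tool performs when deriving
MAX_IR_ENTRIES (the function name roundup_pow_of_two is illustrative):

#include <stdint.h>

static uint32_t roundup_pow_of_two(uint32_t n)
{
    uint32_t r = 1U;

    while (r < n) {
        r <<= 1U;        /* smallest power of two >= n */
    }
    return r;
}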
Tracked-On: #6745
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Commit cbf3825 "hv: Pass-through IA32_TSC_AUX MSR to L1 guest"
lets the guest own the physical MSR IA32_TSC_AUX and does not handle this MSR
in the hypervisor.
If multiple vCPUs share the same pCPU, when one vCPU reads MSR IA32_TSC_AUX,
it may get the value set by other vCPUs.
To fix this issue, this patch does:
- initialize the MSR content to 0 for the given vCPU, which is consistent with
the value specified in SDM Vol3 "Table 9-1. IA-32 and Intel 64 Processor
States Following Power-up, Reset, or INIT"
- save/restore the MSR content for the given vCPU during context switch
v1 -> v2:
* According to Table 9-1, the content of IA32_TSC_AUX MSR is unchanged
following INIT, v2 updates the initialization logic so that the content for
vCPU is consistent with SDM.
Tracked-On: #6799
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch moves the SSRAM area in the ve820 table, and reunites the
hpa1_low_part1/2 areas. The ve820 building code is refined.
before:
|<---low_1M--->|
|<---hpa1_low_part1--->|
|<---SSRAM--->|
|<---hpa1_low_part2--->|
|<---GPU_OpRegion--->|
|<---ACPI DATA--->|
|<---ACPI NVS--->|
---2G---
after:
|<---low_1M--->|
|<---hpa_low--->|
|<---SSRAM--->|
|<---GPU_OpRegion--->|
|<---ACPI DATA--->|
|<---ACPI NVS--->|
---2G---
The SSRAM area's address is described in the ACPI RTCT/PTCT
table. To simplify the SSRAM implementation, the SSRAM area was
identically mapped to GPA, which resulted in the division of hpa_low.
The ve820 building logic then became too complicated.
Now that we can edit the guest's RTCT/PTCT table with offline
tools (in the former patch), we can move the guest's SSRAM
area and reunite the hpa_low areas again.
After doing this, this patch rewrites the ve820 building code
in a much simpler way.
Tracked-On: #6674
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
The ve820 table's hpa1_low area is divided into two parts, which
makes the code too complicated and causes problems. Moving
the entries that divide hpa1_low makes things easier.
This patch moves the GPU OpRegion to the tail area of 2G,
consecutive to the acpi data/nvs area.
before:
|<---low_1M--->|
|<---hpa1_low_part1--->|
|<---SSRAM--->|
|<---GPU_OpRegion--->|
|<---hpa1_low_part2--->|
|<---ACPI DATA--->|
|<---ACPI NVS--->|
---2G---
after:
|<---low_1M--->|
|<---hpa1_low_part1--->|
|<---SSRAM--->|
|<---hpa1_low_part2--->|
|<---GPU_OpRegion--->|
|<---ACPI DATA--->|
|<---ACPI NVS--->|
---2G---
Tracked-On: #6674
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
The length of the ACPI data entry in the ve820 table was 960K, while the
ACPI file is 1MB. This caused the ACPI file copy to fail when loading
pre-launched VMs, because the reserved ACPI region in the ve820 table is
not large enough. This patch changes the ACPI data area to 1MB to fix the
problem.
Also, the ACPI data length was missed when calculating the
ENTRY_HPA1_LOW_PART2 length; that is fixed here too.
This patch also adds some refinement to the hard-coded ACPI base/addr
definitions.
Tracked-On: #6674
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
In the previous implementation we left MAX_EFI_MMAP_ENTRIES in the config
tool and let the end user configure it. However, it is hard for the end
user to understand how to configure it, and it is also hard for
board_inspector to get this value automatically because this info is only
meaningful during the kernel boot stage and no such info is available
after boot in the Linux toolset.
This patch hardcodes the value to 350, and ASSERTs if the board needs more
EFI mmap entries to run ACRN. The user can modify the MAX_EFI_MMAP_ENTRIES
macro if the ASSERT occurs in the DEBUG stage.
The extra size of hv_memdesc[] only consumes very little memory; the
overhead is (size * sizeof(struct efi_memory_desc)), i.e. (size * 40)
bytes.
Tracked-On: #6442
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
The coding guideline rules C-TY-27 and C-TY-28, combined, requires that
assignment and arithmetic operations shall be applied only on operands of the
same kind. This patch either adds explicit type casts or adjust types of
variables to align the types of operands.
The only semantic change introduced by this patch is the promotion of the
second argument of set_vmcs_bit() and clear_vmcs_bit() to
uint64_t (formerly uint32_t). This avoids clear_vmcs_bit() accidentally
clearing the upper 32 bits of the requested VMCS field.
Other than that, this patch has no semantic change. Specifically this patch
is not meant to fix buggy narrowing operations, only to make these
operations explicit.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-TY-12 requires that 'all type conversions shall
be explicit'. Especially implicit cases on the signedness of variables
shall be avoided.
This patch either adds explicit type casts or adjust local variable types
to make sure that Booleans, signed and unsigned integers are not used
mixedly.
This patch has no semantic changes.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-TY-02 requires that 'the operands of bit
operations shall be unsigned'. This patch adds explicit casts or literal
suffixes to make explicit the type of values involved in bit operations.
Explicit casts to widen integers before shift operations are also
introduced to make explicit that the variables are expanded BEFORE it is
shifted (which is already so in C99 but implicitly).
This patch has no semantic changes.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-PP-04 requires that 'parentheses shall be used
when referencing a MACRO parameter'. This patch adds parentheses to macro
parameters or expressions that are not yet wrapped properly.
This patch has no semantic impact.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-FN-09 requires that 'the formal parameter name
of a function shall be consistent'. This patch fixes two places where the
formal parameters are named differently in declarations and
definitions. More specifically, the names in declarations are replaced with
those in definitions.
This patch has no semantic impact.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In lock instruction emulation, we use vcpu_make_request and
signal_event pairs to shoot down/release other vcpus.
However, vcpu_make_request is async and does not guarantee an execution
of wait_event on the target vcpu, and we want wait_event to be consistent
with signal_event.
Consider the following scenarios:
1, When target vcpu's state has not yet turned to VCPU_RUNNING,
vcpu_make_request on ACRN_REQUEST_SPLIT_LOCK does not make sense, and will
not result in wait_event.
2, When target vcpu is already requested on ACRN_REQUEST_SPLIT_LOCK (i.e., the
corresponding bit in pending_req is set) but not yet handled,
the vcpu_make_request call does not result in wait_event as 1 bit is not
enough to cache multiple requests.
This patch tries to add checks in vcpu_kick_lock_instr_emulation and
vcpu_complete_lock_instr_emulation to resolve these issues.
Tracked-On: #6502
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Rename SOS_VM type to SERVICE_VM
rename UOS to User VM in XML description
rename uos_thread_pid to user_vm_thread_pid
rename devname_uos to devname_user_vm
rename uosid to user_vmid
rename UOS_ACK to USER_VM_ACK
rename SOS_VM_CONFIG_CPU_AFFINITY to SERVICE_VM_CONFIG_CPU_AFFINITY
rename SOS_COM to SERVICE_VM_COM
rename SOS_UART1_VALID_NUM" to SERVICE_VM_UART1_VALID_NUM
rename SOS_BOOTARGS_DIFF to SERVICE_VM_BOOTARGS_DIFF
rename uos to user_vm in launch script and xml
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Rename SOS_VM_NUM to SERVICE_VM_NUM.
rename SOS_SOCKET_PORT to SERVICE_VM_SOCKET_PORT.
rename PROCESS_RUN_IN_SOS to PROCESS_RUN_IN_SERVICE_VM.
rename PCI_DEV_TYPE_SOSEMUL to PCI_DEV_TYPE_SERVICE_VM_EMUL.
rename SHUTDOWN_REQ_FROM_SOS to SHUTDOWN_REQ_FROM_SERVICE_VM.
rename PROCESS_RUN_IN_SOS to PROCESS_RUN_IN_SERVICE_VM.
rename SHUTDOWN_REQ_FROM_UOS to SHUTDOWN_REQ_FROM_USER_VM.
rename UOS_SOCKET_PORT to USER_VM_SOCKET_PORT.
rename SOS_CONSOLE to SERVICE_VM_OS_CONSOLE.
rename SOS_LCS_SOCK to SERVICE_VM_LCS_SOCK.
rename SOS_VM_BOOTARGS to SERVICE_VM_OS_BOOTARGS.
rename SOS_ROOTFS to SERVICE_VM_ROOTFS.
rename SOS_IDLE to SERVICE_VM_IDLE.
rename SEVERITY_SOS to SEVERITY_SERVICE_VM.
rename SOS_VM_UUID to SERVICE_VM_UUID.
rename SOS_REQ to SERVICE_VM_REQ.
rename RTCT_NATIVE_FILE_PATH_IN_SOS to RTCT_NATIVE_FILE_PATH_IN_SERVICE_VM.
rename CBC_REQ_T_UOS_ACTIVE to CBC_REQ_T_USER_VM_ACTIVE.
rename CBC_REQ_T_UOS_INACTIVE to CBC_REQ_T_USER_VM_INACTIV.
rename uos_active to user_vm_active.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Rename gpa_uos to gpa_user_vm
rename base_gpa_in_uos to base_gpa_in_user_vm
rename UOS_VIRT_PCI_MMCFG_BASE to USER_VM_VIRT_PCI_MMCFG_BASE
rename UOS_VIRT_PCI_MMCFG_START_BUS to USER_VM_VIRT_PCI_MMCFG_START_BUS
rename UOS_VIRT_PCI_MMCFG_END_BUS to USER_VM_VIRT_PCI_MMCFG_END_BUS
rename UOS_VIRT_PCI_MEMBASE32 to USER_VM_VIRT_PCI_MEMBASE32
rename UOS_VIRT_PCI_MEMLIMIT32 to USER_VM_VIRT_PCI_MEMLIMIT32
rename UOS_VIRT_PCI_MEMBASE64 to USER_VM_VIRT_PCI_MEMBASE64
rename UOS_VIRT_PCI_MEMLIMIT64 to USER_VM_VIRT_PCI_MEMLIMIT64
rename UOS in comments message to User VM.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Rename sos_vm to service_vm.
rename sos_vmid to service_vmid.
rename sos_vm_ptr to service_vm_ptr.
rename get_sos_vm to get_service_vm.
rename sos_vm_gpa to service_vm_gpa.
rename sos_vm_e820 to service_vm_e820.
rename sos_efi_info to service_vm_efi_info.
rename sos_vm_config to service_vm_config.
rename sos_vm_hpa2gpa to service_vm_hpa2gpa.
rename vdev_in_sos to vdev_in_service_vm.
rename create_sos_vm_e820 to create_service_vm_e820.
rename sos_high64_max_ram to service_vm_high64_max_ram.
rename prepare_sos_vm_memmap to prepare_service_vm_memmap.
rename post_uos_sworld_memory to post_user_vm_sworld_memory
rename hcall_sos_offline_cpu to hcall_service_vm_offline_cpu.
rename filter_mem_from_sos_e820 to filter_mem_from_service_vm_e820.
rename create_sos_vm_efi_mmap_desc to create_service_vm_efi_mmap_desc.
rename HC_SOS_OFFLINE_CPU to HC_SERVICE_VM_OFFLINE_CPU.
rename SOS to Service VM in comments message.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Implement the write_vcbm() function to handle write requests to the
MSR_IA32_type_MASK_n vCBM MSRs.
Call write_vclosid() to handle MSR_IA32_PQR_ASSOC MSR write requests.
Several vCAT P2V (physical to virtual) and V2P (virtual to physical)
mappings exist (a small translation sketch follows the definitions below):
struct acrn_vm_config *vm_config = get_vm_config(vm_id)
max_pcbm = vm_config->max_type_pcbm (type: l2 or l3)
mask_shift = ffs64(max_pcbm)
vclosid = vmsr - MSR_IA32_type_MASK_0
pclosid = vm_config->pclosids[vclosid]
pmsr = MSR_IA32_type_MASK_0 + pclosid
pcbm = vcbm << mask_shift
vcbm = pcbm >> mask_shift
Where
MSR_IA32_type_MASK_n: L2 or L3 mask msr address for CLOSIDn, from
0C90H through 0D8FH (inclusive).
max_pcbm: a bitmask that selects all the physical cache ways assigned to the VM
vclosid: virtual CLOSID, always starts from 0
pclosid: corresponding physical CLOSID for a given vclosid
vmsr: virtual msr address, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pmsr: physical msr address
vcbm: virtual CBM, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pcbm: physical CBM
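A minimal sketch of the CBM translation described by the mappings above.
The helper names are illustrative, and GCC's __builtin_ffsll is used here
in place of ffs64 (hence the -1 to get a zero-based shift).

#include <stdint.h>

static inline uint64_t vcbm_to_pcbm(uint64_t vcbm, uint64_t max_pcbm)
{
    uint32_t mask_shift = (uint32_t)__builtin_ffsll((long long)max_pcbm) - 1U;

    return vcbm << mask_shift;     /* shift virtual CBM up into the VM's assigned ways */
}

static inline uint64_t pcbm_to_vcbm(uint64_t pcbm, uint64_t max_pcbm)
{
    uint32_t mask_shift = (uint32_t)__builtin_ffsll((long long)max_pcbm) - 1U;

    return pcbm >> mask_shift;     /* shift physical CBM down so it starts at bit 0 */
}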
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Implement the read_vcbm() and read_vclosid() functions to handle read
requests to the MSR_IA32_PQR_ASSOC and MSR_IA32_type_MASK_n vCAT MSRs.
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Expose CAT feature to vCAT VM by reporting the number of
cache ways/CLOSIDs via the 04H/10H cpuid instructions, so that the
VM can take advantage of CAT to prioritize and partition cache
resource for its own tasks.
Add the vcat_pcbm_to_vcbm() function to map pcbm to vcbm
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Initialize vCBM MSRs
Initialize vCLOSID MSR
Add some vCAT functions:
Retrieve max_vcbm and max_pcbm
Check if vCAT is configured or not for the VM
Map vclosid to pclosid
write_vclosid: vCLOSID MSR write handler
write_vcbm: vCBM MSR write handler
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Initialize the emulated_guest_msrs[] array at runtime for the
MSR_IA32_type_MASK_n and MSR_IA32_PQR_ASSOC MSRs; there is no good
way to do this initialization statically at build time.
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For vCAT, a VM may need to store more than MAX_VCPUS_PER_VM CLOSIDs,
so change clos in vm_config.h to a pointer to accommodate this situation.
Rename clos to pclosids.
pclosids is now a pointer to an array of physical CLOSIDs that is defined
in vm_configurations.c by vmconfig. The number of elements in the array
must be equal to the value given by num_pclosids.
Add max_type_pcbm (type: l2 or l3) to struct acrn_vm_config, which stores a bitmask
that selects/covers all the physical cache ways assigned to the VM
Change vmsr.c to accommodate this amended data structure
Change the config-tools to generate vm_configurations.c, and fill in the num_closids
and clos pointers based on the information from the scenario file.
Now vm_configurations.c.xsl generates all the clos related code so remove the same
code from misc_cfg.h.xsl.
Examples:
Scenario file:
<RDT>
<RDT_ENABLED>y</RDT_ENABLED>
<CDP_ENABLED>n</CDP_ENABLED>
<VCAT_ENABLED>y</VCAT_ENABLED>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0x7ff</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
<CLOS_MASK>0xff800</CLOS_MASK>
</RDT>
<vm id="0">
<guest_flags>
<guest_flag>GUEST_FLAG_VCAT_ENABLED</guest_flag>
</guest_flags>
<clos>
<vcpu_clos>3</vcpu_clos>
<vcpu_clos>4</vcpu_clos>
<vcpu_clos>5</vcpu_clos>
<vcpu_clos>6</vcpu_clos>
<vcpu_clos>7</vcpu_clos>
</clos>
</vm>
<vm id="1">
<clos>
<vcpu_clos>1</vcpu_clos>
<vcpu_clos>2</vcpu_clos>
</clos>
</vm>
vm_configurations.c (generated by config-tools) with the above vCAT config:
static uint16_t vm0_vcpu_clos[5U] = {3U, 4U, 5U, 6U, 7U};
static uint16_t vm1_vcpu_clos[2U] = {1U, 2U};
struct acrn_vm_config vm_configs[CONFIG_MAX_VM_NUM] = {
{
.guest_flags = (GUEST_FLAG_VCAT_ENABLED),
.pclosids = vm0_vcpu_clos,
.num_pclosids = 5U,
.max_l3_pcbm = 0xff800U,
},
{
.pclosids = vm1_vcpu_clos,
.num_pclosids = 2U,
},
};
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add the VCAT_ENABLED element to RDTType so that users can enable/disable vCAT globally.
Add the GUEST_FLAG_VCAT_ENABLED guest flag to enable/disable vCAT per-VM.
Currently we have the following per-VM clos element in the scenario file for RDT use:
<clos>
<vcpu_clos>0</vcpu_clos>
<vcpu_clos>0</vcpu_clos>
</clos>
When the GUEST_FLAG_VCAT_ENABLED guest flag is not specified, clos is for RDT use:
vcpu_clos is per-CPU and it configures each CPU in the VM to a desired CLOS ID.
When the GUEST_FLAG_VCAT_ENABLED guest flag is specified, vCAT is enabled for this VM
and clos is for vCAT use: vcpu_clos is not per-CPU anymore in this case, but just a
list of physical CLOSIDs (minimum 2) that are assigned to the VM for vCAT use. Each
vcpu_clos is mapped to a virtual CLOSID; the first vcpu_clos is mapped to virtual
CLOSID 0 and the second is mapped to virtual CLOSID 1, etc.
Add xs:assert to prevent any problems with invalid configuration data for vCAT:
If any GUEST_FLAG_VCAT_ENABLED guest flag is specified, both RDT_ENABLED and VCAT_ENABLED
must be 'y'
If VCAT_ENABLED is 'y', RDT_ENABLED must be 'y' and CDP_ENABLED must be 'n'
For a vCAT VM, vcpu_clos cannot be set to CLOSID 0, CLOSID 0 is reserved to be used by hypervisor
For a vCAT VM, number of clos/vcpu_clos elements must be greater than 1
For a vCAT VM, each clos/vcpu_clos must be less than L2/L3 COS_MAX
For a vCAT VM, its clos/vcpu_clos elements cannot contain duplicate values
There should not be any CLOS IDs overlap between a vCAT VM and any other VMs
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
PR #6283 updated code and docs to the new kernel HSM driver. Fix
some references to VHM missed in the doxygen comments. Also fix some
misspellings in these files.
Tracked-On: #6282
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
The current hypervisor only supports at most two legacy vuarts
(COM1 and COM2) per VM: COM1 is usually configured as the VM console,
and COM2 is configured as the communication channel for the S5 feature.
The hypervisor can support MAX_VUART_NUM_PER_VM (8) legacy vuarts, but it
only registers handlers for two legacy vuarts since the assumption is made
that there are no more than two legacy vuarts.
In the current hypervisor configuration, the I/O port (2F8H) is always
allocated for virtual COM2, which is not friendly if the user wants to
assign this port to the physical COM2.
The legacy vuart is a common communication channel between the Service VM
and User VMs; it can work in polling mode and its driver exists in each
guest OS. The channel can be used to send a shutdown command to a User VM
for the S5 feature, so several vuarts need to be configured for the
Service VM and one vuart for each User VM.
The following changes are made to support at most
MAX_VUART_NUM_PER_VM legacy vuarts:
- Refine legacy vuart initialization to register the PIO handler for
each related vuart.
- Update the assumption about the legacy vuart number.
BTW, config tool updates for legacy vuarts will be made in another
patch.
v1-->v2:
Update commit message to make this patch's purpose clearer;
If vuart index is valid, register handler for it.
Tracked-On: #6652
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
It's difficult to configure CONFIG_HV_RAM_SIZE properly at once. This patch
not only removes CONFIG_HV_RAM_SIZE, but also uses the ld linker script to
dynamically get the HV RAM size.
Tracked-On: #6663
Signed-off-by: Fei Li <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>