The vept table is now allocated dynamically, but its size is still
calculated from CONFIG_PLATFORM_RAM_SIZE, which is predefined by the
config tool.
That change is incomplete and cannot support a single binary for
different boards/platforms.
So this patch replaces CONFIG_PLATFORM_RAM_SIZE and gets the
top RAM size from the hv_E820 interface for vept.
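A sketch of the idea, assuming hv_E820 accessors named
get_e820_entries_count()/get_e820_entry() (not necessarily the exact
interface):

static uint64_t get_top_ram_size(void)
{
    uint64_t top = 0UL;
    uint32_t i;

    for (i = 0U; i < get_e820_entries_count(); i++) {
        const struct e820_entry *entry = get_e820_entry(i);

        /* track the highest end address of any RAM entry */
        if ((entry->type == E820_TYPE_RAM) &&
            ((entry->baseaddr + entry->length) > top)) {
            top = entry->baseaddr + entry->length;
        }
    }
    return top;
}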
Tracked-On: #6690
Acked-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Chenli Wei <chenli.wei@linux.intel.com>
The TEE_NOTIFICATION_VECTOR can sometimes be confused with TEE's PI
notification vector. So rename it to TEE_FIXED_NONSECURE_VECTOR for
better readability.
No logic change.
v3:
Add more comments in commit message.
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Sometimes the HV would like to know whether specific interrupts are
pending in the vIRR, and to clear them if necessary (such as in the
x86_tee case).
This patch adds two APIs: get_next_pending_intr and clear_pending_intr.
It also moves the inline API prio() from vlapic.c to vlapic.h.
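A sketch of the additions (signatures as assumed here; vlapic.h holds
the authoritative ones):

/* prio(): a vector's priority class is its upper nibble */
static inline uint32_t prio(uint32_t x)
{
    return (x >> 4U);
}

uint32_t vlapic_get_next_pending_intr(struct acrn_vcpu *vcpu);
void vlapic_clear_pending_intr(struct acrn_vcpu *vcpu, uint32_t vector);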
v3:
Remove apicv_get_next_pending_intr and apicv_clear_pending_intr
and use vlapic_get_next_pending_intr and vlapic_clear_pending_intr
directly.
v2:
get_pending_intr -> get_next_pending_intr
apicv_basic/advanced_clear_pending_intr -> apicv_clear_pending_intr
apicv_basic/advanced_get_pending_intr -> apicv_get_next_pending_intr
has_pending_intr kept
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This patch wraps the check of GUEST_FLAG_TEE/REE into functions
is_tee_vm/is_ree_vm for readability. No logic changes.
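A minimal sketch of the wrappers, assuming guest_flags is reachable
through the VM config:

static inline bool is_tee_vm(struct acrn_vm *vm)
{
    return ((get_vm_config(vm->vm_id)->guest_flags & GUEST_FLAG_TEE) != 0U);
}

static inline bool is_ree_vm(struct acrn_vm *vm)
{
    return ((get_vm_config(vm->vm_id)->guest_flags & GUEST_FLAG_REE) != 0U);
}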
Tracked-On: #6571
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
This patch introduces the concept of a stateful VM, which represents a
VM that has its own internal state such as a file cache, and adds a
check before system shutdown to make sure that a stateless VM does not
block system shutdown.
Tracked-On: #6571
Signed-off-by: Wang Yu <yu1.wang@intel.com>
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When a secure interrupt (an interrupt belonging to TEE) comes while
the TEE vCPU is running, the interrupt is injected to TEE directly.
But when the REE vCPU is running at that time, we need to switch to
TEE for handling.
When a non-secure interrupt (an interrupt belonging to REE) comes
while the REE vCPU is running, the interrupt is injected to REE
directly. But when the TEE vCPU is running at that time, we inject a
predefined vector to TEE for notification and let TEE continue running.
To sum up: when a secure interrupt comes, switch to TEE immediately,
regardless of whether REE is running; when a non-secure interrupt
comes while TEE is running, just notify TEE and keep it running; TEE
will switch to REE on its own initiative after completing its work.
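The routing policy, condensed into an illustrative sketch;
is_secure_interrupt(), switch_to_tee() and inject_to_tee() are
hypothetical helper names, and the notification vector is the one
renamed above:

if (is_secure_interrupt(vector)) {
    /* TEE-owned: handle in TEE immediately */
    if (is_ree_vm(vcpu->vm)) {
        switch_to_tee(vcpu);    /* REE was running: world switch */
    }   /* else TEE is running: inject to TEE directly */
} else {
    /* REE-owned */
    if (is_tee_vm(vcpu->vm)) {
        /* notify TEE and keep it running */
        inject_to_tee(vcpu, TEE_FIXED_NONSECURE_VECTOR);
    }   /* else REE is running: inject to REE directly */
}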
Tracked-On: projectacrn#6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
This patch implements the following x86_tee hypercalls,
- HC_TEE_VCPU_BOOT_DONE
- HC_SWITCH_EE
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
This patch adds the x86_tee hypercall interfaces.
- HC_TEE_VCPU_BOOT_DONE
This hypercall is used to notify the hypervisor that the TEE vCPU boot
is done, so that the corresponding TEE vCPU can be put to sleep. REE
will be started when the last TEE vCPU makes this hypercall.
- HC_SWITCH_EE
The REE VM uses this hypercall to request TEE services. The TEE VM
uses this hypercall to switch back to REE when it completes the
requested service.
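One plausible shape for the HC_SWITCH_EE handler; switch_to_tee() and
switch_to_ree() are hypothetical names for the two world-switch paths:

static int32_t hcall_switch_ee(struct acrn_vcpu *vcpu)
{
    if (is_ree_vm(vcpu->vm)) {
        switch_to_tee(vcpu);    /* REE asks for a TEE service */
    } else {
        switch_to_ree(vcpu);    /* TEE is done with the REE service */
    }
    return 0;
}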
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
TEE is a secure VM which has its own partitioned resources, while
REE is a normal VM which owns the rest of the platform resources.
TEE, as the secure world, can see the memory of the REE VM, also
known as the normal world, but not the other way around.
Note, however, that TEE and REE can only see their own devices.
So this patch does the following things:
1. Walk the physical e820 table and add all system memory entries to the EPT.
2. Remove HV-owned memory.
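A sketch of the two steps, assuming the ept_add_mr()/ept_del_mr()
primitives and the hypervisor image accessors used here:

static void prepare_tee_ept(struct acrn_vm *vm)
{
    uint32_t i;

    /* 1. identity-map every RAM entry of the physical e820 */
    for (i = 0U; i < get_e820_entries_count(); i++) {
        const struct e820_entry *entry = get_e820_entry(i);

        if (entry->type == E820_TYPE_RAM) {
            ept_add_mr(vm, (uint64_t *)vm->arch_vm.nworld_eptp,
                       entry->baseaddr, entry->baseaddr,
                       entry->length, EPT_RWX | EPT_WB);
        }
    }
    /* 2. carve the hypervisor-owned range back out */
    ept_del_mr(vm, (uint64_t *)vm->arch_vm.nworld_eptp,
               get_hv_image_base(), CONFIG_HV_RAM_SIZE);
}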
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Given an e820, this API creates an identical (GPA == HPA) memory
mapping for the specified e820 memory type, EPT memory cache type and
access rights.
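The shape of such an API might be (name and signature assumed):

/* identity-map all e820 entries of the given type into the VM's EPT */
void prepare_vm_identical_memmap(struct acrn_vm *vm,
                                 uint16_t e820_entry_type,
                                 uint64_t prot_orig);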
Tracked-On: #6571
Signed-off-by: Jie Deng <jie.deng@intel.com>
Reviewed-by: Wang, Yu1 <yu1.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
With the current architecture design, a UUID is used to identify ACRN
VMs, and all VM configurations must be deployed with given UUIDs at
build time. For post-launched VMs, the end user must pass a UUID as an
acrn-dm parameter to launch the specified user VM. This is not
friendly for end users, who have to look up the pre-configured UUID
before launching a VM and can only launch a VM whose UUID is in the
pre-configured list; otherwise the launch fails. On the other hand, a
VM name is much more straightforward for end users to identify VMs,
whereas the VM name defined in the launch script has not been passed
to the hypervisor VM configuration, so it is not consistent with the
VM name shown when the user lists VMs in the hypervisor shell, which
confuses users a lot.
This patch resolves these issues by removing the UUID as the VM
identifier and using the VM name instead (see the sketch below):
1. The hypervisor checks for VM name duplication at VM creation time
to make sure each VM name is unique.
2. If the VM name passed from acrn-dm matches one of the pre-configured
VM configurations, the corresponding VM is launched;
we call it a statically configured VM.
If no match is found, the hypervisor tries to allocate one
unused VM configuration slot for this VM with the given VM name and gets it
running if the VM number does not exceed CONFIG_MAX_VM_NUM;
we call it a dynamically configured VM.
3. For dynamically configured VMs, we need a guest flag to identify them,
because the VM configuration must be destroyed
when such a VM is shut down or its creation fails.
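An illustrative sketch of the creation-time lookup; set_vm_name() is a
hypothetical helper:

static uint16_t lookup_or_alloc_vm_id(const char *name)
{
    uint16_t vm_id = get_vmid_by_name(name);    /* statically configured VM? */

    if (vm_id == ACRN_INVALID_VMID) {
        vm_id = get_unused_vmid();              /* dynamically configured VM */
        if (vm_id != ACRN_INVALID_VMID) {
            set_vm_name(vm_id, name);           /* hypothetical */
        }
    }
    return vm_id;   /* ACRN_INVALID_VMID: CONFIG_MAX_VM_NUM reached */
}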
v7->v8:
-- rename is_static_vm_configured to is_static_configured_vm
-- only set DM owned guest_flags in hcall_create_vm
-- add check dynamic flag in get_unused_vmid
v6->v7:
-- refine get_vmid_by_name, return the first matching vm_id
-- the GUEST_FLAG_STATIC_VM is added to identify the static or
dynamic VM, the offline tool will set this flag for
all the pre-defined VMs.
-- only clear name field for dynamic VM instead of clear entire
vm_config
Tracked-On: #6685
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Commit cbf3825 "hv: Pass-through IA32_TSC_AUX MSR to L1 guest"
lets the guest own the physical MSR IA32_TSC_AUX and no longer handles
this MSR in the hypervisor.
If multiple vCPUs share the same pCPU, when one vCPU reads MSR IA32_TSC_AUX,
it may get a value set by another vCPU.
To fix this issue, this patch does:
- initialize the MSR content to 0 for the given vCPU, which is consistent with
the value specified in SDM Vol3 "Table 9-1. IA-32 and Intel 64 Processor
States Following Power-up, Reset, or INIT"
- save/restore the MSR content for the given vCPU during context switch
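A sketch of the fix using the generic guest-MSR accessors (the exact
hook points may differ):

/* at vCPU init: content is 0 following power-up/reset (SDM Vol3 Table 9-1) */
vcpu_set_guest_msr(vcpu, MSR_IA32_TSC_AUX, 0UL);

/* switching this vCPU out: save what the guest last wrote */
vcpu_set_guest_msr(vcpu, MSR_IA32_TSC_AUX, msr_read(MSR_IA32_TSC_AUX));

/* switching it back in: restore the saved content */
msr_write(MSR_IA32_TSC_AUX, vcpu_get_guest_msr(vcpu, MSR_IA32_TSC_AUX));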
v1 -> v2:
* According to Table 9-1, the content of IA32_TSC_AUX MSR is unchanged
following INIT, v2 updates the initialization logic so that the content for
vCPU is consistent with SDM.
Tracked-On: #6799
Signed-off-by: Shiqing Gao <shiqing.gao@intel.com>
Reviewed-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rules C-TY-27 and C-TY-28, combined, require that
assignment and arithmetic operations shall be applied only to operands of the
same kind. This patch either adds explicit type casts or adjusts the types of
variables to align the types of operands.
The only semantic change introduced by this patch is the promotion of the
second argument of set_vmcs_bit() and clear_vmcs_bit() to
uint64_t (formerly uint32_t). This prevents clear_vmcs_bit() from accidentally
clearing the upper 32 bits of the requested VMCS field.
Other than that, this patch has no semantic change. Specifically, this patch
is not meant to fix buggy narrowing operations, only to make these
operations explicit.
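The promoted helpers would look roughly like this, assuming the
exec_vmread64()/exec_vmwrite64() accessors:

static inline void set_vmcs_bit(uint32_t vmcs_field, uint64_t bit)
{
    exec_vmwrite64(vmcs_field, exec_vmread64(vmcs_field) | bit);
}

static inline void clear_vmcs_bit(uint32_t vmcs_field, uint64_t bit)
{
    /* with a uint32_t 'bit', ~bit would zero-extend and clear the
     * upper 32 bits of a 64-bit field here */
    exec_vmwrite64(vmcs_field, exec_vmread64(vmcs_field) & ~bit);
}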
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-PP-04 requires that 'parentheses shall be used
when referencing a MACRO parameter'. This patch adds parentheses to macro
parameters or expressions that are not yet wrapped properly.
This patch has no semantic impact.
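A classic illustration of why the rule exists (the macro is made up):

#define DOUBLE_BAD(x)   (x * 2U)    /* non-compliant */
#define DOUBLE_GOOD(x)  ((x) * 2U)  /* compliant */

/* DOUBLE_BAD(a + b) expands to (a + b * 2U) -- not 2*(a+b) */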
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The coding guideline rule C-FN-09 requires that 'the formal parameter name
of a function shall be consistent'. This patch fixes two places where the
formal parameters are named differently in the declaration and the
definition. More specifically, the names in the declarations are replaced
with those in the definitions.
This patch has no semantic impact.
Tracked-On: #6776
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In lock instruction emulation, we use vcpu_make_request and
signal_event pairs to shoot down/release other vCPUs.
However, vcpu_make_request is asynchronous and does not guarantee the
execution of wait_event on the target vCPU, and we want wait_event to be
consistent with signal_event.
Consider the following scenarios:
1. When the target vCPU's state has not yet turned to VCPU_RUNNING,
a vcpu_make_request on ACRN_REQUEST_SPLIT_LOCK does not make sense and
will not result in a wait_event.
2. When the target vCPU has already been requested on
ACRN_REQUEST_SPLIT_LOCK (i.e., the corresponding bit in pending_req is
set) but has not yet handled it, the vcpu_make_request call does not
result in a wait_event, since one bit is not enough to cache multiple
requests.
This patch adds checks in vcpu_kick_lock_instr_emulation and
vcpu_complete_lock_instr_emulation to resolve these issues.
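An illustrative shape for the kick-side check (simplified; the exact
pending_req test is an assumption):

/* inside vcpu_kick_lock_instr_emulation(), simplified */
uint16_t i;
struct acrn_vcpu *other;

foreach_vcpu(i, vcpu->vm, other) {
    if ((other != vcpu) && (other->state == VCPU_RUNNING) &&
        !bitmap_test(ACRN_REQUEST_SPLIT_LOCK, &other->arch.pending_req)) {
        vcpu_make_request(other, ACRN_REQUEST_SPLIT_LOCK);
        /* only vCPUs actually kicked here are later signal_event()-ed */
    }
}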
Tracked-On: #6502
Signed-off-by: Yifan Liu <yifan1.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Rename gpa_uos to gpa_user_vm
rename base_gpa_in_uos to base_gpa_in_user_vm
rename UOS_VIRT_PCI_MMCFG_BASE to USER_VM_VIRT_PCI_MMCFG_BASE
rename UOS_VIRT_PCI_MMCFG_START_BUS to USER_VM_VIRT_PCI_MMCFG_START_BUS
rename UOS_VIRT_PCI_MMCFG_END_BUS to USER_VM_VIRT_PCI_MMCFG_END_BUS
rename UOS_VIRT_PCI_MEMBASE32 to USER_VM_VIRT_PCI_MEMBASE32
rename UOS_VIRT_PCI_MEMLIMIT32 to USER_VM_VIRT_PCI_MEMLIMIT32
rename UOS_VIRT_PCI_MEMBASE64 to USER_VM_VIRT_PCI_MEMBASE64
rename UOS_VIRT_PCI_MEMLIMIT64 to USER_VM_VIRT_PCI_MEMLIMIT64
rename UOS to User VM in comments.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Rename sos_vm to service_vm.
rename sos_vmid to service_vmid.
rename sos_vm_ptr to service_vm_ptr.
rename get_sos_vm to get_service_vm.
rename sos_vm_gpa to service_vm_gpa.
rename sos_vm_e820 to service_vm_e820.
rename sos_efi_info to service_vm_efi_info.
rename sos_vm_config to service_vm_config.
rename sos_vm_hpa2gpa to service_vm_hpa2gpa.
rename vdev_in_sos to vdev_in_service_vm.
rename create_sos_vm_e820 to create_service_vm_e820.
rename sos_high64_max_ram to service_vm_high64_max_ram.
rename prepare_sos_vm_memmap to prepare_service_vm_memmap.
rename post_uos_sworld_memory to post_user_vm_sworld_memory
rename hcall_sos_offline_cpu to hcall_service_vm_offline_cpu.
rename filter_mem_from_sos_e820 to filter_mem_from_service_vm_e820.
rename create_sos_vm_efi_mmap_desc to create_service_vm_efi_mmap_desc.
rename HC_SOS_OFFLINE_CPU to HC_SERVICE_VM_OFFLINE_CPU.
rename SOS to Service VM in comments.
Tracked-On: #6744
Signed-off-by: Liu Long <long.liu@linux.intel.com>
Reviewed-by: Geoffroy Van Cutsem <geoffroy.vancutsem@intel.com>
Implement the write_vcbm() function to handle write requests to the
MSR_IA32_type_MASK_n vCBM MSRs.
Call write_vclosid() to handle write requests to the MSR_IA32_PQR_ASSOC MSR.
Several vCAT P2V (physical to virtual) and V2P (virtual to physical)
mappings exist:
struct acrn_vm_config *vm_config = get_vm_config(vm_id)
max_pcbm = vm_config->max_type_pcbm (type: l2 or l3)
mask_shift = ffs64(max_pcbm)
vclosid = vmsr - MSR_IA32_type_MASK_0
pclosid = vm_config->pclosids[vclosid]
pmsr = MSR_IA32_type_MASK_0 + pclosid
pcbm = vcbm << mask_shift
vcbm = pcbm >> mask_shift
Where
MSR_IA32_type_MASK_n: L2 or L3 mask msr address for CLOSIDn, from
0C90H through 0D8FH (inclusive).
max_pcbm: a bitmask that selects all the physical cache ways assigned to the VM
vclosid: virtual CLOSID, always starts from 0
pclosid: corresponding physical CLOSID for a given vclosid
vmsr: virtual msr address, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pmsr: physical msr address
vcbm: virtual CBM, passed to vCAT handlers by the
caller functions rdmsr_vmexit_handler()/wrmsr_vmexit_handler()
pcbm: physical CBM
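Chaining those mappings together, the write path is roughly ('type'
stands for L2 or L3 as above; a sketch, not the exact handler):

vclosid = vmsr - MSR_IA32_type_MASK_0;
pclosid = vm_config->pclosids[vclosid];
pmsr    = MSR_IA32_type_MASK_0 + pclosid;
pcbm    = vcbm << ffs64(vm_config->max_type_pcbm);

msr_write(pmsr, pcbm);    /* forward the vCBM write to the physical MSR */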
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Implement the read_vcbm() and read_vclosid() functions to handle read
requests to the MSR_IA32_PQR_ASSOC and MSR_IA32_type_MASK_n vCAT MSRs.
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Expose the CAT feature to a vCAT VM by reporting the number of
cache ways/CLOSIDs via the 04H/10H CPUID leaves, so that the
VM can take advantage of CAT to prioritize and partition cache
resources for its own tasks.
Add the vcat_pcbm_to_vcbm() function to map a pcbm to a vcbm.
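A sketch of the mapping, following the vcbm = pcbm >> mask_shift
relation given earlier:

static uint64_t vcat_pcbm_to_vcbm(uint64_t pcbm, uint64_t max_pcbm)
{
    /* keep only the ways assigned to this VM, then shift down to bit 0 */
    return (pcbm & max_pcbm) >> ffs64(max_pcbm);
}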
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Initialize vCBM MSRs
Initialize vCLOSID MSR
Add some vCAT functions:
Retrieve max_vcbm and max_pcbm
Check if vCAT is configured or not for the VM
Map vclosid to pclosid
write_vclosid: vCLOSID MSR write handler
write_vcbm: vCBM MSR write handler
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
Initialize the emulated_guest_msrs[] array at runtime for the
MSR_IA32_type_MASK_n and MSR_IA32_PQR_ASSOC MSRs; there is no good
way to do this initialization statically at build time.
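An illustrative runtime loop; the slot base and count names are
hypothetical:

/* boot-time fill of the vCAT slots of emulated_guest_msrs[] */
uint32_t i;

for (i = 0U; i < NUM_VCAT_MASK_MSRS; i++) {
    emulated_guest_msrs[VCAT_MSR_SLOT_BASE + i] = MSR_IA32_type_MASK_0 + i;
}
emulated_guest_msrs[VCAT_MSR_SLOT_BASE + i] = MSR_IA32_PQR_ASSOC;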
Tracked-On: #5917
Signed-off-by: dongshen <dongsheng.x.zhang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
PR #6283 updated code and docs to the new kernel HSM driver. Fix
some references to VHM missed in the doxygen comments, and also fix
some misspellings in these files.
Tracked-On: #6282
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
In the current hypervisor, at most two legacy vuarts (COM1 and COM2)
are supported per VM: COM1 is usually configured as the VM console,
and COM2 as the communication channel for the S5 feature.
The hypervisor can support MAX_VUART_NUM_PER_VM (8) legacy vuarts, but
it only registers handlers for two of them, because of the assumption
that there are no more than two legacy vuarts.
In the current hypervisor configuration, I/O port 2F8H is always
allocated for the virtual COM2; this is not friendly if the user wants
to assign that port to the physical COM2.
A legacy vuart is a common communication channel between the Service
VM and a user VM; it can work in polling mode, and its driver exists
in each guest OS. The channel can be used to send a shutdown command
to a user VM for the S5 feature, so several vuarts need to be
configured for the Service VM and one vuart for each user VM.
The following changes are made to support up to MAX_VUART_NUM_PER_VM
legacy vuarts (see the sketch below):
- Refine legacy vuart initialization to register a PIO handler for
each related vuart.
- Update the assumption about the number of legacy vuarts.
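A sketch of the refined initialization; setup_vuart() stands in for
whatever registers the PIO handlers:

/* inside VM init, simplified */
uint8_t i;

for (i = 0U; i < MAX_VUART_NUM_PER_VM; i++) {
    if (vm_config->vuart[i].type == VUART_LEGACY_PIO) {
        setup_vuart(vm, i);    /* registers the PIO handlers for this vuart */
    }
}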
BTW, config tool updates for legacy vuarts will be made in a separate
patch.
v1-->v2:
Update commit message to make this patch's purpose clearer;
If vuart index is valid, register handler for it.
Tracked-On: #6652
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
This patch changes the size of the vvmcs[] array from 1 to
PER_VCPU_ACTIVE_VVMCS_NUM, and actually enables multiple active VMCS12
support in ACRN. The basic operation:
- If L1 VMPTRLDs a VMCS12 without first VMCLEARing the current
VMCS12, ACRN no longer unconditionally flushes the current VMCS12
back to L1. Instead, it tries to keep both the current and the newly
loaded VMCS12 in the nested->vvmcs[] array, unless:
- there is no vvmcs[] entry available, in which case ACRN flushes one
active VMCS12 to make room for the new one.
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
These dirty flags are supposed to be per VMCS12, so move them from the
per vCPU acrn_nested struct to the newly added acrn_vvmcs struct.
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This variable represents the L1 GPA of the current VMCS12. But it's
no longer needed in the multiple active VMCS12 case, which uses the
following variables for this purpose.
- nested->current_vvmcs refers to the vvmcs[] entry which contains the
cached current VMCS12, its associated VMCS02, and other context info.
- nested->current_vvmcs->vmcs12_gpa refers to the L1 GPA of this
current VMCS12.
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add an array of struct acrn_vvmcs to struct acrn_nested, making it
possible to cache multiple active VMCS12s.
This patch declares the size of this array as 1, meaning that there is
only one active VMCS12. This is to minimize the logical code changes.
Also add the pointer current_vvmcs to struct acrn_nested, which refers
to the current vvmcs[] entry. In this patch, if any VMCS12 is active,
it always points to vvmcs[0].
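A sketch of the resulting layout (field set abbreviated; names as
described above):

struct acrn_vvmcs {
    uint8_t vmcs02[PAGE_SIZE];    /* VMCS02 backing this VMCS12 */
    uint8_t vmcs12[PAGE_SIZE];    /* cached copy of the guest VMCS12 */
    uint64_t vmcs12_gpa;          /* L1 GPA of this VMCS12 */
};

struct acrn_nested {
    struct acrn_vvmcs vvmcs[1];         /* later PER_VCPU_ACTIVE_VVMCS_NUM */
    struct acrn_vvmcs *current_vvmcs;   /* NULL when no VMCS12 is active */
};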
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
By changing the way L1 VPIDs are assigned from bottom-up to top-down,
the chance of VPID conflicts between L1 and L2 guests becomes small.
We can then flush the VPID only in case of a conflict.
Tracked-On: #6289
Signed-off-by: Anthony Xu <anthony.xu@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
At run time, it is rare for L1 to write to the intercepted non-host-state
VMCS fields, so using multiple dirty flags is not necessary.
This patch uses one single dirty flag to manage all non-host-state VMCS
fields. This helps to simplify the current code, and in the future we may
not need to declare new dirty flags when we intercept more VMCS fields.
Tracked-On: #5923
Signed-off-by: Zide Chen <zide.chen@intel.com>
is_lapic_pt_enabled() is called at least twice in one loop of the vCPU
thread, and it is called frequently in vmexit_handler() if the LAPIC is
not passed through. Thus the efficiency of this function has a direct
impact on system performance.
Since the LAPIC mode does not change at run time, we do not have to
calculate it on the fly in is_lapic_pt_enabled().
BTW, remove the unused lapic_mask from struct acrn_vcpu_arch.
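The cached form is essentially (the field name is an assumption):

static inline bool is_lapic_pt_enabled(struct acrn_vcpu *vcpu)
{
    /* computed once when the vLAPIC mode is switched, not per call */
    return vcpu->arch.lapic_pt_enabled;
}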
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Use an unused MSR on the host to save the ACRN pCPU ID, avoiding the
save and restore of the TSC AUX MSR on VMX transitions.
Tracked-On: #6289
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
- Remove vcpu->arch.nrexits, which is unused.
- Record the full 32 bits of exit_reason to TRACE_2L(), making the code
simpler.
Tracked-On: #6289
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This helps to improve performance:
- No need to execute VMREAD in vcpu_get_efer(), which is called
frequently.
- VMX_EXIT_CTLS_SAVE_EFER can be removed from the VM-Exit controls.
- If the value of the IA32_EFER MSR is identical between the host and
the guest (highly likely), adjust the VMX controls not to load
IA32_EFER on VM exit and VM entry.
It is convenient to continue using the existing vcpu_s/get_efer() APIs,
rather than the common vcpu_s/get_guest_msr().
Tracked-On: #6289
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Remove the ACPI loading function from elf_loader, rawimage_loader and
bzimage_loader, and call it in one place in vm_sw_loader.
Now vm_sw_loader's job is more than just loading software, so rename it
to prepare_os_image.
Tracked-On: #6323
Signed-off-by: Zhou, Wu <wu.zhou@intel.com>
Reviewed-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
ACRN can run without the XSAVE capability, so remove the XSAVE
dependence to support more (hardware or virtual) platforms.
Tracked-On: #6287
Signed-off-by: Fei Li <fei1.li@intel.com>
When a guest kernel has multiple loading segments, as an ELF format
image does, defining a single load address in the sw_kernel_info struct
is meaningless.
This patch removes the kernel_load_addr member from struct
sw_kernel_info; the load address should instead be parsed when
processing each specific image format.
Tracked-On: #6323
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Because the emulation code is for both split-lock and uc-lock,
rename splitlock.c/splitlock.h to lock_instr_emul.c/lock_instr_emul.h
Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Because the emulation code is for both split-lock and uc-lock, change
these API names:
vcpu_kick_splitlock_emulation() -> vcpu_kick_lock_instr_emulation()
vcpu_complete_splitlock_emulation() -> vcpu_complete_lock_instr_emulation()
emulate_splitlock() -> emulate_lock_instr()
Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When ACRN uses decode_instruction() to emulate a split-lock/uc-lock
instruction, it is actually a try-decode to see whether the
instruction is XCHG.
If it is an XCHG instruction, ACRN must emulate it (injecting #PF if
one is triggered) with the peer vCPUs paused, and advance the guest
IP. If it is a LOCK-prefixed instruction accessing UC memory, ACRN
halts the peer vCPUs, advances the IP to skip the LOCK prefix, and
then lets the vCPU execute one instruction by enabling the IRQ-window
VM exit. For other cases, ACRN injects the exception back to the vCPU
without emulating it.
So change the API to decode_instruction(vcpu, bool full_decode). When
full_decode is true, the API does the same thing as before. When
full_decode is false, the difference is that when decode_instruction()
meets an unknown instruction, it still returns -1 but does not inject
#UD. We can use this to distinguish the case where a #UD has been
skipped and an #AC/#GP needs to be injected back instead.
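Typical use in the lock-instruction path then looks like this (sketch):

int32_t ret = decode_instruction(vcpu, false);    /* try-decode: no #UD on unknown */

if (ret >= 0) {
    /* decoded successfully (e.g. XCHG): emulate it with peers paused */
} else {
    /* unknown instruction: #UD was not injected, so the caller can
     * re-inject the original #AC/#GP instead */
}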
Tracked-On: #6299
Signed-off-by: Tao Yuhong <yuhong.tao@intel.com>
The API searches the ve820 table and returns a valid GPA when the
requested size of memory is available in the specified memory range,
or returns INVALID_GPA if the requested memory slot is not available.
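The shape of the API (signature as assumed here):

uint64_t find_space_from_ve820(struct acrn_vm *vm, uint32_t size,
                               uint64_t min_addr, uint64_t max_addr);
/* returns a GPA where 'size' bytes fit inside [min_addr, max_addr],
 * or INVALID_GPA if no such slot exists */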
Tracked-On: #5626
Signed-off-by: Victor Sun <victor.sun@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
get_ept_entry() actually returns the EPTP of a VM. So rename it to
get_eptp() for readability.
Tracked-On: #5923
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
With shadow EPT, the hypervisor walks through the guest EPT table:
* If the entry is not present in the guest EPT, ACRN injects
EPT_VIOLATION to the L1 VM and resumes to the L1 VM.
* If the entry is present in the guest EPT, do the EPT_MISCONFIG
check; inject EPT_MISCONFIG to the L1 VM if the check fails.
* If the entry is present in the guest EPT, do the permission check;
reflect EPT_VIOLATION to the L1 VM if the check fails.
* If the entry is present in the guest EPT but the shadow EPT entry is
not present, create the shadow entry and resume to the L2 VM.
* If the entry is present in the guest EPT but the GPA in the entry is
invalid, inject EPT_VIOLATION to the L1 VM and resume to the L1 VM.
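Condensed into pseudo-C, with all helpers illustrative:

if (!guest_ept_present(gpte)) {
    inject_to_l1(EPT_VIOLATION);
} else if (guest_ept_misconfig(gpte)) {
    inject_to_l1(EPT_MISCONFIG);
} else if (!guest_ept_permits(gpte, access)) {
    inject_to_l1(EPT_VIOLATION);           /* reflect to L1 */
} else if (!guest_gpa_valid(gpte)) {
    inject_to_l1(EPT_VIOLATION);
} else if (!shadow_ept_present(spte)) {
    build_shadow_entry(spte, gpte);
    resume_l2();
}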
Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
'struct nept_desc' is used to associate a guest EPTP with a shadow EPTP.
It is created on first reference and freed once there are no references
left. The life cycle looks like this:
When the guest VMCS VMX_EPT_POINTER_FULL is changed, the 'struct
nept_desc' of the new guest EPTP is referenced and the 'struct
nept_desc' of the old guest EPTP is dereferenced.
When a guest VMCS is cleared (by VMCLEAR in the L1 VM), the 'struct
nept_desc' of the old guest EPTP is dereferenced.
When a new guest VMCS is loaded (by VMPTRLD in the L1 VM), the 'struct
nept_desc' of the new guest EPTP is referenced and the 'struct
nept_desc' of the old guest EPTP is dereferenced.
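The reference management, sketched with assumed helper names:

struct nept_desc *get_nept_desc(uint64_t guest_eptp);  /* ref++, allocate on first use */
void put_nept_desc(uint64_t guest_eptp);               /* ref--, free when it hits zero */

/* e.g. on a write to VMX_EPT_POINTER_FULL: */
new_desc = get_nept_desc(new_eptp);
put_nept_desc(old_eptp);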
Tracked-On: #5923
Signed-off-by: Sainath Grandhi <sainath.grandhi@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Signed-off-by: Shuo A Liu <shuo.a.liu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>