- Add PAGING_REQUEST_TYPE_MODIFY_MT memory map request type
- Update map_mem_region() to allow modifying the memory-type-related
  fields in a page table entry (see the sketch below)
- Add ept_update_mt()
- Add modify_mem_mt() for both EPT and MMU
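A minimal sketch of the kind of entry update this enables, assuming the
memory-type field of a leaf EPT entry sits in bits 5:3 (SDM Vol 3); the
macro and helper names here are illustrative, not the hypervisor's actual API.

    #include <stdint.h>

    /* memory-type field of a leaf EPT entry occupies bits 5:3 */
    #define EPT_MT_SHIFT   3U
    #define EPT_MT_MASK    (0x7UL << EPT_MT_SHIFT)

    /* return a copy of 'entry' with its memory-type field replaced by 'mt' */
    static inline uint64_t ept_entry_set_mt(uint64_t entry, uint64_t mt)
    {
        return (entry & ~EPT_MT_MASK) | ((mt << EPT_MT_SHIFT) & EPT_MT_MASK);
    }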
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Enable the VMX VPID control and assign a unique VPID to each vcpu
so that VMX transitions are not required to invalidate any
linear mappings or combined mappings.
SDM Vol 3 - 28.3.3.3
If EPT is in use, the logical processor associates all mappings
it creates with the value of bits 51:12 of current EPTP.
If a VMM uses different EPTP values for different guests, it may
use the same VPID for those guests. Doing so cannot result in one
guest using translations that pertain to the other.
In our UOS, the trusty world and the normal world use different
EPTPs, so we can use the same VPID for both.
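A minimal sketch of per-vcpu VPID assignment under these rules, keeping
VPID 0 reserved for the VMM itself; allocate_vpid() and the omitted
exhaustion handling are assumptions, and the returned value would be
written to the VMCS VPID field.

    #include <stdint.h>

    static uint16_t next_vpid = 1U;   /* VPID 0 is reserved for the VMM */

    /* hand out a unique non-zero 16-bit VPID for each newly created vcpu */
    static uint16_t allocate_vpid(void)
    {
        /* exhaustion/wrap-around handling omitted in this sketch */
        return next_vpid++;
    }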
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We need to know which TLB to flush: EPT or VPID.
1. Error handling for invept: it is the same as the invvpid error
   handling; rename it to be consistent with the vpid naming (see the
   sketch below).
2. Rename the macro for the flush-EPT-TLB request.
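A rough sketch of the shared error decoding, assuming the same CF/ZF
convention as invvpid (CF set means VMfailInvalid, ZF set means
VMfailValid); the descriptor layout follows the SDM, while the macro and
wrapper names here are illustrative.

    #include <stdint.h>

    struct invept_desc {
        uint64_t eptp;
        uint64_t reserved;
    };

    /* decode CF/ZF into an error code, as the invvpid wrapper does:
     * 0 = success, 1 = VMfailInvalid (CF), 2 = VMfailValid (ZF) */
    #define VMFAIL_INVALID_EPT_VPID        \
        "       jnc 1f\n"                  \
        "       mov $1, %0\n"              \
        "       jmp 3f\n"                  \
        "1:     jnz 2f\n"                  \
        "       mov $2, %0\n"              \
        "       jmp 3f\n"                  \
        "2:     mov $0, %0\n"              \
        "3:"

    static inline int asm_invept(uint64_t type, struct invept_desc desc)
    {
        int error;

        asm volatile ("invept %1, %2\n"
                      VMFAIL_INVALID_EPT_VPID
                      : "=r" (error)
                      : "m" (desc), "r" (type)
                      : "memory", "cc");
        return error;
    }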
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to the comment in hypervisor.h:
" This file includes config header file "bsp_cfg.h" and other
hypervisor used header files.
It should be included in all the source files."
this patch includes all common header files in hypervisor.h
and then removes the now-redundant inclusions elsewhere.
Signed-off-by: Zide Chen <zide.chen@intel.com>
Check for EPT RWX misconfigurations when configuring memory attributes;
assert if a misconfiguration is found (a minimal sketch follows).
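A minimal sketch of such a check, based on the SDM rule that an EPT entry
with the write bit set but the read bit clear is a misconfiguration; the
bit macros and the assert-style failure are illustrative.

    #include <stdint.h>
    #include <assert.h>

    #define IA32E_EPT_R_BIT   (1UL << 0)
    #define IA32E_EPT_W_BIT   (1UL << 1)

    /* writable-but-not-readable EPT permissions are a misconfiguration */
    static void check_ept_rwx_attr(uint64_t attr)
    {
        if ((attr & IA32E_EPT_W_BIT) != 0UL) {
            assert((attr & IA32E_EPT_R_BIT) != 0UL);
        }
    }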
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently config_page_table_attr() treats MMU_MEM_ATTR_READ exactly like
MMU_MEM_ATTR_BIT_READ_WRITE for PTT_HOST, so the R/W bit in the PTE is set
even when MMU_MEM_ATTR_WRITE is not requested (see the sketch below).
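A sketch of the intended behavior; MMU_MEM_ATTR_READ/WRITE follow the
commit text, while the PTE bit macros and helper are placeholders: the
R/W bit is only set when write access is actually requested.

    #include <stdint.h>

    #define MMU_MEM_ATTR_READ    (1U << 0)
    #define MMU_MEM_ATTR_WRITE   (1U << 1)
    #define PTE_PRESENT_BIT      (1UL << 0)
    #define PTE_RW_BIT           (1UL << 1)

    static uint64_t host_pte_attr(uint32_t flags)
    {
        uint64_t attr = 0UL;

        if ((flags & MMU_MEM_ATTR_READ) != 0U) {
            attr |= PTE_PRESENT_BIT;              /* readable: present only */
        }
        if ((flags & MMU_MEM_ATTR_WRITE) != 0U) {
            attr |= PTE_PRESENT_BIT | PTE_RW_BIT; /* writable: present + R/W */
        }
        return attr;
    }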
Signed-off-by: Zide Chen <zide.chen@intel.com>
For now, just add some basic feature/capability detection (not all of it).
vAPIC detection is not added here, because if vAPIC support were mandatory,
the code paths for the non-vAPIC case (such as MMIO APIC read/write
handling) would have to be removed.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Change its input from map_params to page_table_type, and make it a
public API.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
The code used to reference the physical addresses of devices such as the
lapic, ioapic, vtd, and uart; switch it to use virtual addresses.
Use the physical address of the PML4 table when writing CR3 (see the
sketch below).
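A minimal sketch of the CR3 write described above, assuming an HVA2HPA()
translation helper like the one used elsewhere in this series; the wrapper
name is illustrative.

    #include <stdint.h>

    uint64_t HVA2HPA(void *hva);   /* assumed address-translation helper */

    /* CR3 must hold the *physical* address of the PML4 table */
    static inline void load_pml4(void *pml4_hva)
    {
        uint64_t cr3 = HVA2HPA(pml4_hva);

        asm volatile ("mov %0, %%cr3" : : "r" (cr3) : "memory");
    }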
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Since we collect CPU features/capabilities into boot_cpu_data during boot
initialization, there is no need to query them with cpuid again.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
When the host MMU is updated, the TLB and paging-structure caches should be
invalidated. Currently no MMU update is done after any AP has started, so
the simplest approach (avoiding a TLB shootdown) is to just do invlpg on the
BSP (see the sketch below).
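A minimal sketch of the per-address invalidation used on the BSP; the
wrapper name is illustrative.

    #include <stdint.h>

    /* invalidate the TLB entry for one linear address on the current CPU */
    static inline void mmu_invlpg(uint64_t addr)
    {
        asm volatile ("invlpg (%0)" : : "r" (addr) : "memory");
    }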
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
- Change the input parameter of check_page_table_present from struct
  map_params to page_table_type
- Check for EPT present-bit misconfigurations in check_page_table_present
- Rename the variable "table_present" to the more suitable "entry_present"
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Check the IA32_VMX_EPT_VPID_CAP MSR to see whether the EPT execute-only
capability is supported (see the sketch below).
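A minimal sketch of the check, based on SDM Appendix A.10 where bit 0 of
IA32_VMX_EPT_VPID_CAP reports support for execute-only EPT translations;
msr_read() is an assumed accessor.

    #include <stdint.h>
    #include <stdbool.h>

    #define MSR_IA32_VMX_EPT_VPID_CAP   0x48CU
    #define VMX_EPT_EXECUTE_ONLY        (1UL << 0)

    uint64_t msr_read(uint32_t msr);   /* assumed MSR read helper */

    static bool cpu_has_ept_execute_only(void)
    {
        return (msr_read(MSR_IA32_VMX_EPT_VPID_CAP) & VMX_EPT_EXECUTE_ONLY) != 0UL;
    }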
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
eptp should be recorded as a PA.
This patch changes nworld_eptp, sworld_eptp and the m2p eptp to the PA type;
the necessary HPA2HVA/HVA2HPA translations are applied after the change.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Reads/writes of page table entries should use a VA, defined as "void *"
- The address data stored in page table entries should use a PA, defined as
  "uint64_t" (see the sketch below)
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
All callers of fetch_page_table_offset should already make sure it cannot
be reached with an unknown table_level, so just panic here.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
- walk_paging_struct should return sub_table_addr; if something goes wrong,
  it returns NULL
- update_page_table_entry should return adjusted_size; if something goes
  wrong, it returns 0
The change matters for the release build, where the ASSERT() in
walk_paging_struct is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Remove the unused map_params parameter in get_table_entry
- Add error returns for both; this matters for the release build, where
  the ASSERT() in get_table_entry is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
break_page_table should return next_level_page_size; if something goes
wrong, it returns 0.
The change matters for the release build, where the ASSERT() in
break_page_table is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
map_mem_region should return mapped_size; if something goes wrong, it
returns 0.
The change matters for the release build, where the ASSERT() in
map_mem_region is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add error returns throughout; this matters for the release build, where
the ASSERT() in modify_paging is a no-op.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add the check_continuous_hpa API:
when creating the secure world, assert if the backing host physical
addresses are not contiguous (see the sketch below).
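A rough sketch of such a check, assuming a gpa2hpa() lookup, 4 KiB pages
and an ASSERT-style helper; struct vm and the exact signatures are
assumptions here.

    #include <stdint.h>

    #define CPU_PAGE_SIZE   4096UL

    struct vm;                                      /* opaque in this sketch */
    uint64_t gpa2hpa(struct vm *vm, uint64_t gpa);  /* assumed GPA->HPA lookup */
    void ASSERT(int cond, const char *msg);         /* assumed assert helper */

    /* assert that [gpa, gpa + size) is backed by contiguous host memory */
    static void check_continuous_hpa(struct vm *vm, uint64_t gpa, uint64_t size)
    {
        uint64_t prev_hpa = gpa2hpa(vm, gpa);

        for (uint64_t off = CPU_PAGE_SIZE; off < size; off += CPU_PAGE_SIZE) {
            uint64_t hpa = gpa2hpa(vm, gpa + off);

            ASSERT(hpa == (prev_hpa + CPU_PAGE_SIZE), "hpa is not continuous");
            prev_hpa = hpa;
        }
    }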
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
--Add a free_paging_struct API, used to free page tables;
  it clears the memory before freeing it (see the sketch below).
--Add the HPA2HVA translation when freeing EPT memory
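A minimal sketch of the scrub-before-free behavior; the allocator
interface and the CPU_PAGE_SIZE used here are assumptions.

    #include <stddef.h>
    #include <stdlib.h>
    #include <string.h>

    #define CPU_PAGE_SIZE   4096UL

    /* clear a page-table page before handing it back to the allocator */
    static void free_paging_struct(void *ptr)
    {
        if (ptr != NULL) {
            (void)memset(ptr, 0, CPU_PAGE_SIZE);
            free(ptr);
        }
    }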
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add 'CPU_PAGE_MASK', used for address calculation.
Change IA32E_REF_MASK from 0x7ffffffffffff000 to 0x000ffffffffff000.
In an MMU/EPT entry, bits 62:52 are ignored and bit 63 is VE/XD;
to obtain the address from an MMU/EPT entry, bits 63:52 must be cleared
with IA32E_REF_MASK (see the sketch below).
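A small sketch of how the masks are used to recover the referenced
physical address from an entry; IA32E_REF_MASK follows the commit text,
while the exact CPU_PAGE_MASK value shown is an assumption.

    #include <stdint.h>

    #define CPU_PAGE_MASK    0xFFFFFFFFFFFFF000UL   /* align an address down to 4 KiB */
    #define IA32E_REF_MASK   0x000FFFFFFFFFF000UL   /* keep address bits 51:12 */

    /* strip flag bits 63:52 and 11:0 to get the referenced physical address */
    static inline uint64_t entry_to_paddr(uint64_t entry)
    {
        return entry & IA32E_REF_MASK;
    }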
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
This patch prepares for enabling the secure world feature.
This API creates a new EPTP for the secure world, whose PDPT entries are
copied from the normal world. The PML4/PDPT of the secure world are
separate from the normal world's, while the PDs/PTs are shared between the
secure world's EPT and the normal world's EPT. The secure world can access
the normal world's memory, but the normal world cannot access the secure
world's memory.
This function implements the following (a rough sketch follows the list):
-- Unmap specific memory from the guest EPT mapping
-- Copy the PDPT from the normal world to the secure world
-- Map specific memory for the secure world
-- Unmap specific memory from the SOS EPT mapping
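A very rough sketch of the PML4/PDPT separation and PD/PT sharing
described above, showing only the "copy PDPT from normal world" step;
alloc_paging_struct(), the translation helpers and the field names are
assumptions based on other commits in this series.

    #include <stdint.h>
    #include <string.h>

    #define CPU_PAGE_SIZE    4096UL
    #define IA32E_REF_MASK   0x000FFFFFFFFFF000UL

    void *HPA2HVA(uint64_t hpa);        /* assumed translation helpers */
    uint64_t HVA2HPA(void *hva);
    void *alloc_paging_struct(void);    /* assumed page-table page allocator */

    struct vm_arch { uint64_t nworld_eptp; uint64_t sworld_eptp; };

    static void create_sworld_ept_sketch(struct vm_arch *arch)
    {
        uint64_t *nworld_pml4 = (uint64_t *)HPA2HVA(arch->nworld_eptp);
        uint64_t *nworld_pdpt = (uint64_t *)HPA2HVA(nworld_pml4[0] & IA32E_REF_MASK);
        uint64_t *sworld_pml4 = (uint64_t *)alloc_paging_struct();
        uint64_t *sworld_pdpt = (uint64_t *)alloc_paging_struct();

        /* the secure world gets private PML4/PDPT pages; copying the PDPT
         * entries keeps the PDs/PTs they reference shared with normal world */
        (void)memcpy(sworld_pdpt, nworld_pdpt, CPU_PAGE_SIZE);

        /* the new top-level entry keeps the normal-world attributes but
         * points to the secure world's own PDPT */
        sworld_pml4[0] = HVA2HPA(sworld_pdpt) | (nworld_pml4[0] & ~IA32E_REF_MASK);

        arch->sworld_eptp = HVA2HPA(sworld_pml4);
    }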
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Add a key info structure.
Add sworld_eptp in the vm structure, and rename ept to nworld_eptp.
Add a secure world control structure.
Tracked-On: 220921
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>