- For UEFI boot, allocate memory for trampoline code in ACRN EFI,
and pass the pointer to HV through efi_ctx
- Correct LOW_RAM_SIZE and LOW_RAM_START in Kconfig and bsp_cfg.h
- use trampline_start16_paddr instead of the hardcoded
CONFIG_LOW_RAM_START for initial guest GDT and page tables
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
In the real mode part, add extra pointers for the page tables and the long
jump buffer, so the HV code can patch in the relocation offset.
In the long mode part, use absolute addressing when referring to HV symbols,
and relative addressing for symbols within the trampoline code.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined.
e820_alloc_low_memory() is used for the other cases.
In either case, the allocated memory will be marked with E820_TYPE_RESERVED.
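For illustration, a minimal sketch of the selection (the wrapper name and
exact signatures here are assumptions, not the actual implementation):

  /* Hypothetical wrapper: pick the allocator based on the boot path; the
   * chosen allocator reserves the range as E820_TYPE_RESERVED so the guest
   * will not reuse it. */
  static uint64_t alloc_trampoline_low_mem(uint32_t size)
  {
      uint64_t paddr;

  #ifdef CONFIG_EFI_STUB
      paddr = emalloc_for_low_mem(size);      /* UEFI boot: allocate in ACRN EFI */
  #else
      paddr = e820_alloc_low_memory(size);    /* other boot: scan E820 at run time */
  #endif
      return paddr;
  }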
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
- vm_gva2gpa is the same as gva2gpa, so replace it with gva2gpa directly.
- Remove the dead usage of vm_gva2gpa in emulate_movs.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are data transfers between guest virtual address space (GVA) and the
hv (HVA), for example guest rip fetching during instruction decoding.
A GVA range is contiguous, but its GPA may only be contiguous within a 4K
page. This patch adds copy_from_gva & copy_to_gva functions that walk the
guest page tables, so accesses do not break across page boundaries.
v2:
- modify API interface based on new gva2gpa function, err_code added
- combine similar code with inline function _copy_gpa
- change API name from vcopy_from/to_vm to copy_from/to_gva
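A rough sketch of the page-walk loop behind copy_from_gva (the helper names
and exact parameters below are assumptions; gva2gpa with err_code is the
interface described above):

  /* Copy 'size' bytes from a guest virtual address one 4K page at a time,
   * since the GVA range may map to non-contiguous GPAs. Sketch only. */
  int copy_from_gva(struct vcpu *vcpu, void *h_ptr, uint64_t gva,
                    uint32_t size, uint32_t *err_code)
  {
      uint64_t gpa;
      uint32_t off, len;

      while (size > 0U) {
          if (gva2gpa(vcpu, gva, &gpa, err_code) != 0)
              return -1;                        /* caller may inject #PF */

          off = (uint32_t)(gva & 0xFFFUL);
          len = ((4096U - off) < size) ? (4096U - off) : size;
          _copy_gpa(vcpu->vm, h_ptr, gpa, len, 1);   /* 1 = copy from guest */

          gva += len;
          h_ptr = (uint8_t *)h_ptr + len;
          size -= len;
      }
      return 0;
  }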
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch drops "#include <bsp_cfg.h>" and includes the generated config.h
via CFLAGS for the configuration data.
Also make sure that all configuration data have the 'CONFIG_' prefix.
v4 -> v5:
* No changes.
v3 -> v4:
* Add '-include config.h' to hypervisor/bsp/uefi/efi/Makefile.
* Update comments mentioning bsp_cfg.h.
v2 -> v3:
* Include config.h on the command line instead of in any header or source to
avoid including config.h multiple times.
* Add config.h as an additional dependency for source compilation.
v1 -> v2:
* No changes.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Zhao Yakui <yakui.zhao@intel.com>
Cleanup "cpu_secondary_xx" in the symbols/section/functions/variables
name in trampline code.
There is item left: the default C entry is Ap start c entry. Before
ACRN enter S3, the c entry will be updated to high level S3 C entry.
So s3 resume will go s3 resume path instead of AP startup path.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
Linux commit edfe63ec97ed ("x86/mtrr: Fix Xorg crashes in Qemu sessions")
disables the PAT feature if MTRR is not enabled. This patch does partial
emulation of MTRR to prevent this from happening: fixed-range MTRRs are
enabled and variable-range MTRRs are disabled.
By default the IA32_PAT MSR (SDM Vol3 11.12.4, Table 11-12) doesn't include
the 'WC' type. If MTRR is disabled in the guest, Linux doesn't allow
writing the IA32_PAT MSR, so the WC type can't be enabled. This creates
performance issues for certain applications that rely on the WC memory type.
Implementation summary:
- Enable MTRR feature: MTRRdefType.E=1
- Enable fixed range MTRRs: MTRRCAP.fix=1, MTRRdefType.FE=1
- For simplicity, disable variable-range MTRRs: MTRRCAP.vcnt=0.
The guests are expected to honor this field and not change the guest
memory type through variable-range MTRRs.
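A sketch of what the guest would then read back (bit layouts per the SDM;
the handler name and the MSR constant names are assumptions):

  /* Hypothetical vMTRR MSR-read handling reflecting the summary above. */
  #define MTRRCAP_FIX       (1UL << 8)    /* fixed-range MTRRs supported */
  #define MTRRDEFTYPE_FE    (1UL << 10)   /* fixed-range MTRRs enabled   */
  #define MTRRDEFTYPE_E     (1UL << 11)   /* MTRRs enabled               */

  uint64_t vmtrr_get_msr(uint32_t msr)
  {
      switch (msr) {
      case MSR_IA32_MTRR_CAP:
          /* fix=1, vcnt=0: fixed ranges only, no variable ranges */
          return MTRRCAP_FIX;
      case MSR_IA32_MTRR_DEF_TYPE:
          /* E=1, FE=1, default memory type WB (6) */
          return MTRRDEFTYPE_E | MTRRDEFTYPE_FE | 6UL;
      default:
          return 0UL;
      }
  }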
Signed-off-by: bliu11 <baohong.liu@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Add PAGING_REQUEST_TYPE_MODIFY_MT memory map request type
- Update map_mem_region() to allow modifying the memory type related
fields in a page table entry
- Add ept_update_mt()
- add modify_mem_mt() for both EPT and MMU
Signed-off-by: Zide Chen <zide.chen@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Export start_cpus to start/online APs.
- Add stop_cpus to offline APs.
- Update cpu_dead to decrement the running cpus count and do the cleanup
when an AP goes down.
Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
Change the need_scheduled field of the schedule context to flags because
it's not only used for the need_schedule check.
Add two functions to request/handle cpu offline.
The cpu offline request is handled only in the idle thread because we must
first pause the vcpu running on the target pcpu; the target pcpu can
therefore only pick up the offline request from its idle thread.
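A sketch of the request/handle pair described above (the flag bit and the
helper names are placeholders, not the actual implementation):

  #define NEED_OFFLINE_BIT   1U    /* assumed bit in the schedule-context flags */

  /* Request side: another pcpu asks 'pcpu_id' to go offline. */
  void make_pcpu_offline(uint16_t pcpu_id)
  {
      set_bit(NEED_OFFLINE_BIT, &get_sched_ctx(pcpu_id)->flags);
      kick_pcpu(pcpu_id);    /* e.g. send an IPI so the idle loop re-checks flags */
  }

  /* Handle side: polled only from the idle thread, because the vcpu on the
   * target pcpu has already been paused, so idle is where the request lands. */
  bool need_offline(uint16_t pcpu_id)
  {
      return test_and_clear_bit(NEED_OFFLINE_BIT, &get_sched_ctx(pcpu_id)->flags);
  }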
Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
To handle cpu down/up dynamically, acrn needs to support turning vmx
off/on dynamically. The following changes are introduced:
vmx_off is used when taking an AP down. It does:
- vmclear the mapped vcpu
- turn off vmx.
exec_vmxon_instr is updated to handle both initial startup and bringing an
AP back up. It does:
- if vmx was already on on the AP, load the saved vmxon_region; otherwise
allocate a vmxon_region.
- if there is a mapped vcpu, vmptrld the mapped vcpu.
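Roughly, in sketch form (the per-cpu storage helpers and the vcpu-lookup
naming here are assumptions):

  /* AP-down path: clear the loaded VMCS, then leave VMX operation. */
  void vmx_off(uint16_t pcpu_id)
  {
      struct vcpu *vcpu = get_ever_run_vcpu(pcpu_id);   /* mapped vcpu, if any */

      if (vcpu != NULL)
          exec_vmclear(&vcpu->arch_vcpu.vmcs_hpa);
      exec_vmxoff();
  }

  /* (Re)start path: reuse the saved VMXON region if this pcpu has been through
   * VMXON before, then reload the mapped vcpu's VMCS if there is one. */
  void exec_vmxon_instr(uint16_t pcpu_id)
  {
      struct vcpu *vcpu;

      if (per_cpu_vmxon_region(pcpu_id) == 0UL)
          per_cpu_vmxon_region(pcpu_id) = alloc_vmxon_region();
      exec_vmxon(per_cpu_vmxon_region(pcpu_id));

      vcpu = get_ever_run_vcpu(pcpu_id);
      if (vcpu != NULL)
          exec_vmptrld(&vcpu->arch_vcpu.vmcs_hpa);
  }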
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
For performance reasons, we don't flush the vcpu context when doing
vcpu switching (because it only switches between a vcpu and idle).
But when entering S3, we need to call vmclear against all vcpus attached
to APs, so we need to know which vcpu is attached to which pcpu.
This patch introduces an API to get the vcpu mapped to a specific pcpu.
Signed-off-by: Yin Fegnwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
current_vcpu is not correct when there are multiple vcpus in one VM,
so using it is incorrect; remove it.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Retrieve the dseed from the SeedList HOB (Hand-Off Block).
SBL passes the SeedList HOB to ACRN via MBI modules.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Reviewed-by: Zhu Bing <bing.zhu@intel.com>
Reviewed-by: Wang Kai <kai.z.wang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The current implementation of per_cpu relies on several non-C99 features,
and in addition involves arbitrary pointer arithmetic which is not MISRA C
friendly.
This patch introduces struct per_cpu_region which holds all the per_cpu
variables. Allocation of per_cpu data regions and access to per_cpu
variables are greatly simplified, at the cost of making all per_cpu
variables accessible in all files.
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Enable the VMX VPID ctrl and assign a unique vpid to each vcpu
so that VMX transitions are not required to invalidate any
linear mappings or combined mappings.
SDM Vol 3 - 28.3.3.3
If EPT is in use, the logical processor associates all mappings
it creates with the value of bits 51:12 of current EPTP.
If a VMM uses different EPTP values for different guests, it may
use the same VPID for those guests. Doing so cannot result in one
guest using translations that pertain to the other.
In our UOS, the trusty world and the normal world use different
EPTPs, so we can use the same VPID for them.
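One simple collision-free assignment satisfying this, as a sketch (the
formula and MAX_VCPUS_PER_VM are assumptions; any non-zero, per-vcpu-unique
scheme works):

  /* VPID 0 is reserved for the VMM itself, so guest VPIDs start at 1. */
  static inline uint16_t allocate_vpid(uint16_t vm_id, uint16_t vcpu_id)
  {
      return (uint16_t)(vm_id * MAX_VCPUS_PER_VM + vcpu_id + 1U);
  }

  /* ... written once into the VMCS for that vcpu, e.g.:
   *     exec_vmwrite16(VMX_VPID, allocate_vpid(vm_id, vcpu_id));
   */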
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
We need to know which TLB to flush: EPT or VPID.
1. Error handling for invept:
it's the same as the invvpid error handling;
rename it to be consistent with vpid.
2. Rename the macro for the flush-EPT-TLB request.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the current implementation, on the sbl platform, the vm0 bsp
starts in 64-bit mode and hv needs to prepare an init
page table for it.
In this patch series, on the sbl platform, the vm0 bsp starts
in non-paging protected mode.
This patch prepares an init gdt for the vm0 bsp on the sbl
platform.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Translate gva2gpa in different paging modes.
Change the definition of gva2gpa:
- return value for the error status
- add a parameter for the error code on a page fault.
Change the definition of vm_gva2gpa:
- return value for the error status
- add a parameter for the error code on a page fault.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Use the number of paging levels to identify the paging mode
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
In the current implementation, the cr0/cr4 host mask values are set
according to the fixed0/fixed1 values of cr0/cr4.
In fact, the host mask only needs to cover the bits that must be trapped.
This patch adds code to support exiting long mode in the CR0 write handling,
and adds some checks when modifying CR0/CR4:
- CR0_PG, CR0_PE, CR0_WP, CR0_NE are trapped for CR0.
PG and PE are trapped to track vcpu mode switches.
WP is trapped for protection info during page walks.
NE is an always-on bit.
- CR4_PSE, CR4_PAE, CR4_VMXE are trapped for CR4.
PSE and PAE are trapped to track the paging mode.
VMXE is an always-on bit.
- Reserved bits and always-off bits are not allowed to be set by the guest.
If the guest tries to set these bits, a #GP is injected at the vmexit.
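A sketch of how the masks above could be programmed (the VMCS field names
and the helper are placeholders):

  /* Trap only the bits listed above; the guest owns every other CR bit. */
  #define CR0_TRAP_MASK   (CR0_PG | CR0_PE | CR0_WP | CR0_NE)
  #define CR4_TRAP_MASK   (CR4_PSE | CR4_PAE | CR4_VMXE)

  static void init_cr_masks(void)
  {
      exec_vmwrite(VMX_CR0_GUEST_HOST_MASK, CR0_TRAP_MASK);
      exec_vmwrite(VMX_CR4_GUEST_HOST_MASK, CR4_TRAP_MASK);
      /* attempts to flip reserved or always-off bits are rejected in the
       * cr_access vmexit handler by injecting #GP into the guest */
  }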
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
_Static_assert is a C11 feature (see N1570, the C11 manual, 6.4.1),
so replace _Static_assert with ASSERT.
Signed-off-by: huihuang shi <huihuang.shi@intel.com>
Hypercalls return an errno to the SOS (currently the Linux kernel).
Define these error numbers in a more general way.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
vm_attr_name, vm_state_info_privilege & vm_created are not used
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
vm_state_info in struct vm_arch is not used, remove it
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Guest CR3 read/write operations are not trapped.
Remove CR3 handling in cr_access_vmexit_handler.
Also remove unused API vmx_read_cr3.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Move from vmexit.c to vmx.c
Declare the functions in vmx.h
Rename the functions with the vmx_ prefix.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
There are some messages which are not fatal errors but should be printed
to serial and sbuf (hvlog) at the same time. pr_fatal is for fatal error
messages and is not a good choice for this situation.
Introduce a new API pr_acrnlog to handle it, and replace printf with
pr_acrnlog for messages that should be printed to both sbuf and serial.
Developers can then get those messages on serial, and BTM (Boot Time
Measurement) can use acrnlog to get them from sbuf.
BTM refers to Boot Time Measurement, which reads the acrnlog file to get
the timestamps of the steps we want.
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
When we create a UOS, we don't specify a vmid, so we can't get its vm
description from the vm description array.
Instead we use a temporary vm description to fill the vm structure when
creating a UOS; it's useless once the UOS has been created. So we don't
need to maintain a vm description array for UOS.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Pointer arithmetic is currently used to calculate the address of a specific
Local Vector Table (LVT) register (except LVT_CMCI) in the lapic, since the
registers are laid out contiguously with fixed padding in between. However, each
of these registers is declared as a single uint32_t in struct lapic, resulting in
pointer arithmetic on a non-array pointer, which violates MISRA C requirements.
This patch refactors struct lapic by converting the LVT register fields (again
except LVT_CMCI) into an array named lvt. The LVT indices are reordered to reflect
the order of the LVT registers in hardware, and reused to index this lvt array.
The code before and after the change is semantically equivalent.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Use vcpu_queue_exception in vcpu_inject_gp and exception_vmexit_handler.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Add vcpu_queue_exception to queue exceptions based on SDM Vol3 Table 6-5;
queuing may result in #DF or a triple fault.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
pending_intr is not only used for interrupts but also for different
requests, including TLB & TMR updates, so rename the related functions
and variables accordingly.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
It is a GCC CPP extension to:
* allow omitting a variadic macro argument entirely, and
* use ## __VA_ARGS__ to remove the comma before __VA_ARGS__ when
__VA_ARGS__ is empty.
The only use of ## __VA_ARGS__ is to define the pr_xxx() macros, with the first
argument being the format string and the rest the to-be-formatted arguments. The
format string is explicitly spelled out because another macro, pr_fmt(), is used
to add a prefix to the format string, customizable by defining what
pr_fmt() expands to.
For C99 compliance, this patch changes the pr_xxx() macros in the following
pattern.
- #define pr_fatal(fmt, ...) \
- do_logmsg(LOG_FATAL, pr_fmt(fmt), ## __VA_ARGS__); \
+ #define pr_fatal(...) \
+ do_logmsg(LOG_FATAL, pr_prefix __VA_ARGS__); \
Reference:
* https://gcc.gnu.org/onlinedocs/gcc/Variadic-Macros.html#Variadic-Macros
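The replacement works because of plain C string-literal concatenation:
pr_prefix expands to a string literal that simply sits in front of the
caller's format string, so no comma paste is needed. Illustrative expansion
(the module name is made up):

  #define pr_prefix   "[vlapic] "     /* defined (or left empty) per source file */
  #define pr_fatal(...) \
      do_logmsg(LOG_FATAL, pr_prefix __VA_ARGS__)

  /* pr_fatal("vcpu%d: %s", id, msg)
   * expands to
   *   do_logmsg(LOG_FATAL, "[vlapic] " "vcpu%d: %s", id, msg);
   * and the two adjacent string literals are concatenated by the compiler. */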
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
According to the syntax defined in C99, each struct/union field must have an
identifier. This patch adds names to the previously unnamed fields for C99
compatibility.
Here is a summary of the names (marked with a pair of *stars*) added.
struct trusty_mem:
union {
struct {
struct key_info key_info;
struct trusty_startup_param startup_param;
} *data*;
uint8_t page[CPU_PAGE_SIZE];
} first_page;
struct ptdev_remapping_info:
union {
struct ptdev_msi_info msi;
struct ptdev_intx_info intx;
} *ptdev_intr_info*;
union code_segment_descriptor:
uint64_t value;
struct {
union {
...
} low32;
union {
...
} high32;
} *fields*;
similar changes are made to the following structures.
* union data_segment_descriptor,
* union system_segment_descriptor,
* union tss_64_descriptor, and
* union idt_64_descriptor
struct trace_entry:
union {
struct {
uint32_t a, b, c, d;
} *fields_32*;
struct {
uint8_t a1, a2, a3, a4;
uint8_t b1, b2, b3, b4;
uint8_t c1, c2, c3, c4;
uint8_t d1, d2, d3, d4;
} *fields_8*;
struct {
uint64_t e;
uint64_t f;
} *fields_64*;
char str[16];
} *payload*;
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
According to the syntax defined in C99, each struct/union field must have an
identifier. This patch removes unnamed struct/union fields that can be easily
expressed in a C99-compatible way.
Here is a summary of structs/unions removed.
struct vhm_request:
union {
uint32_t type; uint32_t type;
int32_t reserved0[16]; => int32_t reserved0[15];
};
struct vhm_request_buffer:
struct vhm_request_buffer {
union { union vhm_request_buffer {
struct vhm_request ...; => struct vhm_request ...;
int8_t reserved[4096]; int8_t reserved[4096];
} }
}
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
According to the comments in hypervisor:
" This file includes config header file "bsp_cfg.h" and other
hypervisor used header files.
It should be included in all the source files."
this patch includes all common header files in hypervisor.h
and then removes the other redundant inclusions
Signed-off-by: Zide Chen <zide.chen@intel.com>
- move vmexit handling into vmexit_handler
- add error handling, failure will inject #GP
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
There are data transfers between the guest (GPA) and hv (HPA), especially
for hypercalls from the guest.
The guest should make sure these GPAs are contiguous, but hv cannot assume
that the HPAs mapped to these GPAs are contiguous; for example, after
enabling hugetlb, a contiguous GPA range could come from two different
2M pages.
This patch handles such cases by walking the GPA pages during
copy_from_vm & copy_to_vm.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
merge these two APIs to 'dump_intr_excp_frame'
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
rename intr_ctx to intr_excp_ctx
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
move intr_ctx to irq.h, delete intr_ctx.h
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The macro GEN_CASE in the hypervisor is not used. It's just for the userspace
tool acrntrace, and we have a copy of it in ./tools/acrntrace/trace_event.h.
So, remove it.
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Yan, Like <like.yan@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Changes:
1. Move io request related functions from hypercall.c to io_request.c
since they are not hypercalls;
2. Remove acrn_insert_request_nowait() as it is never used;
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
rename atomic_cmpxchg_int to atomic_cmpxchg
replace atomic_cmpset_long with atomic_cmpxchg64
rename atomic_readandclear_long to atomic_readandclear64
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@Intel.com>
From now on, we only plan to support atomic operations for int/long.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add a set_memmaps hypercall to support multi-region memmaps.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Allocate all pcpus to vm0 to handle possible AP wakeup flows for all cpus,
since we pass the original ACPI table to VM0, which means VM0 can see all CPUs.
SOS (VM0) starts the expected CPUs through the "maxcpus=" kernel cmdline option.
During the first hypercall from the SOS, vm_fixup is called to free the vcpus
of VM0 that are not expected to be enabled.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
remove sipi_from_efi_boot_service_exit & efi_deferred_wakeup_pcpu workaround
for uefi boot flow
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
If the target is an array, then only the first element
will be copied.
So replace structure assignment with memcpy_s().
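Illustrative replacement pattern (the variable names are made up):

  /* Per the note above: when the target is an array, a plain assignment ends
   * up copying only the first element, so copy the full object explicitly. */
  (void)memcpy_s(dst_array, sizeof(dst_array), src_array, sizeof(src_array));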
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Move all string operation functions into a single
source code file, instead of the various source
code files that each implement just a single or a few
functions.
No functional change.
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently a single uint64_t is used to hold the descriptor table for
GDT/IDT. This causes stack-protector corruption under -O2.
So a descriptor_table struct is added to configure the GDT/IDT of the VMCS.
V1->V2: Move descriptor_table into the vmx.h header file
and rename its type from dt_addr_t to descriptor_table.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
RFLAGS is touched in some inline assembly (exec_vmxon/
RFLAGS_RESTORE), so the "cc" clobber should be added; otherwise
it won't be handled correctly under the -O2 option.
The touched "%%XXX" registers should also be added to the constraints,
otherwise the code may be optimized incorrectly.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add vlapic_create_timer/vlapic_reset_timer to setup/reset a timer.
Add vlapic_update_lvtt to disarm timer when mode changes.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Make the timer list ordered, to speed up processing of expired timers
and finding the next timer event.
Adding a timer does not reprogram the physical timer unless the new
timer is the next timer event.
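A sketch of the sorted insert this implies (the list helpers and field names
are placeholders):

  /* Keep the per-pcpu timer list sorted by expiry so the head is always the
   * next event; only reprogram the hardware timer when the new timer becomes
   * that head. */
  void add_timer(struct timer *timer)
  {
      struct list_head *pos, *head = get_cpu_timer_list();
      bool is_earliest = true;

      list_for_each(pos, head) {
          struct timer *t = list_entry(pos, struct timer, node);
          if (t->fire_tsc <= timer->fire_tsc)
              is_earliest = false;      /* someone fires first: no reprogram */
          else
              break;                    /* insert just before the first later timer */
      }
      list_add_tail(&timer->node, pos); /* i.e. insert before 'pos' */

      if (is_earliest)
          update_physical_timer(timer->fire_tsc);
  }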
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The name acrn_register is too generic; rename it to acpi_generic_address,
which is the more common term.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
V3->V4: Updated function/variable names for accuracy
V2->V3: Changed a few function/variable names to make them less confusing
V1->V2: Removed the unnecessary cache flushing
- For UEFI boot, allocate memory for trampoline code in ACRN EFI,
and pass the pointer to HV through efi_ctx
- For other boot paths, scan E820 to allocate memory at HV run time
- update_trampoline_code_refs() updates all the references that need the
absolute PA with the actual load address
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
V2->V3: Fixed the booting issue on the MRB board and removed the restriction
of allocating memory from address 0
1) Fix the booting from MRB issue
-#define CONFIG_LOW_RAM_SIZE 0x000CF000
+#define CONFIG_LOW_RAM_SIZE 0x00010000
2) changed e820_alloc_low_memory() to handle the corner case of unaligned e820 entries
and enable it to allocate memory at address 0
+ length = end > start ? (end - start) : 0;
- /* We don't want the first page */
- if ((length == size) && (start == 0))
- continue;
3) changed emalloc_for_low_mem() to enable it to allocate memory at address 0
- /* We don't want the first page */
- if (start == 0)
- start = EFI_PAGE_SIZE;
V1->V2: moved e820_alloc_low_memory() to guest.c and added the logic to
handle unaligned E820 entries
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined.
e820_alloc_low_memory() is used for the other cases.
In either case, the allocated memory will be marked with E820_TYPE_RESERVED.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
- Rename it to 'io_shared_page' to keep it consistent
with the ACRN HDL foils.
- Update the related code that references this data structure.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
With this patch the guest can access the idle I/O port and enter idle normally.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The patch adds a VHM hypercall function to retrieve physical Cx data
for VHM/DM.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Each VM has its own Cx data; for now we copy it from boot_cpu_info.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The Cx data is hardcoded within HV and loaded into boot_cpu_data when HV boots.
The patch provides A3960 SoC Cx data as an example.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
Currently, pass-thru devices are managed by per-vm remapping entries
which are virtual based:
- an MSI entry is identified by virt_bdf+msix_index
- an INTx entry is identified by virt_pin+vpin_src
This works, but it's not a good design for physical resource management; for
example, a physical IOAPIC pin could belong to different vms' INTx entries,
and the Device Model then must make sure there is no resource conflict at the
application level.
This patch changes the design from virtual to physical based:
- an MSI entry is identified by phys_bdf+msix_index
- an INTx entry is identified by phys_pin
The physical resource is managed directly in the hypervisor; a mis-added
entry is detected by the hypervisor, which returns an error.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Add initialize_timer to initialize or reset a timer;
add_timer adds a timer to the corresponding physical cpu's timer list;
del_timer deletes a timer from the corresponding physical cpu's timer list.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Get the TSC frequency via CPUID 0x15 if it is supported, otherwise
calibrate the TSC with the PIT timer.
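The CPUID 0x15 path, roughly (the cpuid helper signature is assumed):

  /* CPUID.15H: EAX = denominator and EBX = numerator of the TSC/core-crystal
   * clock ratio, ECX = core crystal clock in Hz (0 on some parts). */
  static uint64_t tsc_hz_from_cpuid_0x15(void)
  {
      uint32_t eax_denominator, ebx_numerator, ecx_crystal_hz, reserved;

      cpuid(0x15U, &eax_denominator, &ebx_numerator, &ecx_crystal_hz, &reserved);

      if ((eax_denominator != 0U) && (ebx_numerator != 0U) && (ecx_crystal_hz != 0U))
          return (uint64_t)ecx_crystal_hz * ebx_numerator / eax_denominator;

      return 0UL;   /* not usable: fall back to PIT-based calibration */
  }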
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The input operands for the inline assembly are passed from the caller and
are not immediates, so the register constraint should be used instead.
This also helps avoid compile errors when optimization is enabled.
Signed-off-by: Zhao Yakui<yakui.zhao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
For now, just add some basic feature/capability detection (not all of it).
vAPIC detection is not added here, because if we must support vAPIC, then
the code for the case where vAPIC is not supported (like MMIO APIC r/w)
must be removed.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Split pm.c out of cpu_state_tbl.c to hold the guest power management related
functions, and keep cpu_state_tbl.c for the host cpu state table and
related functions.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
adding "hugepagesz=1G" and "hugepages=X" into SOS cmdline, for X, current
strategy is making it equal
e820_mem.total_mem_size -CONFIG_REMAIN_1G_PAGES
if CONFIG_REMAIN_1G_PAGES is not set, it will use 3 by default.
CONFIG_CMA is added to indicate using cma cmdline option for SOS kernel,
by default system will use hugetlb cmdline option if no CONFIG_CMA defined.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Change its input from map_params to page_table_type, and make it a
public API.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
1. The exception vector and other information
can be extracted from the 'VM-Exit Interruption-Information'
field of the VMCS only if bit 31 (Valid) is set.
-Intel SDM 24.9.2, Vol3
2. Rename 'exit_interrupt_info' to 'idt_vectoring_info'
in 'struct vcpu_arch', which is consistent with
SDM 24.9.3, Vol3
3. The 'IDT-vectoring information' field in the VMCS is 32 bits
-Intel SDM 24.9.3, Vol3
Update the type of 'idt_vectoring_info' in
'struct vcpu_arch' from 'uint64_t' to 'uint32_t'.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
If subsequent writes go to the same address, the compiler may optimize
the MMIO memory accesses so that only the last write takes effect, which
would be wrong in this case. For example:
mmio_write_long(0x25, addr);
mmio_write_long(0x26, addr);
mmio_write_long(0x27, addr);
After volatile is added, this possible optimization is avoided and
each write is guaranteed to take effect.
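I.e., the accessor ends up shaped like this (illustrative):

  /* The volatile qualifier forces the compiler to emit every store, in order,
   * so all three writes in the example above actually reach the device. */
  static inline void mmio_write_long(uint32_t value, void *addr)
  {
      *((volatile uint32_t *)addr) = value;
  }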
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
mmio_or_long/mmio_and_long/mmio_rmw_long are defined to perform
read & write operations, but they are not used, so they are removed.
Signed-off-by: Zhao Yakui <yakui.zhao@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
These APIs are not used, and not as safe as spinlock_irqsave_obtain/
spinlock_irqrestore_release.
Signed-off-by: Yan, Like <like.yan@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Before referencing the physical addresses of devices such as the lapic,
ioapic, vtd, and uart, switch to virtual addresses.
Use the physical address of the pml4 to write CR3.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
SDM 24.9.1 Volume3:
- 'Exit reason' field in VMCS is 32 bits.
SDM 24.9.4 in Volume3
- 'VM-exit instruction length' field
in VMCS is 32 bits.
This patch redefines the data types of the above fields
in 'struct vcpu_arch' and updates the code using these
two fields.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Update X86_FEATURE_OSXSAVE when enabled and replace is_xsave_supported
with cpu_has_cap(X86_FEATURE_OSXSAVE).
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add a cpu_has_cap API for cpu feature/capability detection, instead of
adding a get_xxx_cap for each feature/capability.
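A sketch of cpu_has_cap over the cached CPUID words (the array layout and
FEATURE_WORDS are assumptions):

  /* Feature bits are encoded as (word_index * 32 + bit), matching how the
   * CPUID result words are cached in boot_cpu_data. Sketch only. */
  bool cpu_has_cap(uint32_t bit)
  {
      uint32_t word = bit >> 5U;         /* which cached 32-bit word   */
      uint32_t pos  = bit & 0x1FU;       /* which bit inside that word */

      if (word >= FEATURE_WORDS)
          return false;

      return ((boot_cpu_data.cpuid_leaves[word] & (1U << pos)) != 0U);
  }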
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Remove the data type definitions of mmio_addr_t, vaddr_t, paddr_t,
and ioport_t.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Using ebx truncates the high 32-bit part of a 64-bit virtual address,
so use rbx instead of ebx.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Reviewed-by: Yakui, Zhao <yakui.zhao@intel.com>
This address may be invalid if a hostile address was set
in the 'HC_SET_IOREQ_BUFFER' hypercall; it should be validated
before use.
Update:
-- save the HVA of the guest OS's request buffer in the hypervisor
-- change the type of 'req_buf' from 'uint64_t' to 'void *'
-- remove the HPA-to-HVA translation code where this address is used
-- use error numbers instead of -1 when returning error cases.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Before copying data between guest and host, convert the GPA
to an HVA and then do the copy.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Reviewed-by: Chen, Jason Cl <jason.cj.chen@intel.com>
Reviewed-by: Yakui, Zhao <yakui.zhao@intel.com>
- change the input param of check_page_table_present from struct map_params
to page_table_type
- check EPT present bits misconfiguration in check_page_table_present
- change var "table_present" to more suitable name "entry_present"
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Check the IA32_VMX_EPT_VPID_CAP MSR to see whether the EPT execute-only
capability is supported.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
cpu_halt actually means cpu dead in the current code, so rename it
to a clearer name.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Use pcpu_active_bitmap to indicate which cpu is active
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Currently the macro SECURE_WORLD_ENABLED is (1<<0).
Change it to 64-bit data:
SECURE_WORLD_ENABLED (1UL<<0)
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Yin Fengwei <fengwei.yin@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Before this patch, guest temporary page tables were generated by hardcoding
at compile time, and HV copied these page tables to the guest before guest
launch.
This patch creates the temporary page tables at runtime for the range 0~4G,
and creates page tables to cover the new range (511G~511G+16M) required by
trusty.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
eptp should be recorded as a PA.
This patch changes nworld_eptp, sworld_eptp and the m2p eptp to the PA type;
the necessary HPA2HVA/HVA2HPA translations are applied to them after the change.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- Reads/writes of page table entries should use VAs, which are defined as
"void *"
- The address data in page table entries should use PAs, which are defined
as "uint64_t"
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Currently acrn partitions cpus between SOS and UOS, so the default
policy is to allow the guest to manage the CPU Px state. However, we do
not blindly pass through the perf_ctrl MSR to the guest. Instead, guest
access is always trapped and validated by the acrn hypervisor before being
forwarded to the pcpu. Doing so leaves room for future power budget control
in the hypervisor, e.g. limiting the turbo percentage that a cpu can enter.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
VHM can use this interface to pass per-cpu power state data
to the guest on its request.
For now the vcpu power state is per-vm; this could be changed if
per-cpu power state support is required in the future.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The vm Px info is used for guest P-state control.
Currently it is copied from the host boot cpu.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The patch takes the Intel ATOM A3960 as an example: all Px info needed for
Px control is hardcoded into the ACRN HV and loaded during the boot process.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
The cpu model name is used to decide which hardcoded data
needs to be loaded into boot_cpu_data.
Signed-off-by: Victor Sun <victor.sun@intel.com>
Acked-by: Kevin Tian <kevin.tian@intel.com>
- Remove the unused map_params in get_table_entry
- Add error returns for both, which matters in the release version,
since the ASSERT in get_table_entry is empty there.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Add error returns for all of them, which matters in the release version,
since the ASSERT in modify_paging is empty there.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Documentation for parameters must match exactly in spelling and case.
Parameter named "vcpu" was incorrectly documented as "VCPU", and
parameter named "param" was documented as "param's".
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
Add a check_continuous_hpa API:
when creating the secure world, assert if the physical
addresses are not contiguous.
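Conceptually (gpa2hpa and the asserting macro are stand-ins here):

  /* Walk the GPA range page by page and require each next HPA to follow the
   * previous one; secure world setup relies on one physically contiguous block. */
  void check_continuous_hpa(struct vm *vm, uint64_t gpa_base, uint64_t size)
  {
      uint64_t prev_hpa = gpa2hpa(vm, gpa_base);
      uint64_t curr_hpa;
      uint64_t gpa;

      for (gpa = gpa_base + 4096UL; gpa < gpa_base + size; gpa += 4096UL) {
          curr_hpa = gpa2hpa(vm, gpa);
          ASSERT(curr_hpa == prev_hpa + 4096UL,
                 "secure world memory is not physically contiguous");
          prev_hpa = curr_hpa;
      }
  }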
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch adds the API destroy_secure_world, which will:
-- clear the trusty memory space
-- restore the memory to the SOS ept mapping
It is called when the VM is destroyed; furthermore, the ept of the
secure world is destroyed as well.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
-- Add a free_paging_struct API, used to free page tables;
it clears the memory before freeing.
-- Add HPA2HVA translation when freeing ept memory
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM Vol. 2:
If CPUID.80000008H:EAX[7:0] is supported, the maximum physical address
number supported should come from this field.
This patch gets the maximum physical address number from CPUID leaf
0x80000008 and calculates the physical address mask when the leaf is
available.
Currently ACRN does not support platforms w/o this leaf and will panic
on such platforms.
Also call get_cpu_capabilities() earlier since the physical address mask
is required for initializing paging.
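In code form, roughly (the field names in boot_cpu_data are assumptions):

  /* CPUID.80000008H:EAX[7:0] = physical address width in bits. */
  static void get_phys_address_mask(void)
  {
      uint32_t eax, ebx, ecx, edx;

      cpuid(0x80000008U, &eax, &ebx, &ecx, &edx);

      boot_cpu_data.phys_bits = (uint8_t)(eax & 0xFFU);
      /* e.g. 39 bits -> 0x0000007ffffff000 for 4K-aligned physical addresses */
      boot_cpu_data.physical_address_mask =
          ((1UL << boot_cpu_data.phys_bits) - 1UL) & ~0xFFFUL;
  }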
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Per SDM:
When CPUID executes with EAX set to 80000000H, the processor returns
the highest value the processor recognizes for returning extended
processor information. The value is returned in the EAX register and is
processor specific.
This patch caches this value in the global cpuinfo_x86.cpuid_leaves. This
value will be used to check the availability of any CPUID extended
function.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Enable the XSAVE feature and pass it through to guests.
Updates based on v2:
- enable host xsave before exposing it to guests.
- add validation of the value to be set to 'xcr0' before calling xsetbv
when handling the xsetbv vmexit.
- tested in the SOS guest: created two threads doing different
FP calculations; the test code runs in SOS user land.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to C99:
The empty list in a function declarator that is not part of a definition of
that function specifies that no information about the number or types of the
parameters is supplied.
This means gcc is happy with the following code, which is undesirable.
void foo(); /* declaration with an empty parameter list */
void bar() {
foo(); /* OK */
foo(1); /* OK */
foo(1, 2); /* OK */
}
This patch fixes declarations of functions with empty parameter lists by adding
an unnamed parameter of type void, which is the standard way to specify that a
function has no parameters. The following coccinelle script is used.
@@
type T;
identifier f;
@@
-T f();
+T f(void);
New compilation errors are fixed accordingly.
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Save the pointer to efi_ctx in the mi_drivers_addr field of the
multiboot structure and pass it to the hypervisor, rather than
saving it in register RDX (the third default parameter in a
64-bit function call).
With this method, we stay compatible with the original
32-bit boot parameter passing method and don't need to
enlarge the boot_regs array in the hypervisor.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
With the current code, acrn.efi is inserted between the
cl bootloader.efi and bzImage.efi, which breaks the chain
relationship between the cl bootloader and the cl bzImage.efi.
The current boot flow is:
UEFI -> cl bootloader.efi -> acrn.efi -> bzImage.efi
The purpose of this patch is to restore the above chain relationship
and make the UEFI vm return to the EFI stub context once launched,
then continue to call the UEFI APIs (LoadImage/StartImage) to launch
the cl bootloader or other bootloaders. So the boot flow
becomes:
UEFI -> acrn.efi -> cl bootloader.efi -> bzImage.efi
After applying this patch, the code related to loading
bzImage.efi and getting pe_entry is unnecessary, because
bzImage.efi is no longer launched by acrn.efi directly,
so it is removed.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
With the current code, memcpy of the rsdp to 0x500 may overwrite a UEFI
code/data region.
So remove the legacy BIOS delivery method of the RSDP, which requires
copying the RSDP to the EBDA space addressed by the 16-bit pointer
at 0x40E, or to the upper memory BIOS space 0xe0000-0xfffff. Instead,
just deliver the pointer to the RSDP, which is already saved in the UEFI
system table, to the hypervisor.
Create a function named efi_init() to separate the EFI initialization code.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
For trusty bring-up, key_info is needed.
Currently, the bootloader does not transfer key_info to the hypervisor,
so this patch uses a dummy key_info temporarily.
Derive the vSeed from the dSeed before trusty startup; the vSeed is
bound to the UUID of each VM.
Remove key_info from the sworld_control structure.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
UOS_Loader will trigger the boot of Trusty-OS via HC_INITIALIZE_TRUSTY.
UOS_Loader will load the trusty image and allocate runtime memory for
trusty, then transfer this information, including the trusty runtime
memory base address, entry address and memory size, to the hypervisor
through the trusty_boot_param structure.
In the hypervisor, once HC_INITIALIZE_TRUSTY is received, it creates the
EPT for the Secure World, saves the Normal World vCPU context, initializes
the Secure World vCPU context and switches the World state to Secure World.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
For ARM, the SMC instruction is used to generate a synchronous
exception that is handled by Secure Monitor code running in EL3.
In the ARM architecture, synchronous control is transferred between
the normal Non-secure state and the Secure state through Secure
Monitor Call exceptions. SMC exceptions are generated by the SMC
instruction and handled by the Secure Monitor. The operation of
the Secure Monitor is determined by the parameters that are passed
in through registers.
For ACRN, the hypervisor simulates SMC with a hypercall to switch the
vCPU state between the Normal World and the Secure World.
Four registers (RDI, RSI, RDX, RBX) are reserved for parameter
passing between the Normal World and the Secure World.
Signed-off-by: Qi Yadong <yadong.qi@intel.com>
This patch is a preparation for changing the ptdev remapping entries from
virtual to physical based; it changes the ptdev_lock from per-vm to
global, as entries based on physical mode are a global resource.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
This patch is a preparation for changing the ptdev remapping entries from
virtual to physical based; it changes the ptdev_list from per-vm to
global, as entries based on physical mode are a global resource.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
ASSERT is only for debug purposes; in the release version the code should
attempt error handling instead of dead-looping there.
v1:
- change ASSERT under the release version to empty code
TODO: revise all ASSERT usage
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong (Eddie.dong@intel.com)
Generate all common virtual cpuid entries for flexible support of
guest VCPUID emulation, by decoupling from PCPUID.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Chen, Jason CJ <jason.cj.chen@intel.com>
Microcode update from the UOS is disabled.
Microcode version checking is available for both SOS and UOS.
There are two TODOs in this patch:
1. This patch only updates the uCode on the pCPUs owned by SOS. For the
pCPUs not owned by SOS, the uCode is not updated. To close this gap,
we will have SOS own all pCPUs at boot time, so that all pCPUs get
their uCode updated. This will be handled in the patch that enables
SOS to own all pCPUs at boot time.
2. gva2gpa currently doesn't check for possible page table walk failures.
The failure check will be added to gva2gpa in a different patch.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Reviewed-by: Anthony Xu (anthony.xu@intel.com)
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Tian, Kevin <kevin.tian@intel.com>
Add a global boot_cpu_data to cache common cpu capability/feature
information for cpu capability/feature detection.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
This patch detects and enables only the APICv features which
are actually supported by the processor, instead of turning on
all features by default.
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
This patch saves the native lapic configuration and restores it to vm0's
vlapic before vm0 runs, then does hpet timer interrupt injection through the
vlapic interface -- this does not mess up the vlapic, and we can see the hpet
timer interrupt coming continuously.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
We added a UEFI stub for the hv and want vm0 to continue running under the
UEFI environment to boot other UEFI payloads (osloader or bzImage).
During this, the UEFI timer irq needs to be handled elegantly.
There are 3 types of UEFI timers:
1. 8254 based on IRQ0 of the PIC
2. HPET based on the IOAPIC
3. HPET based on MSI
Currently, we only support type 3 (HPET+MSI), but we follow an
incorrect flow to handle this timer interrupt:
- we set VMX_ENTRY_INT_INFO_FIELD directly if a timer interrupt happens
before vcpu launching, which messes up its vlapic and finally
causes the hpet timer to stop.
This patch removes this incorrect approach; the new approach follows
in the next patch.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
According to the explanation of pref_address
in Documentation/x86/boot.txt, a relocating bootloader
should attempt to load the kernel at pref_address if possible.
But because a non-relocatable kernel will unconditionally
move itself to run at pref_address, there is no need for the
bootloader to copy the kernel to pref_address.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
1. Refine the multiboot related code and move it to /boot.
2. Firmware files and the ramdisk can be stitched into the iasImage;
they will be loaded as multiboot modules.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>
Add 'CPU_PAGE_MASK', used for calculating addresses.
Change IA32E_REF_MASK from 0x7ffffffffffff000 to 0x000ffffffffff000
for MMU/EPT entries: bits 62:52 are ignored and bit 63 is VE/XD, so
to obtain the address from an MMU/EPT entry we need to clear
bits 63:52 with IA32E_REF_MASK.
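For example:

  /* Strip bits 63:52 (XD/VE plus the ignored bits) to recover the page-frame
   * address from an MMU/EPT entry. */
  uint64_t entry_pa = entry_val & IA32E_REF_MASK;   /* 0x000ffffffffff000 */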
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Add a key info structure.
Add sworld_eptp in the vm structure, and rename ept -> nworld_eptp.
Add a secure world control structure.
Change-Id:
Tracked-On:220921
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>