HV:MM:gpa2hpa related error checking fix

In the current hypervisor design, when no HPA is
found for the specified gpa by gpa2hpa or
local_gpa2hpa, 0 is returned as an error code,
but 0 may be a valid HPA for vm0, and error
checking is missing in callers of gpa2hpa and
local_gpa2hpa. For lookup_address, the caller
guarantees that the parameter pointers pml4_page
and pg_size are not NULL.
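
For illustration only, a minimal sketch of the
ambiguity under the old convention (the caller
below is hypothetical; gpa2hpa and struct vm are
the existing interfaces):

	uint64_t hpa = gpa2hpa(vm, gpa);

	if (hpa == 0UL) {
		/* ambiguous: gpa may be unmapped, or it may be
		 * legitimately mapped to host physical address 0,
		 * which is possible for vm0
		 */
	}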

If local_gpa2hpa/gpa2hpa returns an invalid HPA,
it means that the function failed to find the
HPA of the specified gpa of the vm. If the
local_gpa2hpa/gpa2hpa return value is a valid HPA,
it means that the function has found the HPA of
the specified gpa of the vm.
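
For illustration, a sketch of the intended
caller-side check after this patch (the error
handling shown is illustrative, not the exact
committed code):

	uint32_t pg_size;
	uint64_t hpa;

	hpa = local_gpa2hpa(vm, gpa, &pg_size);
	if (hpa == INVALID_HPA) {
		/* lookup failed: gpa is not mapped in this vm,
		 * report the error (e.g. log the vm id and gpa)
		 */
	} else {
		/* hpa is the host physical address backing gpa */
	}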

Each valid vm's EPTP is initialized during vm
creation and remains valid until the vm is
destroyed, so the caller can guarantee that the
parameter pointer pml4_page is not NULL. The
caller uses a temporary variable to store the
page size, so the caller can guarantee that the
parameter pointer pg_size is not NULL.
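
For illustration, the calling pattern these
guarantees rely on (eptp stands for the vm's
already-initialized EPT root and PTT_EPT for the
EPT page-table type; treat both names as
assumptions about the surrounding code):

	uint64_t pg_size;	/* temporary variable, so &pg_size != NULL */
	uint64_t *pgentry;

	/* eptp was set up when the vm was created and stays
	 * valid until the vm is destroyed, so the pml4_page
	 * argument is guaranteed to be non-NULL here
	 */
	pgentry = lookup_address(eptp, gpa, &pg_size, PTT_EPT);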

In this patch, define an invalid HPA for gpa2hpa
and local_gpa2hpa, add error checking where
local_gpa2hpa/gpa2hpa is invoked, add a
precondition for the lookup_address function, and
remove the redundant error checking.
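
For illustration, the shape of the change inside
local_gpa2hpa (a sketch only, not the exact
committed implementation; get_eptp() is a
hypothetical accessor for the vm's EPT root):

	uint64_t local_gpa2hpa(struct vm *vm, uint64_t gpa, uint32_t *size)
	{
		uint64_t hpa = INVALID_HPA;	/* assume failure by default */
		uint64_t *pgentry, pg_size = 0UL;
		uint64_t *eptp = get_eptp(vm);	/* hypothetical accessor */

		pgentry = lookup_address(eptp, gpa, &pg_size, PTT_EPT);
		if (pgentry != NULL) {
			hpa = (*pgentry & ~(pg_size - 1UL)) |
				(gpa & (pg_size - 1UL));
		}
		if (size != NULL) {
			*size = (uint32_t)pg_size;
		}
		/* hpa stays INVALID_HPA when no mapping is found,
		 * so callers can check the return value reliably
		 */
		return hpa;
	}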

V1-->V2:
	Define INVALID_HPA as an invalid HPA for
	gpa2hpa and local_gpa2hpa;
	Update the related error checking when
	invoking gpa2hpa or local_gpa2hpa;
V2-->V3:
	Add some debug information if the specified
	gpa2hpa mapping doesn't exist and ept_mr_del
	is called;
	Update the INVALID_HPA definition to be
	easier to review.
V3-->V4:
	Add vm->id and gpa into pr_error;
	Add precondition to ept_mr_del to cover [gpa,gpa+size)
	unmapping case.
V4-->V5:
	Update comments;
	Update pr_error message.

Tracked-On: #1258

Signed-off-by: Xiangyang Wu <xiangyang.wu@linux.intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>

@@ -90,6 +90,9 @@ void flush_vpid_single(uint16_t vpid);
 void flush_vpid_global(void);
 void invept(struct vcpu *vcpu);
 bool check_continuous_hpa(struct vm *vm, uint64_t gpa_arg, uint64_t size_arg);
+/**
+ *@pre (pml4_page != NULL) && (pg_size != NULL)
+ */
 uint64_t *lookup_address(uint64_t *pml4_page, uint64_t addr,
 	uint64_t *pg_size, enum _page_table_type ptt);
@@ -125,15 +128,32 @@ static inline void clflush(volatile void *p)
 	asm volatile ("clflush (%0)" :: "r"(p));
 }
 
+/**
+ * Invalid HPA is defined for error checking,
+ * according to SDM vol.3A 4.1.4, the maximum
+ * host physical address width is 52
+ */
+#define INVALID_HPA	(0x1UL << 52U)
 /* External Interfaces */
 void destroy_ept(struct vm *vm);
+/**
+ * @return INVALID_HPA - the HPA of parameter gpa is unmapping
+ * @return hpa - the HPA of parameter gpa is hpa
+ */
 uint64_t gpa2hpa(struct vm *vm, uint64_t gpa);
+/**
+ * @return INVALID_HPA - the HPA of parameter gpa is unmapping
+ * @return hpa - the HPA of parameter gpa is hpa
+ */
 uint64_t local_gpa2hpa(struct vm *vm, uint64_t gpa, uint32_t *size);
 uint64_t hpa2gpa(struct vm *vm, uint64_t hpa);
 void ept_mr_add(struct vm *vm, uint64_t *pml4_page, uint64_t hpa,
 	uint64_t gpa, uint64_t size, uint64_t prot_orig);
 void ept_mr_modify(struct vm *vm, uint64_t *pml4_page, uint64_t gpa,
 	uint64_t size, uint64_t prot_set, uint64_t prot_clr);
+/**
+ * @pre [gpa,gpa+size) has been mapped into host physical memory region
+ */
 void ept_mr_del(struct vm *vm, uint64_t *pml4_page, uint64_t gpa,
 	uint64_t size);
 void free_ept_mem(uint64_t *pml4_page);