HV:MM:gpa2hpa related error checking fix

In the current hypervisor design, when no HPA is
found for the specified GPA by gpa2hpa or
local_gpa2hpa, 0 is returned as an error code,
but 0 may be a valid HPA for vm0. In addition,
error checking is missing at some gpa2hpa and
local_gpa2hpa call sites. For lookup_address,
the caller guarantees that the pml4_page and
pg_size pointers are not NULL.

If local_gpa2hpa/gpa2hpa returns an invalid HPA,
the function failed to find the HPA for the
specified GPA of the VM. If local_gpa2hpa/gpa2hpa
returns a valid HPA, the function found the HPA
for the specified GPA of the VM.

Each valid VM's EPTP is initialized during VM creation
and remains valid until the VM is destroyed, so the caller
can guarantee that the pml4_page pointer is not NULL.
The caller uses a temporary variable to store the page size,
so the caller can guarantee that the pg_size pointer
is not NULL.

In this patch, define an invalid HPA value for gpa2hpa and
local_gpa2hpa; add error checking at the
local_gpa2hpa/gpa2hpa call sites; add preconditions to the
lookup_address function and remove redundant error checking.

V1-->V2:
	Define INVALID_HPA as an invalid HPA value for gpa2hpa
	and local_gpa2hpa;
	Update related error checking when invoking
	gpa2hpa or local_gpa2hpa.
V2-->V3:
	Add debug information when the specified gpa2hpa
	mapping doesn't exist and ept_mr_del is called;
	Update the INVALID_HPA definition to be easier to review.
V3-->V4:
	Add vm->id and gpa to pr_error;
	Add a precondition to ept_mr_del to cover the [gpa, gpa+size)
	unmapping case.
V4-->V5:
	Update comments;
	Update the pr_error message.

Tracked-On: #1258

Signed-off-by: Xiangyang Wu <xiangyang.wu@linux.intel.com>
Reviewed-by: Li, Fei1 <fei1.li@intel.com>
Author: Xiangyang Wu
Date: 2018-10-10 15:23:50 +08:00
Committed-by: wenlingz
Parent: 041bd594ae
Commit: a11a10fa4e
7 changed files with 67 additions and 19 deletions


@@ -371,8 +371,9 @@ int32_t hcall_set_ioreq_buffer(struct vm *vm, uint16_t vmid, uint64_t param)
 			vmid, iobuf.req_buf);
 	hpa = gpa2hpa(vm, iobuf.req_buf);
-	if (hpa == 0UL) {
-		pr_err("%s: invalid GPA.\n", __func__);
+	if (hpa == INVALID_HPA) {
+		pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+			__func__, vm->vm_id, iobuf.req_buf);
 		target_vm->sw.io_shared_page = NULL;
 		return -EINVAL;
 	}
@@ -437,6 +438,11 @@ static int32_t local_set_vm_memory_region(struct vm *vm,
 	pml4_page = (uint64_t *)target_vm->arch_vm.nworld_eptp;
 	if (region->type != MR_DEL) {
 		hpa = gpa2hpa(vm, region->vm0_gpa);
+		if (hpa == INVALID_HPA) {
+			pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+				__func__, vm->vm_id, region->vm0_gpa);
+			return -EINVAL;
+		}
 		base_paddr = get_hv_image_base();
 		if (((hpa <= base_paddr) &&
 			((hpa + region->size) > base_paddr)) ||
@@ -558,6 +564,11 @@ static int32_t write_protect_page(struct vm *vm, struct wp_data *wp)
 	uint64_t prot_clr;
 	hpa = gpa2hpa(vm, wp->gpa);
+	if (hpa == INVALID_HPA) {
+		pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+			__func__, vm->vm_id, wp->gpa);
+		return -EINVAL;
+	}
 	dev_dbg(ACRN_DBG_HYCALL, "[vm%d] gpa=0x%x hpa=0x%x",
 		vm->vm_id, wp->gpa, hpa);
@@ -666,6 +677,11 @@ int32_t hcall_gpa_to_hpa(struct vm *vm, uint16_t vmid, uint64_t param)
 		return -1;
 	}
 	v_gpa2hpa.hpa = gpa2hpa(target_vm, v_gpa2hpa.gpa);
+	if (v_gpa2hpa.hpa == INVALID_HPA) {
+		pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+			__func__, target_vm->vm_id, v_gpa2hpa.gpa);
+		return -EINVAL;
+	}
 	if (copy_to_gpa(vm, &v_gpa2hpa, param, sizeof(v_gpa2hpa)) != 0) {
 		pr_err("%s: Unable copy param to vm\n", __func__);
 		return -1;
@@ -985,8 +1001,9 @@ int32_t hcall_vm_intr_monitor(struct vm *vm, uint16_t vmid, uint64_t param)
 	/* the param for this hypercall is page aligned */
 	hpa = gpa2hpa(vm, param);
-	if (hpa == 0UL) {
-		pr_err("%s: invalid GPA.\n", __func__);
+	if (hpa == INVALID_HPA) {
+		pr_err("%s,vm[%hu] gpa 0x%llx,GPA is unmapping.",
+			__func__, vm->vm_id, param);
 		return -EINVAL;
 	}