memcpy_s prints error information internally when a parameter is
invalid, so the caller does not need to check the return value of
memcpy_s/strcpy_s/strncpy_s.
Code like this:
int a(void) {
	return 0;
}
void b(void) {
	a();
}
is fixed as follows:
int a(void) {
	return 0;
}
void b(void) {
	(void)a();
}
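For instance, a call site would then look like this (a minimal sketch; the
memcpy_s parameter list shown here follows the usual safe-copy convention
and is an assumption, not quoted from the HV headers):

#include <stddef.h>

/* assumed prototype of the safe copy helper */
extern int memcpy_s(void *d, size_t dmax, const void *s, size_t slen);

static void fill_buffer(char *dst, size_t dst_size, const char *src, size_t len)
{
	/* errors are already reported inside memcpy_s, so the return value
	 * is explicitly discarded to satisfy the static analyzer */
	(void)memcpy_s(dst, dst_size, src, len);
}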
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
There is no need to check the return value of memset.
Code like this:
int a(void) {
	return 0;
}
void b(void) {
	a();
}
is fixed as follows:
int a(void) {
	return 0;
}
void b(void) {
	(void)a();
}
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the hypervisor, the virtual cpu id is defined as either "int" or
"uint32_t", so the static analysis tool reports several sign conversion
issues about the virtual cpu id (vcpu_id). Sign conversion violates
the rules of MISRA C:2012.
Note: the virtual cpu id has different names (vcpu_id, cpu_id, logical_id)
in different modules of the HV, and its type is defined as "int" or
"uint32_t". The cpu_id and logical_id types will be cleaned up in
other patches.
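A minimal, self-contained sketch of the resulting type and print-format
usage (plain printf stands in for the HV logging functions; this is an
illustration only, not code from the patch):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint16_t vcpu_id = 3U;

	/* "%hu" matches the unsigned 16-bit vcpu id; a plain "%d" would be
	 * flagged as a signedness mismatch by the static analysis tool */
	printf("vcpu %hu launched\n", vcpu_id);
	return 0;
}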
V1-->V2:
More cleanup of the vcpu id type;
Use "%hu" to print the vcpu id.
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
MISRA C requires that signed/unsigned conversions be done with an explicit cast.
V1->V2:
a. Split the patch into a patch series.
V2->V3:
a. Change the suffix of uint64_t numeric constants from U to UL.
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
To reduce type conversions in the HV:
Update the return type of ffs64 and ffz64 to uint16_t;
For ffs64, return INVALID_BIT_INDEX when the input is zero;
Update the temporary variable types and the return value checks of the
callers of ffs64 and ffz64 (see the caller-side sketch below);
Note: in allocate_mem there is no return value check for the ffz64 call;
this will be updated later.
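A caller-side sketch of the new return value check (the INVALID_BIT_INDEX
value and the helpers below are assumptions standing in for the HV code):

#include <stdint.h>

#define INVALID_BIT_INDEX	0xffffU	/* assumed sentinel, outside any valid bit index */

/* simplified stand-in for the HV's ffs64(): returns the index of a set
 * bit, or INVALID_BIT_INDEX when the input is zero */
static uint16_t ffs64(uint64_t value)
{
	return (value == 0UL) ? (uint16_t)INVALID_BIT_INDEX :
		(uint16_t)__builtin_ctzll(value);
}

static int pick_free_slot(uint64_t free_bitmap)
{
	uint16_t idx = ffs64(free_bitmap);

	if (idx == INVALID_BIT_INDEX) {
		return -1;	/* no bit set: the caller must handle this case */
	}
	return (int)idx;
}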
V1-->V2:
INVALID_BIT_INDEX instead of INVALID_NUMBER
Coding style fixing;
INVALID_CPU_ID instead of INVALID_PCPU_ID or INVALID_VCPU_ID;
"%hu" is used to print vcpu id (uint16_t);
Add "U/UL" for constant value as needed.
V2-->V3:
ffs64 returns INVALID_BIT_INDEX directly when
the input value is zero;
Remove excess "%hu" updates.
V3-->V4:
Clean up the comments of ffs64;
Add "U" for constant value as needed.
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The functions prepare_vm0_memmap_and_e820 and init_vm0_boot_info are specific to vm0.
There is no need to check is_vm0 again inside those functions.
This patch removes the unnecessary checks.
v1 -> v2:
- Add a pre-condition comment before the functions, as Junjie suggested (see the sketch below).
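A minimal sketch of such a pre-condition comment (the comment style and the
return type are assumptions, not copied from the patch):

struct vm;	/* opaque HV type, declared only for illustration */

/**
 * @pre vm points to vm0
 *
 * With the pre-condition documented, the is_vm0() check inside the
 * function body can be dropped.
 */
void prepare_vm0_memmap_and_e820(struct vm *vm);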
Signed-off-by: Kaige Fu <kaige.fu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
It is already assumed that ''long'' is 8 bytes, and thus there is no need to
use ULL to indicate an 8-byte unsigned constant.
This patch changes all ULL suffixes found in the hypervisor to UL.
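A small illustration of the suffix change (the constant name and value are
made up for this example):

#include <stdint.h>

/* before: 0xFFFFFFFFFFFFF000ULL; after, since long is 8 bytes in the HV build: */
#define EXAMPLE_ADDR_MASK	0xFFFFFFFFFFFFF000UL

static uint64_t mask_example(uint64_t addr)
{
	return addr & EXAMPLE_ADDR_MASK;
}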
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Update the structure definitions to state the address type
(HVA vs HPA vs GPA) explicitly.
Convert GPA/HPA type addresses to HVA before accessing them.
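A hypothetical sketch of naming the address space in the field itself and
converting before access (the structure and helper are illustrative, not
taken from the patch):

#include <stdint.h>

/* stand-in for the HV's HPA-to-HVA conversion; identity map assumed here */
static inline void *hpa2hva_example(uint64_t hpa)
{
	return (void *)(uintptr_t)hpa;
}

struct boot_module_example {
	uint64_t mod_start_hpa;	/* host-physical address: never dereferenced directly */
	uint32_t mod_size;
};

static uint8_t first_byte(const struct boot_module_example *mod)
{
	/* convert to HVA first, then access */
	const uint8_t *hva = hpa2hva_example(mod->mod_start_hpa);

	return hva[0];
}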
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Before we set up a page table, we should know its attributes. So
move the configuration of the page table attributes out of modify_paging.
Signed-off-by: Li, Fei1 <fei1.li@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
The width member of struct e820_entries can be declared as
uint32_t (its value is never negative) to avoid a
negative shift.
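A small illustration of the concern (the field and function names are made
up; width is assumed to be less than 64):

#include <stdint.h>

struct e820_entry_example {
	uint32_t width;	/* unsigned, so the shift amount can never be negative */
};

static uint64_t entry_mask(const struct e820_entry_example *entry)
{
	/* shifting by an unsigned width avoids the MISRA negative-shift violation */
	return ((uint64_t)1U << entry->width) - 1UL;
}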
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
MISRA C does not allow a negative shift, so change any potentially signed
shift value to an unsigned value.
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
In the hypervisor, the physical cpu id is defined as either "int" or
"uint32_t", so the static analysis tool reports several sign conversion
issues about the physical cpu id (pcpu_id). Sign conversion violates
the rules of MISRA C:2012.
In this patch, define the physical cpu id as "uint16_t" for all
modules in the hypervisor and change the related code. The valid
range of pcpu_id is 0~65534; INVALID_PCPU_ID is defined as the
invalid pcpu_id for error detection, and BROADCAST_PCPU_ID is the
broadcast pcpu_id used to notify all valid pcpus.
The pcpu_id in struct vcpu and the vcpu_id are still of "int" type;
this will be fixed in another patch.
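A hedged sketch of the new constants and how a caller distinguishes them
(the numeric values are assumptions, not taken from the patch):

#include <stdint.h>

#define INVALID_PCPU_ID		0xffffU	/* error-detection value */
#define BROADCAST_PCPU_ID	0xfffeU	/* requests notification of all valid pcpus */

static void notify_example(uint16_t pcpu_id)
{
	if (pcpu_id == BROADCAST_PCPU_ID) {
		/* notify every online pcpu */
	} else if (pcpu_id != INVALID_PCPU_ID) {
		/* notify the single pcpu identified by pcpu_id */
	} else {
		/* error: the caller passed an invalid id */
	}
}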
V1-->V2:
* Change the type of pcpu_id from uint32_t to uint16_t;
* Define INVALID_PCPU_ID for error detection;
* Define BROADCAST_PCPU_ID to notify all valid pcpus.
V2-->V3:
* Update comments for INVALID_PCPU_ID and BROADCAST_PCPU_ID;
* Update additional pcpu_id usages;
* Convert hexadecimals to unsigned to meet the type of pcpu_id;
* Clean up MIN_PCPU_ID and MAX_PCPU_ID; they will be
defined by configuration.
Note: fix a bug in init_lapic(): the pcpu_id shall be less than 8,
which is a constraint of the current init_lapic() implementation.
Signed-off-by: Xiangyang Wu <xiangyang.wu@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
MISRA C explicitly requires that the controlling expression of
branch statements (if, while, ...) be boolean.
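A small illustration of the pattern (variable name made up):

#include <stdint.h>

static void handle_pending(uint32_t pending)
{
	/* before: if (pending) { ... }  -- the controlling expression is not boolean */
	if (pending != 0U) {
		/* the explicit comparison makes the controlling expression boolean */
	}
}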
Signed-off-by: Huihuang Shi <huihuang.shi@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
- For UEFI boot, allocate memory for trampoline code in ACRN EFI,
and pass the pointer to HV through efi_ctx
- Correct LOW_RAM_SIZE and LOW_RAM_START in Kconfig and bsp_cfg.h
- use trampline_start16_paddr instead of the hardcoded
CONFIG_LOW_RAM_START for initial guest GDT and page tables
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined;
e820_alloc_low_memory() is used in other cases.
In either case, the allocated memory will be marked as E820_TYPE_RESERVED.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
There are data transfers between guest virtual address space (GVA) and
the HV (HVA), for example guest RIP fetching during instruction decoding.
A GVA range is contiguous, but the corresponding GPA may only be
contiguous within a 4K page. This patch adds the copy_from_gva and
copy_to_gva functions, which walk the GVA page by page to avoid such
address breaks when accessing guest memory.
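A hedged sketch of the API shape this adds (the exact parameter list is an
assumption, not quoted from the patch):

#include <stdint.h>

struct vcpu;	/* opaque HV type */

/* copy 'size' bytes from guest-virtual address 'gva' into the HV buffer
 * 'h_ptr'; on a translation fault, the page-fault error code is reported
 * through 'err_code' */
int copy_from_gva(struct vcpu *vcpu, void *h_ptr, uint64_t gva,
		  uint32_t size, uint32_t *err_code);

/* the mirror direction: HV buffer to guest-virtual address */
int copy_to_gva(struct vcpu *vcpu, void *h_ptr, uint64_t gva,
		uint32_t size, uint32_t *err_code);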
v2:
- modify API interface based on new gva2gpa function, err_code added
- combine similar code with inline function _copy_gpa
- change API name from vcopy_from/to_vm to copy_from/to_gva
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Clean up "cpu_secondary_xx" in the symbol/section/function/variable
names in the trampline code.
One item is left: the default C entry is the AP start C entry. Before
ACRN enters S3, the C entry will be updated to the high-level S3 C entry,
so S3 resume will take the S3 resume path instead of the AP startup path.
Signed-off-by: Yin Fengwei <fengwei.yin@intel.com>
Signed-off-by: Zheng Gen <gen.zheng@intel.com>
Acked-by: Anthony Xu <anthony.xu@intel.com>
Acked-by: Eddie Dong <Eddie.dong@intel.com>
In the current code, the sos/uos bsp can only start from 64bit mode.
For the sbl platform:
This patch starts the sos bsp from protected mode by default.
CONFIG_START_VM0_BSP_64BIT is defined to allow starting the sos bsp from
64bit mode: if CONFIG_START_VM0_BSP_64BIT is defined in the config file,
the sos bsp will start from 64bit mode.
This patch starts the uos bsp from real mode, which needs the integration
of the virtual bootloader (vsbl).
For the uefi platform:
This patch sets the sos bsp vcpu mode according to the uefi context.
This patch starts the uos bsp from protected mode, because vsbl is not
ready to be published for the uefi platform yet. After vsbl is ready, we
can change to start the uos bsp from real mode.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
In the current implementation, on the sbl platform, the vm0 bsp
starts from 64bit mode, and the HV needs to prepare an initial
page table for it.
In this patch series, on the sbl platform, the vm0 bsp starts
from non-paging protected mode.
This patch prepares an initial GDT for the vm0 bsp on the sbl
platform.
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Support GVA to GPA translation in different paging modes.
Change the definition of gva2gpa:
- Return value is used for the error status.
- Add a parameter for the error code on a paging fault.
Change the definition of vm_gva2gpa:
- Return value is used for the error status.
- Add a parameter for the error code on a paging fault.
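A hedged sketch of the reworked prototypes (parameter order and types are
assumptions):

#include <stdint.h>

struct vcpu;	/* opaque HV type */

/* both return 0 on success and a non-zero error status otherwise; on a
 * paging fault, the page-fault error code is reported through 'err_code' */
int gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa, uint32_t *err_code);
int vm_gva2gpa(struct vcpu *vcpu, uint64_t gva, uint64_t *gpa, uint32_t *err_code);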
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Use # of paging level to identify paging mode
Signed-off-by: Binbin Wu <binbin.wu@intel.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
Reviewed-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
_Static_assert is only supported starting from the C11 standard;
see N1570 (the C11 manual), section 6.4.1.
Replace _Static_assert with ASSERT.
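An illustration of the replacement pattern (the ASSERT signature is assumed
to take a condition plus a message; a libc assert stands in for the HV macro):

#include <assert.h>
#include <stdint.h>

/* stand-in for the HV's ASSERT macro (the real one logs and halts) */
#define ASSERT(cond, msg)	assert((cond) && (msg))

struct example_regs {
	uint64_t r[8];
};

static void check_layout(void)
{
	/* before (C11 only): _Static_assert(sizeof(struct example_regs) == 64U, "size"); */
	ASSERT(sizeof(struct example_regs) == 64U, "unexpected struct size");
}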
Signed-off-by: huihuang shi <huihuang.shi@intel.com>
According to the comment in hypervisor.h:
" This file includes config header file "bsp_cfg.h" and other
hypervisor used header files.
It should be included in all the source files."
this patch includes all common header files in hypervisor.h and
then removes the now-redundant inclusions elsewhere.
Signed-off-by: Zide Chen <zide.chen@intel.com>
There are data transfers between the guest (GPA) and the HV (HPA),
especially for hypercalls from the guest.
The guest should make sure these GPAs are contiguous, but the HV cannot
assume that the HPAs mapped to these GPAs are contiguous; for example,
after enabling hugetlb, a contiguous GPA range could come from two
different 2M pages.
This patch handles such cases by walking the GPA range page by page in
copy_from_vm and copy_to_vm, as sketched below.
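A minimal sketch of the per-page copy idea (gpa2hva and the page constant
are assumptions standing in for the HV helpers):

#include <stdint.h>
#include <string.h>

#define PAGE_SIZE_4K	4096UL

struct vm;	/* opaque HV type */

/* assumed helper: translate one guest-physical address to a host-virtual address */
extern void *gpa2hva(struct vm *vm, uint64_t gpa);

/* copy 'size' bytes between an HV buffer and a guest-physical range,
 * translating each 4K page separately so that HPA discontinuities across
 * page boundaries do not matter */
static void copy_gpa_by_page(struct vm *vm, void *hv_buf, uint64_t gpa,
			     uint32_t size, int from_vm)
{
	uint8_t *buf = hv_buf;

	while (size > 0U) {
		uint32_t offset = (uint32_t)(gpa & (PAGE_SIZE_4K - 1UL));
		uint32_t len = (uint32_t)PAGE_SIZE_4K - offset;
		void *hva;

		if (len > size) {
			len = size;
		}
		hva = gpa2hva(vm, gpa);
		if (from_vm != 0) {
			(void)memcpy(buf, hva, len);	/* guest -> HV */
		} else {
			(void)memcpy(hva, buf, len);	/* HV -> guest */
		}
		gpa += len;
		buf += len;
		size -= len;
	}
}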
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
V2->V3: Updated variable name: trampoline_code_paddr
V1->V2: changed variable name: init_ap_code_addr
These page tables sit right after the trampoline code, so adjust them according to
the actual loaded address of the trampoline code.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
V2->V3: Fixed the booting issue on the MRB board and removed the restriction
on allocating memory at address 0.
1) Fix the MRB booting issue:
-#define CONFIG_LOW_RAM_SIZE 0x000CF000
+#define CONFIG_LOW_RAM_SIZE 0x00010000
2) Changed e820_alloc_low_memory() to handle the corner case of unaligned e820
entries and enabled it to allocate memory at address 0:
+ length = end > start ? (end - start) : 0;
- /* We don't want the first page */
- if ((length == size) && (start == 0))
- continue;
3) Changed emalloc_for_low_mem() to enable allocating memory at address 0:
- /* We don't want the first page */
- if (start == 0)
- start = EFI_PAGE_SIZE;
V1->V2: moved e820_alloc_low_memory() to guest.c and added the logic to
handle unaligned E820 entries
emalloc_for_low_mem() is used if CONFIG_EFI_STUB is defined;
e820_alloc_low_memory() is used in other cases.
In either case, the allocated memory will be marked as E820_TYPE_RESERVED.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Signed-off-by: Zide Chen <zide.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Add "hugepagesz=1G" and "hugepages=X" to the SOS cmdline. For X, the current
strategy is to make it equal to
e820_mem.total_mem_size - CONFIG_REMAIN_1G_PAGES;
if CONFIG_REMAIN_1G_PAGES is not set, 3 is used by default.
CONFIG_CMA is added to indicate that the cma cmdline option should be used for
the SOS kernel; by default the system uses the hugetlb cmdline option if
CONFIG_CMA is not defined.
Signed-off-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Xu, Anthony <anthony.xu@intel.com>
Before this patch, the guest temporary page tables were hardcoded
at compile time, and the HV copied these page tables to the guest before
guest launch.
This patch creates the temporary page tables at runtime for the 0~4G range,
and creates page tables to cover the new range (511G~511G+16M) required
by trusty.
Signed-off-by: Mingqiang Chi <mingqiang.chi@intel.com>
Reviewed-by: Jason Chen CJ <jason.cj.chen@intel.com>
Acked-by: Eddie Dong <eddie.dong@intel.com>
According to the explanation of pref_address
in Documentation/x86/boot.txt, a relocating bootloader
should attempt to load the kernel at pref_address if possible.
But since a non-relocatable kernel will unconditionally
move itself to run at pref_address, there is no need for the
bootloader to copy the kernel to pref_address.
Signed-off-by: Zheng, Gen <gen.zheng@intel.com>
1. Refine the multiboot related code and move it to /boot.
2. Firmware files and the ramdisk can be stitched into the iasImage;
they will be loaded as multiboot modules.
Signed-off-by: Minggui Cao <minggui.cao@intel.com>