Compare commits

...

29 Commits
master ... v3.2

Author SHA1 Message Date
wenlingz
01e0ff077b version:v3.2
Signed-off-by: wenlingz <wenling.zhang@intel.com>
2023-08-08 10:29:30 +08:00
David B. Kinder
7e28a17a53 doc: update release branch with changed docs
After the release branch is made, we continue with documentation up to
the release date.  This PR includes all those changes for the v3.2
release.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2023-08-07 09:30:50 -07:00
Kunhui-Li
6ad053fc88 configurator: update tauri version
Update the tauri version to 1.4.1 to fix a security vulnerability in a
configurator dependent library.

Tracked-On: #8445
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-07-19 10:40:20 +08:00
Jiaqing Zhao
cee2579e35 dm: virtio-gpu: fix uninitialized memory access
In virtio_gpu_cmd_create_blob() and virtio_gpu_cmd_resource_attach_backing(),
entries may be accessed before initialization. Fix it by using calloc()
instead of malloc() to allocate them.
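For illustration, a minimal sketch of the malloc()-to-calloc() change described here; the struct below is a simplified stand-in for struct virtio_gpu_mem_entry, not the actual device model code:

#include <stdint.h>
#include <stdlib.h>

/* Simplified stand-in for struct virtio_gpu_mem_entry. */
struct mem_entry {
        uint64_t addr;
        uint32_t length;
        uint32_t padding;
};

static struct mem_entry *alloc_entries(uint32_t nr_entries)
{
        /* Before: malloc(nr_entries * sizeof(struct mem_entry)) left the
         * buffer uninitialized, so fields not fully rewritten from the
         * guest-supplied iovecs could be read as stale heap data. */
        return calloc(nr_entries, sizeof(struct mem_entry));  /* zero-filled */
}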

Tracked-On: #8439
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
2023-07-18 13:47:14 +08:00
Wu Zhou
a9860fad05 hv: bugfix: skip invalid ffs64 return value
ffs64() returns INVALID_BIT_INDEX (0xffffU) when it is given a zero
input value. This may happen in calculate_logical_dest_mask() when the
guest tries to write illegal destination IDs to the MSI config
registers of a pt-device. The ffs64() return value is used as a per_cpu
array index, which would cause a page fault.

This patch adds protection for the per_cpu array access, making the
function return zero on an illegal value. Per the logical destination's
definition, a zero logical destination addresses no CPU.
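A minimal sketch of the guard described above; ffs64() and INVALID_BIT_INDEX are the names from the commit message, and the stand-in below is illustrative, not the actual ACRN calculate_logical_dest_mask():

#include <stdint.h>

#define INVALID_BIT_INDEX  0xffffU   /* what ffs64() returns for a zero input */

/* Minimal stand-in for ACRN's ffs64(): index of the lowest set bit. */
static uint16_t ffs64(uint64_t value)
{
        return (value == 0UL) ? INVALID_BIT_INDEX
                              : (uint16_t)__builtin_ctzll(value);
}

static uint64_t safe_dest_mask(uint64_t dest_bits)
{
        uint16_t index = ffs64(dest_bits);

        if (index == INVALID_BIT_INDEX) {
                return 0UL;   /* a zero logical destination addresses no CPU */
        }
        /* 'index' is now safe to use as a per_cpu array index */
        return 1UL << index;
}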

Fixes: 1334349f8
Tracked-On: #8454
Signed-off-by: Wu Zhou <wu.zhou@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-07-14 17:05:17 +08:00
Jiaqing Zhao
b38003b870 misc: sample_application: fix setup_hmi_vm.sh for Ubuntu 22.04
In a chroot environment, the running kernel is the host kernel, so the
uname command cannot get the kernel version in the image. Since the
hmi-vm uses GVT-d, and kernel 5.15 does not support the newer iGPUs in
13th Gen processors, this patch installs the linux-generic-hwe kernel
(5.19) instead of the linux-modules-extra package.

In Ubuntu 22.04, the needrestart package is installed by default to
interactively prompt the user in apt when there is a pending kernel
upgrade or services need to be restarted. This patch removes it.

Also, this patch expands the hmi-vm image by 2 GB to hold the new
kernel and runs 'apt autoremove' after everything is installed.

Tracked-On: #8448
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
2023-07-10 15:04:38 +08:00
David B. Kinder
3a001f9be6 Update sample app scripts for Ubuntu 22.04
The switch to Ubuntu 22.04 for ACRN v3.2 requires a few changes to the
sample application scripts:

- user _apt cannot access debian packages in user directories, so copy
  them to /tmp to install
- HMI-VM image size needs to be bigger (running out of space during
  image update)

Tracked-On: #8352

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
2023-07-10 15:04:38 +08:00
Jiaqing Zhao
1c8396abef dm: vdisplay_sdl: fix command line option parsing
strcasestr() returns NULL if the specified substring is not found,
which should be handled when parsing the command-line options.
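A minimal sketch of the pattern; parse_geometry() is a hypothetical helper for illustration, not the actual vdisplay_sdl code (the real change is shown in the diff further below):

#define _GNU_SOURCE           /* for strcasestr() */
#include <stdio.h>
#include <string.h>

/* Only parse the option when strcasestr() actually found the key. */
static int parse_geometry(const char *str, int *w, int *h)
{
        const char *tmp = strcasestr(str, "geometry=");

        if (tmp == NULL) {    /* substring not found: do not dereference */
                return -1;
        }
        return (sscanf(tmp, "geometry=%dx%d", w, h) == 2) ? 0 : -1;
}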

Tracked-On: #8439
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-07-05 18:13:23 +08:00
Jiaqing Zhao
ce3f31dcb6 dm: passthrough: check romfile path length in command
This patch checks the romfile path length in command line to avoid
possible buffer overflow, maximum path supported is 255 characters.

Tracked-On: #8439
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-07-05 18:13:23 +08:00
Jiaqing Zhao
b6cea37b49 dm: fix uninitialized heap access risk in virtio GPU
This patch fixes potential uninitialized heap use in the virtio_gpu.c file.

Tracked-On: #8439
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-07-05 18:13:23 +08:00
Jiaqing Zhao
31fb783879 dm: fix NULL pointer dereference risk in vdisplay
This patch fixes several issues where NULL pointers could be
dereferenced in the display module.

Tracked-On: #8439
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-07-05 18:13:23 +08:00
Jiaqing Zhao
d98901a890 dm: fix NULL pointer dereference risk in vhost vsock
The pointer 'vsock->vhost_vsock' returned from 'vhost_vsock_init()'
may be NULL and would be dereferenced when calling
'vhost_vsock_set_guest_cid()'.

Tracked-On: #8439
Signed-off-by: Yonghua Huang <yonghua.huang@intel.com>
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-07-05 18:13:23 +08:00
Wu Zhou
1334349f89 hv: bugfix: fix the ptdev irq destination issue
According to SDM Vol3 11.12.10, in x2APIC mode, Logical Destination has
two parts:
  - Cluster ID (LDR[31:16])
  - Logical ID (LDR[15:0])
Cluster ID is a numerical address, while Logical ID is a 16-bit mask.
We can only use the Logical ID to address multiple destinations within
a Cluster.

So we can't just 'or' all the Logical Destinations in the LDR registers
to get one mask for all target pCPUs. This would produce a wrong
destination mask if the target destinations are from different Clusters.

For example, on ADL/RPL the x2APIC LDRs for cores 2-5 are 0x10001,
0x10100, 0x20001, and 0x20100. If we 'or' them together, we get a
Logical Destination of 0x30101, which points to core 6 and another
core. If core 6 is running an RTVM, the irq cannot reach cores 2-5,
causing the guest driver on cores 2-5 to fail.

Guests working in xAPIC mode may use the 'Flat Model' to select an
arbitrary list of CPUs as their irq destination. The HV may not be able
to include them all when transferring them to physical destinations,
because the HW is working in x2APIC mode and can only use the
'Cluster Model'.

There is no perfect fix for this issue. This patch is a simple fix that
keeps only the first Cluster of all target Logical Destinations.
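A minimal sketch of the "keep the first cluster" idea; this is illustrative only, not the actual calculate_logical_dest_mask(). The x2APIC LDR layout assumed is cluster ID in bits 31:16 and logical ID mask in bits 15:0:

#include <stddef.h>
#include <stdint.h>

static uint32_t merge_ldrs_first_cluster(const uint32_t *ldrs, size_t count)
{
        uint32_t cluster, mask = 0U;
        size_t i;

        if (count == 0U)
                return 0U;

        cluster = ldrs[0] & 0xFFFF0000U;          /* adopt the first cluster */
        for (i = 0U; i < count; i++) {
                if ((ldrs[i] & 0xFFFF0000U) == cluster)
                        mask |= ldrs[i] & 0x0000FFFFU;  /* OR logical IDs in it */
                /* destinations in other clusters are dropped (known limitation) */
        }
        /* Example from above: 0x10001, 0x10100, 0x20001, 0x20100 now yield
         * 0x10101 instead of the wrong 0x30101 that a plain OR would give. */
        return cluster | mask;
}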

Tracked-On: #8435
Signed-off-by: Wu Zhou <wu.zhou@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-07-05 17:06:09 +08:00
Jiaqing Zhao
feb1afbc3c dm: passthrough DSM region for ADL-N and RPL
The Data Stolen Memory (DSM) region on Alder Lake-N and Raptor Lake
platforms is indicated by the BDSM register (0xC0 and 0xC4 in PCI config
space), which is the same as on Gen 11 (Tiger Lake) iGPUs. This patch
adds the ADL-N and RPL iGPU device IDs so the DSM region is passed
through properly when using GVT-d.

The PCI device IDs are taken from the i915 kernel driver.
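For illustration, a sketch of how the 64-bit BDSM value could be read and masked to find the DSM base; read_cfg32() and the register macro names here are assumptions for the example, not the actual device model code:

#include <stdint.h>

#define PCIR_BDSM_LO  0xC0U   /* low 32 bits of Base of Data Stolen Memory */
#define PCIR_BDSM_HI  0xC4U   /* high 32 bits */

static uint64_t gpu_dsm_base(uint32_t (*read_cfg32)(uint32_t offset))
{
        uint64_t bdsm = ((uint64_t)read_cfg32(PCIR_BDSM_HI) << 32) |
                        (uint64_t)read_cfg32(PCIR_BDSM_LO);

        return bdsm & ~0xFFFFFULL;   /* bits 63:20 hold the stolen-memory base */
}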

Tracked-On: #8432
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
2023-06-25 10:33:53 +08:00
Jiaqing Zhao
080f43216c hv: sgx: refactor partition_epc()
This patch refactors partition_epc() to make the code easier to
understand; it also fixes the maybe-uninitialized warning from gcc-13.

Initializing 'vm_config' to get_vm_config(0) is okay here, as the
scenario validator ensures that CONFIG_MAX_VM_NUM is always larger than 0.

Tracked-On: #8413
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
2023-06-13 15:43:48 +08:00
Jiaqing Zhao
56446fe366 dm: gvt: add bound check in gvt_init_config()
gvt_init_config() may perform an out-of-range read on host_config; add
a bounds check before accessing it.

Tracked-On: #8382
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Jian Jun Chen <jian.jun.chen@intel.com>
2023-06-13 15:43:48 +08:00
Jiaqing Zhao
0016a64655 misc: life_mngr: fix use-after-free in uart channel
LIST_FOREACH() doesn't allow 'var' to be removed or freed within the
loop, but c_dev is freed inside the loop here. gcc 12 also reports an
error on it. This patch uses the list_foreach_safe() macro instead so
that 'var' can be freed safely within the loop.
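A minimal sketch of the safe-iteration pattern; struct chan_dev and the macro below are simplified stand-ins, not the actual life_mngr code:

#include <stdlib.h>
#include <sys/queue.h>

struct chan_dev {
        LIST_ENTRY(chan_dev) list;
        int fd;
};
LIST_HEAD(chan_head, chan_dev);

/* Cache the next node before the loop body may free 'var'. */
#define list_foreach_safe(var, head, field, tvar)                      \
        for ((var) = LIST_FIRST(head);                                 \
             (var) != NULL && ((tvar) = LIST_NEXT((var), field), 1);   \
             (var) = (tvar))

static void free_all(struct chan_head *head)
{
        struct chan_dev *dev, *tmp;

        /* LIST_FOREACH(dev, head, list) { free(dev); } would read freed
         * memory when advancing to the next node. */
        list_foreach_safe(dev, head, list, tmp) {
                LIST_REMOVE(dev, list);
                free(dev);   /* safe: 'tmp' already points to the next node */
        }
}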

Tracked-On: #8382
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-06-13 15:43:48 +08:00
Kunhui-Li
b1b4bc98af config_tools: fix failure to generate config_summary.rst when CAT is enabled
Currently, the configurator fails to generate the config_summary.rst
file if the user enables "Cache Allocation Technology", because the note
function in rstcloth was replaced by self.doc.note.

So this patch updates the function and its usage to fix this issue.

Fixes: 9c2d0f8 ("config_tools: replace RstCloth library with class.")
Tracked-On: #8422
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-06-09 18:56:47 +08:00
Jiaqing Zhao
2cf158994b config_tools: remove rstcloth package
9c2d0f8858 ("config_tools: replace RstCloth library with class.")
removes all usage of rstcloth in code, but the rstcloth package is not
removed from acrn-configurator and it will still download dependencies
for rstcloth. This patch simply removes it.

Tracked-On: #8395
Signed-off-by: Jiaqing Zhao <jiaqing.zhao@linux.intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-05-16 12:29:04 +08:00
Kunhui-Li
151305e8c9 config_tools: replace RstCloth library with class.
Currently, the Configurator fails to load because it needs to download
RstCloth-related packages. This patch defines a Doc class to replace the
RstCloth library to fix this issue.

Tracked-On: #8395
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-05-11 14:49:59 +08:00
wenlingz
ca99bb58a2 update CODEOWNERS in release_3.2
Tracked-On:#5581
Signed-off-by: wenlingz <wenling.zhang@intel.com>
2023-05-09 01:35:09 +08:00
Kunhui-Li
de188258f6 config_tools: capture the IOError exception
If there is no TURBO_RATIO_LIMIT or TURBO_ACTIVATION_RATIO MSR info on
the target, the board inspector will crash because of an IOError exception.

This patch captures the IOError exception to handle this error.

Tracked-On: #8380
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-05-04 15:10:44 +08:00
Kunhui-Li
d83d0fed47 config_tools: filter non-ascii characters in iomem information
If we use the 5.15 kernel on the RPL-P platform, some iomem info is
dumped with non-ASCII characters, and the tool will raise a decode error.

So this patch filters out non-ASCII characters to handle this error.

Tracked-On: #8388
Signed-off-by: Kunhui-Li <kunhuix.li@intel.com>
Reviewed-by: Junjie Mao <junjie.mao@intel.com>
2023-04-25 18:05:26 +08:00
Chenli Wei
31c0362ac4 misc: fix the summary issue after updating the board xml
The new board xml has added a "module" node under "processors/die",
which causes an issue when we run the summary. This patch uses "//" to
select all "cpu_id" nodes under "processors".

Tracked-On: #8385
Signed-off-by: Chenli Wei <chenli.wei@intel.com>
2023-04-24 23:03:05 +08:00
Min Yang
7315ff3cf9 debian: modify kernel version display in GRUB menuentry
Tracked-On:#8359
Signed-off-by: Min Yang <minx.yang@intel.com>
2023-04-06 14:58:02 +08:00
Min Yang
5d3702af77 debian: add kernel version to GRUB menuentry
1. add kernel version to menuentry "Ubuntu-ACRN Board Inspector"
2. add kernel version and acrn version to menuentry "Ubuntu with ACRN hypervisor"

Tracked-On:#8359
Signed-off-by: Min Yang <minx.yang@intel.com>
2023-04-06 14:58:02 +08:00
Junjie Mao
c34649aafa debian/rules: change default BOARDLIST and SCENARIOLIST to empty
The variables BOARDLIST and SCENARIOLIST serve as filters of XMLs that are
found under the user-given directories, and there is no need to assume any
filter if a user does not specify that explicitly.

Update the default filters to none so that all found XMLs will be used if a
user does not state otherwise.

Tracked-On: #8246
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2022-11-29 22:27:49 +08:00
Junjie Mao
0bde54613b debian/rules: search for XML files only
When searching for scenario XMLs that are saved under the same directory as
a board XML, debian/rules uses the wildcard `*` which includes other
non-XML files. That causes some non-XML files to be considered as scenario
XMLs as well and will cause build-time errors when the build system
attempts to parse them as XMLs.

Change the wildcard expression to `*.xml` to restrict the found files to be
XML.

Tracked-On: #8344
Signed-off-by: Junjie Mao <junjie.mao@intel.com>
2022-11-29 22:27:49 +08:00
Jian Jun Chen
dde388d82c misc: life_mngr: revise try_receive_message_by_uart
Revise try_receive_message_by_uart to read one char from the uart at a
time. With this implementation each char can be checked. This can be
used to address the following 2 problems (see the sketch after this list):
1) noise data: it is found that there is noise data in the uart from
   the guest VM during guest startup.
2) splitting multiple commands
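A minimal sketch of per-character reading; receive_command() and its line-based framing are assumptions for illustration, not the actual try_receive_message_by_uart implementation:

#include <stddef.h>
#include <unistd.h>

static size_t receive_command(int uart_fd, char *buf, size_t size)
{
        size_t n = 0U;
        char c;

        while ((n + 1U) < size && read(uart_fd, &c, 1) == 1) {
                if (c == '\n')                 /* one command per line */
                        break;
                if (c < 0x20 || c > 0x7e)      /* drop noise / non-printable bytes */
                        continue;
                buf[n++] = c;
        }
        buf[n] = '\0';                         /* commands split, noise dropped */
        return n;
}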

Tracked-On: #8111
Signed-off-by: Jian Jun Chen <jian.jun.chen@intel.com>
2022-11-24 09:41:51 +08:00
41 changed files with 2750 additions and 1475 deletions


@ -14,7 +14,7 @@
Makefile @terryzouhao @NanlinXie
/hypervisor/ @dongyaozu @lifeix
/devicemodel/ @ywan170
/devicemodel/ @ywan170 @chejianj
/doc/ @dbkinder @NanlinXie
/misc/debug_tools/acrn_crashlog/ @ywan170 @lifeix
/misc/debug_tools/acrn_log/ @ywan170 @lifeix


@ -1,3 +1,3 @@
MAJOR_VERSION=3
MINOR_VERSION=2
EXTRA_VERSION=-unstable
EXTRA_VERSION=


@ -103,7 +103,7 @@ linux_entry ()
if [ -z "$boot_device_id" ]; then
boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
fi
echo "menuentry '$(echo "$os" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-acrn-board-inspector-$boot_device_id' {"
echo "menuentry '$(echo "$os, with Linux ${version}" | grub_quote)' ${CLASS} \$menuentry_id_option 'gnulinux-acrn-board-inspector-$boot_device_id' {"
# Use ELILO's generic "efifb" when it's known to be available.
# FIXME: We need an interface to select vesafb in case efifb can't be used.


@ -208,7 +208,7 @@ linux_entry ()
boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
fi
title="$(gettext_printf "%s with ACRN hypervisor" "${os}")"
echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'acrn-gnulinux-$boot_device_id' {"
echo "menuentry '$(echo "$title, with Linux ${version} (ACRN ${acrn_version})" | grub_quote)' ${CLASS} \$menuentry_id_option 'acrn-gnulinux-$boot_device_id' {"
if [ -z "${prepare_boot_cache}" ]; then
prepare_boot_cache="$(prepare_grub_to_access_device ${GRUB_DEVICE_BOOT} | grub_add_tab)"
@ -327,7 +327,7 @@ while [ "x${acrn_list}" != "x" ] ; do
else
title="$(gettext_printf "%s with ACRN hypervisor %s" "${OS}" "${acrn_version}")"
fi
echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'acrn-gnulinux-partitioned-${acrn_version}' {"
echo "menuentry '$(echo "$title, with Linux ${version} (ACRN ${acrn_version})" | grub_quote)' ${CLASS} \$menuentry_id_option 'acrn-gnulinux-partitioned-${acrn_version}' {"
message="$(gettext_printf "Loading ACRN hypervisor %s ..." ${acrn_version})"
cat << EOF
echo '$(echo "$message" | grub_quote)'

debian/rules

@ -32,8 +32,8 @@ rwildcard=$(foreach d,$(wildcard $(1:=/*)),$(call rwildcard,$d,$2) $(filter $(su
unquote = $(subst $\",,$1)
# set these variables to define build of certain boards/scenarios, e.g.
ACRN_BOARDLIST ?= whl-ipc-i5 nuc11tnbi5 cfl-k700-i7 tgl-vecow-spc-7100-Corei7
ACRN_SCENARIOLIST ?= partitioned shared hybrid hybrid_rt
ACRN_BOARDLIST ?=
ACRN_SCENARIOLIST ?=
# for now build the debug versions
# set to y for RELEASE build
@ -62,7 +62,7 @@ $(eval $(call unquote,$(shell xmllint --xpath '/acrn-config/@board' $1 2>/dev/nu
$(eval $(if $(board), \
$(eval config_$(board) := $1) \
$(eval boardlist := $(sort $(boardlist) $(board))) \
$(foreach f,$(wildcard $(addprefix $(dir $1),*)), \
$(foreach f,$(wildcard $(addprefix $(dir $1),*.xml)), \
$(if $(strip $(shell xmllint --xpath '/acrn-config/@board' $f 2>/dev/null)),, \
$(if $(subst scenario.xml,,$(notdir $f)), \
$(eval scenario = $(basename $(notdir $f))), \


@ -256,7 +256,7 @@ gvt_init_config(struct pci_gvt *gvt)
/* capability */
pci_set_cfgdata8(gvt->gvt_pi, PCIR_CAP_PTR, gvt->host_config[0x34]);
cap_ptr = gvt->host_config[0x34];
while (cap_ptr != 0) {
while (cap_ptr != 0 && cap_ptr <= PCI_REGMAX - 15) {
pci_set_cfgdata32(gvt->gvt_pi, cap_ptr,
gvt->host_config[cap_ptr]);
pci_set_cfgdata32(gvt->gvt_pi, cap_ptr + 4,


@ -616,6 +616,27 @@ passthru_gpu_dsm_opregion(struct vmctx *ctx, struct passthru_dev *ptdev,
case 0x46c1:
case 0x46c2:
case 0x46c3:
/* Alder Lake-N */
case 0x46d0:
case 0x46d1:
case 0x46d2:
/* Raptor Lake-S */
case 0xa780:
case 0xa781:
case 0xa782:
case 0xa783:
case 0xa788:
case 0xa789:
case 0xa78a:
case 0xa78b:
/* Raptor Lake-U */
case 0xa721:
case 0xa7a1:
case 0xa7a9:
/* Raptor Lake-P */
case 0xa720:
case 0xa7a0:
case 0xa7a8:
/* BDSM register has 64 bits.
* bits 63:20 contains the base address of stolen memory
*/
@ -744,7 +765,11 @@ passthru_init(struct vmctx *ctx, struct pci_vdev *dev, char *opts)
} else if (!strncmp(opt, "romfile=", 8)) {
need_rombar = true;
opt += 8;
strcpy(rom_file, opt);
if (strnlen(opt, PATH_MAX) >= sizeof(rom_file)) {
pr_err("romfile path too long, max supported path length is 255");
return -EINVAL;
}
strncpy(rom_file, opt, sizeof(rom_file));
} else
pr_warn("Invalid passthru options:%s", opt);
}


@ -298,14 +298,16 @@ virtio_vhost_vsock_init(struct vmctx *ctx, struct pci_vdev *dev, char *opts)
virtio_set_modern_bar(&vsock->base, false);
vsock->vhost_vsock = vhost_vsock_init(&vsock->base, 0);
if (!vsock->vhost_vsock) {
pr_err("vhost vosck init failed.");
free(vsock);
return -1;
}
vhost_vsock_set_guest_cid(&vsock->vhost_vsock->vdev, vsock->config.guest_cid);
if (virtio_interrupt_init(&vsock->base, virtio_uses_msix())) {
if (vsock) {
if (vsock->vhost_vsock)
vhost_vsock_deinit(vsock->vhost_vsock);
free(vsock);
}
return -1;
}
return 0;


@ -702,6 +702,12 @@ virtio_gpu_cmd_resource_create_2d(struct virtio_gpu_command *cmd)
}
r2d = (struct virtio_gpu_resource_2d*)calloc(1, \
sizeof(struct virtio_gpu_resource_2d));
if (!r2d) {
pr_err("%s: memory allocation for r2d failed.\n", __func__);
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
goto response;
}
r2d->resource_id = req.resource_id;
r2d->width = req.width;
r2d->height = req.height;
@ -774,15 +780,42 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_command *cmd)
struct virtio_gpu_ctrl_hdr resp;
int i;
uint8_t *pbuf;
struct iovec *iov;
memcpy(&req, cmd->iov[0].iov_base, sizeof(req));
memset(&resp, 0, sizeof(resp));
/*
* 1. Per VIRTIO GPU specification,
* 'cmd->iovcnt' = 'nr_entries' of 'struct virtio_gpu_resource_attach_backing' + 2,
* where 'nr_entries' is number of instance of 'struct virtio_gpu_mem_entry'.
* case 'cmd->iovcnt < 3' means above 'nr_entries' is zero, which is invalid
* and ignored.
* 2. Function 'virtio_gpu_ctrl_bh(void *data)' guarantees cmd->iovcnt >=1.
*/
if (cmd->iovcnt < 2) {
resp.type = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
pr_err("%s : invalid memory entry.\n", __func__);
return;
}
r2d = virtio_gpu_find_resource_2d(cmd->gpu, req.resource_id);
if (r2d) {
r2d->iov = malloc(req.nr_entries * sizeof(struct iovec));
if (r2d && req.nr_entries > 0) {
iov = malloc(req.nr_entries * sizeof(struct iovec));
if (!iov) {
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
goto exit;
}
r2d->iov = iov;
r2d->iovcnt = req.nr_entries;
entries = malloc(req.nr_entries * sizeof(struct virtio_gpu_mem_entry));
entries = calloc(req.nr_entries, sizeof(struct virtio_gpu_mem_entry));
if (!entries) {
free(iov);
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
goto exit;
}
pbuf = (uint8_t*)entries;
for (i = 1; i < (cmd->iovcnt - 1); i++) {
memcpy(pbuf, cmd->iov[i].iov_base, cmd->iov[i].iov_len);
@ -796,13 +829,13 @@ virtio_gpu_cmd_resource_attach_backing(struct virtio_gpu_command *cmd)
r2d->iov[i].iov_len = entries[i].length;
}
free(entries);
resp.type = VIRTIO_GPU_RESP_OK_NODATA;
} else {
pr_err("%s: Illegal resource id %d\n", __func__, req.resource_id);
resp.type = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
}
exit:
cmd->iolen = sizeof(resp);
resp.type = VIRTIO_GPU_RESP_OK_NODATA;
virtio_gpu_update_resp_fence(&cmd->hdr, &resp);
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
}
@ -1166,6 +1199,7 @@ virtio_gpu_cmd_create_blob(struct virtio_gpu_command *cmd)
struct virtio_gpu_ctrl_hdr resp;
int i;
uint8_t *pbuf;
struct iovec *iov;
memcpy(&req, cmd->iov[0].iov_base, sizeof(req));
cmd->iolen = sizeof(resp);
@ -1177,7 +1211,19 @@ virtio_gpu_cmd_create_blob(struct virtio_gpu_command *cmd)
resp.type = VIRTIO_GPU_RESP_ERR_INVALID_RESOURCE_ID;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
return;
}
/*
* 1. Per VIRTIO GPU specification,
* 'cmd->iovcnt' = 'nr_entries' of 'struct virtio_gpu_resource_create_blob' + 2,
* where 'nr_entries' is number of instance of 'struct virtio_gpu_mem_entry'.
* 2. Function 'virtio_gpu_ctrl_bh(void *data)' guarantees cmd->iovcnt >=1.
*/
if (cmd->iovcnt < 2) {
resp.type = VIRTIO_GPU_RESP_ERR_INVALID_PARAMETER;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
pr_err("%s : invalid memory entry.\n", __func__);
return;
}
if ((req.blob_mem != VIRTIO_GPU_BLOB_MEM_GUEST) ||
@ -1200,9 +1246,24 @@ virtio_gpu_cmd_create_blob(struct virtio_gpu_command *cmd)
r2d = (struct virtio_gpu_resource_2d *)calloc(1,
sizeof(struct virtio_gpu_resource_2d));
if (!r2d) {
pr_err("%s : memory allocation for r2d failed.\n", __func__);
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
return;
}
r2d->resource_id = req.resource_id;
entries = malloc(req.nr_entries * sizeof(struct virtio_gpu_mem_entry));
if (req.nr_entries > 0) {
entries = calloc(req.nr_entries, sizeof(struct virtio_gpu_mem_entry));
if (!entries) {
pr_err("%s : memory allocation for entries failed.\n", __func__);
free(r2d);
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
return;
}
pbuf = (uint8_t *)entries;
for (i = 1; i < (cmd->iovcnt - 1); i++) {
memcpy(pbuf, cmd->iov[i].iov_base, cmd->iov[i].iov_len);
@ -1230,7 +1291,16 @@ virtio_gpu_cmd_create_blob(struct virtio_gpu_command *cmd)
r2d->image = pixman_image_create_bits(
r2d->format, r2d->width, r2d->height, NULL, 0);
r2d->iov = malloc(req.nr_entries * sizeof(struct iovec));
iov = malloc(req.nr_entries * sizeof(struct iovec));
if (!iov) {
free(entries);
free(r2d);
resp.type = VIRTIO_GPU_RESP_ERR_OUT_OF_MEMORY;
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
return;
}
r2d->iov = iov;
r2d->iovcnt = req.nr_entries;
for (i = 0; i < req.nr_entries; i++) {
r2d->iov[i].iov_base = paddr_guest2host(
@ -1242,6 +1312,7 @@ virtio_gpu_cmd_create_blob(struct virtio_gpu_command *cmd)
}
free(entries);
}
resp.type = VIRTIO_GPU_RESP_OK_NODATA;
LIST_INSERT_HEAD(&cmd->gpu->r2d_list, r2d, link);
memcpy(cmd->iov[cmd->iovcnt - 1].iov_base, &resp, sizeof(resp));
@ -1552,7 +1623,7 @@ virtio_gpu_vga_render(void *param)
gpu->vga.surf.stride = 0;
/* The below logic needs to be refined */
while(gpu->vga.enable) {
if(gpu->vga.gc->gc_image->vgamode) {
if ((gpu->vga.gc->gc_image->vgamode) && (gpu->vga.dev != NULL)) {
vga_render(gpu->vga.gc, gpu->vga.dev);
break;
}
@ -1801,6 +1872,9 @@ virtio_gpu_deinit(struct vmctx *ctx, struct pci_vdev *dev, char *opts)
int i;
gpu = (struct virtio_gpu *)dev->arg;
if (!gpu)
return;
gpu->vga.enable = false;
pthread_mutex_lock(&gpu->vga_thread_mtx);
@ -1860,10 +1934,8 @@ virtio_gpu_deinit(struct vmctx *ctx, struct pci_vdev *dev, char *opts)
vdpy_deinit(gpu->vdpy_handle);
if (gpu) {
pthread_mutex_destroy(&gpu->mtx);
free(gpu);
}
virtio_gpu_device_cnt--;
}


@ -1369,13 +1369,16 @@ int vdpy_parse_cmd_option(const char *opts)
error = 0;
vdpy.vscrs = calloc(VSCREEN_MAX_NUM, sizeof(struct vscreen));
if (!vdpy.vscrs) {
pr_err("%s, memory allocation for vscrs failed.", __func__);
return -1;
}
vdpy.vscrs_num = 0;
stropts = strdup(opts);
while ((str = strsep(&stropts, ",")) != NULL) {
vscr = vdpy.vscrs + vdpy.vscrs_num;
tmp = strcasestr(str, "geometry=");
if (str && strcasestr(str, "geometry=fullscreen")) {
if ((tmp = strcasestr(str, "geometry=fullscreen")) != NULL) {
snum = sscanf(tmp, "geometry=fullscreen:%d", &vscr->pscreen_id);
if (snum != 1) {
vscr->pscreen_id = 0;
@ -1388,7 +1391,7 @@ int vdpy_parse_cmd_option(const char *opts)
pr_info("virtual display: fullscreen on monitor %d.\n",
vscr->pscreen_id);
vdpy.vscrs_num++;
} else if (str && strcasestr(str, "geometry=")) {
} else if ((tmp = strcasestr(str, "geometry=")) != NULL) {
snum = sscanf(tmp, "geometry=%dx%d+%d+%d",
&vscr->guest_width, &vscr->guest_height,
&vscr->org_x, &vscr->org_y);


@ -1291,6 +1291,10 @@ vga_init(struct gfx_ctx *gc, int io_only)
int port, error;
vd = calloc(1, sizeof(struct vga_vdev));
if (!vd) {
pr_err("%s: out of memory.\n", __func__);
return NULL;
}
bzero(&iop, sizeof(struct inout_port));
iop.name = "VGA";
@ -1326,8 +1330,12 @@ vga_init(struct gfx_ctx *gc, int io_only)
return NULL;
}
vd->vga_ram = malloc(256 * KB);
memset(vd->vga_ram, 0, 256 * KB);
vd->vga_ram = calloc(256, KB);
if (!vd->vga_ram) {
pr_err("%s: failed to allocate vga_ram.\n", __func__);
free(vd);
return NULL;
}
{
static uint8_t palette[] = {


@ -1809,7 +1809,7 @@ LATEX_HIDE_INDICES = NO
# The default value is: NO.
# This tag requires that the tag GENERATE_LATEX is set to YES.
LATEX_SOURCE_CODE = NO
# LATEX_SOURCE_CODE = NO
# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
# bibliography, e.g. plainnat, or ieeetr. See
@ -1891,7 +1891,7 @@ RTF_EXTENSIONS_FILE =
# The default value is: NO.
# This tag requires that the tag GENERATE_RTF is set to YES.
RTF_SOURCE_CODE = NO
# RTF_SOURCE_CODE = NO
#---------------------------------------------------------------------------
# Configuration options related to the man page output
@ -1989,7 +1989,7 @@ DOCBOOK_OUTPUT = docbook
# The default value is: NO.
# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
DOCBOOK_PROGRAMLISTING = NO
# DOCBOOK_PROGRAMLISTING = NO
#---------------------------------------------------------------------------
# Configuration options for the AutoGen Definitions output
@ -2187,7 +2187,7 @@ EXTERNAL_PAGES = YES
# powerful graphs.
# The default value is: YES.
CLASS_DIAGRAMS = YES
# CLASS_DIAGRAMS = YES
# You can define message sequence charts within doxygen comments using the \msc
# command. Doxygen will then run the mscgen tool (see:


@ -3,6 +3,45 @@
Security Advisory
#################
Addressed in ACRN v3.0.2
************************
We recommend that all developers using v3.0.1 or earlier upgrade to this v3.0.2
release (or later), which addresses the following security issue discovered in
previous releases. For v3.1 users, these issues are addressed in the v3.2
release:
-----
- Board_inspector: use executables found under system paths
Using partial executable paths in the board inspector may cause unintended
results when another executable has the same name and is also detectable in
the search paths.
Introduce a wrapper module (`external_tools`) which locates executables
only under system paths such as /usr/bin and /usr/sbin and converts partial
executable paths to absolute ones before executing them via the subprocess
module. All invocations to `subprocess.run` or `subprocess.Popen`
throughout the board inspector are replaced with `external_tools.run`, with
the only exception being the invocation to the legacy board parser which
already uses an absolute path to the current Python interpreter.
**Affected Release:** v3.1, v3.0.1 and earlier
- Add tarfile member sanitization to extractall()
A directory traversal vulnerability in the Python tarfile module extractall() functions
could allow user-assisted remote attackers to overwrite arbitrary files via
a ``..`` (dot dot) sequence in filenames in a tar archive, related to CVE-2001-1267.
(Addresses security issue tracked by CVE-2007-4559)
**Affected Release:** v3.1, v3.0.1 and earlier
- PMU (Performance Monitoring Unit) is passed through to an RTVM only for debug mode
Enabling Pass-through PMU counters to RTVM can cause workload interference
in a release build, so enable PMU passthrough only when building ACRN in
debug mode.
**Affected Release:** v3.1, v3.0.1 and earlier
Addressed in ACRN v3.0.1
************************
We recommend that all developers upgrade to this v3.0.1 release (or later), which


@ -46,7 +46,8 @@ extensions = [
# extlinks provides a macro template
extlinks = {
'acrn-issue': ('https://github.com/projectacrn/acrn-hypervisor/issues/%s', '#')
'acrn-issue': ('https://github.com/projectacrn/acrn-hypervisor/issues/%s', '#'),
'acrn-pr': ('https://github.com/projectacrn/acrn-hypervisor/pull/%s', '#')
}
# use intersphinx linking to link to previous version release notes
@ -133,7 +134,7 @@ language = 'en'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'misc/README.rst' ]
exclude_patterns = ['_build', 'misc/README.rst', 'venv' ]
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'


@ -3312,8 +3312,7 @@ each function:
``@post <post-condition description>``.
12) The brief description of the function return value shall be documented
with the format ``@return <brief description of return value>``.
13) A void-returning function shall be documented with the format
``@return None``.
13) A void-returning function shall not be documented with ``@return``.
14) The comments explaining the actual return values shall be documented with
the format ``@retval <return value> <return value explanation>``.
15) If the description of one element needs to span multiple lines, each line


@ -53,11 +53,6 @@ Before you begin, make sure your machines have the following prerequisites:
- USB keyboard and mouse
- Monitor
- Ethernet cable and Internet access
- A second USB disk with minimum 16GB capacity. Format your USB disk with a
file system that supports files greater than 4GB: extFAT or NTFS, but not
FAT32. We'll use this USB disk to copy files between the development
computer and target system. Instead of a USB drive, you can copy files
between systems over the network using the ``scp`` command.
- Local storage device (NVMe or SATA drive, for example). We recommend having
40GB or more of free space.
@ -135,20 +130,12 @@ To set up the ACRN build environment on the development computer:
cd ~/acrn-work
git clone https://github.com/projectacrn/acrn-hypervisor.git
cd acrn-hypervisor
git checkout master
git checkout release_3.2
cd ..
git clone https://github.com/projectacrn/acrn-kernel.git
cd acrn-kernel
git checkout master
#. Configure git with your name and email address:
.. code-block:: none
git config --global user.name "David Developer"
git config --global user.email "david.developer@company.com"
git checkout release_3.2
.. _gsg-board-setup:
@ -297,48 +284,12 @@ Generate a Board Configuration File
In a few seconds, the build generates a board_inspector Debian package in the
parent (``~/acrn-work``) directory.
#. Copy the Board Inspector Debian package from the development computer to the
target system.
Option 1: Use ``scp``
Use the ``scp`` command to copy the Debian package from your development
computer to the ``/tmp`` directory on the target
system. Replace ``10.0.0.200`` with the target system's IP address you found earlier::
#. Use the ``scp`` command to copy the board inspector Debian package from your
development computer to the ``/tmp`` directory on the target system. Replace
``10.0.0.200`` with the target system's IP address you found earlier::
scp ~/acrn-work/python3-acrn-board-inspector*.deb acrn@10.0.0.200:/tmp
Option 2: Use a USB disk
a. On the development computer, insert the USB disk that you intend to use to
copy files.
#. Ensure that there is only one USB disk inserted by running the following
command:
.. code-block:: bash
ls /media/$USER
Confirm that only one disk name appears. You'll use that disk name in the following steps.
#. Copy the Board Inspector Debian package to the USB disk:
.. code-block:: bash
cd ~/acrn-work/
disk="/media/$USER/"$(ls /media/$USER)
cp -r python3-acrn-board-inspector*.deb "$disk"/
sync && sudo umount "$disk"
#. Remove the USB disk from the development computer and insert it into the target system.
#. Copy the Board Inspector Debian package from the USB disk to the target:
.. code-block:: bash
mkdir -p ~/acrn-work
disk="/media/$USER/"$(ls /media/$USER)
cp -r "$disk"/python3-acrn-board-inspector*.deb /tmp
#. Now that we've got the Board Inspector Debian package on the target system, install it there:
.. code-block:: bash
@ -349,9 +300,9 @@ Generate a Board Configuration File
.. code-block:: bash
reboot
sudo reboot
#. Run the Board Inspector to generate the board configuration file. This
#. Run the Board Inspector on the target system to generate the board configuration file. This
example uses the parameter ``my_board`` as the file name. The Board Inspector
can take a few minutes to scan your target system and create the board XML
file with your target system's information.
@ -359,7 +310,7 @@ Generate a Board Configuration File
.. code-block:: bash
cd ~/acrn-work
sudo board_inspector my_board
sudo acrn-board-inspector my_board
.. note::
@ -373,38 +324,13 @@ Generate a Board Configuration File
ls ./my_board.xml
#. Copy ``my_board.xml`` from the target to the development computer. Again we
have two options:
Option 1: Use ``scp``
From your development computer, use the ``scp`` command to copy the board
configuration file from your target system back to the
``~/acrn-work`` directory on your development computer. Replace
``10.0.0.200`` with the target system's IP address you found earlier::
#. From your development computer, use the ``scp`` command to copy the board
configuration file on your target system back to the ``~/acrn-work``
directory on your development computer. Replace ``10.0.0.200`` with the
target system's IP address you found earlier::
scp acrn@10.0.0.200:~/acrn-work/my_board.xml ~/acrn-work/
Option 2: Use a USB disk
a. Make sure the USB disk is connected to the target.
#. Copy ``my_board.xml`` to the USB disk:
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp ~/acrn-work/my_board.xml "$disk"/
sync && sudo umount "$disk"
#. Insert the USB disk into the development computer.
#. Copy ``my_board.xml`` from the USB disk to the development computer:
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp "$disk"/my_board.xml ~/acrn-work
sync && sudo umount "$disk"
.. _gsg-dev-setup:
.. rst-class:: numbered-step
@ -413,7 +339,7 @@ Generate a Scenario Configuration File and Launch Script
********************************************************
In this step, you will download, install, and use the `ACRN Configurator
<https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.1/acrn-configurator-3.2-unstable.deb>`__
<https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.2/acrn-configurator-3.2.deb>`__
to generate a scenario configuration file and launch script.
A **scenario configuration file** is an XML file that holds the parameters of
@ -429,8 +355,7 @@ post-launched User VM. Each User VM has its own launch script.
.. code-block:: bash
cd ~/acrn-work
wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.1/acrn-configurator-3.2-unstable.deb
cp acrn-configurator-3.2-unstable.deb /tmp
wget https://github.com/projectacrn/acrn-hypervisor/releases/download/v3.2/acrn-configurator-3.2.deb -P /tmp
If you already have a previous version of the acrn-configurator installed,
you should first remove it:
@ -443,7 +368,7 @@ post-launched User VM. Each User VM has its own launch script.
.. code-block:: bash
sudo apt install -y /tmp/acrn-configurator-3.2-unstable.deb
sudo apt install -y /tmp/acrn-configurator-3.2.deb
#. Launch the ACRN Configurator:
@ -541,9 +466,9 @@ post-launched User VM. Each User VM has its own launch script.
#. Confirm that the **VM type** is ``Standard``. In the previous step,
``STD`` in the VM name is short for Standard.
#. Scroll down to **Memory size (MB)** and change the value to ``1024``. For
#. Scroll down to **Memory size (MB)** and change the value to ``2048``. For
this example, we will use Ubuntu 22.04 to boot the post-launched VM.
Ubuntu 22.04 needs at least 1024 MB to boot.
Ubuntu 22.04 needs at least 2048 MB to boot.
#. For **Physical CPU affinity**, select pCPU ID ``0``, then click **+** and
select pCPU ID ``1`` to affine (or pin) the VM to CPU cores 0 and 1. (That will
@ -554,13 +479,17 @@ post-launched User VM. Each User VM has its own launch script.
log in to the User VM later in this guide.
#. For **Virtio block device**, click **+** and enter
``/home/acrn/acrn-work/ubuntu-22.04.1-desktop-amd64.iso``. This parameter
``/home/acrn/acrn-work/ubuntu-22.04.2-desktop-amd64.iso``. This parameter
specifies the VM's OS image and its location on the target system. Later
in this guide, you will save the ISO file to that directory. (If you used
a different username when installing Ubuntu on the target system, here's
where you'll need to change the ``acrn`` username to the username you used.)
.. image:: images/configurator-postvm.png
.. image:: images/configurator_postvm01.png
:align: center
:class: drop-shadow
.. image:: images/configurator-postvm02.png
:align: center
:class: drop-shadow
@ -585,14 +514,14 @@ post-launched User VM. Each User VM has its own launch script.
.. rst-class:: numbered-step
Build ACRN
***************
**********
#. On the development computer, build the ACRN hypervisor:
.. code-block:: bash
cd ~/acrn-work/acrn-hypervisor
debian/debian_build.sh clean && debian/debian_build.sh -c ~/acrn-work/MyConfiguration -b my_board.board -s scenario
debian/debian_build.sh clean && debian/debian_build.sh -c ~/acrn-work/MyConfiguration
The build typically takes a few minutes. When done, the build generates several
Debian packages in the parent (``~/acrn-work``) directory:
@ -611,7 +540,7 @@ Build ACRN
acrn-tools_*.deb
grub-acrn_*.deb
The Debian packages contain the ACRN hypervisor and tools to ease installing
These Debian packages contain the ACRN hypervisor and tools to ease installing
ACRN on the target.
#. Build the ACRN kernel for the Service VM:
@ -642,19 +571,15 @@ Build ACRN
.. code-block:: bash
cd ..
ls *.deb
linux-headers-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
linux-image-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
linux-image-5.15.44-acrn-service-vm-dbg_5.15.44-acrn-service-vm-1_amd64.deb
linux-libc-dev_5.15.44-acrn-service-vm-1_amd64.deb
ls *acrn-service-vm*.deb
linux-headers-5.15.71-acrn-service-vm_5.15.71-acrn-service-vm-1_amd64.deb
linux-image-5.15.71-acrn-service-vm_5.15.71-acrn-service-vm-1_amd64.deb
linux-image-5.15.71-acrn-service-vm-dbg_5.15.71-acrn-service-vm-1_amd64.deb
linux-libc-dev_5.15.71-acrn-service-vm-1_amd64.deb
#. Copy all the necessary files generated on the development computer to the
target system, using one of these two options:
Option 1: Use ``scp``
Use the ``scp`` command to copy files from your development computer to
the target system.
Replace ``10.0.0.200`` with the target system's IP address you found earlier::
#. Use the ``scp`` command to copy files from your development computer to the
target system. Replace ``10.0.0.200`` with the target system's IP address
you found earlier::
sudo scp ~/acrn-work/acrn*.deb \
~/acrn-work/grub*.deb \
@ -662,30 +587,6 @@ Build ACRN
~/acrn-work/MyConfiguration/launch_user_vm_id1.sh \
acrn@10.0.0.200:~/acrn-work
Option 2: by USB disk
a. Insert the USB disk into the development computer and run these commands:
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp ~/acrn-work/acrn*.deb "$disk"/
cp ~/acrn-work/grub*.deb "$disk"/
cp ~/acrn-work/*acrn-service-vm*.deb "$disk"/
cp ~/acrn-work/MyConfiguration/launch_user_vm_id1.sh "$disk"/
sync && sudo umount "$disk"
#. Insert the USB disk you just used into the target system and run these
commands to copy the files locally:
.. code-block:: bash
disk="/media/$USER/"$(ls /media/$USER)
cp "$disk"/acrn*.deb ~/acrn-work
cp "$disk"/grub*.deb ~/acrn-work
cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
cp "$disk"/launch_user_vm_id1.sh ~/acrn-work
sync && sudo umount "$disk"
.. _gsg-install-acrn:
.. rst-class:: numbered-step
@ -699,8 +600,18 @@ Install ACRN
.. code-block:: bash
cd ~/acrn-work
sudo apt install ./acrn*.deb ./grub*.deb
sudo apt install ./*acrn-service-vm*.deb
cp ./acrn*.deb ./grub*.deb ./*acrn-service-vm*.deb /tmp
sudo apt install /tmp/acrn*.deb /tmp/grub*.deb /tmp/*acrn-service-vm*.deb
#. Modify the GRUB menu display using ``sudo vi /etc/default/grub``, comment out the hidden style
and changing the timeout to 5 seconds (leave other lines as they are), as shown::
#GRUB_TIMEOUT_STYLE=hidden
GRUB_TIMEOUT=5
and install the new GRUB menu using::
sudo update-grub
#. Reboot the system:
@ -708,19 +619,25 @@ Install ACRN
reboot
#. Confirm that you see the GRUB menu with the "ACRN multiboot2" entry. Select
it and proceed to booting ACRN. (It may be auto-selected, in which case it
The target system will reboot into the ACRN hypervisor and
start the Ubuntu Service VM.
#. Confirm that you see the GRUB menu with "Ubuntu-ACRN Board Inspector, with 5.15.0-56-generic" entry.
Select it and proceed to booting ACRN. (It may be auto-selected, in which case it
will boot with this option automatically in 5 seconds.)
Example grub menu shown as below:
.. code-block:: console
:emphasize-lines: 5
GNU GRUB version 2.04
────────────────────────────────────────────────────────────────────────────────
Ubuntu
Advanced options for Ubuntu
*Ubuntu GNU/Linux, with ACRN hypervisor
Advanced options for Ubuntu GNU/Linux (with ACRN hypervisor)
Ubuntu-ACRN Board Inspector, with Linux 5.15.71-acrn-service-vm
*Ubuntu-ACRN Board Inspector, with Linux 5.15.0-56-generic
Ubuntu with ACRN hypervisor, with Linux 5.15.71-acrn-service-vm (ACRN 3.2)
Ubuntu with ACRN hypervisor, with Linux 5.15.0-56-generic (ACRN 3.2)
UEFI Firmware Settings
.. _gsg-run-acrn:
@ -732,7 +649,8 @@ Run ACRN and the Service VM
The ACRN hypervisor boots the Ubuntu Service VM automatically.
#. On the target, log in to the Service VM. (It will look like a normal
#. On the target, log in to the Service VM using the ``acrn`` username and
password you set up previously. (It will look like a normal
graphical Ubuntu session.)
#. Verify that the hypervisor is running by checking ``dmesg`` in the Service
@ -753,8 +671,12 @@ The ACRN hypervisor boots the Ubuntu Service VM automatically.
so the Device Model can create a bridge device (acrn-br0) that provides User VMs with
wired network access:
.. warning::
The IP address of Service VM may change after executing the following command.
.. code-block:: bash
cp /usr/share/doc/acrnd/examples/* /etc/systemd/network
sudo systemctl enable --now systemd-networkd
.. _gsg-user-vm:
@ -764,12 +686,12 @@ The ACRN hypervisor boots the Ubuntu Service VM automatically.
Launch the User VM
*******************
#. On the target system, use the web browser to go to the `official Ubuntu website <https://releases.ubuntu.com/jammy/>`__ to
#. On the target system, use the web browser to visit the `official Ubuntu website <https://releases.ubuntu.com/jammy/>`__ and
get the Ubuntu Desktop 22.04 LTS ISO image
``ubuntu-22.04.1-desktop-amd64.iso`` for the User VM. (The same image you
``ubuntu-22.04.2-desktop-amd64.iso`` for the User VM. (The same image you
specified earlier in the ACRN Configurator UI.) Alternatively, instead of
downloading it again, you can use a USB drive or ``scp`` to copy the ISO
image file to the ``~/acrn-work`` directory on the target system.
downloading it again, you could use ``scp`` to copy the ISO
image file from the development system to the ``~/acrn-work`` directory on the target system.
#. If you downloaded the ISO file on the target system, copy it from the
Downloads directory to the ``~/acrn-work/`` directory (the location we said
@ -778,7 +700,7 @@ Launch the User VM
.. code-block:: bash
cp ~/Downloads/ubuntu-22.04.1-desktop-amd64.iso ~/acrn-work
cp ~/Downloads/ubuntu-22.04.2-desktop-amd64.iso ~/acrn-work
#. Launch the User VM:
@ -793,7 +715,7 @@ Launch the User VM
.. code-block:: console
Ubuntu 22.04.1 LTS ubuntu hvc0
Ubuntu 22.04.2 LTS ubuntu hvc0
ubuntu login:
@ -804,16 +726,22 @@ Launch the User VM
.. code-block:: console
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.0-43-generic x86_64)
Welcome to Ubuntu 22.04.2 LTS (GNU/Linux 5.19.0-32-generic x86_64)
* Documentation: https://help.ubuntu.com
* Management: https://landscape.canonical.com
* Support: https://ubuntu.com/advantage
0 packages can be updated.
0 updates are security updates.
Expanded Security Maintenance for Applications is not enabled.
Your Hardware Enablement Stack (HWE) is supported until April 2025.
0 updates can be applied immediately.
Enable ESM Apps to receive additional future security updates.
See https://ubuntu.com/esm or run: sudo pro status
The list of available updates is more than a week old.
To check for new updates run: sudo apt update
The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
@ -833,7 +761,7 @@ Launch the User VM
.. code-block:: console
ubuntu@ubuntu:~$ uname -r
5.15.0-43-generic
5.19.0-32-generic
Then open a new terminal window and use the command to see that the Service
VM is running the ``acrn-kernel`` Service VM image:
@ -841,7 +769,7 @@ Launch the User VM
.. code-block:: console
acrn@vecow:~$ uname -r
5.15.44-acrn-service-vm
5.15.71-acrn-service-vm
The User VM has launched successfully. You have completed this ACRN setup.

Two new binary image files added (54 KiB and 87 KiB); binary content not shown.


@ -76,7 +76,7 @@ Preparing the Target System
===========================
On the target system, reboot and choose the regular Ubuntu image (not the
Multiboot2 choice created when following the Getting Started Guide).
Ubuntu-ACRN Board Inspector choice created when following the Getting Started Guide).
1. Log in as the **acrn** user. We'll be making ssh connections to the target system
later in these steps, so install the ssh server on the target system using::
@ -108,7 +108,7 @@ As a normal (e.g., **acrn**) user, follow these steps:
1. Install some additional packages in your development computer used for
building the sample application::
sudo apt install -y cloud-guest-utils schroot kpartx qemu-kvm
sudo apt install -y cloud-guest-utils schroot kpartx qemu-utils
#. Check out the ``acrn-hypervisor`` source code branch (already cloned from the
``acrn-hypervisor`` repo when you followed the :ref:`gsg`). We've tagged a
@ -117,7 +117,7 @@ As a normal (e.g., **acrn**) user, follow these steps:
cd ~/acrn-work/acrn-hypervisor
git fetch --all
git checkout master
git checkout release_3.2
#. Build the ACRN sample application source code::
@ -189,10 +189,10 @@ Make the RT_VM Image
.. code-block:: console
linux-headers-5.15.44-rt46-acrn-kernel-rtvm+_5.15.44-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-image-5.15.44-rt46-acrn-kernel-rtvm+-dbg_5.15.44-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-image-5.15.44-rt46-acrn-kernel-rtvm+_5.15.44-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-libc-dev_5.15.44-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-headers-5.15.71-rt46-acrn-kernel-rtvm+_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-image-5.15.71-rt46-acrn-kernel-rtvm+-dbg_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-image-5.15.71-rt46-acrn-kernel-rtvm+_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
linux-libc-dev_5.15.71-rt46-acrn-kernel-rtvm+-1_amd64.deb
#. Make the RT VM image::
@ -394,43 +394,21 @@ Build the ACRN Hypervisor and Service VM Images
cd ~/acrn-work/acrn-hypervisor
make clean
make BOARD=~/acrn-work/MyConfiguration/my_board.board.xml SCENARIO=~/acrn-work/MyConfiguration/scenario.xml
debian/debian_build.sh clean && debian/debian_build.sh -c ~/acrn-work/MyConfiguration
The build typically takes about a minute. When done, the build
generates a Debian package in the build directory with your board and
working folder name.
generates several Debian packages in the build directory. Only one
with your board and working folder name among these Debian packages
is different from genetated in the Getting Started Guide. So we only
need to copy and reinstall one Debian package to the target system.
This Debian package contains the ACRN hypervisor and tools for
installing ACRN on the target.
#. Build the ACRN kernel for the Service VM (the sample application
requires a newer version of the Service VM than generated in the
Getting Started Guide, so we'll need to generate it again) using a tagged
version of the ``acrn-kernel``::
cd ~/acrn-work/acrn-kernel
git fetch --all
git checkout acrn-v3.1
make distclean
cp kernel_config_service_vm .config
make olddefconfig
make -j $(nproc) deb-pkg
The kernel build can take 15 minutes or less on a fast computer, but
could take one to two hours depending on the performance of your development
computer. When done, the build generates four Debian packages in the
directory above the build root directory:
.. code-block:: console
$ ls ../*acrn-service*.deb
linux-headers-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
linux-image-5.15.44-acrn-service-vm_5.15.44-acrn-service-vm-1_amd64.deb
linux-image-5.15.44-acrn-service-vm-dbg_5.15.44-acrn-service-vm-1_amd64.deb
linux-libc-dev_5.15.44-acrn-service-vm-1_amd64.deb
#. Use the ACRN kernel for the Service VM already on your development computer
when you followed the Getting Started Guide (the sample application
requires the same version of the Service VM as generated in the
Getting Started Guide, so no need to generate it again).
.. rst-class:: numbered-step
@ -439,12 +417,9 @@ Copy Files from the Development Computer to Your Target System
1. Copy all the files generated on the development computer to the
target system. This includes the sample application executable files,
HMI_VM and RT_VM images, Debian packages for the Service VM and
Hypervisor, launch scripts, and the iasl tool built following the
Getting Started Guide. You can use ``scp`` to copy across the local network,
or use a USB stick:
HMI_VM and RT_VM images, Debian packages for ACRN Hypervisor,
and the launch scripts.
Option 1: use ``scp`` to copy files over the local network
Use ``scp`` to copy files from your development computer to the
``~/acrn-work`` directory on the target (replace the IP address used in
this example with the target system's IP address you found earlier)::
@ -452,78 +427,46 @@ Copy Files from the Development Computer to Your Target System
cd ~/acrn-work
scp acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img \
acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb \
*acrn-service-vm*.deb MyConfiguration/launch_user_vm_id*.sh \
acpica-unix-20210105/generate/unix/bin/iasl \
acrn-hypervisor*.deb \
MyConfiguration/launch_user_vm_id*.sh \
acrn@10.0.0.200:~/acrn-work
Then on the target system, run these commands::
sudo cp ~/acrn-work/iasl /usr/sbin
sudo ln -s /usr/sbin/iasl /usr/bin/iasl
Option 2: use a USB stick to copy files
Because the VM image files are large, format your USB stick with a file
system that supports files greater than 4GB: extFAT or NTFS, but not FAT32.
Insert a USB stick into the development computer and run these commands::
disk="/media/$USER/"$(ls /media/$USER)
cd ~/acrn-work
cp acrn-hypervisor/misc/sample_application/image_builder/build/*_vm.img rt_vm.img "$disk"
cp acrn-hypervisor/build/acrn-my_board-MyConfiguration*.deb "$disk"
cp *acrn-service-vm*.deb "$disk"
cp MyConfiguration/launch_user_vm_id*.sh "$disk"
cp acpica-unix-20210105/generate/unix/bin/iasl "$disk"
sync && sudo umount "$disk"
Move the USB stick you just used to the target system and run
these commands to copy the files locally::
disk="/media/$USER/"$(ls /media/$USER)
cp "$disk"/*_vm.img ~/acrn-work
cp "$disk"/acrn-my_board-MyConfiguration*.deb ~/acrn-work
cp "$disk"/*acrn-service-vm*.deb ~/acrn-work
cp "$disk"/launch_user_vm_id*.sh ~/acrn-work
sudo cp "$disk"/iasl /usr/sbin/
sudo ln -s /usr/sbin/iasl /usr/bin/iasl
sync && sudo umount "$disk"
.. rst-class:: numbered-step
Install and Run ACRN on the Target System
*****************************************
1. On your target system, install the ACRN Debian package and ACRN
1. On the target system, configure your network according to instruction of below link:
https://www.ubuntupit.com/how-to-configure-and-use-network-bridge-in-ubuntu-linux/
#. On your target system, install the ACRN Debian package and ACRN
kernel Debian packages using these commands::
cd ~/acrn-work
cp ./acrn-hypervisor*.deb ./*acrn-service-vm*.deb /tmp
sudo apt purge acrn-hypervisor
sudo apt install ./acrn-my_board-MyConfiguration*.deb
sudo apt install ./*acrn-service-vm*.deb
sudo apt install /tmp/acrn-hypervisor*.deb /tmp/*acrn-service-vm*.deb
#. Enable networking services for sharing with the HMI User VM::
#. Enable networking services for sharing with the HMI User VM:
.. warning::
The IP address of Service VM may change after executing the following command.
.. code-block:: bash
cp /usr/share/doc/acrnd/examples/* /etc/systemd/network
sudo systemctl enable --now systemd-networkd
#. Reboot the system::
reboot
#. Confirm that you see the GRUB menu with the "ACRN multiboot2" entry. Select
it and press :kbd:`Enter` to proceed to booting ACRN. (It may be
auto-selected, in which case it will boot with this option automatically in 5
seconds.)
#. The target system will boot automatically into the ACRN hypervisor and
launch the Service VM.
.. image:: images/samp-image016.png
:class: drop-shadow
:align: center
This will boot the ACRN hypervisor and launch the Service VM.
#. Log in to the Service VM (using the target's keyboard and HDMI monitor) using
Log in to the Service VM (using the target's keyboard and HDMI monitor) using
the ``acrn`` username.
#. Find the Service VM's IP address (the first IP address shown by this command):
@ -606,7 +549,7 @@ Install and Run ACRN on the Target System
ubuntu login: root
Password:
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.44-rt46-acrn-kernel-rtvm+ x86_64)
Welcome to Ubuntu 22.04.1 LTS (GNU/Linux 5.15.71-rt46-acrn-kernel-rtvm+ x86_64)
. . .


@ -66,47 +66,60 @@ level includes the activities described in the lower levels.
.. _ASRock iEP-9010E:
https://www.asrockind.com/en-gb/iEP-9010E
+------------------------+----------------------------+-------------------------------------------------------------------------------------------------------------------------------------------+-------------------+
.. _ASUS PN64-E1:
https://www.asus.com/displays-desktops/mini-pcs/pn-series/asus-expertcenter-pn64-e1/
.. important::
We recommend you use a system configuration that includes a serial port.
.. # Note For easier editing, I'm using unicode non-printing spaces in this table to help force the width of the first two columns to help prevent wrapping (using &nbsp; isn't compact enough)
+------------------------+---------------------------------+-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| | | .. rst-class:: |
| | | centered |
| | | |
| | | ACRN Version |
| | +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Intel Processor Family | Tested Products | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | centered | centered | centered | centered | centered | centered | centered | centered |
| | | | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 | v3.0 | v3.1 |
+========================+============================+===================+===================+===================+===================+===================+===================+===================+===================+
| Alder Lake | | `ASRock iEPF-9010S-EY4`_,| | .. rst-class:: | .. rst-class:: |
| | +-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Intel Processor Family | Tested Products                 | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| Code Name | | centered | centered | centered | centered | centered | centered | centered | centered | centered |
| | | | | | | | | | | |
| | | v1.0 | v1.6.1 | v2.0 | v2.5 | v2.6 | v2.7 | v3.0 | v3.1 | v3.2 |
+========================+=================================+===================+===================+===================+===================+===================+===================+===================+===================+===================+
| Raptor Lake | `ASUS PN64-E1`_ | | .. rst-class:: |
| | | | centered |
| | | | |
| | | | Community |
+------------------------+---------------------------------+-----------------------------------------------------------------------------------------------------------------------+-------------------+-------------------+-------------------+
| Alder Lake | | `ASRock iEPF-9010S-EY4`_, | | .. rst-class:: | .. rst-class:: |
| | | `ASRock iEP-9010E`_ | | centered | centered |
| | | | | |
| | | | Release | Community |
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+
| Tiger Lake | `Vecow SPC-7100`_ | | .. rst-class:: |
+------------------------+---------------------------------+-----------------------------------------------------------------------------------------------------------------------+-------------------+---------------------------------------+
| Tiger Lake | `Vecow SPC-7100`_ | | .. rst-class:: |
| | | | centered |
| | | | |
| | | | Maintenance |
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
| Tiger Lake | `NUC11TNHi5`_ | | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | | centered | centered | centered |
| | | | | | | | |
| | | | | | Release | Maintenance | Community |
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------+---------------------------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | | centered | centered | centered |
| | | | | | | |
| | | | | Release | Maintenance | Community |
+------------------------+----------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-----------------------------------------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
+------------------------+---------------------------------+-----------------------------------------------------------+-------------------+---------------------------------------+-----------------------------------------------------------+
| Tiger Lake | `NUC11TNHi5`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+----------------------------+-------------------+-------------------+---------------------------------------+-------------------------------------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
+------------------------+---------------------------------+---------------------------------------+-------------------+-------------------+-------------------+-------------------+-----------------------------------------------------------+
| Whiskey Lake | `WHL-IPC-I5`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+---------------------------------+-------------------+-------------------+-------------------+-------------------+-------------------+-------------------------------------------------------------------------------+
| Kaby Lake | `NUC7i7DNHE`_ | | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | | centered | centered | centered |
| | | | | | |
| | | | Release | Maintenance | Community |
+------------------------+---------------------------------+-------------------+-------------------+---------------------------------------+---------------------------------------------------------------------------------------------------+
| Apollo Lake | | `NUC6CAYH`_, | .. rst-class:: | .. rst-class:: | .. rst-class:: |
| | | `UP2-N3350`_, | centered | centered | centered |
| | | `UP2-N4200`_, | | | |
| | | `UP2-x5-E3940`_ | Release | Maintenance | Community |
+------------------------+----------------------------+-------------------+-------------------+-----------------------------------------------------------------------------------------------------------------------+
+------------------------+---------------------------------+-------------------+-------------------+-------------------------------------------------------------------------------------------------------------------------------------------+
* **Release**: New ACRN features are complete and tested for the listed product.
This product is recommended for this ACRN version. Support for older products
@ -136,4 +149,4 @@ you will use to configure the ACRN hypervisor, as described in the
acrn-user@lists.projectacrn.org mailing list on your findings about
unlisted products.
.. # vim: tw=200
.. # vim: tw=300

View File

@ -0,0 +1,56 @@
.. _release_notes_3.0.2:
ACRN v3.0.2 (Nov 2022)
######################
We are pleased to announce the release of the Project ACRN hypervisor
version 3.0.2 with hot fixes to the v3.0 release.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open-source platform. See the
:ref:`introduction` introduction for more information.
All project ACRN source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can download this source code either as a zip or
tar.gz file (see the `ACRN v3.0.2 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v3.0.2>`_) or
use Git ``clone`` and ``checkout`` commands::

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v3.0.2
The project's online technical documentation is also tagged to
correspond with a specific release: generated v3.0 documents can be
found at https://projectacrn.github.io/3.0/. Documentation for the
latest development branch is found at https://projectacrn.github.io/latest/.
ACRN v3.0.2 requires Ubuntu 20.04 (as does v3.0). Follow the instructions in the
:ref:`gsg` to get started with ACRN.
What's New in v3.0.2
********************
Passthrough PMU (performance monitor unit) to user VM only in debug builds
ACRN v2.6 introduced PMU passthrough to RT VMs that have LAPIC passthrough
enabled. This is useful for performance profiling at development time but can
cause workload interference in a production build. PMU passthrough is now
enabled only for hypervisor debug builds.
Added tarfile member sanitization to Python tarfile package extractall() calls
A vulnerability in the ACRN Configurator is patched, where files extracted
from a maliciously crafted tarball could be written outside the
target directory and cause unsafe behavior.
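The fix follows the common pattern of validating each archive member's resolved
path before calling ``extractall()``. A generic sketch of that pattern (not the
Configurator's exact code; names are illustrative):

.. code-block:: python

   import os
   import tarfile

   def safe_extractall(tar: tarfile.TarFile, dest: str) -> None:
       """Refuse to extract members that would land outside dest."""
       dest = os.path.realpath(dest)
       for member in tar.getmembers():
           target = os.path.realpath(os.path.join(dest, member.name))
           if os.path.commonpath([dest, target]) != dest:
               raise ValueError(f"blocked path traversal: {member.name}")
       tar.extractall(dest)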
Run executables with absolute paths in board inspector
Using partial executable paths in the board inspector may cause unintended
results when another executable has the same name and is found via PATH
settings. The board inspector now uses absolute paths to executables.
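A minimal sketch of that approach (illustrative only, not the actual board
inspector code):

.. code-block:: python

   import shutil
   import subprocess

   def run_tool(name: str, *args: str) -> str:
       """Resolve a tool to an absolute path in trusted directories, then run it."""
       exe = shutil.which(name, path="/usr/sbin:/usr/bin:/sbin:/bin")
       if exe is None:
           raise FileNotFoundError(f"{name} not found in trusted directories")
       return subprocess.run([exe, *args], capture_output=True, text=True,
                             check=True).stdout

   # Example: run_tool("lspci", "-vv")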
See :ref:`release_notes_3.0` and :ref:`release_notes_3.0.1` for additional release information.

View File

@ -0,0 +1,188 @@
.. _release_notes_3.2:
ACRN v3.2 (Aug 2023)
####################
We are pleased to announce the release of the Project ACRN hypervisor
version 3.2.
ACRN is a flexible, lightweight reference hypervisor that is built with
real-time and safety-criticality in mind. It is optimized to streamline
embedded development through an open-source platform. See the
:ref:`introduction` introduction for more information.
All project ACRN source code is maintained in the
https://github.com/projectacrn/acrn-hypervisor repository and includes
folders for the ACRN hypervisor, the ACRN device model, tools, and
documentation. You can download this source code either as a zip or
tar.gz file (see the `ACRN v3.2 GitHub release page
<https://github.com/projectacrn/acrn-hypervisor/releases/tag/v3.2>`_) or
use Git ``clone`` and ``checkout`` commands::

   git clone https://github.com/projectacrn/acrn-hypervisor
   cd acrn-hypervisor
   git checkout v3.2
The project's online technical documentation is also tagged to
correspond with a specific release: generated v3.2 documents can be
found at https://projectacrn.github.io/3.2/. Documentation for the
latest development branch is found at https://projectacrn.github.io/latest/.
ACRN v3.2 requires Ubuntu 22.04. Follow the instructions in the
:ref:`gsg` to get started with ACRN.
What's New in v3.2
******************
Enabling New Generation Intel® Processors
ACRN v3.2 release now supports 12th Generation Intel® Atom N-Series Processors
(formerly code named Alder Lake N) and 13th Generation Intel® Core™ Mobile and
Desktop Processors (formerly code named Raptor Lake) with real-time SKUs.
Hypervisor-Managed Processor Performance Policy Controls
The ACRN hypervisor Configurator now provides processor performance policy
control for CPU frequency if the system supports hardware-controlled
performance states (HWP). This ensures that loaded CPUs can run at least at
their guaranteed frequency level.
New Debianization Solution for ACRN
The v3.2 release provides a standardized approach for ACRN debianization.
We provide an option to build each component as a separate Debian package and
allow users to select the binary to deploy at package installation time. Users
can also reselect the binary by reconfiguring the installed package.
Service VM Upgraded to use Ubuntu 22.04
The v3.2 release upgrades the Service VM OS from Ubuntu 20.04 to 22.04.
Upgrading to v3.2 from Previous Releases
****************************************
We recommend you generate a new board XML for your target system with the v3.2
Board Inspector. You should also use the v3.2 Configurator to generate a new
scenario XML file and launch scripts. Scenario XML files and launch scripts
created by previous ACRN versions will not work with the v3.2 ACRN hypervisor
build process and could produce unexpected errors during the build.
Given the scope of changes for the v3.2 release, we have recommendations for how
to upgrade from prior ACRN versions:
1. Start fresh from our :ref:`gsg`. This is the best way to ensure you have a
v3.2-ready board XML file from your target system and generate a new scenario
XML and launch scripts from the new ACRN Configurator that are consistent and
will work for the v3.2 build system.
#. Use the :ref:`upgrader tool <upgrading_configuration>` to attempt upgrading
your configuration files that worked with prior releases. You'll need the
matched pair of scenario XML and launch XML files from a prior configuration,
and use them to create a new merged scenario XML file. See
:ref:`upgrading_configuration` for details.
#. Manually edit your older scenario XML and launch XML files to make them
compatible with v3.2. This is not our recommended approach.
Here are some additional details about upgrading to the v3.2 release.
Generate New Board XML
======================
Board XML files, generated by ACRN Board Inspector, contain board information
that is essential for building the ACRN hypervisor and setting up User VMs.
Compared to previous versions, ACRN v3.2 adds the following information to the
board XML file for supporting new features and fixes:
* Add CPU frequency information. (See :acrn-pr:`8174`)
* Get connected displays and add them as child nodes to a corresponding graphics
card. (See :acrn-pr:`8230`)
* Add bdf information to an ioport serial controller. (See :acrn-pr:`8237`)
* Stop running and report an error if VMD is enabled in the BIOS setting. (See
:acrn-pr:`8328`)
* Report an error if a USB device is unplugged or disconnected while extracting
USB device information. (See :acrn-pr:`8326`)
* Handle PCI functions with an undefined header layout. (See :acrn-pr:`8233`)
See the :ref:`board_inspector_tool` documentation for a complete list of steps
to install and run the tool.
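As a quick reminder of the typical flow (a sketch assuming the Debian-packaged
v3.2 Board Inspector is already installed on the target system; ``my_board`` is
a placeholder name):

.. code-block:: bash

   # On the target system
   sudo board_inspector.py my_board
   # Produces my_board.xml in the current directory; copy it to the development
   # machine for use with the v3.2 Configurator.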
Update Configuration Options
============================
As explained in the :ref:`upgrading_configuration` document, we provide a
tool that can assist in upgrading your existing pre-v3.2 scenario XML files into
the new merged v3.2 format. From there, you can use the v3.2 ACRN Configurator UI
to open the upgraded scenario file for viewing and further editing, in case the
upgrader tool lost meaningful data during the conversion.
The ACRN Configurator adds the following features and fixes to improve the user
experience:
* Support virtio GPU configuration. (See :acrn-pr:`8248`)
* Determine SSRAM_ENABLED value automatically. (See :acrn-pr:`8232`)
* Add "CPU performance policy type" option. (See :acrn-pr:`8174`)
* Add "exclusively owns physical CPUs" checkbox to pre-launched and
post-launched VMs. (See :acrn-pr:`8290`)
* Generate ``config_summary.rst`` when saving scenario XML and launch scripts.
(See :acrn-pr:`8309`)
See the :ref:`scenario-config-options` documentation for details about all the
available configuration options in the new Configurator.
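For reference, a sketch of installing and launching the v3.2 Configurator on the
development machine (assuming you already have the ``acrn-configurator`` Debian
package from the v3.2 release assets or your own build):

.. code-block:: bash

   sudo apt install -y ./acrn-configurator*.deb
   acrn-configurator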
Document Updates
****************
Here are some of the more significant documentation updates from the v3.1 release:
.. rst-class:: rst-columns2
* :ref:`asa`
* :ref:`hld-security`
* :ref:`hv-cpu-virt`
* :ref:`gsg`
* :ref:`GSG_sample_app`
* :ref:`release_notes_3.2`
* :ref:`release_notes_3.0.2`
* :ref:`acrn_configurator_tool`
* :ref:`acrn_doc`
* :ref:`enable_multiple_displays`
* :ref:`acrn-dm_parameters-and-launch-script`
* :ref:`scenario-config-options`
Fixed Issues Details
********************
.. comment example item
   - :acrn-issue:`5626` - Host Call Trace once detected
- :acrn-issue:`8435` - Post-launch RTVM and WaaG running simultaneously will cause Windows kernel crash
- :acrn-issue:`8445` - Fix security vulnerability for configurator dependent library
- :acrn-issue:`8454` - Fail to boot RTVM or UaaG when passthru Ethernet controller
- :acrn-issue:`8448` - The script in Sample Application Guide is not working
- :acrn-issue:`8352` - Sample app fails to build for v3.2 RC1
- :acrn-issue:`8439` - Possible null pointer dereference/uninitialized variable/buffer overflow in code
- :acrn-issue:`8432` - Flickering screen when passing ADL-N and RPL-P platforms
- :acrn-issue:`8413` - hypervisor: 'vm_config' may be used uninitialized [-Werror=maybe-uninitialized]
- :acrn-issue:`8382` - Failed to build with gcc 12
- :acrn-issue:`8422` - Failed to generate config summary and launch scripts if CAT is enabled in configurator
- :acrn-issue:`8395` - Configurator load fails because it needs to download RstCloth packages.
- :acrn-issue:`8380` - Cannot generate XML file for target system
- :acrn-issue:`8388` - Fail to generate board XML because of non-ASCII characters
- :acrn-issue:`8385` - Failed to generate config_summary.rst when board.xml has "module" node under the "processors/die"
- :acrn-issue:`8359` - GSG: change the method of checking kernel version of grub menuentry
- :acrn-issue:`8246` - Debianization improvement
- :acrn-issue:`8344` - debian/debian_build.sh fails when a work folder contains files other than XML
- :acrn-issue:`8111` - Sync between Service VM OS and RTVM failed when startup hence life_mngr cannot work
- :acrn-issue:`8315` - Invoking a command with partial executable path in Board Inspector Python file
- :acrn-issue:`8274` - Wrong kernel cmdline added in grub menu when install acrn-hypervisor
Known Issues
************
- :acrn-issue:`6631` - Kata support is broken since v2.7
- :acrn-issue:`6978` - openstack failed since ACRN v2.7
- :acrn-issue:`7827` - Pre_launched standard VMs cannot share CPU with Service VM in configurator
- :acrn-issue:`8202` - HV fail to boot acrn on QEMU
- :acrn-issue:`8471` - PTM enabling failure on i225 NIC
- :acrn-issue:`8472` - Failed to clear memory for post-launched standard VM
- :acrn-issue:`8473` - Missing VirtIO GPU Windows VF driver

View File

@ -36,7 +36,6 @@ import logging
import mmap
import os
import re
import sre_constants
import sys
import traceback
@ -69,7 +68,7 @@ def config_import_file(filename):
regex = gd['regex']
try:
r = re.compile(regex, re.MULTILINE)
except sre_constants.error as e:
except re.error as e:
logging.error("%s: bytes %d-%d: bad regex: %s",
filename, m.start(), m.end(), e)
raise
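``sre_constants`` is deprecated in recent Python versions, and its ``error`` is
the same class exposed as ``re.error``, so catching ``re.error`` directly is the
forward-compatible spelling. A standalone sketch of the same pattern, separate
from the diff above:

   import logging
   import re

   def compile_or_raise(pattern: str):
       """Compile a regex, logging the parse error before re-raising it."""
       try:
           return re.compile(pattern, re.MULTILINE)
       except re.error as e:   # formerly sre_constants.error
           logging.error("bad regex %r: %s", pattern, e)
           raise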

View File

@ -291,7 +291,8 @@ body {
counter-reset: step-count;
}
div.numbered-step h2::before {
div.numbered-step h2::before,
section.numbered-step h2::before {
counter-increment: step-count;
content: counter(step-count);
background: #cccccc;

View File

@ -53,17 +53,35 @@ static struct acrn_vcpu *is_single_destination(struct acrn_vm *vm, const struct
static uint32_t calculate_logical_dest_mask(uint64_t pdmask)
{
uint32_t dest_mask = 0UL;
uint32_t dest_cluster_id = 0U, cluster_id, logical_id_mask = 0U;
uint64_t pcpu_mask = pdmask;
uint16_t pcpu_id;
pcpu_id = ffs64(pcpu_mask);
while (pcpu_id < MAX_PCPU_NUM) {
if (pcpu_id < MAX_PCPU_NUM) {
/* Guests working in xAPIC mode may use 'Flat Model' to select an
* arbitrary list of CPUs. But as the HW is working in x2APIC mode and can only
* use 'Cluster Model', destination mask can only be assigned to pCPUs within
* one Cluster. So some pCPUs may not be included.
* Here we use the first Cluster of all the requested pCPUs.
*/
dest_cluster_id = per_cpu(lapic_ldr, pcpu_id) & X2APIC_LDR_CLUSTER_ID_MASK;
do {
bitmap_clear_nolock(pcpu_id, &pcpu_mask);
dest_mask |= per_cpu(lapic_ldr, pcpu_id);
pcpu_id = ffs64(pcpu_mask);
cluster_id = per_cpu(lapic_ldr, pcpu_id) & X2APIC_LDR_CLUSTER_ID_MASK;
if (cluster_id == dest_cluster_id) {
logical_id_mask |= (per_cpu(lapic_ldr, pcpu_id) & X2APIC_LDR_LOGICAL_ID_MASK);
} else {
pr_warn("The cluster ID of pCPU %d is %d which differs from that (%d) of "
"the previous cores in the guest logical destination.\n"
"Ignore that pCPU in the logical destination for physical interrupts.",
pcpu_id, cluster_id >> 16U, dest_cluster_id >> 16U);
}
return dest_mask;
pcpu_id = ffs64(pcpu_mask);
} while (pcpu_id < MAX_PCPU_NUM);
}
return (dest_cluster_id | logical_id_mask);
}
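/*
 * Worked example with assumed values (for illustration only): if the guest's
 * destination resolves to two pCPUs whose lapic_ldr values are 0x00010004 and
 * 0x00010008 (both in cluster 1, logical IDs bit 2 and bit 3), the loop above
 * accumulates logical_id_mask = 0xcU and the function returns 0x0001000cU.
 * A pCPU whose LDR reports a different cluster ID only triggers the pr_warn()
 * and is left out of the returned destination.
 */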
/**

View File

@ -57,17 +57,22 @@ static int32_t partition_epc(void)
uint16_t vm_id = 0U;
uint32_t psec_id = 0U, mid = 0U;
uint64_t psec_addr = 0UL, psec_size = 0UL;
uint64_t vm_request_size = 0UL, free_size = 0UL, alloc_size;
struct acrn_vm_config *vm_config;
uint64_t free_size = 0UL, alloc_size;
struct acrn_vm_config *vm_config = get_vm_config(vm_id);
uint64_t vm_request_size = vm_config->epc.size;
int32_t ret = 0;
while ((psec_id < MAX_EPC_SECTIONS) && (vm_id < CONFIG_MAX_VM_NUM)) {
if (vm_request_size == 0U) {
while (psec_id < MAX_EPC_SECTIONS) {
if (vm_request_size == 0UL) {
vm_id++;
if (vm_id == CONFIG_MAX_VM_NUM) {
break;
}
mid = 0U;
vm_config = get_vm_config(vm_id);
vm_request_size = vm_config->epc.size;
}
if ((free_size == 0UL) && (vm_request_size != 0UL)) {
} else {
if (free_size == 0UL) {
ret = get_epc_section(psec_id, &psec_addr, &psec_size);
free_size = psec_size;
if ((ret != 0) || (free_size == 0UL)) {
@ -75,12 +80,7 @@ static int32_t partition_epc(void)
}
psec_id++;
}
if (vm_request_size != 0UL) {
if (vm_request_size <= free_size) {
alloc_size = vm_request_size;
} else {
alloc_size = free_size;
}
alloc_size = min(vm_request_size, free_size);
vm_epc_maps[mid][vm_id].size = alloc_size;
vm_epc_maps[mid][vm_id].hpa = psec_addr + psec_size - free_size;
vm_epc_maps[mid][vm_id].gpa = vm_config->epc.base + vm_config->epc.size - vm_request_size;
@ -88,9 +88,6 @@ static int32_t partition_epc(void)
free_size -= alloc_size;
mid++;
}
if (vm_request_size == 0UL) {
vm_id++;
}
}
if (vm_request_size != 0UL) {
ret = -ENOMEM;

View File

@ -253,6 +253,8 @@ union ioapic_rte {
/* fields in LDR */
#define APIC_LDR_RESERVED 0x00ffffffU
#define X2APIC_LDR_LOGICAL_ID_MASK 0x0000ffffU
#define X2APIC_LDR_CLUSTER_ID_MASK 0xffff0000U
/* fields in DFR */
#define APIC_DFR_RESERVED 0x0fffffffU

View File

@ -72,9 +72,12 @@ def extract_model(processors_node, cpu_id, family_id, model_id, core_type, nativ
msr_regs = [MSR_TURBO_RATIO_LIMIT, MSR_TURBO_ACTIVATION_RATIO]
for msr_reg in msr_regs:
try:
msr_data = msr_reg.rdmsr(cpu_id)
for attr in msr_data.attribute_bits:
add_child(n, "attribute", str(getattr(msr_data, attr)), id=attr)
except IOError:
logging.debug(f"No {msr_reg} MSR info for CPU {cpu_id}.")
def extract_topology(processors_node):
cpu_ids = get_online_cpu_ids()

View File

@ -3,7 +3,7 @@
# SPDX-License-Identifier: BSD-3-Clause
#
import parser_lib, os
import parser_lib, os, re
from inspectorlib import external_tools
from extractors.helpers import get_bdf_from_realpath
@ -165,10 +165,12 @@ def dump_system_ram(config):
:param config: file pointer that opened for writing board config information
"""
print("\t<IOMEM_INFO>", file=config)
with open(MEM_PATH[0], 'rt') as mem_info:
with open(MEM_PATH[0], 'rt', errors='ignore') as mem_info:
while True:
line = mem_info.readline().strip('\n')
line = re.sub('[^!-~]+', ' ', line)
if not line:
break
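The added ``errors='ignore'`` and ``re.sub()`` calls make the parser tolerant of
stray non-ASCII or control bytes in the memory-map file it reads. A tiny
standalone illustration of what that substitution does (the input string is
made up):

   import re

   line = "00100000-3fffffff\u00a0: System\tRAM"   # garbled sample input
   print(re.sub('[^!-~]+', ' ', line))
   # -> "00100000-3fffffff : System RAM"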

File diff suppressed because it is too large.

View File

@ -17,7 +17,7 @@ tauri-build = { version = "1.0.0-rc.8", features = [] }
[dependencies]
serde_json = "1.0.81"
serde = { version = "1.0.137", features = ["derive"] }
tauri = { version = "1.0.0-rc.10", features = ["api-all", "devtools"] }
tauri = { version = "1.4.1", features = ["api-all", "devtools"] }
log = "0.4.17"
glob = "0.3.0"
dirs = "4.0.0"

View File

@ -18,8 +18,7 @@ export default async function () {
'./thirdLib/elementpath-2.5.0-py3-none-any.whl',
'./thirdLib/defusedxml-0.7.1-py2.py3-none-any.whl',
'./thirdLib/xmlschema-1.9.2-py3-none-any.whl',
'./thirdLib/acrn_config_tools-3.0-py3-none-any.whl',
'./thirdLib/rstcloth-0.5.2-py3-none-any.whl'
'./thirdLib/acrn_config_tools-3.0-py3-none-any.whl'
])
`)

View File

@ -124,23 +124,6 @@
"to": "defusedxml-0.7.1-py2.py3-none-any.whl"
}
]
},
{
"name": "rstcloth-0.5.2-py3-none-any.whl",
"check": {
"type": "file",
"path": "rstcloth-0.5.2-py3-none-any.whl"
},
"clean": [
"rstcloth-0.5.2-py3-none-any.whl"
],
"install": [
{
"type": "download",
"from": "https://files.pythonhosted.org/packages/f1/fa/e653417b4eb6319e9b120f8d9bb16f7c5a4bcc5d1f8a2039d3106f7504e6/rstcloth-0.5.2-py3-none-any.whl",
"to": "rstcloth-0.5.2-py3-none-any.whl"
}
]
}
]
}

View File

@ -4,14 +4,91 @@
#
# SPDX-License-Identifier: BSD-3-Clause
#
import sys
import argparse
import logging
from rstcloth import RstCloth
import typing
import functools
import textwrap
from lxml import etree
t_content = typing.Union[str, typing.List[str]]
class Doc:
def __init__(self, stream: typing.TextIO = sys.stdout, line_width: int = 72) -> None:
self._stream = stream
self._line_width = line_width
def fill(self, text: str, initial_indent: int = 0, subsequent_indent: int = 0) -> str:
return textwrap.fill(
text=text,
width=self._line_width,
initial_indent=" " * initial_indent,
subsequent_indent=" " * subsequent_indent,
expand_tabs=False,
break_long_words=False,
break_on_hyphens=False,
)
def _add(self, content: t_content) -> None:
if isinstance(content, list):
self._stream.write("\n".join(content) + "\n")
else:
self._stream.write(content + "\n")
def content(self, content: t_content, indent: int = 0) -> None:
if isinstance(content, list):
content = " ".join(content)
self._add(self.fill(content, indent, indent))
def note(self, content: t_content, indent: int = 0):
marker = ".. {type}::".format(type='note')
self._add(marker)
self.content(content, indent=indent + 3)
def newline(self, count: int = 1) -> None:
if count == 1:
self._add("")
else:
self._add("\n" * (count - 1))
def table(self, header: typing.List, data) -> None:
column_widths = list()
content = list()
data = [header] + data
for i in range(len(header)):
column_widths.append(max(list(map(lambda x: len(str(x[i])), data))))
for j in range(len(data)):
overline = "+" + "+".join(["-" * column_widths[i] for i in range(len(header))]) + "+"
underline = "+" + "+".join(["=" * column_widths[i] for i in range(len(header))]) + "+"
format_raw = "|" + "|".join([str(data[j][i]).ljust(column_widths[i]) for i in range(len(header))]) + "|"
if j == 0:
content.extend([overline, format_raw, underline])
else:
content.extend([format_raw, overline])
if len(data) == 1:
content.append(overline)
self.newline()
self._add(content)
self.newline()
def heading(self, text: str, char: str, overline: bool = False) -> None:
underline = char * len(text)
content = [text, underline]
if overline:
content.insert(0, underline)
self._add(content)
h1 = functools.partialmethod(heading, char="#")
h2 = functools.partialmethod(heading, char="*")
h3 = functools.partialmethod(heading, char="=")
title = functools.partialmethod(heading, char="=", overline=True)
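# Minimal usage sketch of the Doc helper above (illustrative values only; the
# real caller is the GenerateRst class below):
#
#     import sys
#     doc = Doc(sys.stdout, line_width=72)
#     doc.h1("Hardware Resource Allocation")
#     doc.newline()
#     doc.content("Each cache chunk is 16KB.")
#     doc.note("Values come from the board and scenario XML files.")
#     doc.table(["VM Name", "pCPU IDs"], [["POST_STD_VM1", "2, 3"]])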
class GenerateRst:
io_port = {}
@ -26,7 +103,7 @@ class GenerateRst:
self.board_etree = etree.parse(board_file_name)
self.scenario_etree = etree.parse(scenario_file_name)
self.file = open(rst_file_name, 'w')
self.doc = RstCloth(self.file)
self.doc = Doc(self.file)
# The rst content is written in three parts according to the first level title
# 1. Hardware Resource Allocation 2. Inter-VM Connections 3. VM info
@ -100,7 +177,7 @@ class GenerateRst:
# Get all physical CPU information from board.xml
def get_pcpu(self):
pcpu_list = list(map(int, self.board_etree.xpath("processors/die/core/thread/cpu_id/text()")))
pcpu_list = list(map(int, self.board_etree.xpath("processors//cpu_id/text()")))
return pcpu_list
def write_shared_cache(self):
@ -124,7 +201,7 @@ class GenerateRst:
each_cache_way_size = self.get_each_cache_way_info(cache_level, cache_info[1])[0]
column_title, data_table = self.get_vcpu_table({cache_info: vm_info}, cache_level)
self.doc.table(column_title, data_table)
self.doc.note(name="note", content=f"Each cache chunk is {each_cache_way_size}KB.")
self.doc.note(content=f"Each cache chunk is {each_cache_way_size}KB.")
self.doc.newline()
# Get used vcpu table

View File

@ -227,7 +227,7 @@ function cleanup() {
mount_point=$(pwd)/mnt
if [[ ${vm_type} == "hmi-vm" ]]; then
target_image=${hmi_vm_image}
size_modifier="+4G"
size_modifier="+7G"
elif [[ ${vm_type} == "rt-vm" ]]; then
target_image=${rt_vm_image}
size_modifier="+1G"
@ -236,7 +236,7 @@ else
exit 1
fi
try_step "Download Ubuntu Focal cloud image" download_image ${cloud_image} ${cloud_image_url}
try_step "Download Ubuntu cloud image" download_image ${cloud_image} ${cloud_image_url}
if [[ ${vm_type} == "rt-vm" ]]; then
try_step "Copy the RT kernel to build directory" copy_rt_kernel
try_step "Check availability of RT kernel image" check_rt_kernel

View File

@ -12,19 +12,22 @@ function umount_directory() {
}
function update_package_info() {
apt update -y && apt install python3 python3-pip \
net-tools python3-matplotlib \
linux-modules-extra-$(uname -r) \
openssh-server \
isc-dhcp-server -y
apt update -y
# Remove needrestart to disable interactive prompts in apt install
apt remove -y needrestart
apt install -y python3 python3-pip net-tools python3-matplotlib openssh-server \
isc-dhcp-server linux-generic-hwe-$(lsb_release -sr)
pip3 install flask 'numpy>=1.18.5' pandas posix_ipc
}
function install_desktop() {
apt install ubuntu-gnome-desktop -y
}
function cleanup_packages() {
apt autoremove -y
}
function change_root_password() {
passwd root
}
@ -64,6 +67,7 @@ try_step "Unmounting /root" umount_directory /root
try_step "Unmounting /home" umount_directory /home
try_step "Updating package information" update_package_info
try_step "Installing GNOME desktop" install_desktop
try_step "Cleaning up packages" cleanup_packages
try_step "Changing the password of the root user" change_root_password
try_step "Enable root user login" enable_root_login
try_step "Adding the normal user acrn" add_normal_user

View File

@ -35,7 +35,8 @@ function install_rt_kernel() {
search_dir=$1
for file in $(ls -r ${search_dir}/*acrn-kernel-*.deb)
do
sudo apt install ${file} -y
cp ${file} /tmp
sudo apt install /tmp/${file##*/} -y
done
}

View File

@ -0,0 +1,11 @@
/*
* Copyright (C) 2023 Intel Corporation.
* SPDX-License-Identifier: BSD-3-Clause
*/
#include <sys/queue.h>
#define list_foreach_safe(var, head, field, tvar) \
for ((var) = LIST_FIRST((head)); \
(var) && ((tvar) = LIST_NEXT((var), field), 1); \
(var) = (tvar))
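/*
 * Usage sketch (illustrative): unlike LIST_FOREACH(), the lookahead pointer
 * lets the loop body unlink or free the current element safely.
 *
 *     struct channel_dev *dev, *tmp;
 *     list_foreach_safe(dev, &head, open_list, tmp) {
 *             LIST_REMOVE(dev, open_list);
 *             free(dev);
 *     }
 */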

View File

@ -18,12 +18,7 @@
#include <arpa/inet.h>
#include "socket.h"
#include "log.h"
#define list_foreach_safe(var, head, field, tvar) \
for ((var) = LIST_FIRST((head)); \
(var) && ((tvar) = LIST_NEXT((var), field), 1);\
(var) = (tvar))
#include "list.h"
static int setup_and_listen_unix_socket(const char *sock_path, int num)

View File

@ -22,31 +22,32 @@
/* it read from uart, and if end is '\0' or '\n' or len = buff-len it will return */
static ssize_t try_receive_message_by_uart(int fd, void *buffer, size_t buf_len)
{
ssize_t rc = 0U, count = 0U;
char *tmp;
ssize_t rc = 0, count = 0;
char *p = (char *)buffer;
char ch;
unsigned int retry_times = RETRY_RECV_TIMES;
do {
/* NOTE: Now we can't handle multi command message at one time. */
rc = read(fd, buffer + count, buf_len - count);
if (rc > 0) {
count += rc;
tmp = (char *)buffer;
if ((tmp[count - 1] == '\0') || (tmp[count - 1] == '\n')
|| (count == buf_len)) {
if (tmp[count - 1] == '\n')
tmp[count - 1] = '\0';
while (count < buf_len) {
rc = read(fd, &ch, 1);
if (rc == 1) {
if (ch == (char)(-1)) /* ignore noise data */
continue;
if (ch == '\n') /* end of command */
ch = '\0';
p[count++] = ch;
if (ch == '\0')
break;
}
} else {
if (errno == EAGAIN) {
usleep(WAIT_RECV);
} else if ((rc == -1) && (errno == EAGAIN)) {
if (retry_times > 0) {
retry_times--;
usleep(WAIT_RECV);
} else {
break;
}
} else {
break;
}
}
} while (retry_times != 0U);
return count;
}

View File

@ -18,6 +18,7 @@
#include <stdint.h>
#include "uart_channel.h"
#include "log.h"
#include "list.h"
#include "config.h"
#include "command.h"
@ -308,9 +309,9 @@ struct channel_dev *create_uart_channel_dev(struct uart_channel *c, char *path,
}
static void destroy_uart_channel_devs(struct uart_channel *c)
{
struct channel_dev *c_dev;
struct channel_dev *c_dev, *tc_dev;
LIST_FOREACH(c_dev, &c->tty_open_head, open_list) {
list_foreach_safe(c_dev, &c->tty_open_head, open_list, tc_dev) {
pthread_mutex_lock(&c->tty_conn_list_lock);
LIST_REMOVE(c_dev, open_list);
pthread_mutex_unlock(&c->tty_conn_list_lock);