"RKE2 - Rancher's Next Generation Kuberentes Distribution" can easily be
supported by kata-deploy with some simple adjustments to what we've been
relying on for "k3s".
The main differences between k3s and RKE2 are, basically:
1. The location where the containerd configuration is stored
- k3s: /var/lib/rancher/k3s/agent/etc/containerd/
- rke2: /var/lib/rancher/rke2/agent/etc/containerd/
2. The name of the systemd services used:
- k3s: k3s.service or k3s-agent.service
- rke2: rke2-server.service or rke2-agent.service
Knowing this, let's add a new overlay for RKE2, adapt the kata-deploy
and the kata-cleanup scripts, and that's it.
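As a rough illustration (not the literal script contents), the kata-deploy
and kata-cleanup scripts end up choosing paths and services along these
lines:
```bash
# Illustrative sketch only: pick the containerd config directory and
# the systemd units to restart based on the detected distribution.
if [ -d /var/lib/rancher/rke2 ]; then
    containerd_conf_dir="/var/lib/rancher/rke2/agent/etc/containerd"
    services="rke2-server.service rke2-agent.service"
elif [ -d /var/lib/rancher/k3s ]; then
    containerd_conf_dir="/var/lib/rancher/k3s/agent/etc/containerd"
    services="k3s.service k3s-agent.service"
fi
```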
Fixes: #4160
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's move the specific installation instructions, such as the ones for
k3s, higher up in the document.
This makes the document easier to read (and to skip through) according
to what the user is looking for.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
If we fail to download the clh binary, we fall back to building from source.
Unfortunately, `pull_clh_released_binary()` leaves a `cloud_hypervisor`
directory behind, which causes `build_clh_from_source()` not to clone
the git repo:
[ -d "${repo_dir}" ] || git clone "${cloud_hypervisor_repo}"
When building from a kata-containers git repo, the subsequent calls
to `git` in this function thus apply to the kata-containers repo and
eventually fail, e.g.:
+ git checkout v23.0
error: pathspec 'v23.0' did not match any file(s) known to git
It doesn't really make sense to keep an existing directory whose content
is arbitrary when we want it to contain a specific version of clh.
Just remove it instead.
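A minimal sketch of the fix, reusing the variables from the snippet above:
```bash
# Start from a clean tree so the clone/checkout below operates on the
# cloud-hypervisor repo, not on whatever was left behind earlier.
rm -rf "${repo_dir}"
git clone "${cloud_hypervisor_repo}"
```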
Fixes: #4151
Signed-off-by: Greg Kurz <groug@kaod.org>
The idea is to pass this README file to kata-doc-to-script.sh script and
then execute the result.
Added comments with a file name on top of each YAML snippet.
This helps in assigning a file name when we cat the YAML to a file.
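The intended workflow then looks roughly like this (script path and
arguments are illustrative assumptions, not taken from this change):
```bash
# Assumed workflow: turn the README's code blocks into a shell script,
# then execute the result.
./ci/kata-doc-to-script.sh README.md generated.sh
bash generated.sh
```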
Fixes: #3943
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
`make kata-tarball` relies on `kata-deploy-binaries.sh -s` which
silently ignores errors, and you may end up with an incomplete
tarball without noticing it because `make`'s exit status is 0.
`kata-deploy-binaries.sh` does set the `errexit` option and all the
code in the script seems to assume that since it doesn't do error
checking. Unfortunately, bash automatically disables `errexit` when
calling a function from a conditional pipeline, like done in the `-s`
case:
if [ "${silent}" == true ]; then
if ! handle_build "${t}" &>"$log_file"; then
^^^^^^
this disables `errexit`
and `handle_build` ends with a `tar tvf` that always succeeds.
Adding error checking all over the place isn't really an option
as it would seriously obfuscate the code. Drop the conditional
pipeline instead and print the final error message from a `trap`
handler on the special ERR signal. This requires the `errtrace`
option as `trap`s aren't propagated to functions by default.
Since all outputs of `handle_build` are redirected to the build
log file, some file descriptor duplication magic is needed for
the handler to be able to write to the original stdout and stderr.
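A minimal sketch of the resulting pattern (not the actual script code):
```bash
set -o errexit
set -o errtrace   # propagate the ERR trap into functions

# Keep copies of the original stdout/stderr so the ERR handler can
# still reach the console once they are redirected to the log file.
exec 3>&1 4>&2
trap 'echo "ERROR: build failed, see ${log_file}" >&4' ERR

# No conditional pipeline here, so errexit stays in effect.
handle_build "${t}" &>"${log_file}"
```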
Fixes: #3757
Signed-off-by: Greg Kurz <groug@kaod.org>
This script is responsible for generating a tarball with all the rust
vendored code that is needed for fully building kata-containers in a
disconnected environment.
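Presumably the core of it boils down to vendoring the crate dependencies
and archiving them, along these lines (an assumption, not the actual
script contents):
```bash
# Assumption: fetch all crates.io dependencies into a local directory
# so later builds need no network access, then archive the result.
cargo vendor vendored-code
tar -czf kata-containers-vendored-code.tar.gz vendored-code
```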
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
'make kata-tarball' sometimes fails early with:
cp: cannot create regular file '[...]/tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh': File exists
This happens because all assets are built in parallel using the same
`kata-deploy-binaries-in-docker.sh` script, and thus all try to copy
the `install_yq.sh` script to the same location with the `cp` command.
This is a well known race condition that cannot be avoided without
serialization of `cp` invocations.
Move the copying of `install_yq.sh` to a separate script and ensure
it is called *before* parallel builds. Make the presence of the copy
a prerequisite for each sub-build so that they still can be triggered
individually. Update the GH release workflow to also call this script
before calling `kata-deploy-binaries-in-docker.sh`.
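Conceptually (names below are illustrative, not the actual targets or
script names):
```bash
# Illustrative only: copy install_yq.sh exactly once, before any of
# the parallel per-asset builds start racing for the destination.
./copy-yq-installer.sh

# The individual asset builds can then safely run in parallel.
make -j"$(nproc)" kata-tarball
```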
Fixes: #3756
Signed-off-by: Greg Kurz <groug@kaod.org>
NO_TTY configured whether to add the -t option to docker run. It makes no
sense for the caller to configure this, since whether you need it depends
on the commands you're running. Since the point here is to run
non-interactive build scripts, we don't need -t, or -i either.
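The build container invocation thus boils down to something like the
following (simplified sketch; paths and image name are illustrative):
```bash
# Non-interactive build: neither -i nor -t is needed.
docker run --rm \
    -v "${repo_dir}:/kata-containers" \
    "${build_image}" \
    /kata-containers/tools/packaging/kata-deploy/local-build/kata-deploy-binaries.sh "$@"
```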
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
This directory consists entirely of files built during a make kata-tarball,
so it should not be committed to the tree. A symbolic link to this directory
might be created during 'make tarball', ignore it as well.
Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[greg: - rearranged the subject to make the subsystem checker happy
- also ignore the symbolic link created by
`kata-deploy-binaries-in-docker.sh`]
Signed-off-by: Greg Kurz <groug@kaod.org>
Right now it doesn't do much for us, as we're always building from a
specific version. However, this opens the possibility for us to add a
CI, similar to the one we have for CRI-O, for testing against each
cloud-hypervisor PR, on the cloud-hypervisor branch.
Fixes: #3908
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Bring Intel SGX support.
Changes that may impact Kata Containers:
Arm:
- The 'virt' machine now supports an emulated ITS
- The 'virt' machine now supports more than 123 CPUs in TCG emulation mode
- The pl031 real-time clock device now supports sending RTC_CHANGE QMP events
PowerPC:
- Improved POWER10 support for the 'powernv' machine
- Initial support for POWER10 DD2.0 CPU added
- Added support for FORM2 PAPR NUMA descriptions in the "pseries" machine type
s390x:
- Improved storage key emulation (e.g. fixed address handling, lazy storage key enablement for TCG, ...)
- New gen16 CPU features are now enabled automatically in the latest machine type
KVM:
- Support for SGX in the virtual machine, using the /dev/sgx_vepc device on the host and the "memory-backend-epc" backend in QEMU.
- New "hv-apicv" CPU property (aliased to "hv-avic") sets the HV_DEPRECATING_AEOI_RECOMMENDED bit in CPUID[0x40000004].EAX.
virtio-mem:
- QEMU now fully supports guest memory dumps with virtio-mem.
- QEMU now cleanly supports precopy migration, postcopy migration and background snapshots with virtio-mem.
Fixes: #3902
Signed-off-by: Julio Montes <julio.montes@intel.com>
As 2.4.0-rc0 has been released, let's switch the kata-deploy / kata-cleanup
tags back to "latest", and re-add the kata-deploy-stable and the
kata-cleanup-stable files.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
kata-deploy files must be adapted for a new release. This happens when
the release transitions:
* main -> stable:
* kata-deploy-stable / kata-cleanup-stable: are removed
* stable -> stable:
* kata-deploy / kata-cleanup: bump the release to the new one.
There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.
Signed-off-by: Eric Ernst <eric_ernst@apple.com>
During the release of 2.4.0-rc0 @egernst noticed an inconsistency in the
way we handle release tags, as release candidates are being taken as
"stable" releases, while both the kata-deploy tests and the release
action consider this as "latest".
Ideally we should have our own tag for "release candidate", but that's
something that could and should be discussed more extensively outside of
the scope of this quick fix.
For now, let's align the code generating the PR for bumping the release
with what we already do as part of the release action and kata-deploy
test, and tag "-rc" as latest, regardless of which branch it's coming
from.
Fixes: #3847
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Use `multistrap` for building Ubuntu rootfs. Adds support for building
for foreign architectures using the `ARCH` environment variable.
In the process, the Ubuntu rootfs workflow is vastly simplified.
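For instance, a cross-build might be kicked off along these lines
(illustrative; the exact variables depend on the rootfs-builder scripts):
```bash
# Illustrative: build an Ubuntu rootfs for a foreign architecture.
ARCH=s390x ./rootfs.sh ubuntu
```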
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
- Add a doc comment
- Pass it to the build container, e.g. to build x86_64 with glibc (it
would otherwise always use musl)
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
Remove a lot of cruft from musl installations -- we needed those for the
Go agent, but Rustup just takes care of everything. aarch64 on
Debian-based & Alpine is an exception -- create a symlink
`aarch64-linux-musl-gcc` to `musl-tools`'s `musl-gcc` or `gcc` on
Alpine. This is unified -- arch-specific Dockerfiles are removed.
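The symlink amounts to roughly the following (sketch; the exact paths
depend on the distribution):
```bash
# Sketch: give the aarch64 musl target a usable C compiler by aliasing
# musl-tools' musl-gcc (or plain gcc on Alpine).
ln -s "$(command -v musl-gcc || command -v gcc)" /usr/bin/aarch64-linux-musl-gcc
```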
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
Hadolint DL3019. If you're wondering why this is in this PR, that's
because I touch the file later, and we're only triggering the lints for
changed files.
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
Add a new arm-kernel-experimental entry and let the kernel build
script support building it.
Fixes: #3280
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
I'm sure that it is correct to remove CONFIG_ARM64_UAO and
CONFIG_MANDATORY_FILE_LOCKING. Both are gone in 5.15. Maintaining a
specific config file per kernel version is a little ugly. If someone
needs them, shout at me.
Fixes: #3280
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
As support for vcpu hotplug is on the way, I pick it up here as
experimental to let users try cpu hotplug and virtio-mem on arm64.
Fixes: #3280
Signed-off-by: Jianyong Wu <jianyong.wu@arm.com>
With the ACPI PCI hotplug changes introduced in 2.3, QEMU >= 6.1 is required.
Remove unnecessary qemu version check in build script.
Fixes: #3547
Signed-off-by: goodluckbot <tangbo_gl@hotmail.com>
4c164afbac renamed extra_build_args to
features, but did it only in one place, leading to:
```
21:15:28 /home/jenkins/workspace/kata-containers-2.0-ubuntu-ARM-PR/go/src/github.com/kata-containers/kata-containers/tools/packaging/static-build/cloud-hypervisor/build-static-clh.sh: line 55: features: unbound variable
21:15:29 make[1]: *** [tools/packaging/kata-deploy/local-build/Makefile:30: cloud-hypervisor-tarball-build] Error 1
```
Fixes: #3775
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's bump the Cloud Hypervisor version to 5343e09e7b8db, as that brings
a few fixes we're interested in, such as:
* hypervisor, vmm: Handle TDX hypercalls with INVALID_OPERAND
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3723
- This is needed for the TDX support on the cloud hypervisor driver,
which is part of this very same series.
* openapi: Update the PciBdf types
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3748
- This is needed due to a change in a DeviceNode field, which would
cause a marshalling / demarshalling error when running with a
version of cloud-hypervisor that includes the TDX fixes mentioned
above.
* scripts: dev_cli: Don't quote $features_build
* scripts: dev_cli: Add --features option
- https://github.com/cloud-hypervisor/cloud-hypervisor/pull/3773
- This is needed due to changes in the scripts used to build Cloud
Hypervisor, which are used as part of Kata Containers CIs and
github actions.
Due to this change, we're also adapting the build scripts as part
of this very same commit.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The kata-deploy Dockerfile is based on CentOS 7, which has no s390x
support. Add an `IMAGE` argument to specify the registry, which still
defaults to CentOS, but e.g. ClefOS can be selected instead.
Other x86_64 assumptions are removed, and some general simplifications
are made.
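Selecting a different base then becomes a matter of overriding that
build argument, e.g. (illustrative image name and tag):
```bash
# Illustrative: build the kata-deploy image from an s390x-capable base.
docker build --build-arg IMAGE=clefos:7 -t kata-deploy .
```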
This does not address the more general issue of #3723 -- what we're
doing here does not seem to be working with systemd >= something between
235-237.
Fixes: #3722
Signed-off-by: Jakob Naucke <jakob.naucke@ibm.com>
When using kata-deploy, no `containerd-shim-kata-v2` binary is deployed,
but we do deploy a `kata` runtime class, which seems very much
inconsistent.
As the default configuration for kata-containers points to QEMU, let's
also use kata with QEMU as the default shim-v2 binary.
Fixes: #3228, #3734
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
SYS_SUPPORTS_HUGETLBFS has been renamed to ARCH_SUPPORTS_HUGETLBFS,
which is selected by default by another kernel config.
More info: 855f9a8e87
This change applies from v5.13 onwards.
Fixes: #3720
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>