Otherwise we may end up running into unexpected issues when calling the
cleanup option, as the same checks would be done, and files could end up
being copied again, overwriting the original content which was backed
up by the install option.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Neither CRI-O nor containerd requires that, and removing such symlinks
makes everything less intrusive from our side.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
It's already being used with CRI-O, let's simplify what we do and also
use this for containerd, which will allow us to do further cleanups in
the coming patches.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
For the rare cases where containerd_conf_file does not exist, cp could fail
and leave the pod in an Error state. Let's make it a little bit more robust.
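A minimal sketch of the kind of guard meant here (the path and the backup
variable name are illustrative; kata-deploy.sh derives the real ones from
its environment):
```bash
#!/usr/bin/env bash
# Illustrative path; kata-deploy resolves the real one at install time.
containerd_conf_file="/etc/containerd/config.toml"
containerd_conf_file_backup="${containerd_conf_file}.bak"

# Only back up the file if it actually exists, so cp cannot fail and
# leave the pod in an Error state.
if [ -f "$containerd_conf_file" ]; then
    cp -a "$containerd_conf_file" "$containerd_conf_file_backup"
fi
```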
Signed-off-by: Beraldo Leal <bleal@redhat.com>
This reverts commit 8d9bec2e01, as it
causes issues in the operator and kata-deploy itself, leading to the
node being NotReady.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
- At the moment we aren't factoring the kata version into our caches,
so when we bump it just before a release, we don't rebuild the
components that pull in the VERSION content, and the release build
ends up with incorrect versions in its binaries
Fixes: #10092
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Adding reset_cleanup to the cleanup action so that it is done automatically,
without the need to run yet another DaemonSet just to reset the runtime.
This is now part of the lifecycle hook when issuing kata-deploy.sh
cleanup, as sketched below.
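A rough sketch of the intended flow, assuming the script dispatches on its
first argument (the `reset_runtime` name is purely illustrative):
```bash
#!/usr/bin/env bash
# Illustrative dispatch only; the real kata-deploy.sh handles more actions.
action="${1:-}"

cleanup() {
    echo "removing the Kata Containers artifacts from the node"
}

reset_runtime() {
    echo "restoring the original container runtime configuration"
}

case "$action" in
    cleanup)
        cleanup
        # The runtime reset now happens as part of cleanup itself,
        # so no extra DaemonSet is needed just for the reset.
        reset_runtime
        ;;
    *)
        echo "Usage: $0 cleanup" >&2
        exit 1
        ;;
esac
```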
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Rather than modifying the kata-deploy scripts, let's use Helm and
create a values.yaml that can be used to render the final templates
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
For easier handling of kata-deploy we can leverage a Helm chart to get
rid of all the base and overlays for the various components
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
In kata-deploy-binaries.sh we want to understand whether we are running
as part of a release, so we need to pass through the RELEASE env var
from the workflow, which I missed in
https://github.com/kata-containers/kata-containers/pull/9550
Fixes: #9921
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This PR removes the CI variable from the kata-deploy-in-docker script.
The variable was used by the Jenkins environment, which is no longer
supported.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR removes the CI variable from the local build Makefile, as it
was part of the Jenkins environment, which is no longer supported.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
For a minimal initrd/image build we may want to leverage busybox.
This is part two of the NVIDIA initrd/image build.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
To build the build-kata-deploy image, ci/install_yq.sh should be copied to
tools/packaging/kata-deploy/local-build/dockerbuild, as this script installs
yq within the image. Currently, if
tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh already
exists, make won't copy it again. This can cause problems: for example, the
recent update of the yq version (commit c99ba42d) in ci/install_yq.sh won't
force a rebuild of the build-kata-deploy image.
Note: this isn't a problem on a fresh dev or CI environment.
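One way to avoid the stale copy, sketched in shell (the actual Makefile rule
may solve this differently):
```bash
#!/usr/bin/env bash
src="ci/install_yq.sh"
dst="tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh"

# Unconditionally refresh the copy so that any change to ci/install_yq.sh
# reaches the dockerbuild context and invalidates the image cache.
cp -f "$src" "$dst"
```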
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Allow kata-deploy to install and configure the qemu-runtime-rs runtimeClass,
which ties to the QEMU hypervisor implementation in Rust for runtime-rs.
Fixes: #9804
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
A new PULL_TYPE environment variable is recognized by kata-deploy's
install script, allowing it to configure CRI-O for the guest-pull
image pulling type.
The tests/integration/kubernetes/gha-run.sh change allows for testing it:
```
export PULL_TYPE=guest-pull
cd tests/integration/kubernetes
./gha-run.sh deploy-k8s
```
Fixes: #9474
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Since yq frequently updates, let's upgrade to a version from February to
bypass potential issues with versions 4.41-4.43 for now. We can always
upgrade to the newest version if necessary.
Fixes: #9354
Depends-on: github.com/kata-containers/tests#5818
Signed-off-by: Beraldo Leal <bleal@redhat.com>
We are currently building Oras from source on ppc64le. Now that they
officially release the artefacts for Power, consume them to install Oras.
Fixes: #9213
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
k0s was added to kata-deploy, but its kata-cleanup counterpart was
never added. Let's fix that.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
k0s deployment has been broken since we moved to using `tomlq` in our
scripts. The reason is that before using `tomlq` our script would,
involuntarily, end up creating the file.
Now, in order to fix the situation, we need to explicitly create the
file and let `tomlq` add the needed content.
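Roughly, the fix amounts to something like this (the drop-in path is
illustrative, and the actual `tomlq` edits are omitted):
```bash
#!/usr/bin/env bash
# Illustrative path; k0s reads containerd imports from a drop-in directory.
containerd_conf_file="/etc/k0s/containerd.d/kata-containers.toml"

# `tomlq` no longer creates the file as a side effect, so create it
# explicitly before asking `tomlq` to add the runtime configuration.
mkdir -p "$(dirname "$containerd_conf_file")"
touch "$containerd_conf_file"
# ...followed by the usual `tomlq` edits, which now have a file to write into.
```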
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Container tags can be a maximum of 128 characters long, so calculate
the length of the arch suffix and then restrict the tag to 128 minus
that length.
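In shell the calculation looks roughly like this (variable names are
illustrative):
```bash
#!/usr/bin/env bash
arch_suffix="-$(uname -m)"            # e.g. "-x86_64"
max_len=$((128 - ${#arch_suffix}))    # room left for the base tag

tag="some-very-long-generated-tag"
# Truncate the base tag so that tag + arch suffix never exceeds 128 chars.
tag="${tag:0:${max_len}}${arch_suffix}"
echo "$tag"
```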
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- Previously I copied the logic that abbreviated the commit hash
from the versioning, but looking at our versions.yaml the clear pattern
is that, when pointing at commits of dependencies, we use the full
commit hash, not the abbreviated one, so for consistency I think we
should do the same for the components that we make available.
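In other words, something along these lines (illustrative):
```bash
# Before: abbreviated hash, mirroring the versioning logic.
git rev-parse --short HEAD

# After: full commit hash, matching how versions.yaml pins dependencies.
git rev-parse HEAD
```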
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
As we have multi-arch builds for nearly all components, we want to ensure
that all the cache tags we set have the architecture suffix, not just the
`TARGET_BRANCH` one.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
VERSION_ID is not guaranteed to be specified in os-release; this
makes kata-deploy break on rolling distros like Arch Linux and Void
Linux.
Note that operating system vendors may choose not to provide
version information, for example to accommodate for rolling releases.
In this case, VERSION and VERSION_ID may be unset.
Applications should not rely on these fields to be set.
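A defensive way to read it, as a sketch:
```bash
#!/usr/bin/env bash
# Source os-release from either of its two possible locations.
if [ -r /etc/os-release ]; then
    . /etc/os-release
elif [ -r /usr/lib/os-release ]; then
    . /usr/lib/os-release
fi

# VERSION_ID may be unset on rolling distros (Arch, Void, ...), so fall
# back to a harmless default instead of failing.
echo "Deploying on ${ID:-unknown} ${VERSION_ID:-unknown}"
```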
Signed-off-by: vac <dot.fun@protonmail.com>
- The tags have a trailing non-printable character, which results
in our cache tags having a trailing underscore, e.g.
`ghcr.io/kata-containers/cached-artefacts/agent:ce24e9835_`.
For ease of use of these cached components, we should strip off the
trailing underscore.
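Stripping it is a one-liner, e.g.:
```bash
tag="ce24e9835_"
# Drop the trailing underscore left over from the non-printable character.
tag="${tag%_}"
echo "$tag"   # -> ce24e9835
```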
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- Container image tags can only contain alphanumeric, period,
hyphen and underscore characters, so convert characters outside
of these to be underscores, to avoid having invalid tag failures
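e.g. with sed, roughly:
```bash
raw_tag="feature/foo+bar"
# Replace every character that is not alphanumeric, '.', '-' or '_' with '_'.
safe_tag="$(echo "$raw_tag" | sed 's/[^a-zA-Z0-9._-]/_/g')"
echo "$safe_tag"   # -> feature_foo_bar
```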
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Recently the extra gpu caching was added, unfortunately when I
rebased I ended up with both the new tagging logic and old logic.
Let's try and integrate them properly to avoid doing the push twice.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now that we have the workflow updated and can test the changes in
caching, we've hit an error:
```
line 1180: artefact_tag: unbound variable
```
so we need to fix that up. Sorry for missing this before.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- CoCo wants to use the agent and coco-guest-components cached artifacts,
so let's tag them with a helpful version to make them easier to get.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The length of the result of `git log -1 --pretty=format:%h` can vary
across different CI systems, very likely messing up their caching
mechanisms.
This commit uses the `--abbrev=9` option to standardize the length to
9 characters for CI.
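Concretely:
```bash
# Before: the length of %h depends on the host's abbreviation heuristics.
git log -1 --pretty=format:%h

# After: always 9 characters (more only if needed to stay unambiguous).
git log -1 --abbrev=9 --pretty=format:%h
```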
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This is needed, as b1710ee2c0 made the agent with policy support the
default one shipped. However, we simply didn't update the rootfs to
reflect that, causing an issue when starting the agent, as shown by the
strace below:
```
open("/etc/kata-opa/default-policy.rego", O_RDONLY|O_LARGEFILE|O_CLOEXEC) = -1 ENOENT (No such file or directory)
futex(0x7f401eba0c28, FUTEX_WAKE_PRIVATE, 1) = 1
rt_sigprocmask(SIG_BLOCK, ~[RTMIN RT_1 RT_2], [], 8) = 0
tkill(553681, SIGABRT) = 0
rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
--- SIGABRT {si_signo=SIGABRT, si_code=SI_TKILL, si_pid=553681, si_uid=1000} ---
+++ killed by SIGABRT (core dumped) +++
```
This happens because the default policy **must** be set when the agent is
built with policy support, but the code path that copies it into the
rootfs is only triggered if the rootfs itself is built with
AGENT_POLICY=yes, which we're now doing for both the confidential and
non-confidential cases.
Sadly, this was not caught by CI until the cache stopped being used for
the rootfs, which should be solved by the previous commit.
Fixes: #9630, #9631
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This adds info about the files at `tools/packaging/kata-deploy/local-build/*`
to the version of the components, ensuring that the cached artefacts are not
used when the files of interest are updated.
Fixes: #9630
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
The `tdx_not_supported_warning` function does not exist; the
`tdx_not_supported` function should be called instead.
Fixes: #9628
Signed-off-by: Lukáš Doktor <ldoktor@redhat.com>
Whenever we count on having the headers tarball, we must unpack the
cached content into the expected directory; otherwise we'd simply fail,
as we've been failing in our CI, at the end of the process where we
generate the tarball from the cached components.
It's weird to me, sincerely, that the headers tarball ends up in such a
weird place (build/kernel-nvidia-gpu/builddir/), but I'll leave it to
Zvonko to figure out whether something better can be done, as the intent
of this PR is simply to unblock the Kata Containers CI.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
New env vars so everyone can test the PUSH_TO_REGISTRY feature:
export PUSH_TO_REGISTRY=yes
export ARTEFACT_REGISTRY=quay.io
export ARTEFACT_REPOSITORY=my-fancy-kata-containers
export ARTEFACT_REGISTRY_USERNAME=zvonkok
export ARTEFACT_REGISTRY_PASSWORD=<super-secret>
make ...-tarball
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Fixes: #9483
For the added configurations we need to provide runtimeClasses.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Here we're checking the distro's `/etc/os-release` or
`/usr/lib/os-release` in order to determine which distro we're deploying
the Kata Containers artefacts to, and then properly adjust the QEMU and
OVMF with TDX support that's shipped with the distro.
Together with that, we're also printing the instructions provided by the
distro on how to enable and use TDX.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We'll need to have access to the host's os-release file (either under
`/etc/os-release` or under `/usr/lib/os-release`), and the simplest
approach that comes to my mind is to do what a debug pod would do:
mount `/` as `/host`, giving us access to those files, and then
correctly set the TDX-specific QEMU and OVMF (TDVF) paths for the
TDX-enabled configurations.
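Inside the deployment script this boils down to something like the sketch
below (the distro names in the case statement are only illustrative):
```bash
#!/usr/bin/env bash
# The DaemonSet mounts the node's / at /host, as a debug pod does, so the
# host's os-release files are visible from within the container.
host_os_release="/host/etc/os-release"
[ -r "$host_os_release" ] || host_os_release="/host/usr/lib/os-release"

. "$host_os_release"

case "${ID:-}" in
    ubuntu|centos)
        echo "adjusting the TDX QEMU/OVMF paths for ${ID} ${VERSION_ID:-}"
        ;;
    *)
        echo "no distro-provided TDX packages known for ${ID:-unknown}"
        ;;
esac
```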
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's remove everything related to the TDX specific QEMU building /
shipping from our repo, as we'll be relying on the one coming from the
distros.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's remove everything related to the TDVF building / shipping from our
repo, as we'll be relying on the one coming from the distro.
Later on, we may need to re-add TDVF logic, as we're already using
upstream edk2 repo / content, but when that's needed we'll simply revert
this commit.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Created the runtimeclasses/kata-qemu-coco-dev.yaml file and updated the list
of SHIMS.
Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
Now that the `kata-agent` is being built with policy support, let's stop
building the `kata-opa-agent`, reducing the amount of things we need to
test and maintain.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Now that the OPA binary is not required anymore, let's start shipping
the agent with the policy enabled by default.
The agent *without* policy enabled is 30MB, while it's 34MB *with* the
policy enabled.
This 4MB (~10%) increase is, IMHO, worth it in order to reduce the
number of components we have to maintain and test, including the
possibility of also reducing the number of possible rootfs / initrd
images.
Whoever wants to use the agent without policy enabled can simply do that
by building their own agent. :-)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
When docker is installed on the host system using the script from
https://get.docker.com/, it automatically creates a docker group with
gid=999.
Then, during the docker build of a tarball, e.g. make
qemu-tdx-experimental-tarball, docker is also installed inside the image
with the same script, which again automatically adds a docker group with
gid=999.
The build then tries to add a new group docker_on_host with gid=999,
which already exists, and that breaks the build.
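The clash can be checked for and avoided with something like this (a sketch,
not necessarily the exact fix in the Dockerfile):
```bash
#!/usr/bin/env bash
gid=999

# If the gid is already taken (e.g. by the docker group created inside the
# image), reuse the existing group instead of adding docker_on_host with it.
if getent group "$gid" >/dev/null; then
    echo "gid $gid already used by: $(getent group "$gid" | cut -d: -f1)"
else
    groupadd -g "$gid" docker_on_host
fi
```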
Signed-off-by: Jakub Ledworowski <jakub.ledworowski@intel.com>
This should only be done once; if CRI-O restarts, there's a big
chance kata-deploy will also restart, and the user would end up with a
file that looks like:
```
[crio]
log_level = "debug"
[crio]
log_level = "debug"
[crio]
log_level = "debug"
...
```
And that would simply cause CRI-O to not start.
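A simple idempotency guard avoids the repeated append; a sketch (the drop-in
path is illustrative):
```bash
#!/usr/bin/env bash
crio_drop_in="/etc/crio/crio.conf.d/99-kata-deploy.conf"

# Write the drop-in only once; re-running the install after a kata-deploy
# restart must not append the same section again.
if [ ! -f "$crio_drop_in" ]; then
    {
        echo '[crio]'
        echo 'log_level = "debug"'
    } > "$crio_drop_in"
fi
```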
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is to make `boot-image-se-tarball` use confidential kernel and
initrd instead of vanilla version of artifacts.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
This commit makes the OPA build from source work in `ubuntu-rootfs-osbuilder`.
To achieve this, the configuration is changed as follows:
- Switch the make target to `ci-build-linux-static`, which does not trigger a docker-in-docker build
- Install go in the builder image for s390x and ppc64le
Fixes: #9466
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Let's make sure those two proxy settings are respected, as they will be
widely used when pulling the image inside the guest in the Confidential
Containers case.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Enable building the kata-agent with the PULL_TYPE feature.
We build the kata-agent with the guest-pull feature by default, with PULL_TYPE set to default.
This doesn't affect how Kata shares images via virtio-fs. The snapshotter controls image pulling in the guest.
Only the nydus snapshotter in proxy mode can activate this feature.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
All releases are now created in the `main` branch following
the very same workflow. No need to special case pre-releases.
Signed-off-by: Greg Kurz <groug@kaod.org>
As we're not maintaining a stable branch anymore, let's get rid of the
kata-deploy stable pieces.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Remove specific kernel/initrd/image leftovers in the local-build
Makefile, which is part of #9026.
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Remove some unnecessary whitespace from a couple of `kata-deploy` files.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
whitespace
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Now we're using a "confidential" image that has support for all of
those.
Fixes: #9010 -- part II
#8982 -- part II
#8978 -- part II
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
In the Confidential Containers stack, the pause image is managed by the host
side, which means the host could configure a malicious pause image. We need to
package a pause image inside the rootfs and not use the pause image from the
host.
However, skopeo is not included in the 20.04 release, so we cannot directly
install skopeo in the rootfs and pull the pause image there.
So I plan to make this task a static build step, which will not be influenced
by the system version in the rootfs. The pause image will be part of the Kata
Containers rootfs that's used by the Confidential Containers use case. This
commit enables the component to be built both locally and in our CI
environment with the command: make pause-image-tarball.
Fixes: #9032
Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
Co-authored-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Co-authored-by: Wang, Arron <arron.wang@intel.com>
Co-authored-by: stevenhorsman <steven@uk.ibm.com>
Co-authored-by: Jakob Naucke <jakob.naucke@ibm.com>
Let's install the coco-guest-components into the confidential rootfs
image and initrd.
Fixes: #9021
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As DESTDIR was not being passed, we've been installing the final
binaries in a container path that was not exposed to the host, leading
to an empty tarball with the guest components.
Now, theoretically, guest-components should respect a passed PREFIX, but
that's not the case, so we're manually adding "/usr/local/bin" to the
passed DESTDIR.
Here's the result of the tarball:
```bash
⋊> kata-containers ≡ tar tf build/kata-static-coco-guest-components.tar.xz
./
./usr/
./usr/local/
./usr/local/bin/
./usr/local/bin/confidential-data-hub
./usr/local/bin/attestation-agent
./usr/local/bin/api-server-rest
```
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's use a single rootfs image / initrd for confidential workloads,
instead of having those split for different TEEs.
We can easily do this now as the soon-to-be-added guest-components can
be built in a generic way.
Fixes: #8982
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Now that we're using kernel-confidential, let the rootfs depend
on it, instead of depending on the TEE-specific ones.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We need to do this in order to ensure that measured boot will be
taking the latest kernel bits, as needed.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This is already done for the TDX kernel, and should have been done also
for the confidential one.
This action requires us to bump the kernel version as the resulting
kernel will be different from the cached one.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
I made this a required argument during the series and ended up
forgetting to add that while calling the function.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This issue was introduced by a typo not caught during review of
e5bca90274.
Fixes: #6415 -- part II
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Till now we didn't have logic to consume the kernel modules cached
tarball. Let's make sure it is consumed, as it'll save us a
reasonable amount of build time.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>