We first try without passing the `--break-system-packages` argument, as
it's not supported on Ubuntu 22.04 or older; on Ubuntu 24.04 or newer,
however, the argument is required.
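A minimal sketch of that fallback, assuming a plain `pip3 install`
invocation (`$package` is just a placeholder):
```sh
# Try without the flag first (pip on Ubuntu 22.04 or older doesn't know it),
# then fall back to --break-system-packages (required on Ubuntu 24.04 or newer).
pip3 install "$package" || pip3 install --break-system-packages "$package"
```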
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Otherwise a bump in the OS name and/or OS version would lead to the CI
using a cached artefact.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
We have gotten Ubuntu 20.04 working pretty much "by luck": multistrap
fails the deployment, and then a hacky function was introduced to add
the proper dbus links afterwards. However, this does not scale at all,
and we should:
* Fail if multistrap fails (a sketch of what this could look like
  follows the log below)
  * I won't do this for Ubuntu 20.04, as it's working for now and soon
    enough it'll be EOL
* Add better logging to ensure someone can tell when multistrap fails
Below you can find the failure that we're hitting on Ubuntu 20.04:
```sh
Errors were encountered while processing:
dbus
ERR: dpkg configure reported an error.
Native mode configuration reported an error!
I: Tidying up apt cache and list data.
Multistrap system reported 1 error in /rootfs/.
I: Tidying up apt cache and list data.
```
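A rough sketch of the "fail if multistrap fails" part, with a bit of
logging; the variable names are illustrative, not the script's actual
ones:
```sh
# Stop the rootfs build as soon as multistrap reports an error, and say so
# loudly, instead of silently continuing and patching the result up afterwards.
if ! multistrap -a "$ARCH" -d "$ROOTFS_DIR" -f "$multistrap_conf"; then
    echo "ERROR: multistrap failed to deploy the rootfs to $ROOTFS_DIR" >&2
    exit 1
fi
```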
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Right now we're hitting an interesting situation with osbuilder where,
regardless of what's being passed, Ubuntu 20.04 (focal) is being used
when building the rootfs-image, as shown in the log snippets below:
```
ffidenci@tatu:~/src/upstream/kata-containers/kata-containers$ make rootfs-image-confidential-tarball
/home/ffidenci/src/upstream/kata-containers/kata-containers/tools/packaging/kata-deploy/local-build//kata-deploy-copy-libseccomp-installer.sh "agent"
make agent-tarball-build
...
make pause-image-tarball-build
...
make coco-guest-components-tarball-build
...
make kernel-confidential-tarball-build
...
make rootfs-image-confidential-tarball-build
make[1]: Entering directory '/home/ffidenci/src/upstream/kata-containers/kata-containers'
/home/ffidenci/src/upstream/kata-containers/kata-containers/tools/packaging/kata-deploy/local-build//kata-deploy-binaries-in-docker.sh --build=rootfs-image-confidential
sha256:f16c57890b0e85f6e1bbe1957926822495063bc6082a83e6ab7f7f13cabeeb93
Build kata version 3.13.0: rootfs-image-confidential
INFO: DESTDIR /home/ffidenci/src/upstream/kata-containers/kata-containers/tools/packaging/kata-deploy/local-build/build/rootfs-image-confidential/destdir
INFO: Create image
build image
~/src/upstream/kata-containers/kata-containers/tools/osbuilder ~/src/upstream/kata-containers/kata-containers/tools/packaging/kata-deploy/local-build/build/rootfs-image-confidential/builddir
INFO: Build image
INFO: image os: ubuntu
INFO: image os version: latest
Creating rootfs for ubuntu
/home/ffidenci/src/upstream/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/rootfs.sh -o 3.13.0-13f0807e9f5687d8e5e9a0f4a0a8bb57ca50d00c-dirty -r /home/ffidenci/src/upstream/kata-containers/kata-containers/tools/packaging/kata-deploy/local-build/build/rootfs-image-confidential/builddir/rootfs-image/ubuntu_rootfs ubuntu
INFO: rootfs_lib.sh file found. Loading content
~/src/upstream/kata-containers/kata-containers/tools/osbuilder/rootfs-builder/ubuntu ~/src/upstream/kata-containers/kata-containers/tools/osbuilder
~/src/upstream/kata-containers/kata-containers/tools/osbuilder
INFO: rootfs_lib.sh file found. Loading content
INFO: build directly
WARNING: apt does not have a stable CLI interface. Use with caution in scripts.
Get:1 http://security.ubuntu.com/ubuntu focal-security InRelease [128 kB]
Get:2 http://archive.ubuntu.com/ubuntu focal InRelease [265 kB]
Get:3 http://archive.ubuntu.com/ubuntu focal-updates InRelease [128 kB]
Get:4 http://security.ubuntu.com/ubuntu focal-security/restricted amd64 Packages [4276 kB]
Get:5 http://archive.ubuntu.com/ubuntu focal-backports InRelease [128 kB]
Get:6 http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages [11.3 MB]
Get:7 http://security.ubuntu.com/ubuntu focal-security/universe amd64 Packages [1297 kB]
Get:8 http://security.ubuntu.com/ubuntu focal-security/multiverse amd64 Packages [30.9 kB]
Get:9 http://security.ubuntu.com/ubuntu focal-security/main amd64 Packages [4187 kB]
Get:10 http://archive.ubuntu.com/ubuntu focal/restricted amd64 Packages [33.4 kB]
Get:11 http://archive.ubuntu.com/ubuntu focal/main amd64 Packages [1275 kB]
Get:12 http://archive.ubuntu.com/ubuntu focal/multiverse amd64 Packages [177 kB]
Get:13 http://archive.ubuntu.com/ubuntu focal-updates/main amd64 Packages [4663 kB]
Get:14 http://archive.ubuntu.com/ubuntu focal-updates/universe amd64 Packages [1589 kB]
Get:15 http://archive.ubuntu.com/ubuntu focal-updates/multiverse amd64 Packages [34.6 kB]
Get:16 http://archive.ubuntu.com/ubuntu focal-updates/restricted amd64 Packages [4463 kB]
Get:17 http://archive.ubuntu.com/ubuntu focal-backports/main amd64 Packages [55.2 kB]
Get:18 http://archive.ubuntu.com/ubuntu focal-backports/universe amd64 Packages [28.6 kB]
Fetched 34.1 MB in 5s (6284 kB/s)
...
```
This is happening due to a few issues in different places:
1. IMG_OS_VERSION, passed to osbuilder, is not used anywhere; OS_VERSION
   should be used instead, and we should break if OS_VERSION is not
   properly passed down
2. Using UBUNTU_CODENAME is simply wrong, as it picks up whatever comes
   as the base container from kata-deploy's local-build scripts, and it
   has just been working by luck
Note that while this commit fixes the wrong behaviour, it would break
the rootfs builds as they are; thus we need to set versions.yaml to use
20.04 where it was already using 20.04, even without us knowing it.
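A minimal sketch of the "break if OS_VERSION is not properly passed
down" part, assuming a plain shell check (the error message is
illustrative):
```sh
# Abort early if OS_VERSION wasn't passed down, instead of silently falling
# back to whatever the base container happens to be.
: "${OS_VERSION:?OS_VERSION must be set (e.g. 20.04) when building the ubuntu rootfs}"
```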
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
As this is required by the osbuilder tool to be able to properly set
the repositories used when building the rootfs.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
While having variables is nice, they are more tedious to write down,
and actually confusing for tired developer eyes to read; plus, we're
mixing the use of the yaml variables in some places while not using
them at all for some architectures.
In the best "all or nothing" spirit, let's just make it easier for our
developers to read versions.yaml and easily understand what's being
used.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
As the devices controller works differently with cgroupsv2, the
"/sys/fs/cgroup/devices/devices.list" file simply doesn't exist.
For now, let's skip the test until the test maintainer decides to
re-enable it for cgroupsv2.
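A sketch of what the skip condition amounts to, test-framework syntax
aside (the message is illustrative):
```sh
# On cgroup v2 hosts the v1 devices controller interface is simply not there,
# so there is nothing for this test to read.
if [ ! -f /sys/fs/cgroup/devices/devices.list ]; then
    echo "SKIP: devices.list not present, guest is using cgroup v2"
    exit 0
fi
```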
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
The changes done are:
* cpu/cpu.shares was replaced by cpu.weight
* The weight, according to our reference[0], is calculated by:
weight = (1 + ((request - 2) * 9999) / 262142)
* cpu/cpu.cfs_quota_us & cpu/cpu.cfs_period_us were replaced by cpu.max,
where quota and period are written together (in this order)
[0]: https://github.com/containers/crun/blob/main/crun.1.md#cgroup-v2
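A small sketch of how the expected values map from v1 to v2 (variable
names and numbers are illustrative):
```sh
# cpu/cpu.shares -> cpu.weight, using the crun conversion formula above
shares=512
weight=$(( 1 + (shares - 2) * 9999 / 262142 ))
echo "expected cpu.weight: ${weight}"

# cpu/cpu.cfs_quota_us + cpu/cpu.cfs_period_us -> a single cpu.max file,
# containing "<quota> <period>"
quota=100000
period=100000
echo "expected cpu.max: ${quota} ${period}"
```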
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
This reverts commit 091ad2a1b2, in order
to ensure tests would be running with cgroupsv2 on the guest.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
In the last couple of days I've seen the blogbench
metrics write latency test on clh fail a few times because
the latency was too low, so adjust the minimum range
to tolerate quicker finishes.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The static-checks targets are `pull_request`, so they run the PR's
version of the workflow. We therefore want to update required-tests.yaml
so that static-check workflow changes do trigger the static checks, in
order to test them properly.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now that we have the build-assets running on the gh-hosted runners,
let's try the same approach for the static-checks.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
I've noticed the following error when running the tests with SEV:
```
2025-01-21T17:10:28.7999896Z # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
2025-01-21T17:10:28.8000614Z # @ WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! @
2025-01-21T17:10:28.8001217Z # @@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
2025-01-21T17:10:28.8001857Z # IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
2025-01-21T17:10:28.8003009Z # Someone could be eavesdropping on you right now (man-in-the-middle attack)!
2025-01-21T17:10:28.8003348Z # It is also possible that a host key has just been changed.
2025-01-21T17:10:28.8004422Z # The fingerprint for the ED25519 key sent by the remote host is
2025-01-21T17:10:28.8005019Z # SHA256:x7wF8zI+LLyiwphzmUhqY12lrGY4gs5qNCD81f1Cn1E.
2025-01-21T17:10:28.8005459Z # Please contact your system administrator.
2025-01-21T17:10:28.8006734Z # Add correct host key in /home/kata/.ssh/known_hosts to get rid of this message.
2025-01-21T17:10:28.8007031Z # Offending ED25519 key in /home/kata/.ssh/known_hosts:178
2025-01-21T17:10:28.8007254Z # remove with:
2025-01-21T17:10:28.8008172Z # ssh-keygen -f "/home/kata/.ssh/known_hosts" -R "10.244.0.71"
```
And this was causing a failure to ssh into the confidential pod.
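One way to address this, following what the warning itself suggests
(`$pod_ip` and `$ssh_user` are placeholders here):
```sh
# Drop any stale host key recorded for a reused pod IP before connecting...
ssh-keygen -f "${HOME}/.ssh/known_hosts" -R "${pod_ip}"
# ...or simply don't record/check host keys for these throwaway test pods.
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null "${ssh_user}@${pod_ip}" true
```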
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Relying on dmesg is really not ideal, as we may lose important info,
mainly messages logged very early in the boot, depending on the size of
the kernel ring buffer.
So, for this specific test, let's increase the kernel ring buffer to 4M
by default.
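A sketch of how the buffer could be grown, assuming the guest kernel
command line is extended through `kernel_params` in the runtime
configuration (the file path is an assumption):
```sh
# Append log_buf_len=4M to the guest kernel command line so early boot
# messages aren't rotated out of the ring buffer before the test reads dmesg.
sudo sed -i 's|^kernel_params = "\(.*\)"|kernel_params = "\1 log_buf_len=4M"|' \
    /etc/kata-containers/configuration.toml
```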
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Let's make sure that we don't use Kata Containers' agent as init for the
Confidential related rootfses*, as we don't want to increase the agent's
complexity for no reason ... mainly when we can rely on a proper init
system.
*:
- images already used systemd as init
- initrds are now using systemd as init
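For reference, a sketch of an initrd rootfs build that keeps systemd as
init, assuming osbuilder's `AGENT_INIT` variable is the knob controlling
this (paths and distro are illustrative):
```sh
# AGENT_INIT=no keeps the distro's init system (systemd) instead of making
# the Kata agent PID 1 inside the guest.
AGENT_INIT=no ./rootfs.sh -r "${ROOTFS_DIR}" ubuntu
```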
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Bumps the go_modules group with 1 update in the /src/runtime directory: [golang.org/x/net](https://github.com/golang/net).
Bumps the go_modules group with 1 update in the /src/tools/csi-kata-directvolume directory: [golang.org/x/net](https://github.com/golang/net).
Bumps the go_modules group with 1 update in the /tools/testing/kata-webhook directory: [golang.org/x/net](https://github.com/golang/net).
Updates `golang.org/x/net` from 0.25.0 to 0.33.0
- [Commits](https://github.com/golang/net/compare/v0.25.0...v0.33.0)
Updates `golang.org/x/net` from 0.23.0 to 0.33.0
- [Commits](https://github.com/golang/net/compare/v0.23.0...v0.33.0)
Updates `golang.org/x/net` from 0.23.0 to 0.33.0
- [Commits](https://github.com/golang/net/compare/v0.23.0...v0.33.0)
---
updated-dependencies:
- dependency-name: golang.org/x/net
dependency-type: indirect
dependency-group: go_modules
- dependency-name: golang.org/x/net
dependency-type: direct:production
dependency-group: go_modules
- dependency-name: golang.org/x/net
dependency-type: indirect
dependency-group: go_modules
...
Signed-off-by: dependabot[bot] <support@github.com>
When the agent is run as the init process, cgroupfs is being set up. In
the case of cgroups v1 we needed to enable the memory hierarchy; this is
now enabled per default in cgroups v2. Additionally, the file
/sys/fs/cgroup/memory/memory.use_hierarchy isn't even available with v2.
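A shell illustration of the distinction (the actual change lives in the
agent; this is just the idea):
```sh
# On cgroup v2 the unified hierarchy is always hierarchical and
# memory.use_hierarchy doesn't exist; only cgroup v1 needs the explicit write.
if [ "$(stat -fc %T /sys/fs/cgroup)" = "cgroup2fs" ]; then
    : # nothing to do on v2
else
    echo 1 > /sys/fs/cgroup/memory/memory.use_hierarchy
fi
```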
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The `Create AKS cluster` step in `run-k8s-tests-on-aks.yaml` is likely
to fail since we are issuing `PUT` requests to `aks` at a relatively
high frequency, while the `aks` end has its limits on `bucket-size` and
`refill-rate`, documented here [1].
Use `nick-fields/retry@v3` to retry 10 seconds after a request fails,
based on observations that AKS was requesting 7 or 8 second delays
before retrying as part of its 429 responses.
[1] https://learn.microsoft.com/en-us/azure/aks/quotas-skus-regions#throttling-limits-on-aks-resource-provider-apis
Fixes: #10772
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
With this change, `virtiofsd` (gnu target) can be built and then used
with other components.
Depends: #10741
Fixes: #10739
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>