While the local-build folder's Makefile dependencies for the
confidential nvidia rootfs targets already declare the pause image
and coco-guest-components dependencies, the actual rootfs
composition does not contain the pause image bundle and the
certificates relevant for guest pull. This change ensures the
rootfs gets composed with the relevant files.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
With Arm CCA support, building the runtime relies on the kernel's kvm.h
headers. The required headers are newer than what traditional distributions
ship, so we build the kernel headers first and then inject them into the
shim-v2 build container.
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
Co-authored-by: Seunguk Shin <seunguk.shin@arm.com>
With this change we namespace the stage-one rootfs tarball name
and use the same name across all its uses. This will help overcome
several subtle local-build problems.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
We need to ensure that any change to the Dockerfile (and its directory)
leads to the build being retriggered, rather than using the cached version.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We are seeing more protoc-related failures on the new
runners, so try adding the protobuf-compiler dependency
to these steps to see if it helps.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
In commit 9602ba6ccc, from February this
year, we introduced a check to ensure that the files needed for
signing the kernel build are present. However, we noticed last week
that the workflow was built on a fair number of wrong assumptions. :-)
Zvonko fixed the majority of those, but this bit was left behind and
would cause breakages when using a cached kernel ... although it
passed when building new kernels.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
This is needed so that the kernel setup picks up the correct
config values from our fragments directories.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We need to make sure that the kernel we're using has the
correct configs set; otherwise, module signing will not work.
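For reference, a minimal sketch of such a fragment (the fragment file name is illustrative and the exact option set shipped by this change may differ; these are the standard upstream module-signing options):
```
# Illustrative fragment name; the exact options shipped may differ.
cat <<EOF > common/confidential_module_signing.conf
CONFIG_MODULE_SIG=y
CONFIG_MODULE_SIG_ALL=y
CONFIG_MODULE_SIG_SHA512=y
EOF
```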
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
What was done in the past, trying to set the env var in the same step
where it'd be used, simply does not work.
Instead, we need to properly set it through the `env` setup, as done
now.
We're also bumping the kata_config_version to ensure we retrigger the
kernel builds.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There's no reason to keep the code duplicated between the SNP / TDX
tests for CoCo, as those basically use the same configuration
nowadays.
Note that in the TEEs case the nydus-snapshotter is deployed once by
the admin, instead of being deployed on every run ... so I'm actually
removing the nydus-snapshotter steps to make it clear that those
steps are not performed by the CI.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
For DGX-like systems we need additional binaries and libraries to
enable both the Kata AND the CoCo use-cases.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Update tools/osbuilder/rootfs-builder/nvidia/nvidia_rootfs.sh
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
For those who are not willing to use the nydus-snapshotter for pulling
the image inside the guest, let's allow them to set the
experimental_force_guest_pull option, introduced by Edgeless, as part
of our helm chart.
This option can be set as:
_experimentalForceGuestPull: "qemu-tdx,qemu-coco-dev"
which would then ensure that the configuration for `qemu-tdx` and
`qemu-coco-dev` has the option enabled.
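For example, with a hypothetical chart path and release name (only the value key and its contents come from this change; note that commas in a `--set` value must be escaped):
```
# Hypothetical chart path / release name. Commas inside a --set value
# must be escaped so helm doesn't treat them as key separators.
helm install kata-deploy ./kata-deploy \
  --set "_experimentalForceGuestPull=qemu-tdx\,qemu-coco-dev"
```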
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As the kata-deploy helm chart has been the only way we've been testing
kata-containers deployment as part of our CI, it's time to finally get
rid of the kustomize yamls and avoid having to maintain two different
methods (with one of them not being tested).
Here I removed:
* the kata-deploy yamls and kustomize yamls
* the kata-cleanup yamls and kustomize yamls
* the kata-rbac yamls and kustomize yamls
* the README.md for the kustomize yamls
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
NVRC introduced the confidential feature flag and we
haven't updated the rootfs build to accommodate it.
If rootfs_type == confidential, use --feature=confidential.
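A minimal sketch of the intended gating (variable names are illustrative; only the flag itself comes from NVRC):
```
# Illustrative variable names; --feature=confidential is the NVRC flag.
nvrc_flags=""
if [ "${rootfs_type}" = "confidential" ]; then
    nvrc_flags="${nvrc_flags} --feature=confidential"
fi
```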
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The Canonical TDX release is not needed for vanilla Ubuntu 25.10, but
GRUB_CMDLINE_LINUX_DEFAULT needs to contain `nohibernate` and
`kvm_intel.tdx=1`.
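Concretely, on the host this boils down to the standard GRUB workflow:
```
# Prepend the required parameters to the existing cmdline, then
# regenerate the GRUB configuration.
sudo sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="/&nohibernate kvm_intel.tdx=1 /' /etc/default/grub
sudo update-grub
```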
Signed-off-by: Szymon Klimek <szymon.klimek@intel.com>
Let's expose the EXPERIMENTAL_SETUP_SNAPSHOTTER script environment
variable to our chart, allowing users of our helm chart to take
advantage of this experimental feature.
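For example (the chart value key below is hypothetical; the actual key exposed by the chart may be named differently):
```
# Hypothetical value key -- check the chart's values.yaml for the
# actual name.
helm install kata-deploy ./kata-deploy \
  --set env.experimentalSetupSnapshotter="nydus"
```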
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We may deploy in scenarios where we want to have both snapshotters set
up, sometimes even simply to test which one behaves better.
With this in mind, let's allow EXPERIMENTAL_SETUP_SNAPSHOTTER to receive
a comma-separated list of snapshotters, such as:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="erofs,nydus"
```
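Internally this just means splitting on commas and setting up each entry in turn; a sketch (the helper name is illustrative):
```
# Split the comma-separated list and handle each snapshotter in turn;
# setup_snapshotter is an illustrative helper name.
IFS=',' read -ra snapshotters <<< "${EXPERIMENTAL_SETUP_SNAPSHOTTER}"
for snapshotter in "${snapshotters[@]}"; do
    setup_snapshotter "${snapshotter}"
done
```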
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Similarly to what's been done for the nydus-snapshotter, let's allow
users to have the erofs-snapshotter set up by simply passing:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="erofs".
```
Mind that erofs, although a built-in containerd snapshotter, has system
dependencies that we will *NOT* install; it's up to the admin to do so
(see the sketch after this list). These dependencies are:
* erofs-utils
* fsverity
* erofs module loaded
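On an Ubuntu-based node, satisfying those could look like the following (package names assume Ubuntu; on some distributions the fsverity CLI is shipped as fsverity-utils):
```
# Admin-side setup, *not* performed by kata-deploy.
sudo apt-get install -y erofs-utils fsverity
sudo modprobe erofs
```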
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's introduce a new EXPERIMENTAL_SETUP_SNAPSHOTTER environment
variable that, when set, allows kata-deploy to put the nydus snapshotter
in the correct place and configure containerd accordingly.
Mind, this is a stop-gap until the nydus-snapshotter helm chart is ready
to be used and behaving well enough to become a weak dependency of our
helm chart. When that happens this code can be deleted entirely.
Users can have the nydus-snapshotter deployed and configured for the
guest-pull use case by simply passing:
```
EXPERIMENTAL_SETUP_SNAPSHOTTER="nydus"
```
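On the containerd side, "configure accordingly" essentially means registering nydus as a proxy plugin; a sketch of the snippet involved (the socket path is the nydus-snapshotter default, and the exact config file placement may differ):
```
# Register nydus as a containerd proxy plugin. The socket path is the
# nydus-snapshotter default; the config file location may vary.
cat <<EOF | sudo tee -a /etc/containerd/config.toml
[proxy_plugins.nydus]
  type = "snapshotter"
  address = "/run/containerd-nydus/containerd-nydus-grpc.sock"
EOF
```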
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we'd end up adding the file several times, which could lead
to problems when removing the entry, leaving containerd unable to
start due to an import file not being present.
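A guard along these lines keeps the operation idempotent (paths are illustrative, and the sketch assumes an `imports = [` line already exists in the config):
```
# Only add the drop-in import if it is not already referenced.
# Paths are illustrative; assumes an existing "imports = [" line.
config="/etc/containerd/config.toml"
import="/etc/containerd/config.d/kata-deploy.toml"
grep -qF "${import}" "${config}" || \
    sed -i "s|^imports = \[|&\"${import}\", |" "${config}"
```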
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have noticed in the CI that `gen_init_cpio ...` was returning 255
and breaking the build. Why? I am not sure.
When chatting with Steve, he suggested splitting the command so it'd be
easier to see what's actually breaking. But guess what? There's no
breakage once we split the command.
So, let's try it out and see whether the CI passes after it.
If someone is willing to educate us on this one, please do, that would
be helpful! :-)
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Moving the CUDA repo to the top for all essential packages
and adding a repo priority favouring NVIDIA-based repos.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Currently, the use of openvpn clients/servers is not possible in Kata
UVMs. The following error message can be expected:
ERROR: Cannot open TUN/TAP dev /dev/net/tun: No such device (errno=19)
To support openvpn scenarios using bridging and TAP, we enable various
kernel networking config options.
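The exact option list isn't reproduced in this message; a representative subset for the bridging + TAP path would be (fragment name illustrative, the actual change may enable more):
```
# Representative subset only; the actual fragment may enable more.
cat <<EOF > common/network-tun.conf
CONFIG_TUN=y
CONFIG_BRIDGE=y
CONFIG_VETH=y
EOF
```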
Signed-off-by: Manuel Huber <mahuber@microsoft.com>
Since we cannot build all components with libc=musl and
static RUSTFLAGS, we still need to ship libgcc for the AA and other
guest components.
Without this change the guest components do not work and we see:
/usr/local/bin/attestation-agent: error while loading shared
libraries: libgcc_s.so.1: cannot open shared object file: No such file or directory
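In rootfs-build terms this amounts to copying the host's libgcc_s into the guest image, along these lines (library and rootfs paths are illustrative and arch-dependent):
```
# Ship libgcc_s.so.1 so dynamically linked guest components (e.g. the
# attestation-agent) can start. Paths are illustrative.
cp /usr/lib/x86_64-linux-gnu/libgcc_s.so.1 "${ROOTFS_DIR}/lib/"
```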
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
As a consequence of moving away from Advanced Security for Zizmor, it
now checks the entire codebase and will error out on this PR and future
ones.
To be reverted once we address all Zizmor findings in a future PR.
Signed-off-by: Aurélien Bombo <abombo@microsoft.com>
The coco-guest-components tarball is used as-is for both the vanilla
CoCo rootfs and the NVIDIA-enabled rootfs. nvidia-attester can be built
without NVML, so make it globally enabled for coco-guest-components.
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
I've hit this when using a machine with a slow internet connection,
which took ages to download the kata-cleanup image; helm then timed out
in the middle of the cleanup, leading to the cleanup job being restarted
and then bailing with an error, as the runtimeclasses that kata-deploy
tries to delete had already been deleted.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
A minor release of QEMU is out, so update to it for fixes and features.
QEMU changelog: https://wiki.qemu.org/ChangeLog/10.1
Notes:
* AVX support is no longer an option that can be enabled / disabled.
* Passt requires glibc 2.40+, which means a dependency on Ubuntu 25.04
or newer, thus we're disabling it.
Signed-off-by: Alex Tibbles <alex@bleg.org>
There are still some issues to be addressed before we can mark `make
test` for `libs` as required. Temporarily mark this case as not required.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>