So attestation-agent and the other components now have a version that includes
the ttrpc bump to v0.8.4, allowing us to use the latest LTS kernel.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
We've been appending to the wrong variable for quite some time, it
seems, leading to the rootfs not actually being regenerated when needed.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Set CONFIG_BLK_DEV_WRITE_MOUNTED=y to restore previous kernel behaviour.
Kernel v6.8+ will by default block buffer writes to block devices mounted by filesystems.
Unfortunately, this is exactly the behaviour we need in order to use mounted
loop devices, which some teams rely on to build OSIs and as an overlay
backing store.
More info on this config item is available [here](https://cateee.net/lkddb/web-lkddb/BLK_DEV_WRITE_MOUNTED.html).
Fixes: #10808
Signed-off-by: Simon Kaegi <simon.kaegi@gmail.com>
Run:
```
cargo update -p cookie-store
cargo update -p publicsuffix
```
to update the version of `idna` and resolve CVE-2024-12224.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Removed a rogue printf and updated the logging to say that we're
waiting for the CDI spec(s) to be generated, rather than reporting an
error: at this point it is not an error yet; we have a timeout, and
only after that expires does it become an error.
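Roughly, the intended behaviour is the following (a minimal sketch with illustrative names, not the actual agent code):
```
use std::path::Path;
use std::time::{Duration, Instant};

// Wait for CDI spec files to show up; while waiting, log that we are
// waiting. Only once the timeout expires does this become an error.
fn wait_for_cdi_specs(spec_dir: &Path, timeout: Duration) -> Result<(), String> {
    let start = Instant::now();
    loop {
        let found = spec_dir
            .read_dir()
            .map(|mut entries| entries.next().is_some())
            .unwrap_or(false);
        if found {
            return Ok(());
        }
        if start.elapsed() >= timeout {
            return Err(format!("timed out waiting for CDI specs in {:?}", spec_dir));
        }
        println!("waiting for CDI spec(s) to be generated in {:?}", spec_dir);
        std::thread::sleep(Duration::from_millis(500));
    }
}
```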
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
With the create_container_timeout in place, the dial_timeout is less
important. Add the custom timeout for GPUs to create_container_timeout.
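Illustratively (a sketch only; the names and the per-GPU grace period are assumptions, not the actual configuration logic):
```
use std::time::Duration;

// Extend the create-container timeout when the workload requests GPUs,
// so device initialization has time to complete.
fn create_container_timeout(base: Duration, gpu_count: u32) -> Duration {
    const PER_GPU_GRACE: Duration = Duration::from_secs(15); // assumed value
    base + PER_GPU_GRACE * gpu_count
}

fn main() {
    let timeout = create_container_timeout(Duration::from_secs(30), 8);
    println!("create_container timeout: {timeout:?}");
}
```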
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
The tags created automatically for published GitHub releases
are probably not annotated, so simply running `git describe` does not
get us the correct tag. Use `git describe --tags` to allow git
to look at all tags, not just annotated ones.
Signed-off-by: Anastassios Nanos <ananos@nubificus.co.uk>
AgentConfig now has the cdi_timeout from the kernel
cmdline; update the relevant function signature and use
it in the for loop.
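A minimal sketch of the idea, assuming an `agent.cdi_timeout=<seconds>` kernel parameter (the parsing code here is illustrative, not the agent's actual implementation):
```
use std::time::Duration;

// Pull `agent.cdi_timeout=<seconds>` out of the kernel command line.
fn cdi_timeout_from_cmdline(cmdline: &str) -> Option<Duration> {
    cmdline
        .split_whitespace()
        .find_map(|param| param.strip_prefix("agent.cdi_timeout="))
        .and_then(|value| value.parse::<u64>().ok())
        .map(Duration::from_secs)
}

fn main() {
    let cmdline = "console=ttyS0 agent.cdi_timeout=320";
    assert_eq!(
        cdi_timeout_from_cmdline(cmdline),
        Some(Duration::from_secs(320))
    );
}
```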
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Some systems, like a DGX with 8 H100 or 8 H800 GPUs, need extended
time to initialize. We need to make sure the CDI timeout is
configurable, to support even systems with 16 GPUs.
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Phase 1 of Issue #10840.
AMD has deprecated SEV support in Kata Containers, and going forward,
SNP will be the only AMD feature supported. As a first step in this
deprecation process, we are removing the SEV CI workflow from the test
suite to unblock the CI.
Future commits will remove the redundant SEV code paths.
Signed-off-by: Adithya Krishnan Kannan <AdithyaKrishnan.Kannan@amd.com>
The block volume test has failed on 10/10 nightlies
and all the PRs I've seen, so skip it until it can be assessed.
See #10873
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Based on the guidance from @Xynnn007 in #10851:
> The new version of image-rs will do attestation once
> ClientBuilder.build().await() is called, while the old version
> will do so lazily when the first image pull request comes.
Looks like it's called in rpc::start() in kata-agent, when
I'm afraid the network hasn't been initialized yet.
> I am not sure if the guest network is prepared after
> the DNS is configured (in create_sandbox),
> if so we can move (the init_image_service) right after that.
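In other words, the ordering change looks roughly like this (stub functions standing in for the real agent code):
```
// Stubs standing in for the real agent functions named above.
fn setup_guest_dns() { /* configure DNS for the guest */ }
fn init_image_service() { /* ClientBuilder.build() now attests over the network */ }

fn create_sandbox() {
    setup_guest_dns();
    // Moved here, after DNS setup: building the image-rs client triggers
    // attestation eagerly, which needs a working guest network.
    init_image_service();
}

fn main() {
    create_sandbox();
}
```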
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This brings in the commit bumping ttrpc to 0.8.4, which fixes
connection issues with kernel 6.12.9+.
As image-rs has a new builder pattern and several of the values in the
image client config have been renamed, let's change the agent to account
for this.
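For illustration, the builder-style construction looks roughly like this (a self-contained sketch; the names are illustrative, not image-rs's exact API):
```
// A builder that collects the (renamed) config values and finalizes the
// client on build(); in image-rs, build() is async and performs attestation.
#[derive(Default)]
struct ClientBuilder {
    image_pull_proxy: Option<String>,
    skip_signature_verification: bool,
}

impl ClientBuilder {
    fn image_pull_proxy(mut self, uri: &str) -> Self {
        self.image_pull_proxy = Some(uri.to_string());
        self
    }

    fn skip_signature_verification(mut self, skip: bool) -> Self {
        self.skip_signature_verification = skip;
        self
    }

    fn build(self) -> Client {
        Client {
            proxy: self.image_pull_proxy,
            verify_signatures: !self.skip_signature_verification,
        }
    }
}

struct Client {
    proxy: Option<String>,
    verify_signatures: bool,
}

fn main() {
    let client = ClientBuilder::default()
        .image_pull_proxy("http://proxy:3128")
        .skip_signature_verification(false)
        .build();
    println!("proxy: {:?}, verify: {}", client.proxy, client.verify_signatures);
}
```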
Signed-off-by: Tobin Feldman-Fitzthum <tobin@linux.ibm.com>
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
cgroups v2 enforces stricter delegation rules, preventing operations on
cgroups outside our ownership boundary. When running Docker-in-Docker (DinD),
processes must be attached to an "init" subcgroup within the systemd unit.
This fix detects and uses the init subcgroup when proxying process attachment.
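A minimal sketch of the detection (illustrative, not the exact fix):
```
use std::path::{Path, PathBuf};

// Under cgroups v2 delegation we may only operate inside our own subtree,
// so prefer the "init" subcgroup of the systemd unit when it exists.
fn attach_target(unit_cgroup: &Path) -> PathBuf {
    let init = unit_cgroup.join("init");
    if init.is_dir() {
        init
    } else {
        unit_cgroup.to_path_buf()
    }
}

fn main() {
    let target = attach_target(Path::new("/sys/fs/cgroup/system.slice/example.service"));
    println!("attaching process to {}", target.display());
}
```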
Fixes: #10733
Signed-off-by: Antoine Gaillard <antoine.gaillard@datadoghq.com>
When trying to deploy nydus on kcli locally we get the
following failure:
```
root@sh-kata-ci1:~# kubectl get pods -n nydus-system
NAMESPACE NAME READY STATUS RESTARTS AGE
nydus-system nydus-snapshotter-5kdqs 0/1 CrashLoopBackOff 4 (84s ago) 7m29s
```
Digging into this I found that the nydus-snapshotter service
is failing with:
```
ubuntu@kata-k8s-worker-0:~$ journalctl -u nydus-snapshotter.service
-- Logs begin at Wed 2025-02-12 15:06:08 UTC, end at Wed 2025-02-12 15:20:27 UTC. --
Feb 12 15:10:39 kata-k8s-worker-0 systemd[1]: Started nydus snapshotter.
Feb 12 15:10:39 kata-k8s-worker-0 containerd-nydus-grpc[6349]: /usr/local/bin/containerd-nydus-grpc:
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required b>
Feb 12 15:10:39 kata-k8s-worker-0 containerd-nydus-grpc[6349]: /usr/local/bin/containerd-nydus-grpc:
/lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required b>
Feb 12 15:10:39 kata-k8s-worker-0 systemd[1]: nydus-snapshotter.service: Main process exited, code=exited, status=1/FAILURE
```
I think this is because Ubuntu 20.04 ships glibc version:
```
ubuntu@kata-k8s-worker-0:~$ ldd --version
ldd (Ubuntu GLIBC 2.31-0ubuntu9.16) 2.31
```
so it's too old for the nydus snapshotter.
Also, 20.04 is EoL soon, so bumping is better anyway.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Some problems hidden in the `dbs` crates were revealed after making these
crates workspace components; fix them according to `cargo clippy`'s suggestions.
Signed-off-by: Ruoqing He <heruoqing@iscas.ac.cn>
Peer pods have a Linux namespace of type network. We want to make sure that all
containers in the same pod use the same namespace. Therefore, we add the first
namespace path to the state and check all other requests against it.
This commit also adds the corresponding integration test in the policy crate,
showcasing the benefit of having Rust integration tests for the policy.
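A minimal sketch of that stateful check (names are illustrative, not the actual policy crate API):
```
use std::collections::HashMap;

#[derive(Default)]
struct PolicyState {
    // First network namespace path seen per sandbox.
    netns_by_sandbox: HashMap<String, String>,
}

impl PolicyState {
    fn check_netns(&mut self, sandbox: &str, netns_path: &str) -> Result<(), String> {
        // The first container in the pod fixes the expected namespace;
        // every later request must match it.
        let expected = self
            .netns_by_sandbox
            .entry(sandbox.to_string())
            .or_insert_with(|| netns_path.to_string());
        if expected.as_str() == netns_path {
            Ok(())
        } else {
            Err(format!(
                "network namespace mismatch: expected {expected}, got {netns_path}"
            ))
        }
    }
}
```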
Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
The generated rego policies for `CreateContainerRequest` are stateful, and that
state is handled in the policy crate. We use this policy crate in the
genpolicy integration test to check that those state changes are
handled correctly without spinning up an agent or even a cluster.
This also makes it easy to test at, e.g., the `CreateContainerRequest` level
instead of relying on changing the YAML that is applied to a cluster.
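For example, a request-level test can exercise the state handling directly (reusing the illustrative `PolicyState` sketch above):
```
#[test]
fn second_container_with_different_netns_is_rejected() {
    let mut state = PolicyState::default();
    assert!(state.check_netns("sandbox-1", "/var/run/netns/pod-ns").is_ok());
    assert!(state.check_netns("sandbox-1", "/var/run/netns/pod-ns").is_ok());
    assert!(state.check_netns("sandbox-1", "/var/run/netns/other").is_err());
}
```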
Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
This commit allows genpolicy to be invoked programmatically, so that other
Rust tools that don't want to consume genpolicy as a binary can generate
policies. One such use case is the policy integration test implemented in the
following commits.
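Roughly, this enables usage like the following (the `generate_policy` signature is an assumption, not genpolicy's real API):
```
// Stub standing in for a library-style genpolicy entry point.
mod genpolicy_stub {
    pub fn generate_policy(workload_yaml: &str) -> Result<String, String> {
        // The real crate would parse the YAML and emit a rego policy.
        Ok(format!(
            "package agent_policy\n# derived from {} bytes of YAML",
            workload_yaml.len()
        ))
    }
}

fn main() {
    // A test harness can now generate policies in-process instead of
    // shelling out to the genpolicy binary.
    let policy = genpolicy_stub::generate_policy("apiVersion: v1\nkind: Pod").unwrap();
    println!("{policy}");
}
```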
Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
The policy module augments the policy generated with genpolicy by keeping and
providing state to each invocation.
Therefore, it is no longer sufficient to test the passing of requests in
the genpolicy crate.
Since in Rust, integration tests cannot call functions that are not exposed
publicly, this commit factors out the policy module of the agent into its
own crate and exposes the necessary functions to be consumed by the agent
and an integration test. The integration test itself is implemented in the
following commits.
Signed-off-by: Leonard Cohnen <lc@edgeless.systems>
The latest containerd has an issue with its e2e test, thus we apply
the following fix to work around it. For more info about this issue,
please see:
https://github.com/containerd/containerd/pull/11240
Once that PR is merged and a new version is released, we can remove
this workaround.
Signed-off-by: Fupan Li <fupan.lfp@antgroup.com>