Fixes: #9748
A configuration option `guest_component_procs` has been introduced that
indicates which guest-component processes are supposed to be spawned by
the agent. The default behaviour remains that the agent actively spawns
all of those processes; at the moment this is based on the presence of
the binaries in the rootfs and the `guest_component_api_rest` option.
The new option is incremental:
none -> attestation-agent -> confidential-data-hub -> api-server-rest
e.g. api-server-rest implies attestation-agent and confidential-data-hub.
The `none` value has been removed from `guest_component_api_rest`, since
this is now addressed by the newly introduced option.
To keep the expected behaviour for non-CoCo guests unchanged, we will
still only attempt to spawn the processes if the requested attestation
binaries are present in the rootfs, and issue a warning otherwise.
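To illustrate the incremental semantics, here is a minimal Go sketch
(the agent itself is implemented in Rust, and the identifiers below are
made up for illustration only): a configured level enables that process
and every process below it.
```go
package main

import "fmt"

// GuestComponentProcs models the incremental levels of the new option.
// The real implementation lives in the Rust agent; this enum only
// illustrates the "higher level implies all lower levels" semantics.
type GuestComponentProcs int

const (
	None GuestComponentProcs = iota
	AttestationAgent
	ConfidentialDataHub
	ApiServerRest
)

// shouldSpawn reports whether a process at level `proc` gets spawned
// when the configuration is set to `configured`.
func shouldSpawn(configured, proc GuestComponentProcs) bool {
	return configured >= proc
}

func main() {
	configured := ApiServerRest
	// api-server-rest implies attestation-agent and confidential-data-hub.
	fmt.Println(shouldSpawn(configured, AttestationAgent))    // true
	fmt.Println(shouldSpawn(configured, ConfidentialDataHub)) // true
	fmt.Println(shouldSpawn(None, AttestationAgent))          // false
}
```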
Signed-off-by: Magnus Kulke <magnuskulke@microsoft.com>
It creates the following command line, just as the Golang runtime does:
-object rng-random,id=rng0,filename=/dev/urandom -device virtio-rng-pci,rng=rng0
Signed-off-by: Emanuel Lima <emlima@redhat.com>
We need to remove the device from the tracking map; otherwise a
container restart will increment the bus index, we will run out of
root ports, and the machine will crash.
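A minimal Go sketch of the idea (the names are illustrative, not the
actual runtime code): releasing the entry on detach lets a restarted
container reuse its root port instead of consuming a fresh one.
```go
package main

import (
	"errors"
	"fmt"
)

const maxRootPorts = 2 // illustrative limit

// portTracker is an illustrative stand-in for the runtime's device
// tracking map: device ID -> root-port (bus) index.
type portTracker struct {
	ports map[string]int
	used  map[int]bool
}

func newPortTracker() *portTracker {
	return &portTracker{ports: map[string]int{}, used: map[int]bool{}}
}

// attach assigns the next free root port to the device.
func (t *portTracker) attach(devID string) (int, error) {
	for idx := 0; idx < maxRootPorts; idx++ {
		if !t.used[idx] {
			t.used[idx] = true
			t.ports[devID] = idx
			return idx, nil
		}
	}
	return -1, errors.New("out of root ports")
}

// detach releases the device's root port. Without this, every container
// restart re-attaches the device on a fresh port until none are left.
func (t *portTracker) detach(devID string) {
	if idx, ok := t.ports[devID]; ok {
		delete(t.ports, devID)
		delete(t.used, idx)
	}
}

func main() {
	t := newPortTracker()
	for restart := 0; restart < 3; restart++ {
		idx, err := t.attach("gpu0")
		fmt.Println(idx, err) // stays at 0; without detach the 3rd attach would fail
		t.detach("gpu0")
	}
}
```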
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
We need special handling for pod_sandbox, pod_container and
single_container regarding how and when to inject CDI devices.
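A rough Go sketch of one plausible split (an assumption for
illustration, not necessarily what the patch does; the type and
function names are made up): the sandbox container gets no CDI devices,
while pod and single containers get the devices requested via CDI.
```go
package main

import "fmt"

// ContainerType mirrors the three cases mentioned above.
type ContainerType int

const (
	PodSandbox ContainerType = iota
	PodContainer
	SingleContainer
)

// injectCDIDevices sketches per-container-type handling; the real logic
// lives in the runtime and may differ.
func injectCDIDevices(ct ContainerType, cdiDevices []string) []string {
	switch ct {
	case PodSandbox:
		// The sandbox (pause) container itself gets no CDI devices.
		return nil
	case PodContainer, SingleContainer:
		// Devices requested via CDI annotations are injected into the
		// container spec before it is handed to the agent.
		return cdiDevices
	default:
		return nil
	}
}

func main() {
	devs := []string{"nvidia.com/gpu=0"}
	fmt.Println(injectCDIDevices(PodSandbox, devs))      // []
	fmt.Println(injectCDIDevices(SingleContainer, devs)) // [nvidia.com/gpu=0]
}
```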
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
In Kubernetes we still do not have proper VM sizing
at sandbox creation time. This KEP tries to mitigate
that: kubernetes/enhancements#4113, but it can take
some time until Kubernetes and containerd or other runtimes
have those changes rolled out.
Before, we used a static configuration of VFIO ports, and we
introduced CDI support, which needs a patched containerd.
We want to eliminate the patched containerd in the GPU case
as well.
Fixes: #8860
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Let's see if it helps with issues like:
```
error: must build at directory: not a valid directory: evalsymlink
failure on
'"/home/runner/actions-runner/_work/kata-containers/kata-containers/tests/functional/kata-deploy/../../..//tools/packaging/kata-deploy/kata-cleanup/overlays/k0s"'
: lstat
/home/runner/actions-runner/_work/kata-containers/kata-containers/tests/functional/kata-deploy/":
no such file or directory
```
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This takes a few minutes that could be saved, so let's avoid doing it
on all the platforms and only do it when it's needed (the baremetal
use case).
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Currently only "baremetal" runs all the tests, but we could easily run
"all" locally or on the GitHub-provided runners, even when not using
a "baremetal" system.
The reason I'd like to have a differentiation between "all" and
"baremetal" is that "baremetal" may require some cleanup, which "all"
can simply skip when testing against a freshly created VM.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
For now we've only exposed the option to deploy kata-deploy for k3s and
vanilla Kubernetes when using containerd.
However, I also need to deploy k0s and rke2 for an internal CI, and
having those exposed here does not hurt and allows us to easily expand
the CI at any time in the future.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
k0s was added to kata-deploy, but its kata-cleanup counterpart was
never added. Let's fix that.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
k0s deployment has been broken since we moved to using `tomlq` in our
scripts. The reason is that, before we switched to `tomlq`, our script
would inadvertently end up creating the file as a side effect.
Now, in order to fix the situation, we need to explicitly create the
file and let `tomlq` add the needed content.
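As a sketch of the idea in Go (the actual fix is in the shell script,
and the path below is hypothetical): make sure the configuration file
exists before `tomlq` is asked to edit it.
```go
package main

import (
	"log"
	"os"
)

func main() {
	// Hypothetical path; the real script derives it from the k0s setup.
	const containerdConf = "/path/to/containerd.toml"

	// Explicitly create the (possibly missing) config file, so that the
	// subsequent tomlq edit has a file to operate on. Previously the
	// script created it as a side effect; tomlq alone does not.
	f, err := os.OpenFile(containerdConf, os.O_CREATE|os.O_WRONLY, 0o644)
	if err != nil {
		log.Fatal(err)
	}
	f.Close()
}
```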
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>