To do proper sandbox sizing for cold-plugged devices, introduce CDI,
the de-facto standard for enabling devices in containers. containerd
will pass through annotations for the accumulated CPU, memory, and now
CDI devices. With that information the sandbox size can be derived
correctly.
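For illustration, a minimal CDI spec looks like the sketch below; the
vendor, device name, and device node path are hypothetical:
```bash
# Hypothetical CDI spec: drop a JSON file where CDI-aware runtimes look
# for device definitions. A container can then request the device via the
# "cdi.k8s.io/" annotation prefix, e.g. cdi.k8s.io/gpu: "vendor.com/device=gpu0".
sudo mkdir -p /etc/cdi
cat <<'EOF' | sudo tee /etc/cdi/vendor.json
{
  "cdiVersion": "0.6.0",
  "kind": "vendor.com/device",
  "devices": [
    {
      "name": "gpu0",
      "containerEdits": {
        "deviceNodes": [ { "path": "/dev/vendor-gpu0" } ]
      }
    }
  ]
}
EOF
```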
Fixes: #7331
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
Let's add targets and actually enable users, and ourselves, to build those
components in the same way we build the rest of the project.
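As a hypothetical usage example (the tool name is illustrative), building
would then look no different from any other component:
```bash
# Illustrative only: build one of the src/tools components through the
# same make plumbing used for the rest of the project.
make -C src/tools/agent-ctl
```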
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we'd like to ship the content from src/tools, we need to build them
in the very same way we build the other components, and the first step
is providing scripts that can build those inside a container.
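A minimal sketch of the idea, with the builder image and tool names
assumed rather than taken from this change:
```bash
# Assumed names throughout: run the component build inside a container so
# the toolchain matches what the rest of the project uses.
docker run --rm \
    -v "$(pwd)":/kata-containers \
    -w /kata-containers \
    registry.example.com/kata-builder:latest \
    make -C src/tools/agent-ctl
```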
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's move, adapt, and use the kata-monitor tests from the tests repo.
In this PR I'm keeping the SoB from every single contributor who touched
those tests in the past.
Fixes: #8074
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: yaoyinnan <yaoyinnan@foxmail.com>
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
The kata-monitor tests are currently running as part of the Jenkins CI
with the following setups:
* Container Engines: CRI-O | containerd
* VMMs: QEMU
When using containerd, we're testing it with:
* Snapshotter: overlayfs | devmapper
We will stop running those tests on devmapper / overlayfs, as the
snapshotter choice is hardly going to expose a functional issue.
Also, when containerd is used, we're restricting this to run with its LTS
version.
As is known, due to our GHA limitations, this is just a placeholder; the
tests will actually be added in the next iterations.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This will serve us quite well in the upcoming test additions, which will
also have to be executed using CRI-O.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This will come in handy when doing tests with CRI-O, as CRI-O doesn't
install the CNI plugins for us.
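For reference, installing the reference CNI plugins is roughly the
following; the pinned version is an assumption:
```bash
# CRI-O expects the CNI plugin binaries under /opt/cni/bin but does not
# ship them itself, so fetch the reference plugins from upstream.
CNI_VERSION="v1.3.0"   # assumed version; pin to whatever the CI needs
ARCH="amd64"
sudo mkdir -p /opt/cni/bin
curl -fsSL "https://github.com/containernetworking/plugins/releases/download/${CNI_VERSION}/cni-plugins-linux-${ARCH}-${CNI_VERSION}.tgz" \
    | sudo tar -xz -C /opt/cni/bin
```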
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's ensure we have runc running with `SystemdCgroup = false`,
otherwise we'll face failures when running tests that depend on runc on
Ubuntu 22.04 with the LTS containerd.
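A minimal sketch of the toggle, assuming the stock containerd CRI config
layout:
```bash
# Flip runc to the cgroupfs driver: the SystemdCgroup key lives under the
# runc runtime options table in containerd's CRI plugin configuration.
sudo sed -i 's/SystemdCgroup = true/SystemdCgroup = false/' /etc/containerd/config.toml
sudo systemctl restart containerd
```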
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Allow Cloud Hypervisor to create a confidential guest (a TD or
"Trust Domain") rather than a VM (Virtual Machine) on Intel systems
that provide TDX functionality.
> **Notes:**
>
> - At least currently, when built with the `tdx` feature, Cloud Hypervisor
> cannot create a standard VM on a TDX capable system: it can only create
> a TD. This implies that on TDX capable systems, the Kata Configuration
> option `confidential_guest=` must be set to `true`. If it is not, Kata
> will detect this and display the following error:
>
> ```
> TDX guest protection available and must be used with Cloud Hypervisor (set 'confidential_guest=true')
> ```
>
> - This change expands the scope of the protection code, changing
> Intel TDX specific booleans to more generic "available guest protection"
> code that could be "none" or "TDX", or some other form of guest
> protection.
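As a hedged illustration (the sysfs/module paths vary across TDX-enabled
kernels), checking whether the host can actually create a TD might look
like:
```bash
# Assumed interface: TDX-enabled kernels expose a "tdx" parameter on the
# kvm_intel module when the host can launch Trust Domains.
if [ "$(cat /sys/module/kvm_intel/parameters/tdx 2>/dev/null)" = "Y" ]; then
    echo "TDX available: set 'confidential_guest=true' in the Kata config"
fi
```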
Fixes: #6448.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Introduce a few new constants (for PCI segment count and FS queues) and
move the disk queue constants to `convert.rs` to allow them to be used
there too.
> **Note:**
>
> This change gives the `ShareFs` code its own set of values rather
> than relying on the disk queue constants.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Modify the Cloud Hypervisor `add_device()` method to add `ShareFs` and
`Network` devices to the list of pending devices since only these two
device types need to be cached before VM startup. Full details in the
comments.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Remove the `VIRTIO_BLK_MMIO` check which appears to have been added
erroneously in the first place.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
This patch re-generates the client code for Cloud Hypervisor v35.0.
Note: The client code of cloud-hypervisor's OpenAPI is automatically
generated by openapi-generator.
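A hedged sketch of the regeneration step; the input spec path and output
path are assumptions here, not taken from the repository:
```bash
# Illustrative invocation of openapi-generator; the real paths live in
# the repository's generation script for the Cloud Hypervisor client.
openapi-generator-cli generate \
    -i cloud-hypervisor.yaml \
    -g go \
    -o client
```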
Fixes: #8057
Signed-off-by: Bo Chen <chen.bo@intel.com>
This PR fixes the path of the latency YAMLs for the latency test in the
kata metrics.
Fixes: #8055
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
We've faced this as part of the CI, only happening with the CRI-O tests:
```
not ok 1 Test readonly volume for pods
# (from function `exec_host' in file tests_common.sh, line 51,
# in test file k8s-file-volume.bats, line 25)
# `exec_host "echo "$file_body" > $tmp_file"' failed with status 127
# [bats-exec-test:38] INFO: k8s configured to use runtimeclass
# bash: line 1: $'\r': command not found
#
# Error from server (NotFound): pods "test-file-volume" not found
```
I must say I didn't dig into figuring out why this is happening, but we
should be safe enough to just trim the trailing '\r', as long as all the
tests keep passing on containerd.
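A minimal sketch of the trimming, assuming it happens where the command
string is assembled (`exec_host` is the test helper named in the log; its
real body may differ):
```bash
# Drop stray carriage returns before handing the command to bash; a lone
# '\r' is exactly what produces "$'\r': command not found".
exec_host() {
    local cmd="${1//$'\r'/}"    # strip all CR characters
    bash -c "${cmd}"            # hypothetical: run the command on the host
}
```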
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We need the default capabilities to be enabled, especially `SYS_CHROOT`,
in order for the tests that access the host to pass.
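As a hypothetical illustration of why `SYS_CHROOT` matters here (the
image and mount are illustrative): a helper container that chroots into
the host filesystem needs that capability:
```bash
# Illustrative only: chroot(2) from inside a container requires the
# SYS_CHROOT capability, which is part of the default capability set.
docker run --rm --cap-add SYS_CHROOT -v /:/host busybox chroot /host uname -r
```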
A huge thanks to Greg Kurz for spotting this and suggesting the fix.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Signed-off-by: Greg Kurz <groug@kaod.org>
Some of the "k8s distros" allow using CRI-O in a non-official way, and
if that's done we cannot simply assume they're on containerd, otherwise
kata-deploy will simply not work.
In order to avoid such an issue, let's check for `cri-o` as the container
engine in the first place, and only proceed with the checks for the "k8s
distros" after we have ruled out that CRI-O is being used.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This PR fixes the network metrics section of the README by keeping only
the current tests that we have in our kata metrics.
Fixes: #8017
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>