So far we've only been building the initrd for the nvidia rootfs.
However, we're also interested in having the image used for a few
use-cases.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Fixes: #12123
The `include` introduced in #12069 to choose a different runner
based on component leads to another set of redundant jobs
where `matrix.command` is empty.
This commit goes back to the `runs-on` solution, but makes
the condition human-readable.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
In the CoCo tests jobs @wainersm created a test report step
that summarises the jobs, so they are easier to understand and
get results from. This is very useful, so let's roll it out to all the bats
tests.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Add an allow-all policy for the CC GPU tests and ensure the init-data
device is being created (hypervisor annotations).
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Right now we have only been passing the env var to the deployment
script, but we really need to pass it to the tests script as well.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When testing this branch, on several occasions the Delete
AKS cluster step has hung for multiple hours, so add a timeout
to prevent this.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The new environment of the Power runners for agent checks is causing two
test case failures, w.r.t. SELinux and inodes, which need further
investigation; this is most likely due to the environment change and not
an issue with the agent itself.
Fall back to running the agent checks on the original ppc64le self-hosted runners.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
As the arm 22.04 runner isn't working at the moment, let's test the
24.04 version to see if that is better.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
The fact that we were not explicitly setting the VMM was leading to us
testing with the default runtime class (qemu). :-/
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
By doing this, the ones interested in RISC-V support can still have
good visibility of its state, without the extra noise in our CI.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have had those tests broken for months. It's time to get rid of
them.
NOTE that we could easily revert this commit and re-add those tests as
soon as we find someone to maintain and be responsible for such
integration.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's ensure Trustee is deployed, as some of the tests rely on images that
live behind authentication. /o\
The approach taken here to deploy Trustee is exactly the same one taken
in the other CoCo tests, apart from an env var passed to ensure we're
using the NVIDIA remote verifier (which will come in handy very very
soon).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Let's add a new NVIDIA machine, which later on will be used for
CC-related tests.
For now, the current tests are skipped on the CC-capable machine.
Signed-off-by: Manuel Huber <manuelh@nvidia.com>
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
When added, I've mistakenly used the wrong test-type name, which is now
fixed and should be enough to trigger the tests correctly.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
On IBM actionspz P/Z runners, the following error was observed during
runtime tests:
```
host system doesn't support vsock: stat /dev/vhost-vsock: no such file or directory
```
Since loading the vsock module on the fly is not permitted, this commit
moves the runtime tests back to self-hosted runners for P/Z.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Stratovirt has been failing for a considerable amount of time, with no
sign of anyone watching it or actively working on a fix.
With this we also stop building and shipping stratovirt as part of our
release, as we cannot test it.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
A few weeks ago we tested nydus-snapshotter with this approach, and
we DID find issues with it.
Now, let's also test this with `experimental_force_guest_pull`.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Otherwise we'll face issues like:
```
Error: found in Chart.yaml, but missing in charts/ directory: node-feature-discovery
```
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
As we have the ability to deploy NFD as a sub-chart of our chart, let's
make sure we test it during our CI.
We had to increase the existing timeout values for deploying /
undeploying kata, as NFD is now also deployed / undeployed.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have 2 tests running on GitHub provided runners:
* devmapper
* CRI-O
- devmapper situation
For devmapper, we're already testing it on s390x as part of
one of that CI's jobs.
More than that, this test has been failing here due to a lack of space
on the machine for quite some time, and no action was taken to bring it
back either via GARM or some other way.
With that said, let's rely on the s390x CI to test devmapper and avoid
one extra failure on our CI by removing this one.
- cri-o situation
CRI-O is being tested with a pinned version of Kubernetes that has
already reached its EOL, and a CRI-O version that matches that k8s
version.
There have been attempts to raise issues, and also to provide a PR that
does at least part of the work ... leaving the debugging part for the
maintainers of the CI. However, there was no action on those from the
maintainers.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
There's no reason to keep the env var / input as it's never been used
and now kata-deploy detects automatically whether NFD is deployed or
not.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Temporarily disable the new runners for the build-artifacts jobs. They will be re-enabled once they are stable.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
This partially reverts 8dcd91c for s390x because the
CI jobs are currently blocking the release. The new runners
will be re-introduced once they are stable and no longer
impact critical paths.
Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
Migrate the k8s job to a different runner and use a long running cluster
instead of creating the cluster on every run.
Signed-off-by: Amulyam24 <amulmek1@in.ibm.com>
This will immensely help projects consuming the kata-deploy helm chart
to use configuration options added during the development cycle that are
waiting for a release to be out ... allowing very early testing of the
stack.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
Temporarily disable the auto-generated Agent Policy on Mariner hosts,
to work around the new test failures on these hosts.
When re-enabling auto-generated policy in the future, that would be
better achieved with a tests/integration/kubernetes/gha-run.sh change.
Those changes are easier to test compared with GHA YAML changes.
Signed-off-by: Dan Mihai <dmihai@microsoft.com>
After adding support for Arm CCA, building the runtime will rely on the
kernel's kvm.h headers. The required kernel headers are newer than the
traditional ones, so we build the kernel headers first and then inject
them into the shim-v2 build container.
Signed-off-by: Kevin Zhao <kevin.zhao@linaro.org>
Co-authored-by: Seunguk Shin <seunguk.shin@arm.com>
One problem that we've been having for a reasonable amount of time is
containerd not behaving very well when we have multiple snapshotters.
Although I'm adding this test with my "CoCo" hat in mind, the issue can
happen easily with any other case that requires a different snapshotter
(such as, for instance, firecracker + devmapper).
With this in mind, let's do some stability tests, checking every hour a
simple case of running a few pre-defined containers with runc, and then
running the same containers with kata.
This should be enough to put us in the situation where containerd gets
confused about which snapshotter owns the image layers, and break on us
(or not break and show us that this has been solved ...).
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We are seeing more protoc related failures on the new
runners, so try adding the protobuf-compiler dependency
to these steps to see if it helps.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now that we have added the ability to deploy kata-containers with
experimental_force_guest_pull configured, let's make sure we test it to
avoid any kind of regressions.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
What was done in the past, trying to set the env var in the same step
where it'd be used, simply does not work.
Instead, we need to properly set it through the `env` set up, as done
now.
We're also bumping the kata_config_version to ensure we retrigger the
kernel builds.
Signed-off-by: Fabiano Fidêncio <ffidencio@nvidia.com>
We have some scalable s390x and ppc runners, so
start to use them for build and test, to improve
the throughput of our CI.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Co-authored-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>