- As the metrics tests are largely independent, allow subsequent
tests to run even if previous ones failed (see the sketch after
this list). The results might not be perfect if clean-up is
required, but we can work on that later.
- Move the test results check out of the latency test, where it
seems arbitrary, and into its own job step
- Add timeouts to steps that might fail/hang if there
are containerd/K8s issues
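A sketch of what those workflow tweaks could look like; the step
names, timeout values, and gha-run.sh sub-commands below are
placeholders rather than the exact workflow contents:
    jobs:
      run-metrics:
        runs-on: metrics    # placeholder runner label
        steps:
          - name: run latency test
            timeout-minutes: 15        # placeholder value
            continue-on-error: true    # let later tests run even if this one fails
            run: bash tests/metrics/gha-run.sh run-test-latency
          - name: run iperf test
            timeout-minutes: 15        # placeholder value
            continue-on-error: true
            run: bash tests/metrics/gha-run.sh run-test-iperf
          - name: check metrics results  # now its own step
            timeout-minutes: 5
            run: bash tests/metrics/gha-run.sh check-metrics
Using `if: always()` on the later steps would be an alternative to
continue-on-error, depending on how we want failures reported.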
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Currently the run-metrics job runs a manual install of Kata, and
does this in a separate job before the metrics tests run. This
doesn't make sense: if we have multiple CI runs in parallel (as we
often do), there is a high chance that the setup for another PR
runs between the metrics setup and the test runs, meaning we are
not testing the correct version of the code. To stop this from
happening, install (and, for clean-up, delete) Kata as part of the
metrics test jobs.
Also switch to kata-deploy rather than a manual install, both for
simplicity and to test what we recommend to users.
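A rough sketch of the per-job flow, assuming illustrative
gha-run.sh target names that may not match the real ones:
    steps:
      - name: Deploy Kata with kata-deploy
        timeout-minutes: 10
        run: bash tests/metrics/gha-run.sh install-kata    # illustrative target name
      - name: Run metrics tests
        run: bash tests/metrics/gha-run.sh run-test-launchtimes
      - name: Delete Kata (clean-up)
        if: always()    # clean up even if the tests failed
        run: bash tests/metrics/gha-run.sh delete-kata     # illustrative target name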
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Update the Speed & Density metric test baselines for StratoVirt
and re-enable them, and temporarily skip the other metric tests.
Fixes: #8656
Signed-off-by: Liu Wenyuan <liuwenyuan9@huawei.com>
This PR enables the new FIO test, based on the containerd client,
which is used to track the I/O metrics in the kata-ci environment.
Additionally, this PR fixes the parsing of the results.
Fixes: #8199
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR enables the latency test in the gha-run script for kata
metrics.
Fixes: #8037
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
The FIO test is showing ongoing issues when running in K8s. We
are working on running FIO with the ctr client, which has been
shown to be stable.
Fixes: #7920
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR enables the iperf benchmark to run in the GHA for kata
metrics.
Fixes: #7575
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR configures the corresponding Kata runtime in K8s based on
the hypervisor under test.
This PR also enables the FIO metrics test in the kata metrics-ci.
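For reference, this typically amounts to pointing the test pods at
the runtime class that kata-deploy registers for that hypervisor;
a minimal, illustrative example (pod name and image are
placeholders):
    apiVersion: v1
    kind: Pod
    metadata:
      name: fio-benchmark           # placeholder name
    spec:
      runtimeClassName: kata-qemu   # or e.g. kata-clh, driven by KATA_HYPERVISOR
      containers:
        - name: bench
          image: busybox            # placeholder image
          command: ["sleep", "3600"]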
Fixes: #7665
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR changes the metrics workflow to install Kata just once
and run the checks for multiple hypervisor variations. This way
we save time by avoiding a Kata install for each hypervisor to be
tested.
Fixes: #7578
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR moves the checkmetrics call to the gha-run script in
order to gather the TensorFlow information.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
The `install_kata` function was moved from the metrics' `gha-run.sh`
file to `common.bash` in commit 3ffd48bc16, but I didn't notice
that it brought with it a call to `install_check_metrics`, which is
totally unrelated to installing Kata Containers.
Let's remove the call so the function is a little bit less specific,
and move the call to `install_check_metrics` to the metrics
`gha-run.sh` file.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This PR adds the tensorflow function to the gha-run script so
that it can be triggered in the GHA.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR enables the blogbench performance test for the kata metrics CI.
Fixes: #7281
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
Those functions were originally introduced as part of the
`metrics/gha-run.sh` file, but they will be very handy once we
start adding more tests.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This PR enables the memory-inside-container metrics for the Kata
CI.
Fixes: #7254
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR fixes the call to the check_metrics function, as
KATA_HYPERVISOR does not need to be passed.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR enables storing the metrics workflow artifacts in two
separate flavours: clh and qemu.
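For example, the upload step can key the artifact name off the
matrix entry; the variable and path names below are assumptions:
    - name: Archive metrics artifacts
      uses: actions/upload-artifact@v3
      with:
        name: metrics-artifacts-${{ matrix.vmm }}   # e.g. metrics-artifacts-clh
        path: tests/metrics/results/                # assumed results location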
Fixes: #7239
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR adds blogbench and webtooling metrics checks to this repo.
For now, the function running each test intentionally returns
zero; the tests will be enabled in another PR once the workflow
is green.
Fixes: #7069
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR adds double quotes around all variables to have uniformity
across the whole gha-run.sh script.
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR adds the checkmetrics installation to gha-run.sh in order
to compare results against the limits as part of the metrics CI.
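As a rough, non-authoritative sketch of the comparison step (the
flags, variables, and paths shown are assumptions, not the exact
invocation used by the metrics scripts):
    - name: Compare results against the baseline limits
      run: |
        # assumed flags and placeholder variables
        checkmetrics --debug --percentage \
          --basefile "${baseline_toml}" \
          --metricsdir "${results_dir}"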
Fixes: #7198
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR adds memory footprint metrics to the tests/metrics/density
folder.
Intentionally, each test exits with zero in all test cases to
ensure that the tests are green when added; they will be enabled
in a subsequent PR.
A workflow matrix was added to define the hypervisor variation for
each job, in order to run them sequentially (see the sketch
below).
The launch-times test was updated to make use of the matrix
environment variables.
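A minimal sketch of such a matrix, with illustrative key names and
target names:
    jobs:
      run-metrics:
        runs-on: metrics         # placeholder runner label
        strategy:
          max-parallel: 1        # run the hypervisor variations sequentially
          matrix:
            vmm: [clh, qemu]     # illustrative key/values
        env:
          KATA_HYPERVISOR: ${{ matrix.vmm }}
        steps:
          - name: run launch-times test
            run: bash tests/metrics/gha-run.sh run-test-launchtimes   # illustrative target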
Fixes: #7066
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This PR adds the `function` keyword before the function names in
order to have uniformity across the script, as some of them use it
and some do not.
Fixes: #7196
Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
This PR installs the kata static tarball on the metrics runner
and runs the launch-times tests.
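For illustration, a static tarball install is roughly along these
lines; the tarball name and the set of symlinks are placeholders:
    - name: Install kata static tarball
      run: |
        # the static tarball unpacks under /opt/kata when extracted at /
        sudo tar -xJf kata-static.tar.xz -C /
        sudo ln -sf /opt/kata/bin/containerd-shim-kata-v2 /usr/local/bin/containerd-shim-kata-v2
        sudo ln -sf /opt/kata/bin/kata-runtime /usr/local/bin/kata-runtime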
Fixes: #7049
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>
This GitHub workflow prints a simple message, but it is the base
for future PRs that will gradually add the jobs corresponding to
the kata metrics tests.
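A minimal placeholder along these lines; the trigger and runner
label are guesses rather than the actual workflow contents:
    name: Kata Containers CI - metrics
    on:
      workflow_call:            # guessed trigger
    jobs:
      run-metrics:
        runs-on: ubuntu-22.04   # placeholder runner label
        steps:
          - run: echo "kata metrics tests: placeholder"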
Fixes: #7100
Signed-off-by: David Esparza <david.esparza.borquez@intel.com>