The agent-api tests require the agent to be deployed in the CI via the
tarball, so in the short term let's only do this during the release
stage. That way kata-manager works with the release and the agent-api
tests work with the other CI builds.
In the longer term we need to re-evaluate what is in our tarballs
(issue #10619), but we want to unblock the tests in the short term.
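A minimal sketch of how a release-only step could be gated, assuming the
reusable workflow receives a `stage` input (the step itself is illustrative):

```yaml
# Only strip release-excluded components when building the release tarball;
# PR/CI builds keep them so the agent-api tests still find the agent.
- name: remove-components-not-shipped-in-the-release
  if: ${{ inputs.stage == 'release' }}
  run: echo "remove the release-excluded components here"
```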
Fixes: #10630
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
With the code as I originally wrote it, I think there is potentially
a case where we can get a failure due to the timing of steps.
Before this change the `build-asset-shim-v2`
job could start the `get-artifacts` step while, concurrently,
`remove-rootfs-binary-artifacts` could run and delete the artifact
mid-download, resulting in an error. In this commit I
try to resolve this by making sure that the shim build waits
for the artifact deletes to complete before starting.
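A minimal sketch of the ordering, assuming both of these are jobs in the same
workflow (the step bodies are placeholders):

```yaml
jobs:
  remove-rootfs-binary-artifacts:
    runs-on: ubuntu-latest
    steps:
      - run: echo "delete the rootfs-only component artifacts here"

  build-asset-shim-v2:
    # Don't start (and thus don't run get-artifacts) until the deletes are done,
    # so a download can no longer race with an artifact removal.
    needs: remove-rootfs-binary-artifacts
    runs-on: ubuntu-latest
    steps:
      - run: echo "get-artifacts and the shim-v2 build happen here"
```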
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- Fix copy-paste errors in the artifact filters for arm64 and ppc64le
- Remove the trailing wildcard filter that mistakenly ends up removing agent-ctl
and replace it with the tarball-suffix, which should exactly match the artifacts
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
This reverts commit 559018554b.
As we've noticed that this is causing issues with initrd builds in the
CI.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
We need to publish certain artefacts for the rootfs,
like the agent, guest-components, pause bundle etc.,
as they are consumed in the `build-asset-rootfs` step.
However, after this point they aren't needed and probably
shouldn't be included in the overall kata tarball, so delete
them once they are no longer needed to avoid them
being included.
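A minimal sketch of such a clean-up job, assuming a third-party action like
geekyeggo/delete-artifact is used (the action version and artifact name are
assumptions):

```yaml
jobs:
  remove-rootfs-binary-artifacts:
    # Only run once build-asset-rootfs has consumed the intermediate artifacts.
    needs: build-asset-rootfs
    runs-on: ubuntu-latest
    steps:
      - uses: geekyeggo/delete-artifact@v5
        with:
          # Illustrative name; the real names embed the asset and the tarball-suffix.
          name: kata-artifacts-amd64-agent${{ inputs.tarball-suffix }}
```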
Fixes: #10575
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Now that we are downloading artifacts to create the rootfs,
we need to ensure they are always uploaded,
even on releases.
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
Let's ensure that we get the already built rootfs tarball from previous
steps of the action at the time we're building the shim-v2.
The reason we do that is that the rootfs binary tarball has a
root_hash.txt file that contains the information needed by the shim-v2
build scripts to add the measured rootfs arguments to the shim-v2
configuration files.
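A minimal sketch of pulling that tarball into the shim-v2 job (the artifact
name and path are assumptions):

```yaml
- uses: actions/download-artifact@v4
  with:
    # The rootfs artifact carries root_hash.txt, which the shim-v2 build
    # scripts use to fill in the measured rootfs arguments.
    name: kata-artifacts-amd64-rootfs-image${{ inputs.tarball-suffix }}
    path: kata-artifacts
```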
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
As the rootfs will have what we need to add as part of the shim-v2
configuration files for measured rootfs, we **must** ensure this is
built **before** shim-v2.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
By doing this we can just re-use the dependencies already built, saving
us a reasonable amount of time.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
So far we haven't been storing the rootfs dependencies as part of our
workflows, but we'd better do so in order to re-use them as part of the
rootfs build.
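A minimal sketch of storing one of those dependencies as a workflow artifact
(the name and path are illustrative):

```yaml
- uses: actions/upload-artifact@v4
  with:
    # Publish the built component so build-asset-rootfs can download and re-use it.
    name: kata-artifacts-amd64-agent${{ inputs.tarball-suffix }}
    path: kata-build/kata-static-agent.tar.xz
    retention-days: 1
    if-no-files-found: error
```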
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
The only reason we had this one passing for amd64 is that the check
was done using the wrong variable (`matrix.stage`, while in the other
workflows the variable used is `inputs.stage`).
The commit that broke the release process is 67a8665f51, which
blindly copied & pasted the logic from the matrix assets.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
By doing this we can ensure that whenever the rootfs changes, we'll be
able to get the new root_hash.txt and use it.
This is the very first step to bring the measured rootfs tests back.
Signed-off-by: Fabiano Fidêncio <fabiano@fidencio.org>
- It appears that the `if` isn't required when setting an env var as a
conditional
- `inputs.stage` over `input.stage`
- Swap `matrix.component` to `matrix.asset`
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- We don't want to ship certain components (agent, coco-guest-components)
as part of the release, but for other consumers it's useful to be able
to pull the components in from oras, so rather than not building them,
just don't upload them as part of the release.
- Also make all the arches consistent in not shipping the agent
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
- Set the RELEASE env var to 'yes' or 'no', based on whether the stage
passed in was 'release', so we can use it in the build scripts (see the
sketch below)
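A minimal sketch of that, assuming the workflow receives a `stage` input:

```yaml
env:
  # Evaluates to 'yes' for release builds and 'no' for everything else,
  # without needing a separate `if` conditional.
  RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
```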
Signed-off-by: stevenhorsman <steven@uk.ibm.com>
`Node.js 19` is deprecated. Bump to a new version based on `Node.js 20`.
As explained at [1]:
> The contents of an Artifact are uploaded together into an immutable
> archive. They cannot be altered by subsequent jobs. Both of these
> factors help reduce the possibility of accidentally corrupting
> Artifact files.
This means that artifacts cannot have the same name.
Adapt all `build-kata-static-tarball` workflows accordingly by
embedding `${{ matrix.asset }}` in the artifact names that would
otherwise collide.
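A minimal sketch of the disambiguation (the exact naming scheme is an
assumption):

```yaml
- uses: actions/upload-artifact@v4
  with:
    # Embed the matrix asset so parallel jobs no longer collide on one artifact name.
    name: kata-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
    path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
```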
Fixes: #9245
Signed-off-by: Greg Kurz <groug@kaod.org>
Those are not being cached yet, for no particular reason, and they'd
better be, as it'll allow us to save a considerable amount of time
building the rootfs.
Fixes: #8917
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Allow the kata-deploy process to pull StratoVirt from the release
binaries, and add them as part of the kata release.
Fixes: #7794
Signed-off-by: Liu Wenyuan <liuwenyuan9@huawei.com>
Otherwise we'll use any arm64 machine that's added as a runner, and
whenever new machines are added those may end up being only used for
running some specific set of the tests.
Fixes: #8109
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We do the build of our artefacts inside a container image, and we need
to expose some env vars to the container so ORAS can be used there to
push the artefacts we want to cache to ghcr.io (see the sketch below).
The env vars we're exposing are:
* ARTEFACT_REGISTRY: The registry where we're going to save the
artefacts.
* ARTEFACT_REGISTRY_USERNAME: The username to log in to the registry, as
ORAS does not use the same json file used by docker.
* ARTEFACT_REGISTRY_PASSWORD: The password to log in to the registry,
as ORAS does not use the same json file used by docker.
* TARGET_BRANCH: The target branch, which will be part of the tag of the
artefact, as we may end up caching the artefacts for both main and
stable branches.
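A minimal sketch of how these variables might be used once exposed to the
build (the registry path, tag, and file name are illustrative):

```yaml
- name: Cache the artefact with ORAS
  env:
    ARTEFACT_REGISTRY: ghcr.io
    ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
    ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
    TARGET_BRANCH: ${{ inputs.target-branch }}
  run: |
    # Log in with explicit credentials, as ORAS does not use docker's json config here.
    oras login -u "${ARTEFACT_REGISTRY_USERNAME}" -p "${ARTEFACT_REGISTRY_PASSWORD}" "${ARTEFACT_REGISTRY}"
    # Tag the cached artefact with the target branch so main and stable caches don't clash.
    oras push "${ARTEFACT_REGISTRY}/kata-containers/cached-artefacts/kernel:${TARGET_BRANCH}" kata-static-kernel.tar.xz
```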
Fixes: #7834 -- part 0
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We're changing what's been done as part of ac939c458c, as we've
noticed issues using `github.event.pull_request.merge_commit_sha`.
Basically, whenever a force-push would happen, the reference of
merge_commit_sha wouldn't be updated, leading us to test PRs with the
old code. :-/
In order to get the rebase properly working, we need to ensure we pull
the hash of the commit as part of the checkout action, and ensure
fetch-depth is set to 0.
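A minimal sketch of such a checkout, assuming the PR head commit hash is
passed down as a `commit-hash` input:

```yaml
- uses: actions/checkout@v4
  with:
    # Check out the exact PR head commit rather than relying on merge_commit_sha.
    ref: ${{ inputs.commit-hash }}
    # Full history is needed so the rebase onto the target branch can work.
    fetch-depth: 0
```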
Fixes: #7414
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
`stage` has been added, but only hooked up to the amd64 logic, leaving
arm64 and s390x behind.
Let's fix this right now, and make sure no error occurs when passing
this down to the yaml files.
Fixes: #7497
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make it simpler to figure out which version of Kata
Containers has been deployed, and also which artefacts come with it.
This will help us immensely in the future, for the TEEs use case, so we
can easily know whether we can deploy a specific guest kernel for a
specific host kernel.
Fixes: #7394
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's ensure we're not relying, in any of the called workflows, on
event-specific information.
Right now, the two pieces of information we've been relying on are:
* PR number, coming from github.event.pull_request.number
* Commit hash, coming from github.event.pull_request.head.sha
As we want to, in the future, add nightly jobs, which will be triggered
by a different event (thus having different fields populated), we
should ensure that those are not used unless it's in the "top action"
that's triggered by the event.
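A minimal sketch of passing those values down explicitly instead of reading
them from the event in the called workflows (the input names are assumptions):

```yaml
# Called (reusable) workflow: declare the event data as plain inputs.
on:
  workflow_call:
    inputs:
      pr-number:
        required: true
        type: string
      commit-hash:
        required: true
        type: string
---
# Top-level workflow, which is the only place the event fields are read.
jobs:
  build-kata-static-tarball-amd64:
    uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
    with:
      pr-number: ${{ github.event.pull_request.number }}
      commit-hash: ${{ github.event.pull_request.head.sha }}
```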
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The "build-assets-${arch}" jobs need to have access to the secrets in
order to log into the container registry in the cases where
"push-to-registry", which is used to push the builder containers to
quay.io, is set to "yes".
Now that "build-assets-${arch}" pass the secrets down, we need to log
into the container registry in the "build-kata-static-tarball-${arch}"
files, in case "push-to-registry" is set to "yes".
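A minimal sketch of both halves, with the secret names and registry being
assumptions:

```yaml
# build-assets-${arch}: hand the secrets down to the called workflow.
jobs:
  build-kata-static-tarball-amd64:
    uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
    with:
      push-to-registry: "yes"
    secrets: inherit
---
# build-kata-static-tarball-${arch}: only log in when a push will actually happen.
- name: Login to the container registry
  if: ${{ inputs.push-to-registry == 'yes' }}
  uses: docker/login-action@v3
  with:
    registry: quay.io
    username: ${{ secrets.QUAY_DEPLOYER_USERNAME }}
    password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
```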
Fixes: #6899
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
This reverts commit 7855b43062.
Unfortunately we have to revert the PRs related to the switch done to
using `workflow_run` instead of `pull_request_target`. The reason for
that being that we can only mark jobs as required if they are targeting
PRs.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
56331bd7bc overlooked the fact that we
mistakenly tried to push the build containers to the registry for a PR,
rather than doing so only when the code is merged.
As the workflow is now shared between different actions, let's introduce
an input variable to specify in which cases we actually need to
perform a push to the registry.
Fixes: #6592
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As we're using the `workflow_run` event, the checkout action would
pull the **current target branch** instead of the PR one.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's split those actions into two different ones:
* Build the kata-static tarball
* Publish the kata-deploy payload
We're doing this as, later in this series, we'll start taking advantage
of both pieces.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>