Compare commits

79 Commits

Author SHA1 Message Date
Fabiano Fidêncio
67d67ab66d Merge pull request #4204 from fidencio/2.4.1-branch-bump
# Kata Containers 2.4.1
2022-05-04 19:11:37 +02:00
Fabiano Fidêncio
99c6726cf6 release: Kata Containers 2.4.1
- stable-2.4 | Second round of backports for the 2.4.1 release
- stable-2.4 | First round of backports for the 2.4.1 release
- stable-2.4 | versions: Upgrade to Cloud Hypervisor v23.0
- stable-2.4 | runtime: Base64 encode the direct volume mountInfo path
- stable-2.4 | agent: Avoid agent panic when reading empty stats

8e076c87 release: Adapt kata-deploy for 2.4.1
b50b091c agent: watchers: ensure uid/gid is preserved on copy/mkdir
03bc89ab clh: Rely on Cloud Hypervisor for generating the device ID
6b2c641f tools: fix typo in clh directory name
81e10fe3 packaging: Fix clh build from source fall-back
8b21c5f7 agent: modify the type of swappiness to u64
3f5c6e71 runtime: Allow mockfs storage to be placed in any directory
0bd1abac runtime: Let MockFSInit create a mock fs driver at any path
3e74243f runtime: Move mockfs control global into mockfs.go
aed4fe6a runtime: Export StoragePathSuffix
e1c4f57c runtime: Don't abuse MockStorageRootPath() for factory tests
c49084f3 runtime: Make bind mount tests better clean up after themselves
4e350f7d runtime: Clean up mock hook logs in tests
415420f6 runtime: Make SetupOCIConfigFile clean up after itself
688b9abd runtime: Don't use fixed /tmp/mountPoint path
dc1288de kata-monitor: add a README file
78edf827 kata-monitor: add some links when generating pages for browsers
eff74fab agent: fsGroup support for direct-assigned volume
01cd5809 proto: fsGroup support for direct-assigned volume
97ad1d55 runtime: fsGroup support for direct-assigned volume
b62cced7 runtime: no need to write virtiofsd error to log
8242cfd2 kata-monitor: update the hrefs in the debug/pprof index page
a37d4e53 agent: best-effort removing mount point
d1197ee8 tools/packaging: Fix error path in 'kata-deploy-binaries.sh -s'
c9c77511 tools/packaging: Fix usage of kata-deploy-binaries.sh
1e622316 tools/packaging/kata-deploy: Copy install_yq.sh in a dedicated script
8fa64e01 packaging: Eliminate TTY_OPT and NO_TTY variables in kata-deploy
8f67f9e3 tools/packaging/kata-deploy/local-build: Add build to gitignore
3049b776 versions: Bump firecracker to v0.23.4
aedfef29 runtime/virtcontainers: Pass the hugepages resources to agent
c9e1f727 agent: Verify that we allocated as many hugepages as we need
ba858e8c agent: Don't attempt to create directories for hugepage configuration
bc32eff7 virtcontainers: clh: Re-generate the client code
984ef538 versions: Upgrade to Cloud Hypervisor v23.0
adf6493b runtime: Base64 encode the direct volume mountInfo path
6b417540 agent: Avoid agent panic when reading empty stats

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 16:18:45 +02:00
Fabiano Fidêncio
8e076c8701 release: Adapt kata-deploy for 2.4.1
kata-deploy files must be adapted to a new release.  The cases where it
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 16:18:45 +02:00
Fabiano Fidêncio
b11f7df5ab Merge pull request #4202 from fidencio/topic/second-round-of-backports-for-2.4.1
stable-2.4 | Second round of backports for the 2.4.1 release
2022-05-04 14:30:23 +02:00
Yibo Zhuang
b50b091c87 agent: watchers: ensure uid/gid is preserved on copy/mkdir
Today in agent watchers, when we copy files/symlinks
or create directories, the ownership of the source path
is not preserved which can lead to permission issues.

In copy, ensure that we do a chown of the source path
uid/gid to the destination file/symlink after copy to
ensure that ownership matches the source ownership.
fs::copy() takes care of setting the permissions.

For directory creation, ensure that we set the
permissions of the created directory to the source
directory permissions and also perform a chown of the
source path uid/gid to ensure directory ownership
and permissions matches to the source.

Fixes: #4188

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 70eda2fa6c)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 12:39:44 +02:00
Fabiano Fidêncio
03bc89ab0b clh: Rely on Cloud Hypervisor for generating the device ID
We're currently hitting a race condition on the Cloud Hypervisor's
driver code when quickly removing and adding a block device.

This happens because the device removal is an asynchronous operation,
and we currently do *not* monitor events coming from Cloud Hypervisor to
know when the device was actually removed.  Together with this, the
sandbox code doesn't know about that and when a new device is attached
it'll quickly assign what may be the very same ID to the new device,
leading to the Cloud Hypervisor's driver trying to hotplug a device with
the very same ID of the device that was not yet removed.

This is, in a nutshell, why the tests with Cloud Hypervisor and
devmapper have been failing every now and then.

The workaround taken to solve the issue is basically *not* passing down
the device ID to Cloud Hypervisor and simply letting Cloud Hypervisor
itself generate those, as Cloud Hypervisor does it in a manner that
avoids such conflicts.  With this addition we have then to keep a map of
the device ID and the Cloud Hypervisor's generated ID, so we can
properly remove the device.

This workaround will probably stay for a while, at least till someone
has enough cycles to implement a way to watch the device removal event
and then properly act on that.  Spoiler alert, this will be a complex
change that may not even be worth it considering the race can be avoided
with this commit.

Fixes: #4196

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 33a8b70558)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-05-04 12:39:39 +02:00
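A minimal Go sketch of the bookkeeping this workaround needs, using hypothetical names (clhDevices, add, remove); the actual clh driver code may structure this differently:

```go
package main

import (
	"fmt"
	"sync"
)

// clhDevices keeps the mapping between the device ID Kata uses for a block
// device and the ID Cloud Hypervisor generated when the device was hotplugged.
type clhDevices struct {
	mu  sync.Mutex
	ids map[string]string // Kata device ID -> CLH-generated ID
}

func newClhDevices() *clhDevices {
	return &clhDevices{ids: make(map[string]string)}
}

// add records the ID Cloud Hypervisor returned for the hotplug request.
func (d *clhDevices) add(kataID, clhID string) {
	d.mu.Lock()
	defer d.mu.Unlock()
	d.ids[kataID] = clhID
}

// remove looks up the CLH ID so the right device can be unplugged,
// then forgets the mapping.
func (d *clhDevices) remove(kataID string) (string, bool) {
	d.mu.Lock()
	defer d.mu.Unlock()
	clhID, ok := d.ids[kataID]
	delete(d.ids, kataID)
	return clhID, ok
}

func main() {
	devs := newClhDevices()
	// Pretend Cloud Hypervisor returned "_disk0" for Kata's "drive-1".
	devs.add("drive-1", "_disk0")
	if clhID, ok := devs.remove("drive-1"); ok {
		fmt.Printf("unplug Cloud Hypervisor device %q\n", clhID)
	}
}
```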
Fabiano Fidêncio
d4dccb4900 Merge pull request #4153 from fidencio/wip/first-round-of-backports-for-2.4.1
stable-2.4 | First round of backports for the 2.4.1 release
2022-04-27 11:23:35 +02:00
Greg Kurz
6b2c641f0b tools: fix typo in clh directory name
This allows getting released binaries again.

Fixes: #4151

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit b658dccc5f)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:11:05 +02:00
Greg Kurz
81e10fe34f packaging: Fix clh build from source fall-back
If we fail to download the clh binary, we fall back to building from source.
Unfortunately, `pull_clh_released_binary()` leaves a `cloud_hypervisor`
directory behind, which causes `build_clh_from_source()` not to clone
the git repo:

    [ -d "${repo_dir}" ] || git clone "${cloud_hypervisor_repo}"

When building from a kata-containers git repo, the subsequent calls
to `git` in this function thus apply to the kata-containers repo and
eventually fail, e.g.:

+ git checkout v23.0
error: pathspec 'v23.0' did not match any file(s) known to git

It doesn't quite make sense to keep an existing directory whose
content is arbitrary when we want it to contain a specific
version of clh. Just remove it instead.

Fixes: #4151

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit afbd60da27)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:11:01 +02:00
holyfei
8b21c5f78d agent: modify the type of swappiness to u64
The type of MemorySwappiness in the runtime is uint64, while the type of swappiness in the agent is int64.
If we set max uint64 in the runtime and pass it to the agent, the value will be read as -1. We should
modify the type of swappiness to u64.

Fixes: #4123

Signed-off-by: holyfei <yangfeiyu20092010@163.com>
(cherry picked from commit 0239502781)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
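The agent itself is Rust, but the signed/unsigned mismatch described above is easy to demonstrate in Go, where the runtime-side value lives as a uint64:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	// The runtime-side MemorySwappiness field is a uint64.
	var swappiness uint64 = math.MaxUint64

	// If the receiving side stores the same bits in a signed 64-bit
	// field, the value reads back as -1, which is the bug described.
	asSigned := int64(swappiness)
	fmt.Println(asSigned) // -1

	// Keeping the field unsigned end-to-end preserves the value.
	fmt.Println(uint64(asSigned) == swappiness) // true
}
```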
David Gibson
3f5c6e7182 runtime: Allow mockfs storage to be placed in any directory
Currently EnableMockTesting() takes no arguments and will always place the
mock storage in the fixed location /tmp/vc/mockfs.  This means that one
test run can interfere with the next one if anything isn't cleaned up
(and there are other bugs which means that happens).  If if those were
fixed this would allow developers testing on the same machine to interfere
with each other.

So, allow the mockfs to be placed at an arbitrary place given as a
parameter to EnableMockTesting().  In TestMain() we place it under our
existing temporary directory, so we don't need any additional cleanup just
for the mockfs.

fixes #4140

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 1b931f4203)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
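A rough Go sketch of the shape this change gives the API; the variable names and the exact TestMain wiring are illustrative, not copied from the tree:

```go
package persist

import (
	"os"
	"path/filepath"
	"testing"
)

var (
	mockTesting  bool
	mockRootPath string
)

// EnableMockTesting switches persistence to the mock fs driver and keeps
// all mock state under rootPath instead of a fixed /tmp/vc/mockfs.
func EnableMockTesting(rootPath string) {
	mockTesting = true
	mockRootPath = rootPath
}

// TestMain (in a _test.go file) places the mock storage under a throwaway
// directory so one test run cannot interfere with the next, or with
// another developer on the same machine.
func TestMain(m *testing.M) {
	dir, err := os.MkdirTemp("", "vc-tests-")
	if err != nil {
		os.Exit(1)
	}
	EnableMockTesting(filepath.Join(dir, "mockfs"))
	code := m.Run()
	os.RemoveAll(dir) // explicit cleanup: os.Exit skips deferred calls
	os.Exit(code)
}
```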
David Gibson
0bd1abac3e runtime: Let MockFSInit create a mock fs driver at any path
Currently MockFSInit always creates the mockfs at the fixed path
/tmp/vc/mockfs.  This change allows it to be initialized at any path
given as a parameter.  This allows the tests in fs_test.go to be
simplified, because, by using a temporary directory from
t.TempDir(), which is automatically cleaned up, we don't need to
manually trigger initTestDir() (which is misnamed, it's actually a
cleanup function).

For now we still use the fixed path when auto-creating the mockfs in
MockAutoInit(), but we'll change that later.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit ef6d54a781)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
3e74243fbe runtime: Move mockfs control global into mockfs.go
virtcontainers/persist/fs/mockfs.go defines a mock filesystem type for
testing.  A global variable in virtcontainers/persist/manager.go is used to
force use of the mock fs rather than a normal one.

This patch moves the global, and the EnableMockTesting() function which
sets it, into mockfs.go.  This is slightly cleaner to begin with, and will
allow some further enhancements.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 5d8438e939)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
aed4fe6a2e runtime: Export StoragePathSuffix
storagePathSuffix defines the file path suffix - "vc" - used for
Kata's persistent storage information, as a private constant.  We
duplicate this information in fc.go which also needs it.

Export it from fs.go instead, so it can be used in fc.go.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 963d03ea8a)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
e1c4f57c35 runtime: Don't abuse MockStorageRootPath() for factory tests
A number of unit tests under virtcontainers/factory use
MockStorageRootPath() as a general purpose temporary directory.  This
doesn't make sense: the mockfs driver isn't even in use here since we only
call EnableMockTesting for the base virtcontainers package, not the
subpackages.

Instead use t.TempDir() which is for exactly this purpose.  As a bonus it
also handles the cleanup, so we don't need MockStorageDestroy any more.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 1719a8b491)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
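The pattern being adopted, as a generic Go test sketch (the real factory tests obviously exercise factory code rather than writing a dummy file):

```go
package factory

import (
	"os"
	"path/filepath"
	"testing"
)

func TestWithTempDir(t *testing.T) {
	// t.TempDir() returns a fresh directory that the testing framework
	// removes automatically when the test finishes, so no
	// MockStorageDestroy-style manual cleanup is needed.
	dir := t.TempDir()

	cfg := filepath.Join(dir, "config.json")
	if err := os.WriteFile(cfg, []byte("{}"), 0o600); err != nil {
		t.Fatalf("writing test config: %v", err)
	}
	// ... exercise the code under test against cfg ...
}
```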
David Gibson
c49084f303 runtime: Make bind mount tests better clean up after themselves
There are several tests in mount_test.go which perform a sample bind
mount.  These need a corresponding unmount to clean up afterwards or
attempting to delete the temporary files will fail due to the existing
mountpoint.  Most of them had such an unmount, but
TestBindMountInvalidPgtypes was missing one.

In addition, the existing unmounts were done inconsistently - one was
simply inline (so wouldn't be executed if the test fails too early) and one
was a defer.  Change them all to use the t.Cleanup mechanism.

For the dummy mountpoint files, rather than cleaning them up after the
test, the tests were removing them at the beginning of the test.  That
stops the test being messed up by a previous run, but messily.  Since
these are created in a private temporary directory anyway, if there's
something already there, that indicates a problem we shouldn't ignore.
In fact we don't need to explicitly remove these at all - they'll be
removed along with the rest of the private temporary directory.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit bec59f9e39)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
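A small Go sketch of the t.Cleanup pattern the commit converges on, assuming a helper that sets up a bind mount for a test (requires root, like the real tests):

```go
package mount

import (
	"syscall"
	"testing"
)

// bindMountForTest bind mounts source onto dest and registers the matching
// unmount with t.Cleanup, so it runs even if the test fails early,
// unlike an inline unmount at the end of the test body.
func bindMountForTest(t *testing.T, source, dest string) {
	t.Helper()

	if err := syscall.Mount(source, dest, "", syscall.MS_BIND, ""); err != nil {
		t.Fatalf("bind mount %s -> %s failed: %v", source, dest, err)
	}
	t.Cleanup(func() {
		if err := syscall.Unmount(dest, 0); err != nil {
			t.Logf("unmount %s failed: %v", dest, err)
		}
	})
}
```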
David Gibson
4e350f7d53 runtime: Clean up mock hook logs in tests
The tests in hook_test.go run a mock hook binary, which does some debug
logging to /tmp/mock_hook.log.  Currently we don't clean up those logs
when the tests are done.  Use a test cleanup function to do this.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit f7ba21c86f)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
415420f689 runtime: Make SetupOCIConfigFile clean up after itself
SetupOCIConfigFile creates a temporary directory with os.MkdirTemp().  This
means the callers need to register a deferred function to remove it again.
At least one of them was commented out, meaning that a /temp/katatest-
directory was left over after the unit tests ran.

Change to using t.TempDir() which as well as better matching other parts of
the tests means the testing framework will handle cleaning it up.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 90b2f5b776)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
David Gibson
688b9abd35 runtime: Don't use fixed /tmp/mountPoint path
Several tests in kata_agent_test.go create /tmp/mountPoint as a dummy
directory to mount.  This is not cleaned up after the test.  Although it
is in /tmp, that's still a little messy and can be confusing to a user.
In addition, because it uses the same name every time, it allows for one
run of the test to interfere with the next.

Use the built in t.TempDir() to use an automatically named and deleted
temporary directory instead.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 2eeb5dc223)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
Francesco Giudici
dc1288de8d kata-monitor: add a README file
Fixes: #3704

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit 7b2ff02647)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
bin
78edf827df kata-monitor: add some links when generating pages for browsers
Add some links to the rendered webpages for a better user experience,
so users can jump between pages just by clicking links in the browser.

Fixes: #4061

Signed-off-by: bin <bin@hyper.sh>
(cherry picked from commit f8cc5d1ad8)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:10:21 +02:00
Yibo Zhuang
eff74fab0e agent: fsGroup support for direct-assigned volume
Adding two functions set_ownership and
recursive_ownership_change to support changing group id
ownership for a mounted volume.

The set_ownership will be called in common_storage_handler
after mount_storage performs the mount for the volume.
set_ownership will be a noop if the FSGroup field in the
Storage struct is not set, which indicates no chown will be
performed. If the FSGroup field is specified, then it will
perform the recursive walk of the mounted volume path to
change ownership of all files and directories to the
desired group id. It will also configure the SetGid bit
so that files created the directory will have group
following parent directory group.

If the fsGroupChangePolicy is on root mismatch,
then the group ownership change will be skipped if the root
directory group id already matches the desired group
id and if the SetGid bit is also set on the root directory.

This is the same behavior as what
Kubelet does today when performing the recursive walk
to change ownership.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 92c00c7e84)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:09:22 +02:00
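The agent code is Rust, but the recursive walk described above is straightforward to sketch in Go; the function name and the decision to always walk (rather than honouring the on-root-mismatch policy) are illustrative only:

```go
package main

import (
	"io/fs"
	"os"
	"path/filepath"
)

// changeGroupOwnership walks root and sets the group of every entry to gid,
// adding the setgid bit on directories so files created later inherit the
// parent directory's group, mirroring the behaviour described above.
func changeGroupOwnership(root string, gid int) error {
	return filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		// -1 leaves the owning uid untouched; only the group changes.
		if err := os.Lchown(path, -1, gid); err != nil {
			return err
		}
		if d.IsDir() {
			info, err := d.Info()
			if err != nil {
				return err
			}
			return os.Chmod(path, info.Mode().Perm()|os.ModeSetgid)
		}
		return nil
	})
}

func main() {
	if err := changeGroupOwnership("/tmp/example-volume", 1000); err != nil {
		os.Exit(1)
	}
}
```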
Yibo Zhuang
01cd58094e proto: fsGroup support for direct-assigned volume
This change adds two fields to the Storage pb

FSGroup, which is a group id that the runtime
specifies to indicate to the agent to perform a
chown of the mounted volume to the specified
group id after mounting is complete in the guest.

FSGroupChangePolicy, which is a policy to indicate
whether to always perform the group id ownership
change or only if the root directory group id
does not match with the desired group id.

These two fields will allow CSI plugins to indicate
to Kata that after the block device is mounted in
the guest, group id ownership change should be performed
on that volume.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 6a47b82c81)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Yibo Zhuang
97ad1d55ff runtime: fsGroup support for direct-assigned volume
The fsGroup will be specified by the fsGroup key in
the direct-assign mountinfo metadata field.
This will be set when invoking the kata-runtime
binary and providing the key, value pair in the metadata
field. Similarly, the fsGroupChangePolicy will also
be provided in the mountinfo metadata field.

Add two extra fields, FSGroup and FSGroupChangePolicy,
to the Mount construct for container mounts, which will
be populated when creating block devices by parsing
out the mountInfo.json.

And in handleDeviceBlockVolume of the kata-agent client,
it checks if the mount FSGroup is not nil, which
indicates that fsGroup change is required in the guest,
and will provide the FSGroup field in the protobuf to
pass the value to the agent.

Fixes #4018

Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
(cherry picked from commit 532d53977e)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
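A hedged Go sketch of the two extra fields and the nil check mentioned for handleDeviceBlockVolume; the real struct and protobuf field names may differ:

```go
package main

import "fmt"

// FSGroupChangePolicy mirrors the two behaviours described in these commits.
type FSGroupChangePolicy string

const (
	FSGroupChangeAlways         FSGroupChangePolicy = "Always"
	FSGroupChangeOnRootMismatch FSGroupChangePolicy = "OnRootMismatch"
)

// Mount carries the extra fields parsed out of mountInfo.json.
type Mount struct {
	Destination         string
	FSGroup             *uint32 // nil means: no group ownership change requested
	FSGroupChangePolicy FSGroupChangePolicy
}

// storageRequest builds the (simplified) request sent to the agent: only
// when FSGroup is set does the request carry the fsGroup information.
func storageRequest(m Mount) map[string]string {
	req := map[string]string{"mount_point": m.Destination}
	if m.FSGroup != nil {
		req["fs_group"] = fmt.Sprint(*m.FSGroup)
		req["fs_group_change_policy"] = string(m.FSGroupChangePolicy)
	}
	return req
}

func main() {
	gid := uint32(1000)
	m := Mount{
		Destination:         "/run/kata/volume-1",
		FSGroup:             &gid,
		FSGroupChangePolicy: FSGroupChangeOnRootMismatch,
	}
	fmt.Println(storageRequest(m))
}
```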
Zhuoyu Tie
b62cced7f4 runtime: no need to write virtiofsd error to log
The scanner reads nothing from the virtiofsd stderr pipe, because the
'--syslog' param redirects stderr to syslog. So there is no need to write
scanner.Text() to the kata log.

Fixes: #4063

Signed-off-by: Zhuoyu Tie <tiezhuoyu@outlook.com>
(cherry picked from commit 6e79042aa0)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Francesco Giudici
8242cfd2be kata-monitor: update the hrefs in the debug/pprof index page
kata-monitor makes it possible to get profiling data from the kata shim
instances running on the same node by acting as a proxy
(e.g., http://$NODE_ADDRESS:8090/debug/pprof/?sandbox=$MYSANDBOXID).
In order to proxy the requests and the responses to the right shim,
kata-monitor requires the sandbox id to be passed via a query string in the
url.

The profiling index page proxied by kata-monitor contains the link to all
the data profiles available. None of the links, however, contain the
sandbox id included in the request: the links are then broken when
accessed through kata-monitor.
This happens because the profiling index page comes from the kata shim,
which will not include the query string provided in the http request.

Let's add the sandbox id on-the-fly to each href tag returned by the kata
shim index page before providing the proxied page.

Fixes: #4054

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit 86977ff780)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
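One plausible Go shape for the on-the-fly href rewriting described above; the actual kata-monitor implementation may parse the page differently:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var hrefRE = regexp.MustCompile(`href="([^"]*)"`)

// addSandboxQuery appends the sandbox id to every link of the proxied
// pprof index page, so that following a link keeps routing through
// kata-monitor to the right shim.
func addSandboxQuery(page, sandboxID string) string {
	return hrefRE.ReplaceAllStringFunc(page, func(match string) string {
		target := hrefRE.FindStringSubmatch(match)[1]
		sep := "?"
		if strings.Contains(target, "?") {
			sep = "&"
		}
		return fmt.Sprintf(`href="%s%ssandbox=%s"`, target, sep, sandboxID)
	})
}

func main() {
	page := `<a href="goroutine?debug=1">goroutine</a> <a href="heap">heap</a>`
	fmt.Println(addSandboxQuery(page, "abc123"))
}
```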
Feng Wang
a37d4e538f agent: best-effort removing mount point
During container exit, the agent tries to remove all the mount point directories,
which can fail if it's a readonly filesystem (e.g. device mapper). This commit ignores
the removal failure and logs a warning message.

Fixes: #4043

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit aabcebbf58)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
d1197ee8e5 tools/packaging: Fix error path in 'kata-deploy-binaries.sh -s'
`make kata-tarball` relies on `kata-deploy-binaries.sh -s` which
silently ignores errors, and you may end up with an incomplete
tarball without noticing it because `make`'s exit status is 0.

`kata-deploy-binaries.sh` does set the `errexit` option and all the
code in the script seems to assume that since it doesn't do error
checking. Unfortunately, bash automatically disables `errexit` when
calling a function from a conditional pipeline, like done in the `-s`
case:

	if [ "${silent}" == true ]; then
		if ! handle_build "${t}" &>"$log_file"; then
                ^^^^^^
           this disables `errexit`

and `handle_build` ends with a `tar tvf` that always succeeds.

Adding error checking all over the place isn't really an option
as it would seriously obfuscate the code. Drop the conditional
pipeline instead and print the final error message from a `trap`
handler on the special ERR signal. This requires the `errtrace`
option as `trap`s aren't propagated to functions by default.

Since all outputs of `handle_build` are redirected to the build
log file, some file descriptor duplication magic is needed for
the handler to be able to write to the original stdout and stderr.

Fixes #3757

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit a779e19bee)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
c9c7751184 tools/packaging: Fix usage of kata-deploy-binaries.sh
Add missing documentation for -s .

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 0baebd2b37)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
1e62231610 tools/packaging/kata-deploy: Copy install_yq.sh in a dedicated script
'make kata-tarball' sometimes fails early with:

cp: cannot create regular file '[...]/tools/packaging/kata-deploy/local-build/dockerbuild/install_yq.sh': File exists

This happens because all assets are built in parallel using the same
`kata-deploy-binaries-in-docker.sh` script, and thus all try to copy
the `install_yq.sh` script to the same location with the `cp` command.
This is a well known race condition that cannot be avoided without
serialization of `cp` invocations.

Move the copying of `install_yq.sh` to a separate script and ensure
it is called *before* parallel builds. Make the presence of the copy
a prerequisite for each sub-build so that they still can be triggered
individually. Update the GH release workflow to also call this script
before calling `kata-deploy-binaries-in-docker.sh`.

Fixes #3756

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 154c8b03d3)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
8fa64e011d packaging: Eliminate TTY_OPT and NO_TTY variables in kata-deploy
NO_TTY configured whether to add the -t option to docker run.  It makes no
sense for the caller to configure this, since whether you need it depends
on the commands you're running.  Since the point here is to run
non-interactive build scripts, we don't need -t, or -i either.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 1ed7da8fc7)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
8f67f9e384 tools/packaging/kata-deploy/local-build: Add build to gitignore
This directory consists entirely of files built during a make kata-tarball,
so it should not be committed to the tree. A symbolic link to this directory
might be created during 'make tarball', ignore it as well.

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
[greg: - rearranged the subject to make the subsystem checker happy
       - also ignore the symbolic link created by
         `kata-deploy-binaries-in-docker.sh`]
Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit bad859d2f8)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Greg Kurz
3049b7760a versions: Bump firecracker to v0.23.4
This release changes the Docker images repository from DockerHub to Amazon
ECR. This resolves the `You have reached your pull rate limit` error
when building the firecracker tarball.

Fixes #4001

Signed-off-by: Greg Kurz <groug@kaod.org>
(cherry picked from commit 0d5f80b803)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Miao Xia
aedfef29a3 runtime/virtcontainers: Pass the hugepages resources to agent
The hugepages resources claimed by containers should be limited
by cgroup in the guest OS.

Fixes: #3695

Signed-off-by: Miao Xia <xia.miao1@zte.com.cn>
(cherry picked from commit a2f5c1768e)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
David Gibson
c9e1f72785 agent: Verify that we allocated as many hugepages as we need
allocate_hugepages() writes to the kernel sysfs file to allocate hugepages
in the Kata VM.  However, even if the write succeeds, it's not certain that
the kernel will actually be able to allocate as many hugepages as we
requested.

This patch reads back the file after writing it to check if we were able to
allocate all the required hugepages.

fixes #3816

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 42e35505b0)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
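The agent is Rust, but the write-then-read-back check described above translates directly; a minimal Go illustration (the sysfs path is the standard one for 2 MiB pages and needs root to write):

```go
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// setHugepages writes the requested page count to the sysfs file and reads
// it back: a successful write does not guarantee the kernel could actually
// allocate that many pages.
func setHugepages(sysfsPath string, want uint64) error {
	if err := os.WriteFile(sysfsPath, []byte(strconv.FormatUint(want, 10)), 0o644); err != nil {
		return err
	}
	data, err := os.ReadFile(sysfsPath)
	if err != nil {
		return err
	}
	got, err := strconv.ParseUint(strings.TrimSpace(string(data)), 10, 64)
	if err != nil {
		return err
	}
	if got != want {
		return fmt.Errorf("requested %d hugepages but only %d were allocated", want, got)
	}
	return nil
}

func main() {
	path := "/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages"
	if err := setHugepages(path, 512); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```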
David Gibson
ba858e8cd9 agent: Don't attempt to create directories for hugepage configuration
allocate_hugepages() constructs the path for the sysfs directory containing
hugepage configuration, then attempts to create this directory if it does
not exist.

This doesn't make sense: sysfs is a view into kernel configuration. If the
kernel has support for the hugepage size, the directory will already be
there; if it doesn't, trying to create it won't help.

For the same reason, attempting to create the "nr_hugepages" file
itself is pointless, so there's no reason to call
OpenOptions::create(true).

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
(cherry picked from commit 608e003abc)
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2022-04-27 09:05:29 +02:00
Fabiano Fidêncio
b784763685 Merge pull request #4120 from likebreath/0420/backport_clh_v23.0
stable-2.4 | versions: Upgrade to Cloud Hypervisor v23.0
2022-04-21 14:33:37 +02:00
Fabiano Fidêncio
df2d57e9b8 Merge pull request #4098 from fengwang666/stable-2.4_backport
stable-2.4 | runtime: Base64 encode the direct volume mountInfo path
2022-04-21 12:54:03 +02:00
Bo Chen
bc32eff7b4 virtcontainers: clh: Re-generate the client code
This patch re-generates the client code for Cloud Hypervisor v23.0.
Note: The client code of cloud-hypervisor's (CLH) OpenAPI is
automatically generated by openapi-generator [1-2].

[1] https://github.com/OpenAPITools/openapi-generator
[2] https://github.com/kata-containers/kata-containers/blob/main/src/runtime/virtcontainers/pkg/cloud-hypervisor/README.md

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 29e569aa92)
2022-04-20 15:57:50 -07:00
Bo Chen
984ef5389e versions: Upgrade to Cloud Hypervisor v23.0
Highlights from the Cloud Hypervisor release v23.0: 1) vDPA Support; 2)
Updated OS Support list (Jammy 22.04 added with EOLed versions removed);
3) AArch64 Memory Map Improvements; 4) AMX Support; 5) Bug Fixes;

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v23.0

Fixes: #4101

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 6012c19707)
2022-04-20 15:57:50 -07:00
Feng Wang
adf6493b89 runtime: Base64 encode the direct volume mountInfo path
This is to avoid accidentally deleting multiple volumes.

Fixes #4020

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit 354cd3b9b6)
2022-04-13 22:30:53 -07:00
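A short Go sketch of why encoding helps: the full volume path becomes a single opaque directory name, so removing one volume's state can never touch a sibling whose path merely shares a prefix. Paths and names here are illustrative:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"path/filepath"
)

// mountInfoFilePath derives where mountInfo.json for a direct-assigned
// volume is stored under stateRoot.
func mountInfoFilePath(stateRoot, volumePath string) string {
	encoded := base64.URLEncoding.EncodeToString([]byte(volumePath))
	return filepath.Join(stateRoot, encoded, "mountInfo.json")
}

func main() {
	fmt.Println(mountInfoFilePath(
		"/run/kata-containers/shared/direct-volumes",
		"/var/lib/kubelet/pods/1234/volumes/kubernetes.io~csi/pvc-1/mount",
	))
}
```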
Greg Kurz
10bab3c96a Merge pull request #4081 from fidencio/wip/stable-2.4-agent-avoid-panic-when-getting-empty-stats
stable-2.4 | agent: Avoid agent panic when reading empty stats
2022-04-13 14:13:13 +02:00
Fabiano Fidêncio
6b41754018 agent: Avoid agent panic when reading empty stats
This was seen in an issue report, where we'd try to unwrap a None value,
leading to a panic.

Fixes: #4077
Related: #4043

Full backtrace:
```
"thread 'tokio-runtime-worker' panicked at 'called `Option::unwrap()` on a `None` value', rustjail/src/cgroups/fs/mod.rs:593:31"
"stack backtrace:"
"   0:     0x7f0390edcc3a - std::backtrace_rs::backtrace::libunwind::trace::hd5eff4de16dbdd15"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5"
"   1:     0x7f0390edcc3a - std::backtrace_rs::backtrace::trace_unsynchronized::h04a775b4c6ab90d6"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5"
"   2:     0x7f0390edcc3a - std::sys_common::backtrace::_print_fmt::h3253c3db9f17d826"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:67:5"
"   3:     0x7f0390edcc3a - <std::sys_common::backtrace::_print::DisplayBacktrace as core::fmt::Display>::fmt::h02bfc712fc868664"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:46:22"
"   4:     0x7f0390a91fbc - core::fmt::write::hfd5090d1132106d8"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/fmt/mod.rs:1149:17"
"   5:     0x7f0390edb804 - std::io::Write::write_fmt::h34acb699c6d6f5a9"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/io/mod.rs:1697:15"
"   6:     0x7f0390edbee0 - std::sys_common::backtrace::_print::hfca761479e3d91ed"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:49:5"
"   7:     0x7f0390edbee0 - std::sys_common::backtrace::print::hf666af0b87d2b5ba"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:36:9"
"   8:     0x7f0390edbee0 - std::panicking::default_hook::{{closure}}::hb4617bd1d4a09097"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:211:50"
"   9:     0x7f0390edb2da - std::panicking::default_hook::h84f684d9eff1eede"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:228:9"
"  10:     0x7f0390edb2da - std::panicking::rust_panic_with_hook::h8e784f5c39f46346"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:606:17"
"  11:     0x7f0390f0c416 - std::panicking::begin_panic_handler::{{closure}}::hef496869aa926670"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:500:13"
"  12:     0x7f0390f0c3b6 - std::sys_common::backtrace::__rust_end_short_backtrace::h8e9b039b8ed3e70f"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys_common/backtrace.rs:139:18"
"  13:     0x7f0390f0c372 - rust_begin_unwind"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/panicking.rs:498:5"
"  14:     0x7f03909062c0 - core::panicking::panic_fmt::h568976b83a33ae59"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:107:14"
"  15:     0x7f039090641c - core::panicking::panic::he2e71cfa6548cc2c"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/core/src/panicking.rs:48:5"
"  16:     0x7f0390eb443f - <rustjail::cgroups::fs::Manager as rustjail::cgroups::Manager>::get_stats::h85031fc1c59c53d9"
"  17:     0x7f03909c0138 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hfa6e6cd7516f8d11"
"  18:     0x7f0390d697e5 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hffbaa534cfa97d44"
"  19:     0x7f039099c0b3 - <core::future::from_generator::GenFuture<T> as core::future::future::Future>::poll::hae3ab083a06d0b4b"
"  20:     0x7f0390af9e1e - std::panic::catch_unwind::h1fdd25c8ebba32e1"
"  21:     0x7f0390b7c4e6 - tokio::runtime::task::raw::poll::hd3ebbd0717dac808"
"  22:     0x7f0390f49f3f - tokio::runtime::thread_pool::worker::Context::run_task::hfdd63cd1e0b17abf"
"  23:     0x7f0390f3a599 - tokio::runtime::task::raw::poll::h62954f6369b1d210"
"  24:     0x7f0390f37863 - std::sys_common::backtrace::__rust_begin_short_backtrace::h1c58f232c078bfe9"
"  25:     0x7f0390f4f3dd - core::ops::function::FnOnce::call_once{{vtable.shim}}::h2d329a84c0feed57"
"  26:     0x7f0390f0e535 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h137e5243c6233a3b"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/boxed.rs:1694:9"
"  27:     0x7f0390f0e535 - <alloc::boxed::Box<F,A> as core::ops::function::FnOnce<Args>>::call_once::h7331c46863d912b7"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/alloc/src/boxed.rs:1694:9"
"  28:     0x7f0390f0e535 - std::sys::unix::thread::Thread::new::thread_start::h1fb20b966cb927ab"
"                               at /rustc/db9d1b20bba1968c1ec1fc49616d4742c1725b4b/library/std/src/sys/unix/thread.rs:106:17"
```

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 78f30c33c6)
2022-04-12 18:59:02 +02:00
Fabiano Fidêncio
0ad6f05dee Merge pull request #4024 from bergwolf/2.4.0-branch-bump
# Kata Containers 2.4.0
2022-04-01 13:46:35 +02:00
Peng Tao
4c9c01a124 release: Kata Containers 2.4.0
- stable-2.4 | agent: fix container stop error with signal SIGRTMIN+3
- stable-2.4 | kata-monitor: fix duplicated output when printing usage
- stable-2.4 | runtime: Stop getting OOM events from agent for "ttrpc closed" error
- kata-deploy: fix version bump from -rc to stable
- stable-2.4: release: Include all the rust vendored code into the vendored tarball
- stable-2.4 | tools: release: Do not consider release candidates as stable releases
- agent: Signal the whole process group
- stable-2.4 | docs: Update k8s documentation
- backport main commits to stable 2.4
- stable-2.4: Bump QEMU to 6.2 (bringing then SGX support in)
- runtime: Properly handle ESRCH error when signaling container
- stable-2.4 | versions: Upgrade to Cloud Hypervisor v22.1

f2319d69 release: Adapt kata-deploy for 2.4.0
cae48e9c agent: fix container stop error with signal SIGRTMIN+3
342aa95c kata-monitor: fix duplicated output when printing usage
9f75e226 runtime: add logs around sandbox monitor
363fbed8 runtime: stop getting OOM events when ttrpc: closed error
f840de5a workflows,release: Ship *all* the rust vendored code
952cea5f tools: Add a generate_vendor.sh script
cc965fa0 kata-deploy: fix version bump from -rc to stable
f41cc184 tools: release: Do not consider release candidates as stable releases
e059b50f runtime: Add more debug logs for container io stream copy
71ce6f53 agent: Kill all the container processes of the same cgroup
30fc2c86 docs: Update k8s documentation
24028969 virtcontainers: Run mock hook from build tree rather than system bin dir
4e54aa5a doc: fix filename typo
d815393c manager: Add options to change self test behaviour
4111e1a3 manager: Add option to enable component debug
2918be18 manager: Create containerd link
6b31b068 kernel: fix cve-2022-0847
5589b246 doc: update Intel SGX use cases document
1da88dca tools: update QEMU to 6.2
3e2f9223 runtime: Properly handle ESRCH error when signaling container
4c21cb3e versions: Upgrade to Cloud Hypervisor v22.1

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-04-01 06:20:20 +00:00
Peng Tao
f2319d693d release: Adapt kata-deploy for 2.4.0
kata-deploy files must be adapted to a new release.  The cases where it
happens are when the release goes from -> to:
* main -> stable:
  * kata-deploy-stable / kata-cleanup-stable: are removed

* stable -> stable:
  * kata-deploy / kata-cleanup: bump the release to the new one.

There are no changes when doing an alpha release, as the files on the
"main" branch always point to the "latest" and "stable" tags.

Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-04-01 06:20:20 +00:00
Bin Liu
98ccf8f6a1 Merge pull request #4008 from wxx213/stable-2.4
stable-2.4 | agent: fix container stop error with signal SIGRTMIN+3
2022-04-01 11:29:18 +08:00
Wang Xingxing
cae48e9c9b agent: fix container stop error with signal SIGRTMIN+3
The nix::sys::signal::Signal package API cannot deal with SIGRTMIN+3,
so directly use the libc function to send the signal.

Fixes: #3990

Signed-off-by: Wang Xingxing <stellarwxx@163.com>
(cherry picked from commit 0d765bd082)
Signed-off-by: Wang Xingxing <stellarwxx@163.com>
2022-03-31 16:49:06 +08:00
snir911
a36103c759 Merge pull request #4003 from fgiudici/kata-monitor_fix_help_backport
stable-2.4 | kata-monitor: fix duplicated output when printing usage
2022-03-30 18:57:17 +03:00
Fabiano Fidêncio
6abbcc551c Merge pull request #3997 from liubin/backport-2.4
stable-2.4 | runtime: Stop getting OOM events from agent for "ttrpc closed" error
2022-03-30 14:08:55 +02:00
Francesco Giudici
342aa95cc8 kata-monitor: fix duplicated output when printing usage
(default: "/run/containerd/containerd.sock") is duplicated when
printing kata-monitor usage:

[root@kubernetes ~]# kata-monitor --help
Usage of kata-monitor:
  -listen-address string
        The address to listen on for HTTP requests. (default ":8090")
  -log-level string
        Log level of logrus(trace/debug/info/warn/error/fatal/panic). (default "info")
  -runtime-endpoint string
        Endpoint of CRI container runtime service. (default: "/run/containerd/containerd.sock") (default "/run/containerd/containerd.sock")

The golang flag package takes care of adding the defaults when printing
usage. Remove the explicit print of the value so that it is not
printed on screen twice.

Fixes: #3998

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
(cherry picked from commit a63bbf9793)
2022-03-30 14:02:54 +02:00
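A tiny Go reproduction of the point: the flag package already appends the default to the usage text, so the description string must not repeat it.

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Correct: no "(default: ...)" in the description; `flag` adds
	// `(default "/run/containerd/containerd.sock")` on its own when
	// printing usage.
	endpoint := flag.String("runtime-endpoint",
		"/run/containerd/containerd.sock",
		"Endpoint of CRI container runtime service.")

	flag.Parse()
	fmt.Println(*endpoint)
}
```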
bin
9f75e226f1 runtime: add logs around sandbox monitor
For debugging purposes, add some logs.

Fixes: #3815

Signed-off-by: bin <bin@hyper.sh>
2022-03-30 17:11:40 +08:00
bin
363fbed804 runtime: stop getting OOM events when ttrpc: closed error
getOOMEvents is a long-waiting call, it will retry when failed.
For cases of agent shutdown, the retry should stop.

When the agent hasn't detected agent has died, we can also check
whether the error is "ttrpc: closed".

Fixes: #3815

Signed-off-by: bin <bin@hyper.sh>
2022-03-30 17:11:35 +08:00
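A simplified Go sketch of the retry loop described above; getEvents and agentDead stand in for the real agent call and liveness check:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
	"time"
)

// watchOOMEvents keeps asking the agent for OOM events and retries on
// transient errors, but gives up once the agent is known dead or the
// connection reports "ttrpc: closed".
func watchOOMEvents(getEvents func() error, agentDead func() bool) {
	for {
		err := getEvents()
		if err == nil {
			continue
		}
		if agentDead() || strings.Contains(err.Error(), "ttrpc: closed") {
			fmt.Println("agent is gone, stop watching OOM events:", err)
			return
		}
		// Transient failure: back off and retry.
		time.Sleep(time.Second)
	}
}

func main() {
	calls := 0
	watchOOMEvents(
		func() error {
			calls++
			if calls < 3 {
				return errors.New("temporary failure")
			}
			return errors.New("rpc error: ttrpc: closed")
		},
		func() bool { return false },
	)
}
```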
Fabiano Fidêncio
54a638317a Merge pull request #3988 from bergwolf/github/kata-deploy
kata-deploy: fix version bump from -rc to stable
2022-03-30 11:01:45 +02:00
Peng Tao
8ce6b12b41 Merge pull request #3993 from fidencio/wip/stable-2.4-release-include-all-rust-vendored-code-to-the-vendored-tarball
stable-2.4: release: Include all the rust vendored code into the vendored tarball
2022-03-30 16:10:47 +08:00
Fabiano Fidêncio
f840de5acb workflows,release: Ship *all* the rust vendored code
Instead of only vendoring the code needed by the agent, let's ensure we
vendor all the needed rust code, and let's do it using the newly
introduced generate_vendor.sh script.

Fixes: #3973

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 3606923ac8)
2022-03-29 23:27:43 +02:00
Fabiano Fidêncio
952cea5f5d tools: Add a generate_vendor.sh script
This script is responsible for generating a tarball with all the rust
vendored code that is needed for fully building kata-containers on a
disconnected environment.

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 2eb07455d0)
2022-03-29 23:27:29 +02:00
Peng Tao
cc965fa0cb kata-deploy: fix version bump from -rc to stable
In such a case, we should bump from the "latest" tag rather than from
current_version.

Fixes: #3986
Signed-off-by: Peng Tao <bergwolf@hyper.sh>
2022-03-29 03:45:27 +00:00
GabyCT
44b1473d0c Merge pull request #3977 from fidencio/wip/backport-fix-for-3847
stable-2.4 | tools: release: Do not consider release candidates as stable releases
2022-03-28 10:38:47 -06:00
Fupan Li
565efd1bf2 Merge pull request #3975 from bergwolf/github/backport-stable-2.4
agent: Signal the whole process group
2022-03-28 18:26:12 +08:00
Fabiano Fidêncio
f41cc18427 tools: release: Do not consider release candidates as stable releases
During the release of 2.4.0-rc0 @egernst noticed an inconsistency in the
way we handle release tags, as release candidates are being taken as
"stable" releases, while both the kata-deploy tests and the release
action consider this as "latest".

Ideally we should have our own tag for "release candidate", but that's
something that could and should be discussed more extensively outside of
the scope of this quick fix.

For now, let's align the code generating the PR for bumping the release
with what we already do as part of the release action and kata-deploy
test, and tag "-rc"  as latest, regardless of which branch it's coming
from.

Fixes: #3847

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 4adf93ef2c)
2022-03-28 11:01:58 +02:00
Feng Wang
e059b50f5c runtime: Add more debug logs for container io stream copy
This can help debug container lifecycle issues.

Fixes: #3913

Signed-off-by: Feng Wang <feng.wang@databricks.com>
2022-03-28 16:22:22 +08:00
Feng Wang
71ce6f537f agent: Kill all the container processes of the same cgroup
Otherwise the container process might leak and cause an unclean exit

Fixes: #3913

Signed-off-by: Feng Wang <feng.wang@databricks.com>
2022-03-28 16:21:51 +08:00
Bin Liu
a2b73b60bd Merge pull request #3960 from cmaf/update-k8s-docs-1-stable-2.4
stable-2.4 | docs: Update k8s documentation
2022-03-25 15:25:25 +08:00
Bin Liu
2ce9ce7b8f Merge pull request #3954 from bergwolf/github/backport-stable-2.4
backport main commits to stable 2.4
2022-03-25 14:45:17 +08:00
Chelsea Mafrica
30fc2c863d docs: Update k8s documentation
Update documentation with missing step to untaint node to enable
scheduling and update the example to run a pod using the kata runtime
class instead of untrusted workloads, which applies to versions of CRI-O
prior to v1.12.

Fixes #3863

Signed-off-by: Chelsea Mafrica <chelsea.e.mafrica@intel.com>
(cherry picked from commit 5c434270d1)
2022-03-24 11:22:18 -07:00
David Gibson
24028969c2 virtcontainers: Run mock hook from build tree rather than system bin dir
Running unit tests should generally have minimal dependencies on
things outside the build tree.  It *definitely* shouldn't modify
system wide things outside the build tree.  Currently the runtime
"make test" target does so, though.

Several of the tests in src/runtime/pkg/katautils/hook_test.go require a
sample hook binary.  They expect this hook in
/usr/bin/virtcontainers/bin/test/hook, so the makefile, as root, installs
the test binary to that location.

Go tests automatically run within the package's directory though, so
there's no need to use a system wide path.  We can use a relative path to
the binary build within the tree just as easily.

fixes #3941

Signed-off-by: David Gibson <david@gibson.dropbear.id.au>
2022-03-24 12:02:00 +08:00
Garrett Mahin
4e54aa5a7b doc: fix filename typo
Corrects a filename typo in the cleanup cluster part
of the kata-deploy README.md.

Fixes: #3869
Signed-off-by: Garrett Mahin <garrett.mahin@gmail.com>
2022-03-24 12:00:17 +08:00
James O. D. Hunt
d815393c3e manager: Add options to change self test behaviour
Added new `kata-manager` options to control the self-test behaviour. By
default, after installation the manager will run a test to ensure a Kata
Containers container can be created. New options allow:

- The self test to be disabled.
- Only the self test to be run (no installation).

These features allow changes to be made to the installed system before
the self test is run.

Fixes: #3851.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:48 +08:00
James O. D. Hunt
4111e1a3de manager: Add option to enable component debug
Added a `-d` option to `kata-manager` to enable Kata Containers
and containerd debug.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:33 +08:00
James O. D. Hunt
2918be180f manager: Create containerd link
Make the `kata-manager` create a `containerd` link to ensure the
downloaded containerd systemd service file can find the daemon when
using the GitHub packaged version of containerd.

Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
2022-03-24 11:59:26 +08:00
Julio Montes
6b31b06832 kernel: fix cve-2022-0847
bump guest kernel version to fix cve-2022-0847 "Dirty Pipe"

fixes #3852

Signed-off-by: Julio Montes <julio.montes@intel.com>
2022-03-24 11:58:43 +08:00
Fabiano Fidêncio
53a9cf7dc4 Merge pull request #3927 from fidencio/stable-2.4/qemu-bump
stable-2.4: Bump QEMU to 6.2 (bringing then SGX support in)
2022-03-23 07:20:35 +01:00
Julio Montes
5589b246d7 doc: update Intel SGX use cases document
The Installation section is no longer needed because the latest
default kata kernel supports Intel SGX.
Include QEMU in the list of supported hypervisors.

fixes #3911

Signed-off-by: Julio Montes <julio.montes@intel.com>
(cherry picked from commit 24b29310b2)
2022-03-22 08:36:04 +01:00
Julio Montes
1da88dca4b tools: update QEMU to 6.2
bring Intel SGX support

Changes that may impact Kata Containers:
Arm:
The 'virt' machine now supports an emulated ITS
The 'virt' machine now supports more than 123 CPUs in TCG emulation mode
The pl031 real-time clock device now supports sending RTC_CHANGE QMP events

PowerPC:
Improved POWER10 support for the 'powernv' machine
Initial support for POWER10 DD2.0 CPU added
Added support for FORM2 PAPR NUMA descriptions in the "pseries" machine
 type

s390x:
Improved storage key emulation (e.g. fixed address handling, lazy
 storage key enablement for TCG, ...)
New gen16 CPU features are now enabled automatically in the latest
 machine type

KVM:
Support for SGX in the virtual machine, using the /dev/sgx_vepc device
 on the host and the "memory-backend-epc" backend in QEMU.
New "hv-apicv" CPU property (aliased to "hv-avic") sets the
 HV_DEPRECATING_AEOI_RECOMMENDED bit in CPUID[0x40000004].EAX.

virtio-mem:
QEMU now fully supports guest memory dumps with virtio-mem.
QEMU now cleanly supports precopy migration, postcopy migration and
 background snapshots with virtio-mem.

fixes #3902

Signed-off-by: Julio Montes <julio.montes@intel.com>
(cherry picked from commit 18d4d7fb1d)
2022-03-22 08:35:45 +01:00
Peng Tao
8cc2231818 Merge pull request #3892 from fengwang666/my_2.4_pr_backport
runtime: Properly handle ESRCH error when signaling container
2022-03-15 10:11:25 +08:00
GabyCT
63c1498f05 Merge pull request #3891 from likebreath/stable-2.4
stable-2.4 | versions: Upgrade to Cloud Hypervisor v22.1
2022-03-14 17:44:09 -06:00
Feng Wang
3e2f9223b0 runtime: Properly handle ESRCH error when signaling container
Currently kata shim v2 doesn't translate the ESRCH error, causing the
container to fail to stop and the shim to leak.

Fixes: #3874

Signed-off-by: Feng Wang <feng.wang@databricks.com>
(cherry picked from commit aa5ae6b17c)
2022-03-14 13:15:54 -07:00
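A sketch of the behaviour being fixed, in Go: ESRCH ("no such process") on a signal means the process is already gone, so it should be treated as success rather than surfaced as an error that blocks container stop. This is only an illustration, not the shim code itself.

```go
package main

import (
	"fmt"
	"syscall"
)

// signalProcess sends sig to pid and treats ESRCH as success: if the
// process has already exited there is nothing left to signal, so the
// caller must not fail (and leak the shim) because of it.
func signalProcess(pid int, sig syscall.Signal) error {
	err := syscall.Kill(pid, sig)
	if err == syscall.ESRCH {
		return nil
	}
	return err
}

func main() {
	// A PID that almost certainly does not exist: prints <nil>.
	fmt.Println(signalProcess(999999, syscall.SIGTERM))
}
```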
Bo Chen
4c21cb3eb1 versions: Upgrade to Cloud Hypervisor v22.1
This is a bug fix release. The following issues have been addressed:
1) VFIO ioctl reordering to fix MSI on AMD platforms; 2) Fix virtio-net
control queue.

Details can be found: https://github.com/cloud-hypervisor/cloud-hypervisor/releases/tag/v22.1

Fixes: #3872

Signed-off-by: Bo Chen <chen.bo@intel.com>
(cherry picked from commit 7a18e32fa7)
2022-03-14 12:34:31 -07:00
7385 changed files with 269890 additions and 1661335 deletions

View File

@@ -1,7 +0,0 @@
root = true
[*]
charset = utf-8
end_of_line = lf
insert_final_newline = true
trim_trailing_whitespace = true

View File

@@ -1,30 +0,0 @@
{
"Verbose": false,
"Debug": false,
"IgnoreDefaults": false,
"SpacesAfterTabs": false,
"NoColor": false,
"Exclude": [
"src/runtime/vendor",
"src/tools/log-parser/vendor",
"tests/metrics/cmd/checkmetrics/vendor",
"tests/vendor",
"src/runtime/virtcontainers/pkg/cloud-hypervisor/client",
"\\.img$",
"\\.dtb$",
"\\.drawio$",
"\\.svg$",
"\\.patch$"
],
"AllowedContentTypes": [],
"PassedFiles": [],
"Disable": {
"EndOfLine": false,
"Indentation": false,
"IndentSize": false,
"InsertFinalNewline": false,
"TrimTrailingWhitespace": false,
"MaxLineLength": false,
"Charset": false
}
}

View File

@@ -1,30 +0,0 @@
# Copyright (c) 2024 Red Hat
#
# SPDX-License-Identifier: Apache-2.0
#
# Configuration file with rules for the actionlint tool.
#
self-hosted-runner:
# Labels of self-hosted runner that linter should ignore
labels:
- amd64-nvidia-a100
- amd64-nvidia-h100-snp
- arm64-k8s
- garm-ubuntu-2004
- garm-ubuntu-2004-smaller
- garm-ubuntu-2204
- garm-ubuntu-2304
- garm-ubuntu-2304-smaller
- garm-ubuntu-2204-smaller
- ppc64le
- ppc64le-k8s
- ppc64le-small
- ubuntu-24.04-ppc64le
- ubuntu-24.04-s390x
- metrics
- riscv-builder
- sev-snp
- s390x
- s390x-large
- tdx
- ubuntu-24.04-arm

View File

@@ -1,40 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2022 Red Hat
#
# SPDX-License-Identifier: Apache-2.0
#
script_dir=$(dirname "$(readlink -f "$0")")
parent_dir=$(realpath "${script_dir}/../..")
cidir="${parent_dir}/ci"
source "${cidir}/../tests/common.bash"
cargo_deny_file="${script_dir}/action.yaml"
cat cargo-deny-skeleton.yaml.in > "${cargo_deny_file}"
changed_files_status=$(run_get_pr_changed_file_details)
changed_files_status=$(echo "$changed_files_status" | grep "Cargo\.toml$" || true)
changed_files=$(echo "$changed_files_status" | awk '{print $NF}' || true)
if [ -z "$changed_files" ]; then
cat >> "${cargo_deny_file}" << EOF
- run: echo "No Cargo.toml files to check"
shell: bash
EOF
fi
for path in $changed_files
do
cat >> "${cargo_deny_file}" << EOF
- name: ${path}
continue-on-error: true
shell: bash
run: |
pushd $(dirname ${path})
cargo deny check
popd
EOF
done

View File

@@ -1,30 +0,0 @@
#
# Copyright (c) 2022 Red Hat
#
# SPDX-License-Identifier: Apache-2.0
#
name: 'Cargo Crates Check'
description: 'Checks every Cargo.toml file using cargo-deny'
env:
CARGO_TERM_COLOR: always
runs:
using: "composite"
steps:
- name: Install Rust
uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: nightly
override: true
- name: Cache
uses: Swatinem/rust-cache@f0deed1e0edfc6a9be95417288c0e1099b1eeec3 # v2.7.7
- name: Install Cargo deny
shell: bash
run: |
which cargo
cargo install --locked cargo-deny || true

View File

@@ -1,98 +0,0 @@
---
version: 2
updates:
- package-ecosystem: "cargo"
directories:
- "/src/agent"
- "/src/dragonball"
- "/src/libs"
- "/src/mem-agent"
- "/src/mem-agent/example"
- "/src/runtime-rs"
- "/src/tools/agent-ctl"
- "/src/tools/genpolicy"
- "/src/tools/kata-ctl"
- "/src/tools/trace-forwarder"
schedule:
interval: "daily"
cooldown:
default-days: 7
ignore:
# rust-vmm repos might cause incompatibilities on patch versions, so
# lets handle them manually for now.
- dependency-name: "event-manager"
- dependency-name: "kvm-bindings"
- dependency-name: "kvm-ioctls"
- dependency-name: "linux-loader"
- dependency-name: "seccompiler"
- dependency-name: "vfio-bindings"
- dependency-name: "vfio-ioctls"
- dependency-name: "virtio-bindings"
- dependency-name: "virtio-queue"
- dependency-name: "vm-fdt"
- dependency-name: "vm-memory"
- dependency-name: "vm-superio"
- dependency-name: "vmm-sys-util"
# As we often have up to 8/9 components that need the same versions bumps
# create groups for common dependencies, so they can all go in a single PR
# We can extend this as we see more frequent groups
groups:
bit-vec:
patterns:
- bit-vec
bumpalo:
patterns:
- bumpalo
clap:
patterns:
- clap
crossbeam:
patterns:
- crossbeam
h2:
patterns:
- h2
idna:
patterns:
- idna
openssl:
patterns:
- openssl
protobuf:
patterns:
- protobuf
rsa:
patterns:
- rsa
rustix:
patterns:
- rustix
slab:
patterns:
- slab
time:
patterns:
- time
tokio:
patterns:
- tokio
tracing:
patterns:
- tracing
- package-ecosystem: "gomod"
directories:
- "src/runtime"
- "tools/testing/kata-webhook"
- "src/tools/csi-kata-directvolume"
schedule:
interval: "daily"
cooldown:
default-days: 7
- package-ecosystem: "github-actions"
directory: "/"
schedule:
interval: "monthly"
cooldown:
default-days: 7

View File

@@ -9,19 +9,14 @@ on:
- labeled
- unlabeled
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
pr_wip_check:
runs-on: ubuntu-22.04
runs-on: ubuntu-latest
name: WIP Check
steps:
- name: WIP Check
uses: tim-actions/wip-check@1c2a1ca6c110026b3e2297bb2ef39e1747b5a755 # master (2021-06-10)
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: tim-actions/wip-check@1c2a1ca6c110026b3e2297bb2ef39e1747b5a755
with:
labels: '["do-not-merge", "wip", "rfc"]'
keywords: '["WIP", "wip", "RFC", "rfc", "dnm", "DNM", "do-not-merge"]'

View File

@@ -1,30 +0,0 @@
name: Lint GHA workflows
on:
workflow_dispatch:
pull_request:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
run-actionlint:
name: run-actionlint
env:
GH_TOKEN: ${{ github.token }}
runs-on: ubuntu-24.04
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Install actionlint gh extension
run: gh extension install https://github.com/cschleiden/gh-actionlint
- name: Run actionlint
run: gh actionlint

View File

@@ -0,0 +1,55 @@
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
name: Add newly created issues to the backlog project
on:
issues:
types:
- opened
- reopened
jobs:
add-new-issues-to-backlog:
runs-on: ubuntu-latest
steps:
- name: Install hub
run: |
HUB_ARCH="amd64"
HUB_VER=$(curl -sL "https://api.github.com/repos/github/hub/releases/latest" |\
jq -r .tag_name | sed 's/^v//')
curl -sL \
"https://github.com/github/hub/releases/download/v${HUB_VER}/hub-linux-${HUB_ARCH}-${HUB_VER}.tgz" |\
tar xz --strip-components=2 --wildcards '*/bin/hub' && \
sudo install hub /usr/local/bin
- name: Install hub extension script
run: |
# Clone into a temporary directory to avoid overwriting
# any existing github directory.
pushd $(mktemp -d) &>/dev/null
git clone --single-branch --depth 1 "https://github.com/kata-containers/.github" && cd .github/scripts
sudo install hub-util.sh /usr/local/bin
popd &>/dev/null
- name: Checkout code to allow hub to communicate with the project
uses: actions/checkout@v2
- name: Add issue to issue backlog
env:
GITHUB_TOKEN: ${{ secrets.KATA_GITHUB_ACTIONS_TOKEN }}
run: |
issue=${{ github.event.issue.number }}
project_name="Issue backlog"
project_type="org"
project_column="To do"
hub-util.sh \
add-issue \
"$issue" \
"$project_name" \
"$project_type" \
"$project_column"

View File

@@ -1,355 +0,0 @@
name: CI | Basic amd64 tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-containerd-sandboxapi:
name: run-containerd-sandboxapi
strategy:
# We can set this to true whenever we're 100% sure that
# the all the tests are not flaky, otherwise we'll fail
# all the tests due to a single flaky instance.
fail-fast: false
matrix:
containerd_version: ['active']
vmm: ['dragonball', 'cloud-hypervisor', 'qemu-runtime-rs']
# TODO: enable me when https://github.com/containerd/containerd/issues/11640 is fixed
if: false
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
SANDBOXER: "shim"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/cri-containerd/gha-run.sh install-kata kata-artifacts
- name: Run containerd-sandboxapi tests
timeout-minutes: 10
run: bash tests/integration/cri-containerd/gha-run.sh run
run-containerd-stability:
name: run-containerd-stability
strategy:
fail-fast: false
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'cloud-hypervisor', 'dragonball', 'qemu', 'qemu-runtime-rs']
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
SANDBOXER: "podsandbox"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/stability/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/stability/gha-run.sh install-kata kata-artifacts
- name: Run containerd-stability tests
timeout-minutes: 15
run: bash tests/stability/gha-run.sh run
run-nydus:
name: run-nydus
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail
# all the tests due to a single flaky instance.
fail-fast: false
matrix:
containerd_version: ['lts', 'active']
vmm: ['clh', 'qemu', 'dragonball', 'qemu-runtime-rs']
runs-on: ubuntu-22.04
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/nydus/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata
run: bash tests/integration/nydus/gha-run.sh install-kata kata-artifacts
- name: Install kata-tools
run: bash tests/integration/nydus/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Run nydus tests
timeout-minutes: 10
run: bash tests/integration/nydus/gha-run.sh run
run-tracing:
name: run-tracing
strategy:
fail-fast: false
matrix:
vmm:
- clh # cloud-hypervisor
- qemu
# TODO: enable me when https://github.com/kata-containers/kata-containers/issues/9763 is fixed
# TODO: Transition to free runner (see #9940).
if: false
runs-on: garm-ubuntu-2204-smaller
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/functional/tracing/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/functional/tracing/gha-run.sh install-kata kata-artifacts
- name: Run tracing tests
timeout-minutes: 15
run: bash tests/functional/tracing/gha-run.sh run
run-vfio:
name: run-vfio
strategy:
fail-fast: false
matrix:
vmm:
- clh
- qemu
# TODO: enable with clh when https://github.com/kata-containers/kata-containers/issues/9764 is fixed
# TODO: enable with qemu when https://github.com/kata-containers/kata-containers/issues/9851 is fixed
# TODO: Transition to free runner (see #9940).
if: false
runs-on: garm-ubuntu-2304
env:
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/functional/vfio/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Run vfio tests
timeout-minutes: 15
run: bash tests/functional/vfio/gha-run.sh run
run-nerdctl-tests:
name: run-nerdctl-tests
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail them
# all due to a single flaky instance.
fail-fast: false
matrix:
vmm:
- clh
- dragonball
- qemu
- cloud-hypervisor
- qemu-runtime-rs
runs-on: ubuntu-22.04
env:
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
env:
GITHUB_API_TOKEN: ${{ github.token }}
GH_TOKEN: ${{ github.token }}
run: bash tests/integration/nerdctl/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/nerdctl/gha-run.sh install-kata kata-artifacts
- name: Run nerdctl smoke test
timeout-minutes: 5
run: bash tests/integration/nerdctl/gha-run.sh run
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/nerdctl/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: nerdctl-tests-garm-${{ matrix.vmm }}
path: /tmp/artifacts
retention-days: 1
run-kata-agent-apis:
name: run-kata-agent-apis
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/functional/kata-agent-apis/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata & kata-tools
run: |
bash tests/functional/kata-agent-apis/gha-run.sh install-kata kata-artifacts
bash tests/functional/kata-agent-apis/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Run kata agent api tests with agent-ctl
run: bash tests/functional/kata-agent-apis/gha-run.sh run


@@ -1,108 +0,0 @@
name: CI | Basic s390x tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-containerd-sandboxapi:
name: run-containerd-sandboxapi
strategy:
# We can set this to true whenever we're 100% sure that
# all the tests are not flaky, otherwise we'll fail
# all the tests due to a single flaky instance.
fail-fast: false
matrix:
containerd_version: ['active']
vmm: ['qemu-runtime-rs']
# TODO: enable me when https://github.com/containerd/containerd/issues/11640 is fixed
if: false
runs-on: s390x-large
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
SANDBOXER: "shim"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/cri-containerd/gha-run.sh install-kata kata-artifacts
- name: Run containerd-sandboxapi tests
timeout-minutes: 10
run: bash tests/integration/cri-containerd/gha-run.sh run
run-containerd-stability:
name: run-containerd-stability
strategy:
fail-fast: false
matrix:
containerd_version: ['lts', 'active']
vmm: ['qemu']
runs-on: s390x-large
env:
CONTAINERD_VERSION: ${{ matrix.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
SANDBOXER: "podsandbox"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/stability/gha-run.sh install-dependencies
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/stability/gha-run.sh install-kata kata-artifacts
- name: Run containerd-stability tests
timeout-minutes: 15
run: bash tests/stability/gha-run.sh run


@@ -1,134 +0,0 @@
# This yaml is designed to be used until all components listed in
# `build-checks.yaml` are supported
on:
workflow_dispatch:
inputs:
instance:
default: "riscv-builder"
description: "Default instance when manually triggering"
workflow_call:
inputs:
instance:
required: true
type: string
permissions: {}
name: Build checks preview riscv64
jobs:
check:
name: check
runs-on: ${{ inputs.instance }}
strategy:
fail-fast: false
matrix:
command:
- "make vendor"
- "make check"
- "make test"
- "sudo -E PATH=\"$PATH\" make test"
component:
- name: agent
path: src/agent
needs:
- rust
- libdevmapper
- libseccomp
- protobuf-compiler
- clang
- name: agent-ctl
path: src/tools/agent-ctl
needs:
- rust
- musl-tools
- protobuf-compiler
- clang
- name: trace-forwarder
path: src/tools/trace-forwarder
needs:
- rust
- musl-tools
- name: genpolicy
path: src/tools/genpolicy
needs:
- rust
- musl-tools
- protobuf-compiler
- name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
- name: runtime-rs
path: src/runtime-rs
needs:
- rust
steps:
- name: Adjust a permission for repo
run: |
sudo chown -R "$USER":"$USER" "$GITHUB_WORKSPACE" "$HOME"
sudo rm -rf "$GITHUB_WORKSPACE"/* || { sleep 10 && sudo rm -rf "$GITHUB_WORKSPACE"/*; }
sudo rm -f /tmp/kata_hybrid* # Sometimes we get leftovers from test_setup_hvsock_failed()
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Install yq
run: |
./ci/install_yq.sh
env:
INSTALL_IN_GOPATH: false
- name: Install golang
if: contains(matrix.component.needs, 'golang')
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
- name: Setup rust
if: contains(matrix.component.needs, 'rust')
run: |
./tests/install_rust.sh
echo "${HOME}/.cargo/bin" >> "$GITHUB_PATH"
if [ "$(uname -m)" == "x86_64" ] || [ "$(uname -m)" == "aarch64" ]; then
sudo apt-get update && sudo apt-get -y install musl-tools
fi
- name: Install devicemapper
if: contains(matrix.component.needs, 'libdevmapper') && matrix.command == 'make check'
run: sudo apt-get update && sudo apt-get -y install libdevmapper-dev
- name: Install libseccomp
if: contains(matrix.component.needs, 'libseccomp') && matrix.command != 'make vendor' && matrix.command != 'make check'
run: |
libseccomp_install_dir=$(mktemp -d -t libseccomp.XXXXXXXXXX)
gperf_install_dir=$(mktemp -d -t gperf.XXXXXXXXXX)
./ci/install_libseccomp.sh "${libseccomp_install_dir}" "${gperf_install_dir}"
echo "Set environment variables for the libseccomp crate to link the libseccomp library statically"
echo "LIBSECCOMP_LINK_TYPE=static" >> "$GITHUB_ENV"
echo "LIBSECCOMP_LIB_PATH=${libseccomp_install_dir}/lib" >> "$GITHUB_ENV"
- name: Install protobuf-compiler
if: contains(matrix.component.needs, 'protobuf-compiler') && matrix.command != 'make vendor'
run: sudo apt-get update && sudo apt-get -y install protobuf-compiler
- name: Install clang
if: contains(matrix.component.needs, 'clang') && matrix.command == 'make check'
run: sudo apt-get update && sudo apt-get -y install clang
- name: Setup XDG_RUNTIME_DIR
if: contains(matrix.component.needs, 'XDG_RUNTIME_DIR') && matrix.command != 'make check'
run: |
XDG_RUNTIME_DIR=$(mktemp -d "/tmp/kata-tests-$USER.XXX" | tee >(xargs chmod 0700))
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}" >> "$GITHUB_ENV"
- name: Skip tests that depend on virtualization capable runners when needed
if: inputs.instance == 'riscv-builder'
run: |
echo "GITHUB_RUNNER_CI_NON_VIRT=true" >> "$GITHUB_ENV"
- name: Running `${{ matrix.command }}` for ${{ matrix.component.name }}
run: |
cd "${COMPONENT_PATH}"
${COMMAND}
env:
COMMAND: ${{ matrix.command }}
COMPONENT_PATH: ${{ matrix.component.path }}
RUST_BACKTRACE: "1"
RUST_LIB_BACKTRACE: "0"
SKIP_GO_VERSION_CHECK: "1"


@@ -1,146 +0,0 @@
on:
workflow_call:
inputs:
instance:
required: true
type: string
permissions: {}
name: Build checks
jobs:
check:
name: check
runs-on: >-
${{
( contains(inputs.instance, 's390x') && matrix.component.name == 'runtime' ) && 's390x' ||
( contains(inputs.instance, 'ppc64le') && (matrix.component.name == 'runtime' || matrix.component.name == 'agent') ) && 'ppc64le' ||
inputs.instance
}}
strategy:
fail-fast: false
matrix:
command:
- "make vendor"
- "make check"
- "make test"
- "sudo -E PATH=\"$PATH\" make test"
component:
- name: agent
path: src/agent
needs:
- rust
- libdevmapper
- libseccomp
- protobuf-compiler
- clang
- name: dragonball
path: src/dragonball
needs:
- rust
- name: runtime
path: src/runtime
needs:
- golang
- XDG_RUNTIME_DIR
- name: runtime-rs
path: src/runtime-rs
needs:
- rust
- name: libs
path: src/libs
needs:
- rust
- protobuf-compiler
- name: agent-ctl
path: src/tools/agent-ctl
needs:
- rust
- protobuf-compiler
- clang
- name: kata-ctl
path: src/tools/kata-ctl
needs:
- rust
- protobuf-compiler
- name: trace-forwarder
path: src/tools/trace-forwarder
needs:
- rust
- name: genpolicy
path: src/tools/genpolicy
needs:
- rust
- protobuf-compiler
instance:
- ${{ inputs.instance }}
steps:
- name: Adjust a permission for repo
run: |
sudo chown -R "$USER":"$USER" "$GITHUB_WORKSPACE" "$HOME"
sudo rm -rf "$GITHUB_WORKSPACE"/* || { sleep 10 && sudo rm -rf "$GITHUB_WORKSPACE"/*; }
sudo rm -f /tmp/kata_hybrid* # Sometimes we get leftovers from test_setup_hvsock_failed()
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Install yq
run: |
./ci/install_yq.sh
env:
INSTALL_IN_GOPATH: false
- name: Install golang
if: contains(matrix.component.needs, 'golang')
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
- name: Setup rust
if: contains(matrix.component.needs, 'rust')
run: |
./tests/install_rust.sh
echo "${HOME}/.cargo/bin" >> "$GITHUB_PATH"
if [ "$(uname -m)" == "x86_64" ] || [ "$(uname -m)" == "aarch64" ]; then
sudo apt-get update && sudo apt-get -y install musl-tools
fi
- name: Install devicemapper
if: contains(matrix.component.needs, 'libdevmapper') && matrix.command == 'make check'
run: sudo apt-get update && sudo apt-get -y install libdevmapper-dev
- name: Install libseccomp
if: contains(matrix.component.needs, 'libseccomp') && matrix.command != 'make vendor' && matrix.command != 'make check'
run: |
libseccomp_install_dir=$(mktemp -d -t libseccomp.XXXXXXXXXX)
gperf_install_dir=$(mktemp -d -t gperf.XXXXXXXXXX)
./ci/install_libseccomp.sh "${libseccomp_install_dir}" "${gperf_install_dir}"
echo "Set environment variables for the libseccomp crate to link the libseccomp library statically"
echo "LIBSECCOMP_LINK_TYPE=static" >> "$GITHUB_ENV"
echo "LIBSECCOMP_LIB_PATH=${libseccomp_install_dir}/lib" >> "$GITHUB_ENV"
- name: Install protobuf-compiler
if: contains(matrix.component.needs, 'protobuf-compiler') && matrix.command != 'make vendor'
run: sudo apt-get update && sudo apt-get -y install protobuf-compiler
- name: Install clang
if: contains(matrix.component.needs, 'clang') && matrix.command == 'make check'
run: sudo apt-get update && sudo apt-get -y install clang
- name: Setup XDG_RUNTIME_DIR
if: contains(matrix.component.needs, 'XDG_RUNTIME_DIR') && matrix.command != 'make check'
run: |
XDG_RUNTIME_DIR=$(mktemp -d "/tmp/kata-tests-$USER.XXX" | tee >(xargs chmod 0700))
echo "XDG_RUNTIME_DIR=${XDG_RUNTIME_DIR}" >> "$GITHUB_ENV"
- name: Skip tests that depend on virtualization capable runners when needed
if: ${{ endsWith(inputs.instance, '-arm') }}
run: |
echo "GITHUB_RUNNER_CI_NON_VIRT=true" >> "$GITHUB_ENV"
- name: Running `${{ matrix.command }}` for ${{ matrix.component.name }}
run: |
cd "${COMPONENT_PATH}"
eval "${COMMAND}"
env:
COMMAND: ${{ matrix.command }}
COMPONENT_PATH: ${{ matrix.component.path }}
RUST_BACKTRACE: "1"
RUST_LIB_BACKTRACE: "0"
SKIP_GO_VERSION_CHECK: "1"

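The build-checks workflow above gates its install steps on per-component dependency lists: each matrix entry carries a needs array, and a step only runs when contains() finds the matching entry. A minimal sketch of that pattern, assuming a hypothetical component name and path:

name: build-checks sketch
on: workflow_dispatch
permissions: {}
jobs:
  check:
    runs-on: ubuntu-22.04
    strategy:
      fail-fast: false
      matrix:
        component:
          - name: example-tool       # hypothetical component
            path: src/tools/example  # hypothetical path
            needs:
              - rust
    steps:
      - uses: actions/checkout@v4
      - name: Setup rust
        # only runs for components that list 'rust' in their needs
        if: contains(matrix.component.needs, 'rust')
        run: ./tests/install_rust.sh
      - name: Run make check for ${{ matrix.component.name }}
        run: make -C "${{ matrix.component.path }}" check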

@@ -1,460 +0,0 @@
name: CI | Build kata-static tarball for amd64
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
push-to-registry:
required: false
type: string
default: no
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: false
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: ubuntu-22.04
permissions:
contents: read
packages: write
id-token: write
attestations: write
strategy:
matrix:
asset:
- agent
- busybox
- cloud-hypervisor
- cloud-hypervisor-glibc
- coco-guest-components
- firecracker
- kernel
- kernel-dragonball-experimental
- kernel-nvidia-gpu
- nydus
- ovmf
- ovmf-sev
- ovmf-tdx
- pause-image
- qemu
- qemu-snp-experimental
- qemu-tdx-experimental
- virtiofsd
stage:
- ${{ inputs.stage }}
exclude:
- asset: cloud-hypervisor-glibc
stage: release
env:
PERFORM_ATTESTATION: ${{ matrix.asset == 'agent' && inputs.push-to-registry == 'yes' && 'yes' || 'no' }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
id: build
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
KBUILD_SIGN_PIN: ${{ contains(matrix.asset, 'nvidia') && secrets.KBUILD_SIGN_PIN || '' }}
- name: Parse OCI image name and digest
id: parse-oci-segments
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
env:
KATA_ASSET: ${{ matrix.asset }}
run: |
oci_image="$(<"build/${KATA_ASSET}-oci-image")"
echo "oci-name=${oci_image%@*}" >> "$GITHUB_OUTPUT"
echo "oci-digest=${oci_image#*@}" >> "$GITHUB_OUTPUT"
- uses: oras-project/setup-oras@22ce207df3b08e061f537244349aac6ae1d214f6 # v1.2.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
version: "1.2.0"
# for pushing attestations to the registry
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/attest-build-provenance@ef244123eb79f2f7a7e75d99086184180e6d0018 # v1.4.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
subject-name: ${{ steps.parse-oci-segments.outputs.oci-name }}
subject-digest: ${{ steps.parse-oci-segments.outputs.oci-digest }}
push-to-registry: true
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
- name: store-extratarballs-artifact ${{ matrix.asset }}
if: ${{ matrix.asset == 'kernel' || startsWith(matrix.asset, 'kernel-nvidia-gpu') }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}-modules${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-modules.tar.zst
retention-days: 15
if-no-files-found: error
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ubuntu-22.04
needs: build-asset
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- rootfs-image
- rootfs-image-confidential
- rootfs-image-mariner
- rootfs-image-nvidia-gpu
- rootfs-image-nvidia-gpu-confidential
- rootfs-initrd
- rootfs-initrd-confidential
- rootfs-initrd-nvidia-gpu
- rootfs-initrd-nvidia-gpu-confidential
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-amd64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build ${{ matrix.asset }}
id: build
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
KBUILD_SIGN_PIN: ${{ contains(matrix.asset, 'nvidia') && secrets.KBUILD_SIGN_PIN || '' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that we've built the rootfs
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-22.04
needs: build-asset-rootfs
strategy:
matrix:
asset:
- busybox
- coco-guest-components
- kernel-modules
- kernel-nvidia-gpu-modules
- pause-image
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
with:
name: kata-artifacts-amd64-${{ matrix.asset}}${{ inputs.tarball-suffix }}
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that we've built the rootfs
remove-rootfs-binary-artifacts-for-release:
name: remove-rootfs-binary-artifacts-for-release
runs-on: ubuntu-22.04
needs: build-asset-rootfs
strategy:
matrix:
asset:
- agent
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
if: ${{ inputs.stage == 'release' }}
with:
name: kata-artifacts-amd64-${{ matrix.asset}}${{ inputs.tarball-suffix }}
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-22.04
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts, remove-rootfs-binary-artifacts-for-release]
permissions:
contents: read
packages: write
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-amd64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build shim-v2
id: build
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: shim-v2
TAR_OUTPUT: shim-v2.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
MEASURED_ROOTFS: yes
- name: store-artifact shim-v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-amd64-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-22.04
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-amd64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error
build-tools-asset:
name: build-tools-asset
runs-on: ubuntu-22.04
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- agent-ctl
- csi-kata-directvolume
- genpolicy
- kata-ctl
- kata-manager
- trace-forwarder
stage:
- ${{ inputs.stage }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
id: build
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-tools-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-tools-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-tools-artifacts-amd64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-tools-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
create-kata-tools-tarball:
name: create-kata-tools-tarball
runs-on: ubuntu-22.04
needs: [build-tools-asset]
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-tools-artifacts-amd64-*${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-tools-artifacts versions.yaml kata-tools-static.tar.zst
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-static.tar.zst
retention-days: 15
if-no-files-found: error

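The parse-oci-segments step in the workflow above splits an image reference of the form name@digest using plain Bash parameter expansion; a minimal illustration, with a made-up image name and digest:

- name: Split an OCI reference into name and digest (illustrative only)
  run: |
    # hypothetical value; the real step reads it from build/${KATA_ASSET}-oci-image
    oci_image="quay.io/example/agent@sha256:0123456789abcdef"
    echo "name:   ${oci_image%@*}"   # quay.io/example/agent    (drop the shortest '@*' suffix)
    echo "digest: ${oci_image#*@}"   # sha256:0123456789abcdef  (drop the shortest '*@' prefix)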

@@ -1,336 +0,0 @@
name: CI | Build kata-static tarball for arm64
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
push-to-registry:
required: false
type: string
default: no
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: false
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: ubuntu-24.04-arm
permissions:
contents: read
packages: write
id-token: write
attestations: write
strategy:
matrix:
asset:
- agent
- busybox
- cloud-hypervisor
- firecracker
- kernel
- kernel-dragonball-experimental
- kernel-nvidia-gpu
- kernel-cca-confidential
- nydus
- ovmf
- qemu
- virtiofsd
env:
PERFORM_ATTESTATION: ${{ matrix.asset == 'agent' && inputs.push-to-registry == 'yes' && 'yes' || 'no' }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
KBUILD_SIGN_PIN: ${{ contains(matrix.asset, 'nvidia') && secrets.KBUILD_SIGN_PIN || '' }}
- name: Parse OCI image name and digest
id: parse-oci-segments
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
env:
KATA_ASSET: ${{ matrix.asset }}
run: |
oci_image="$(<"build/${KATA_ASSET}-oci-image")"
echo "oci-name=${oci_image%@*}" >> "$GITHUB_OUTPUT"
echo "oci-digest=${oci_image#*@}" >> "$GITHUB_OUTPUT"
- uses: oras-project/setup-oras@22ce207df3b08e061f537244349aac6ae1d214f6 # v1.2.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
version: "1.2.0"
# for pushing attestations to the registry
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/attest-build-provenance@ef244123eb79f2f7a7e75d99086184180e6d0018 # v1.4.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
subject-name: ${{ steps.parse-oci-segments.outputs.oci-name }}
subject-digest: ${{ steps.parse-oci-segments.outputs.oci-digest }}
push-to-registry: true
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
- name: store-extratarballs-artifact ${{ matrix.asset }}
if: ${{ startsWith(matrix.asset, 'kernel-nvidia-gpu') }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}-modules${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-modules.tar.zst
retention-days: 15
if-no-files-found: error
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ubuntu-24.04-arm
needs: build-asset
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- rootfs-image
- rootfs-image-nvidia-gpu
- rootfs-initrd
- rootfs-initrd-nvidia-gpu
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-arm64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build ${{ matrix.asset }}
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
KBUILD_SIGN_PIN: ${{ contains(matrix.asset, 'nvidia') && secrets.KBUILD_SIGN_PIN || '' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that we've built the rootfs
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
asset:
- busybox
- kernel-nvidia-gpu-modules
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
with:
name: kata-artifacts-arm64-${{ matrix.asset}}${{ inputs.tarball-suffix }}
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that we've built the rootfs
remove-rootfs-binary-artifacts-for-release:
name: remove-rootfs-binary-artifacts-for-release
runs-on: ubuntu-24.04-arm
needs: build-asset-rootfs
strategy:
matrix:
asset:
- agent
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
if: ${{ inputs.stage == 'release' }}
with:
name: kata-artifacts-arm64-${{ matrix.asset}}${{ inputs.tarball-suffix }}
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts, remove-rootfs-binary-artifacts-for-release]
permissions:
contents: read
packages: write
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-arm64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build shim-v2
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: shim-v2
TAR_OUTPUT: shim-v2.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact shim-v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-arm64-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-24.04-arm
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-arm64-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-arm64${{ inputs.tarball-suffix }}
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error


@@ -1,271 +0,0 @@
name: CI | Build kata-static tarball for ppc64le
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
push-to-registry:
required: false
type: string
default: no
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
jobs:
build-asset:
name: build-asset
permissions:
contents: read
packages: write
runs-on: ubuntu-24.04-ppc64le
strategy:
matrix:
asset:
- agent
- kernel
- qemu
- virtiofsd
stage:
- ${{ inputs.stage }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 1
if-no-files-found: error
build-asset-rootfs:
name: build-asset-rootfs
runs-on: ubuntu-24.04-ppc64le
needs: build-asset
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- rootfs-initrd
stage:
- ${{ inputs.stage }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-ppc64le-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build ${{ matrix.asset }}
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 1
if-no-files-found: error
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that we've built the rootfs
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-22.04
needs: build-asset-rootfs
strategy:
matrix:
asset:
- agent
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
if: ${{ inputs.stage == 'release' }}
with:
name: kata-artifacts-ppc64le-${{ matrix.asset}}${{ inputs.tarball-suffix }}
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
packages: write
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-ppc64le-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build shim-v2
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: shim-v2
TAR_OUTPUT: shim-v2.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact shim-v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-ppc64le-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 1
if-no-files-found: error
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-24.04-ppc64le
needs: [build-asset, build-asset-rootfs, build-asset-shim-v2]
permissions:
contents: read
packages: write
steps:
- name: Adjust a permission for repo
run: |
sudo chown -R "$USER":"$USER" "$GITHUB_WORKSPACE"
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-ppc64le-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-ppc64le${{ inputs.tarball-suffix }}
path: kata-static.tar.zst
retention-days: 1
if-no-files-found: error


@@ -1,75 +0,0 @@
name: CI | Build kata-static tarball for riscv64
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
push-to-registry:
required: false
type: string
default: no
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: riscv-builder
permissions:
contents: read
packages: write
id-token: write
attestations: write
strategy:
matrix:
asset:
- kernel
- virtiofsd
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-riscv64-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 3
if-no-files-found: error


@@ -1,368 +0,0 @@
name: CI | Build kata-static tarball for s390x
on:
workflow_call:
inputs:
stage:
required: false
type: string
default: test
tarball-suffix:
required: false
type: string
push-to-registry:
required: false
type: string
default: no
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
CI_HKD_PATH:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
jobs:
build-asset:
name: build-asset
runs-on: ubuntu-24.04-s390x
permissions:
contents: read
packages: write
id-token: write
attestations: write
strategy:
matrix:
asset:
- agent
- coco-guest-components
- kernel
- pause-image
- qemu
- virtiofsd
env:
PERFORM_ATTESTATION: ${{ matrix.asset == 'agent' && inputs.push-to-registry == 'yes' && 'yes' || 'no' }}
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Build ${{ matrix.asset }}
id: build
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: Parse OCI image name and digest
id: parse-oci-segments
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
env:
ASSET: ${{ matrix.asset }}
run: |
oci_image="$(<"build/${ASSET}-oci-image")"
echo "oci-name=${oci_image%@*}" >> "$GITHUB_OUTPUT"
echo "oci-digest=${oci_image#*@}" >> "$GITHUB_OUTPUT"
# for pushing attestations to the registry
- uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- uses: actions/attest-build-provenance@ef244123eb79f2f7a7e75d99086184180e6d0018 # v1.4.4
if: ${{ env.PERFORM_ATTESTATION == 'yes' }}
with:
subject-name: ${{ steps.parse-oci-segments.outputs.oci-name }}
subject-digest: ${{ steps.parse-oci-segments.outputs.oci-digest }}
push-to-registry: true
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
- name: store-extratarballs-artifact ${{ matrix.asset }}
if: ${{ matrix.asset == 'kernel' }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-${{ matrix.asset }}-modules${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}-modules.tar.zst
retention-days: 15
if-no-files-found: error
build-asset-rootfs:
name: build-asset-rootfs
runs-on: s390x
needs: build-asset
permissions:
contents: read
packages: write
strategy:
matrix:
asset:
- rootfs-image
- rootfs-image-confidential
- rootfs-initrd
- rootfs-initrd-confidential
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-s390x-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build ${{ matrix.asset }}
id: build
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-${{ matrix.asset }}${{ inputs.tarball-suffix }}
path: kata-build/kata-static-${{ matrix.asset }}.tar.zst
retention-days: 15
if-no-files-found: error
build-asset-boot-image-se:
name: build-asset-boot-image-se
runs-on: s390x
needs: [build-asset, build-asset-rootfs]
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-s390x-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Place a host key document
run: |
mkdir -p "host-key-document"
cp "${CI_HKD_PATH}" "host-key-document"
env:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
- name: Build boot-image-se
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "boot-image-se"
make boot-image-se-tarball
build_dir=$(readlink -f build)
sudo cp -r "${build_dir}" "kata-build"
sudo chown -R "$(id -u)":"$(id -g)" "kata-build"
env:
HKD_PATH: "host-key-document"
- name: store-artifact boot-image-se
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x${{ inputs.tarball-suffix }}
path: kata-build/kata-static-boot-image-se.tar.zst
retention-days: 1
if-no-files-found: error
# We don't need the binaries installed in the rootfs as part of the release tarball, so we can delete them now that the rootfs has been built
remove-rootfs-binary-artifacts:
name: remove-rootfs-binary-artifacts
runs-on: ubuntu-22.04
needs: [build-asset-rootfs, build-asset-boot-image-se]
strategy:
matrix:
asset:
- agent
- coco-guest-components
- pause-image
steps:
- uses: geekyeggo/delete-artifact@f275313e70c08f6120db482d7a6b98377786765b # v5.1.0
if: ${{ inputs.stage == 'release' }}
with:
name: kata-artifacts-s390x-${{ matrix.asset}}${{ inputs.tarball-suffix }}
build-asset-shim-v2:
name: build-asset-shim-v2
runs-on: ubuntu-24.04-s390x
needs: [build-asset, build-asset-rootfs, remove-rootfs-binary-artifacts]
permissions:
contents: read
packages: write
steps:
- name: Login to Kata Containers quay.io
if: ${{ inputs.push-to-registry == 'yes' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0 # This is needed in order to keep the commit ids history
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-s390x-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: Build shim-v2
id: build
run: |
./tests/gha-adjust-to-use-prebuilt-components.sh kata-artifacts "${KATA_ASSET}"
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
mkdir -p kata-build && cp "${build_dir}"/kata-static-"${KATA_ASSET}"*.tar.* kata-build/.
env:
KATA_ASSET: shim-v2
TAR_OUTPUT: shim-v2.tar.gz
PUSH_TO_REGISTRY: ${{ inputs.push-to-registry }}
ARTEFACT_REGISTRY: ghcr.io
ARTEFACT_REGISTRY_USERNAME: ${{ github.actor }}
ARTEFACT_REGISTRY_PASSWORD: ${{ secrets.GITHUB_TOKEN }}
TARGET_BRANCH: ${{ inputs.target-branch }}
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
MEASURED_ROOTFS: no
- name: store-artifact shim-v2
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-artifacts-s390x-shim-v2${{ inputs.tarball-suffix }}
path: kata-build/kata-static-shim-v2.tar.zst
retention-days: 15
if-no-files-found: error
create-kata-tarball:
name: create-kata-tarball
runs-on: ubuntu-24.04-s390x
needs:
- build-asset
- build-asset-rootfs
- build-asset-boot-image-se
- build-asset-shim-v2
permissions:
contents: read
packages: write
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
fetch-tags: true
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
pattern: kata-artifacts-s390x-*${{ inputs.tarball-suffix }}
path: kata-artifacts
merge-multiple: true
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts versions.yaml
env:
RELEASE: ${{ inputs.stage == 'release' && 'yes' || 'no' }}
- name: store-artifacts
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: kata-static-tarball-s390x${{ inputs.tarball-suffix }}
path: kata-static.tar.zst
retention-days: 15
if-no-files-found: error
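The parse-oci-segments step above splits a single name@digest image reference using bash parameter expansion. A minimal sketch of the same splitting, with a purely hypothetical image reference standing in for the contents of build/agent-oci-image:

oci_image="ghcr.io/kata-containers/agent@sha256:0123abcd"  # hypothetical value, for illustration only
echo "oci-name=${oci_image%@*}"    # -> oci-name=ghcr.io/kata-containers/agent
echo "oci-digest=${oci_image#*@}"  # -> oci-digest=sha256:0123abcd

${oci_image%@*} strips everything from the last '@' onwards and ${oci_image#*@} strips everything up to the first '@', which is how the step hands the attestation action the subject name and digest separately.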

View File

@@ -1,75 +0,0 @@
name: Build kubectl multi-arch image
on:
schedule:
# Run every Sunday at 00:00 UTC
- cron: '0 0 * * 0'
workflow_dispatch:
# Allow manual triggering
push:
branches:
- main
paths:
- 'tools/packaging/kubectl/Dockerfile'
- '.github/workflows/build-kubectl-image.yaml'
permissions: {}
env:
REGISTRY: quay.io
IMAGE_NAME: kata-containers/kubectl
jobs:
build-and-push:
name: Build and push multi-arch image
runs-on: ubuntu-24.04
permissions:
contents: read
packages: write
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Login to Quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ${{ env.REGISTRY }}
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Get kubectl version
id: kubectl-version
run: |
KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
echo "version=${KUBECTL_VERSION}" >> "$GITHUB_OUTPUT"
- name: Generate image metadata
id: meta
uses: docker/metadata-action@902fa8ec7d6ecbf8d84d538b9b233a880e428804 # v5.7.0
with:
images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
tags: |
type=raw,value=latest
type=raw,value={{date 'YYYYMMDD'}}
type=raw,value=${{ steps.kubectl-version.outputs.version }}
type=sha,prefix=
- name: Build and push multi-arch image
uses: docker/build-push-action@ca052bb54ab0790a636c9b5f226502c73d547a25 # v5.4.0
with:
context: tools/packaging/kubectl/
file: tools/packaging/kubectl/Dockerfile
platforms: linux/amd64,linux/arm64,linux/s390x,linux/ppc64le
push: true
tags: ${{ steps.meta.outputs.tags }}
labels: ${{ steps.meta.outputs.labels }}
cache-from: type=gha
cache-to: type=gha,mode=max
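The metadata step tags each push with a rolling latest tag, a date stamp, the upstream kubectl release and the commit SHA. The version lookup can be reproduced locally; the values in the comments are illustrative only:

KUBECTL_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)
echo "${KUBECTL_VERSION}"   # e.g. v1.33.1 (whatever upstream currently publishes as stable)
date +%Y%m%d                # e.g. 20250601, matching the {{date 'YYYYMMDD'}} tag template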

View File

@@ -1,32 +0,0 @@
name: Cargo Crates Check Runner
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
cargo-deny-runner:
name: cargo-deny-runner
runs-on: ubuntu-22.04
steps:
- name: Checkout Code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Generate Action
run: bash cargo-deny-generator.sh
working-directory: ./.github/cargo-deny-composite-action/
env:
GOPATH: ${{ github.workspace }}/kata-containers
- name: Run Action
uses: ./.github/cargo-deny-composite-action

View File

@@ -1,33 +0,0 @@
name: Kata Containers CoCo Stability Tests Weekly
on:
# Note: This workflow is not currently maintained, so its scheduled runs are skipped
# schedule:
# - cron: '0 0 * * 0'
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
kata-containers-ci-on-push:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/ci-weekly.yaml
with:
commit-hash: ${{ github.sha }}
pr-number: "weekly"
tag: ${{ github.sha }}-weekly
target-branch: ${{ github.ref_name }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -1,35 +0,0 @@
name: Kata Containers CI (manually triggered)
on:
workflow_dispatch:
permissions: {}
jobs:
kata-containers-ci-on-push:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/ci.yaml
with:
commit-hash: ${{ github.sha }}
pr-number: "dev"
tag: ${{ github.sha }}-dev
target-branch: ${{ github.ref_name }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-checks:
uses: ./.github/workflows/build-checks.yaml
with:
instance: ubuntu-22.04

View File

@@ -1,34 +0,0 @@
on:
schedule:
- cron: '0 5 * * *'
name: Nightly CI for RISC-V
concurrency:
group: ${{ github.workflow }}-${{ github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
build-kata-static-tarball-riscv:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-riscv64.yaml
with:
tarball-suffix: -${{ github.sha }}
commit-hash: ${{ github.sha }}
target-branch: ${{ github.ref_name }}
build-checks-preview:
strategy:
fail-fast: false
matrix:
instance:
- "riscv-builder"
uses: ./.github/workflows/build-checks-preview-riscv64.yaml
with:
instance: ${{ matrix.instance }}

View File

@@ -1,27 +0,0 @@
on:
schedule:
- cron: '0 5 * * *'
name: Nightly CI for s390x
permissions: {}
jobs:
check-internal-test-result:
name: check-internal-test-result
runs-on: s390x
strategy:
fail-fast: false
matrix:
test_title:
- kata-vfio-ap-e2e-tests
- cc-vfio-ap-e2e-tests
- cc-se-e2e-tests-go
- cc-se-e2e-tests-rs
steps:
- name: Fetch a test result for ${{ matrix.test_title }}
run: |
file_name="${TEST_TITLE}-$(date +%Y-%m-%d).log"
"/home/${USER}/script/handle_test_log.sh" download "$file_name"
env:
TEST_TITLE: ${{ matrix.test_title }}
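The log file name is assembled from the matrix entry and the current date. For the first matrix entry the expansion would look like this (the date shown is illustrative):

TEST_TITLE=kata-vfio-ap-e2e-tests
echo "${TEST_TITLE}-$(date +%Y-%m-%d).log"   # -> kata-vfio-ap-e2e-tests-2025-06-01.log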

View File

@@ -1,34 +0,0 @@
name: Kata Containers Nightly CI
on:
schedule:
- cron: '0 0 * * *'
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
permissions: {}
jobs:
kata-containers-ci-on-push:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/ci.yaml
with:
commit-hash: ${{ github.sha }}
pr-number: "nightly"
tag: ${{ github.sha }}-nightly
target-branch: ${{ github.ref_name }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}

View File

@@ -1,54 +0,0 @@
name: Kata Containers CI
on:
pull_request_target: # zizmor: ignore[dangerous-triggers] See #11332.
branches:
- 'main'
types:
# Adding 'labeled' to the list of activity types that trigger this event
# (default: opened, synchronize, reopened) so that we can run this
# workflow when the 'ok-to-test' label is added.
# Reference: https://docs.github.com/en/actions/using-workflows/events-that-trigger-workflows#pull_request_target
- opened
- synchronize
- reopened
- labeled
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
skipper:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
uses: ./.github/workflows/gatekeeper-skipper.yaml
with:
commit-hash: ${{ github.event.pull_request.head.sha }}
target-branch: ${{ github.event.pull_request.base.ref }}
kata-containers-ci-on-push:
needs: skipper
if: ${{ needs.skipper.outputs.skip_build != 'yes' }}
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/ci.yaml
with:
commit-hash: ${{ github.event.pull_request.head.sha }}
pr-number: ${{ github.event.pull_request.number }}
tag: ${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}
target-branch: ${{ github.event.pull_request.base.ref }}
skip-test: ${{ needs.skipper.outputs.skip_test }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
ITA_KEY: ${{ secrets.ITA_KEY }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
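Because the skipper job is gated on the ok-to-test label, none of the downstream jobs start until someone with the appropriate repository permissions labels the pull request. Assuming the GitHub CLI is available, that is a one-liner (the PR number is illustrative):

gh pr edit 1234 --add-label ok-to-test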

View File

@@ -1,128 +0,0 @@
name: Run the CoCo Kata Containers Stability CI
on:
workflow_call:
inputs:
commit-hash:
required: true
type: string
pr-number:
required: true
type: string
tag:
required: true
type: string
target-branch:
required: false
type: string
default: ""
secrets:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-kata-static-tarball-amd64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
publish-kata-deploy-payload-amd64:
needs: build-kata-static-tarball-amd64
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04
arch: amd64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-and-publish-tee-confidential-unencrypted-image:
name: build-and-publish-tee-confidential-unencrypted-image
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Docker build and push
uses: docker/build-push-action@ca052bb54ab0790a636c9b5f226502c73d547a25 # v5.4.0
with:
tags: ghcr.io/kata-containers/test-images:unencrypted-${{ inputs.pr-number }}
push: true
context: tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/
platforms: linux/amd64
file: tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile
run-kata-coco-stability-tests:
needs: [publish-kata-deploy-payload-amd64, build-and-publish-tee-confidential-unencrypted-image]
uses: ./.github/workflows/run-kata-coco-stability-tests.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
tarball-suffix: -${{ inputs.tag }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
permissions:
contents: read
id-token: write

View File

@@ -1,493 +0,0 @@
name: Run the Kata Containers CI
on:
workflow_call:
inputs:
commit-hash:
required: true
type: string
pr-number:
required: true
type: string
tag:
required: true
type: string
target-branch:
required: false
type: string
default: ""
skip-test:
required: false
type: string
default: no
secrets:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
CI_HKD_PATH:
required: true
ITA_KEY:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
NGC_API_KEY:
required: true
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-kata-static-tarball-amd64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
publish-kata-deploy-payload-amd64:
needs: build-kata-static-tarball-amd64
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04
arch: amd64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-kata-static-tarball-arm64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-arm64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
publish-kata-deploy-payload-arm64:
needs: build-kata-static-tarball-arm64
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-arm64
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-24.04-arm
arch: arm64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-kata-static-tarball-s390x:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-s390x.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-kata-static-tarball-ppc64le:
permissions:
contents: read
packages: write
uses: ./.github/workflows/build-kata-static-tarball-ppc64le.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-s390x:
needs: build-kata-static-tarball-s390x
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-s390x
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-24.04-s390x
arch: s390x
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-ppc64le:
needs: build-kata-static-tarball-ppc64le
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-ppc64le
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-24.04-ppc64le
arch: ppc64le
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-and-publish-tee-confidential-unencrypted-image:
name: build-and-publish-tee-confidential-unencrypted-image
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Set up QEMU
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Docker build and push
uses: docker/build-push-action@ca052bb54ab0790a636c9b5f226502c73d547a25 # v5.4.0
with:
tags: ghcr.io/kata-containers/test-images:unencrypted-${{ inputs.pr-number }}
push: true
context: tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/
platforms: linux/amd64, linux/s390x
file: tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile
publish-csi-driver-amd64:
name: publish-csi-driver-amd64
needs: build-kata-static-tarball-amd64
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64-${{ inputs.tag }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Copy binary into Docker context
run: |
# Copy to the location where the Dockerfile expects the binary.
mkdir -p src/tools/csi-kata-directvolume/bin/
cp /opt/kata/bin/csi-kata-directvolume src/tools/csi-kata-directvolume/bin/directvolplugin
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Docker build and push
uses: docker/build-push-action@ca052bb54ab0790a636c9b5f226502c73d547a25 # v5.4.0
with:
tags: ghcr.io/kata-containers/csi-kata-directvolume:${{ inputs.pr-number }}
push: true
context: src/tools/csi-kata-directvolume/
platforms: linux/amd64
file: src/tools/csi-kata-directvolume/Dockerfile
run-kata-monitor-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/run-kata-monitor-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
run-k8s-tests-on-aks:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-aks.yaml
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
secrets:
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
run-k8s-tests-on-arm64:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-arm64
uses: ./.github/workflows/run-k8s-tests-on-arm64.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-arm64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-k8s-tests-on-nvidia-gpu:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-amd64
uses: ./.github/workflows/run-k8s-tests-on-nvidia-gpu.yaml
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
secrets:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
run-kata-coco-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs:
- publish-kata-deploy-payload-amd64
- build-and-publish-tee-confidential-unencrypted-image
- publish-csi-driver-amd64
uses: ./.github/workflows/run-kata-coco-tests.yaml
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
with:
tarball-suffix: -${{ inputs.tag }}
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
ITA_KEY: ${{ secrets.ITA_KEY }}
run-k8s-tests-on-zvsi:
if: ${{ inputs.skip-test != 'yes' }}
needs: [publish-kata-deploy-payload-s390x, build-and-publish-tee-confidential-unencrypted-image]
uses: ./.github/workflows/run-k8s-tests-on-zvsi.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-s390x
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
secrets:
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
run-k8s-tests-on-ppc64le:
if: ${{ inputs.skip-test != 'yes' }}
needs: publish-kata-deploy-payload-ppc64le
uses: ./.github/workflows/run-k8s-tests-on-ppc64le.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-ppc64le
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-kata-deploy-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs: [publish-kata-deploy-payload-amd64]
uses: ./.github/workflows/run-kata-deploy-tests.yaml
with:
registry: ghcr.io
repo: ${{ github.repository_owner }}/kata-deploy-ci
tag: ${{ inputs.tag }}-amd64
commit-hash: ${{ inputs.commit-hash }}
pr-number: ${{ inputs.pr-number }}
target-branch: ${{ inputs.target-branch }}
run-basic-amd64-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-amd64
uses: ./.github/workflows/basic-ci-amd64.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
run-basic-s390x-tests:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-s390x
uses: ./.github/workflows/basic-ci-s390x.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
run-cri-containerd-amd64:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-amd64
strategy:
fail-fast: false
matrix:
params: [
{ containerd_version: lts, vmm: clh },
{ containerd_version: lts, vmm: dragonball },
{ containerd_version: lts, vmm: qemu },
{ containerd_version: lts, vmm: cloud-hypervisor },
{ containerd_version: lts, vmm: qemu-runtime-rs },
{ containerd_version: active, vmm: clh },
{ containerd_version: active, vmm: dragonball },
{ containerd_version: active, vmm: qemu },
{ containerd_version: active, vmm: cloud-hypervisor },
{ containerd_version: active, vmm: qemu-runtime-rs },
]
uses: ./.github/workflows/run-cri-containerd-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ubuntu-22.04
arch: amd64
containerd_version: ${{ matrix.params.containerd_version }}
vmm: ${{ matrix.params.vmm }}
run-cri-containerd-s390x:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-s390x
strategy:
fail-fast: false
matrix:
params: [
{ containerd_version: active, vmm: qemu },
{ containerd_version: active, vmm: qemu-runtime-rs },
]
uses: ./.github/workflows/run-cri-containerd-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: s390x-large
arch: s390x
containerd_version: ${{ matrix.params.containerd_version }}
vmm: ${{ matrix.params.vmm }}
run-cri-containerd-tests-ppc64le:
if: ${{ inputs.skip-test != 'yes' }}
needs: build-kata-static-tarball-ppc64le
strategy:
fail-fast: false
matrix:
params: [
{ containerd_version: active, vmm: qemu },
]
uses: ./.github/workflows/run-cri-containerd-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: ppc64le-small
arch: ppc64le
containerd_version: ${{ matrix.params.containerd_version }}
vmm: ${{ matrix.params.vmm }}
run-cri-containerd-tests-arm64:
if: false
needs: build-kata-static-tarball-arm64
strategy:
fail-fast: false
matrix:
params: [
{ containerd_version: active, vmm: qemu },
]
uses: ./.github/workflows/run-cri-containerd-tests.yaml
with:
tarball-suffix: -${{ inputs.tag }}
commit-hash: ${{ inputs.commit-hash }}
target-branch: ${{ inputs.target-branch }}
runner: arm64-non-k8s
arch: arm64
containerd_version: ${{ matrix.params.containerd_version }}
vmm: ${{ matrix.params.vmm }}

View File

@@ -1,38 +0,0 @@
name: Cleanup dangling Azure resources
on:
schedule:
- cron: "0 0 * * *"
workflow_dispatch:
permissions: {}
jobs:
cleanup-resources:
name: cleanup-resources
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Log into Azure
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Install Python dependencies
run: |
pip3 install --user --upgrade \
azure-identity==1.16.0 \
azure-mgmt-resource==23.0.1
- name: Cleanup resources
env:
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
CLEANUP_AFTER_HOURS: 24 # Clean up resources created more than this many hours ago.
run: python3 tests/cleanup_resources.py

View File

@@ -1,100 +0,0 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL Advanced"
on:
push:
branches: [ "main" ]
pull_request:
branches: [ "main" ]
schedule:
- cron: '45 0 * * 1'
permissions: {}
jobs:
analyze:
name: Analyze (${{ matrix.language }})
# Runner size impacts CodeQL analysis time. To learn more, please see:
# - https://gh.io/recommended-hardware-resources-for-running-codeql
# - https://gh.io/supported-runners-and-hardware-resources
# - https://gh.io/using-larger-runners (GitHub.com only)
# Consider using larger runners or machines with greater resources for possible analysis time improvements.
runs-on: ubuntu-24.04
permissions:
# required for all workflows
security-events: write
# required to fetch internal or private CodeQL packs
packages: read
# only required for workflows in private repositories
actions: read
contents: read
strategy:
fail-fast: false
matrix:
include:
- language: go
build-mode: manual
- language: python
build-mode: none
# CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
# Use `c-cpp` to analyze code written in C, C++ or both
# Use 'java-kotlin' to analyze code written in Java, Kotlin or both
# Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
# To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
# see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
# If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
# your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
# Add any setup steps before running the `github/codeql-action/init` action.
# This includes steps like installing compilers or runtimes (`actions/setup-node`
# or others). This is typically only required for manual builds.
# - name: Setup runtime (example)
# uses: actions/setup-example@v1
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@4bdb89f48054571735e3792627da6195c57459e2 # v3.31.10
with:
languages: ${{ matrix.language }}
build-mode: ${{ matrix.build-mode }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
# queries: security-extended,security-and-quality
# If the analyze step fails for one of the languages you are analyzing with
# "We were unable to automatically build your code", modify the matrix above
# to set the build mode to "manual" for that language. Then modify this step
# to build your code.
# Command-line programs to run using the OS shell.
# 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
- if: matrix.build-mode == 'manual' && matrix.language == 'go'
shell: bash
run: |
make -C src/runtime
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@4bdb89f48054571735e3792627da6195c57459e2 # v3.31.10
with:
category: "/language:${{matrix.language}}"

View File

@@ -6,28 +6,21 @@ on:
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
env:
error_msg: |+
See the document below for help on formatting commits for the project.
https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md#patch-format
https://github.com/kata-containers/community/blob/master/CONTRIBUTING.md#patch-format
jobs:
commit-message-check:
runs-on: ubuntu-22.04
env:
PR_AUTHOR: ${{ github.event.pull_request.user.login }}
runs-on: ubuntu-latest
name: Commit Message Check
steps:
- name: Get PR Commits
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
id: 'get-pr-commits'
uses: tim-actions/get-pr-commits@c64db31d359214d244884dd68f971a110b29ab83 # v1.2.0
uses: tim-actions/get-pr-commits@v1.2.0
with:
token: ${{ secrets.GITHUB_TOKEN }}
# Filter out revert commits
@@ -35,25 +28,23 @@ jobs:
#
# Revert "<original-subject-line>"
#
# The format of a re-reverted (reapplied) commit is as follows:
#
# Reapply "<original-subject-line>"
filter_out_pattern: '^Revert "|^Reapply "'
filter_out_pattern: '^Revert "'
- name: DCO Check
uses: tim-actions/dco@f2279e6e62d5a7d9115b0cb8e837b777b1b02e21 # v1.1.0
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: tim-actions/dco@2fd0504dc0d27b33f542867c300c60840c6dcb20
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
- name: Commit Body Missing Check
if: ${{ success() || failure() }}
uses: tim-actions/commit-body-check@d2e0e8e1f0332b3281c98867c42a2fbe25ad3f15 # v1.0.2
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && ( success() || failure() ) }}
uses: tim-actions/commit-body-check@v1.0.2
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
- name: Check Subject Line Length
if: ${{ (env.PR_AUTHOR != 'dependabot[bot]') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@d6d9770051dd6460679d1cab1dcaa8cffc5c2bbd # v0.3.1
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^.{0,75}(\n.*)*$'
@@ -61,8 +52,8 @@ jobs:
post_error: ${{ env.error_msg }}
- name: Check Body Line Length
if: ${{ (env.PR_AUTHOR != 'dependabot[bot]') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@d6d9770051dd6460679d1cab1dcaa8cffc5c2bbd # v0.3.1
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
# Notes:
@@ -71,12 +62,8 @@ jobs:
# to be specified at the start of the regex as the action is passed
# the entire commit message.
#
# - This check will pass if the commit message only contains a subject
# line, as other body message properties are enforced elsewhere.
#
# - Body lines *can* be longer than the maximum if they start
# with a non-alphabetic character or if there is no whitespace in
# the line.
# with a non-alphabetic character.
#
# This allows stack traces, log files snippets, emails, long URLs,
# etc to be specified. Some of these naturally "work" as they start
@@ -87,13 +74,24 @@ jobs:
#
# - A SoB comment can be any length (as it is unreasonable to penalise
# people with long names/email addresses :)
pattern: '(^[^\n]+$|^.+(\n([a-zA-Z].{0,150}|[^a-zA-Z\n].*|[^\s\n]*|Signed-off-by:.*|))+$)'
error: 'Body line too long (max 150)'
pattern: '^.+(\n([a-zA-Z].{0,149}|[^a-zA-Z\n].*|Signed-off-by:.*|))+$'
error: 'Body line too long (max 72)'
post_error: ${{ env.error_msg }}
- name: Check Fixes
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '\s*Fixes\s*:?\s*(#\d+|github\.com\/kata-containers\/[a-z-.]*#\d+)|^\s*release\s*:'
flags: 'i'
error: 'No "Fixes" found'
post_error: ${{ env.error_msg }}
one_pass_all_pass: 'true'
- name: Check Subsystem
if: ${{ (env.PR_AUTHOR != 'dependabot[bot]') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@d6d9770051dd6460679d1cab1dcaa8cffc5c2bbd # v0.3.1
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') && ( success() || failure() ) }}
uses: tim-actions/commit-message-checker-with-regex@v0.3.1
with:
commits: ${{ steps.get-pr-commits.outputs.commits }}
pattern: '^[\s\t]*[^:\s\t]+[\s\t]*:'
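Taken together, the checks in this workflow expect a subject line with a subsystem prefix and bounded length, wrapped body lines, a Fixes reference and a DCO sign-off. A commit that would satisfy them can be created as below; the subsystem, wording and issue number are invented purely for illustration:

git commit -s \
  -m 'runtime: handle empty stats gracefully' \
  -m 'Return zeroed cgroup stats instead of panicking when the stats file
cannot be parsed.

Fixes: #1234'

The -s flag appends the Signed-off-by: line that the DCO check looks for, and the Fixes: #1234 trailer matches the pattern used by the Check Fixes step.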

View File

@@ -6,38 +6,20 @@ on:
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Darwin tests
jobs:
test:
name: test
runs-on: macos-latest
strategy:
matrix:
go-version: [1.16.x, 1.17.x]
os: [macos-latest]
runs-on: ${{ matrix.os }}
steps:
- name: Install Protoc
run: |
f=$(mktemp)
curl -sSLo "$f" https://github.com/protocolbuffers/protobuf/releases/download/v28.2/protoc-28.2-osx-aarch_64.zip
mkdir -p "$HOME/.local"
unzip -d "$HOME/.local" "$f"
echo "$HOME/.local/bin" >> "${GITHUB_PATH}"
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- name: Install Go
uses: actions/setup-go@v2
with:
persist-credentials: false
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "${GITHUB_PATH}"
- name: Install Rust
run: ./tests/install_rust.sh
go-version: ${{ matrix.go-version }}
- name: Checkout code
uses: actions/checkout@v2
- name: Build utils
run: ./ci/darwin-test.sh

View File

@@ -1,34 +0,0 @@
on:
schedule:
- cron: '0 23 * * 0'
workflow_dispatch:
permissions: {}
name: Docs URL Alive Check
jobs:
test:
name: test
runs-on: ubuntu-22.04
# don't run this action on forks
if: github.repository_owner == 'kata-containers'
env:
target_branch: ${{ github.base_ref }}
steps:
- name: Set env
run: |
echo "GOPATH=${GITHUB_WORKSPACE}" >> "$GITHUB_ENV"
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "${GITHUB_PATH}"
- name: Docs URL Alive Check
run: |
make docs-url-alive-check

View File

@@ -1,32 +0,0 @@
name: Documentation
on:
push:
branches:
- main
permissions: {}
jobs:
deploy-docs:
name: deploy-docs
permissions:
contents: read
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- uses: actions/configure-pages@983d7736d9b0ae728b81ab479565c72886d7745b # v5.0.0
- uses: actions/checkout@93cb6efe18208431cddfb8368fd83d5badbf9bfd # v5.0.1
with:
persist-credentials: false
- uses: actions/setup-python@a26af69be951a213d495a4c3e4e4022e16d87065 # v5.6.0
with:
python-version: 3.x
- run: pip install zensical
- run: zensical build --clean
- uses: actions/upload-pages-artifact@7b1f4a764d45c48632c6b24a0339c27f5614fb0b # v4.0.0
with:
path: site
- uses: actions/deploy-pages@d6db90164ac5ed86f2b6aed7e0febac5b3c0c03e # v4.0.5
id: deployment

View File

@@ -1,29 +0,0 @@
name: EditorConfig checker
on:
pull_request:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
editorconfig-checker:
name: editorconfig-checker
runs-on: ubuntu-24.04
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Set up editorconfig-checker
uses: editorconfig-checker/action-editorconfig-checker@4b6cd6190d435e7e084fb35e36a096e98506f7b9 # v2.1.0
with:
version: v3.6.1
- name: Run editorconfig-checker
run: editorconfig-checker

View File

@@ -1,55 +0,0 @@
name: Skipper
# This workflow sets various "skip_*" output values that can be used to
# determine what workflows/jobs are expected to be executed. Sample usage:
#
# skipper:
# uses: ./.github/workflows/gatekeeper-skipper.yaml
# with:
# commit-hash: ${{ github.event.pull_request.head.sha }}
# target-branch: ${{ github.event.pull_request.base.ref }}
#
# your-workflow:
# needs: skipper
# if: ${{ needs.skipper.outputs.skip_build != 'yes' }}
on:
workflow_call:
inputs:
commit-hash:
required: true
type: string
target-branch:
required: false
type: string
default: ""
outputs:
skip_build:
value: ${{ jobs.skipper.outputs.skip_build }}
skip_test:
value: ${{ jobs.skipper.outputs.skip_test }}
skip_static:
value: ${{ jobs.skipper.outputs.skip_static }}
permissions: {}
jobs:
skipper:
name: skipper
runs-on: ubuntu-22.04
outputs:
skip_build: ${{ steps.skipper.outputs.skip_build }}
skip_test: ${{ steps.skipper.outputs.skip_test }}
skip_static: ${{ steps.skipper.outputs.skip_static }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- id: skipper
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
run: |
python3 tools/testing/gatekeeper/skips.py | tee -a "$GITHUB_OUTPUT"
shell: /usr/bin/bash -x {0}
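Since the step's stdout is appended directly to $GITHUB_OUTPUT, skips.py is expected to print plain key=value assignments matching the outputs declared at the top of this file. A hypothetical run (the values depend entirely on what the PR touches) might look like:

$ python3 tools/testing/gatekeeper/skips.py
skip_build=no
skip_test=yes
skip_static=no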

View File

@@ -1,55 +0,0 @@
name: Gatekeeper
# Gatekeeper uses the "skips.py" to determine which job names/regexps are
# required for given PR and waits for them to either complete or fail
# reporting the status.
on:
pull_request_target: # zizmor: ignore[dangerous-triggers] See #11332.
types:
- opened
- synchronize
- reopened
- edited
- labeled
- unlabeled
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
gatekeeper:
name: gatekeeper
runs-on: ubuntu-22.04
permissions:
actions: read
contents: read
issues: read
pull-requests: read
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ github.event.pull_request.head.sha }}
fetch-depth: 0
persist-credentials: false
- id: gatekeeper
env:
TARGET_BRANCH: ${{ github.event.pull_request.base.ref }}
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
COMMIT_HASH: ${{ github.event.pull_request.head.sha }}
GH_PR_NUMBER: ${{ github.event.pull_request.number }}
run: |
#!/usr/bin/env bash -x
mapfile -t lines < <(python3 tools/testing/gatekeeper/skips.py -t)
export REQUIRED_JOBS="${lines[0]}"
export REQUIRED_REGEXPS="${lines[1]}"
export REQUIRED_LABELS="${lines[2]}"
echo "REQUIRED_JOBS: $REQUIRED_JOBS"
echo "REQUIRED_REGEXPS: $REQUIRED_REGEXPS"
echo "REQUIRED_LABELS: $REQUIRED_LABELS"
python3 tools/testing/gatekeeper/jobs.py
exit $?
shell: /usr/bin/bash -x {0}

View File

@@ -1,53 +0,0 @@
on:
workflow_call:
name: Govulncheck
permissions: {}
jobs:
govulncheck:
name: govulncheck
runs-on: ubuntu-22.04
strategy:
matrix:
include:
- binary: "kata-runtime"
make_target: "runtime"
- binary: "containerd-shim-kata-v2"
make_target: "containerd-shim-v2"
- binary: "kata-monitor"
make_target: "monitor"
fail-fast: false
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4
with:
fetch-depth: 0
persist-credentials: false
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "${GITHUB_PATH}"
- name: Install govulncheck
run: |
go install golang.org/x/vuln/cmd/govulncheck@latest
echo "${HOME}/go/bin" >> "${GITHUB_PATH}"
- name: Build runtime binaries
run: |
cd src/runtime
make "${MAKE_TARGET}"
env:
MAKE_TARGET: ${{ matrix.make_target }}
SKIP_GO_VERSION_CHECK: "1"
- name: Run govulncheck on ${{ matrix.binary }}
env:
BINARY: ${{ matrix.binary }}
run: |
cd src/runtime
bash ../../tests/govulncheck-runner.sh "./${BINARY}"
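The scan itself is delegated to tests/govulncheck-runner.sh, whose contents are not part of this diff. A minimal local equivalent, assuming the runner simply points govulncheck at the freshly built binary in binary-analysis mode, would be:

go install golang.org/x/vuln/cmd/govulncheck@latest
cd src/runtime && make containerd-shim-v2
govulncheck -mode=binary ./containerd-shim-kata-v2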

83
.github/workflows/kata-deploy-push.yaml vendored Normal file
View File

@@ -0,0 +1,83 @@
name: kata deploy build
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
paths:
- tools/**
- versions.yaml
jobs:
build-asset:
runs-on: ubuntu-latest
strategy:
matrix:
asset:
- kernel
- shim-v2
- qemu
- cloud-hypervisor
- firecracker
- rootfs-image
- rootfs-initrd
steps:
- uses: actions/checkout@v2
- name: Install docker
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
curl -fsSL https://test.docker.com -o test-docker.sh
sh test-docker.sh
- name: Build ${{ matrix.asset }}
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r --preserve=all "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
- name: store-artifact ${{ matrix.asset }}
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/upload-artifact@v2
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: build-asset
steps:
- uses: actions/checkout@v2
- name: get-artifacts
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/download-artifact@v2
with:
name: kata-artifacts
path: build
- name: merge-artifacts
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
make merge-builds
- name: store-artifacts
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/upload-artifact@v2
with:
name: kata-static-tarball
path: kata-static.tar.xz
make-kata-tarball:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: make kata-tarball
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
make kata-tarball
sudo make install-tarball

147
.github/workflows/kata-deploy-test.yaml vendored Normal file
View File

@@ -0,0 +1,147 @@
on:
issue_comment:
types: [created, edited]
name: test-kata-deploy
jobs:
check-comment-and-membership:
runs-on: ubuntu-latest
if: |
github.event.issue.pull_request
&& github.event_name == 'issue_comment'
&& github.event.action == 'created'
&& startsWith(github.event.comment.body, '/test_kata_deploy')
steps:
- name: Check membership
uses: kata-containers/is-organization-member@1.0.1
id: is_organization_member
with:
organization: kata-containers
username: ${{ github.event.comment.user.login }}
token: ${{ secrets.GITHUB_TOKEN }}
- name: Fail if not member
run: |
result=${{ steps.is_organization_member.outputs.result }}
if [ $result == false ]; then
user=${{ github.event.comment.user.login }}
echo Either ${user} is not part of the kata-containers organization
echo or ${user} has its Organization Visibility set to Private at
echo https://github.com/orgs/kata-containers/people?query=${user}
echo
echo Ensure you change your Organization Visibility to Public and
echo trigger the test again.
exit 1
fi
build-asset:
runs-on: ubuntu-latest
needs: check-comment-and-membership
strategy:
matrix:
asset:
- cloud-hypervisor
- firecracker
- kernel
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "reference for PR: " ${ref}
echo "##[set-output name=pr-ref;]${ref}"
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
- name: Install docker
run: |
curl -fsSL https://test.docker.com -o test-docker.sh
sh test-docker.sh
- name: Build ${{ matrix.asset }}
run: |
make "${KATA_ASSET}-tarball"
build_dir=$(readlink -f build)
# store-artifact does not work with symlink
sudo cp -r "${build_dir}" "kata-build"
env:
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v2
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
if-no-files-found: error
create-kata-tarball:
runs-on: ubuntu-latest
needs: build-asset
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "reference for PR: " ${ref}
echo "##[set-output name=pr-ref;]${ref}"
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
- name: get-artifacts
uses: actions/download-artifact@v2
with:
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: kata-static-tarball
path: kata-static.tar.xz
kata-deploy:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: get-PR-ref
id: get-PR-ref
run: |
ref=$(cat $GITHUB_EVENT_PATH | jq -r '.issue.pull_request.url' | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#')
echo "reference for PR: " ${ref}
echo "##[set-output name=pr-ref;]${ref}"
- uses: actions/checkout@v2
with:
ref: ${{ steps.get-PR-ref.outputs.pr-ref }}
- name: get-kata-tarball
uses: actions/download-artifact@v2
with:
name: kata-static-tarball
- name: build-and-push-kata-deploy-ci
id: build-and-push-kata-deploy-ci
run: |
PR_SHA=$(git log --format=format:%H -n1)
mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t quay.io/kata-containers/kata-deploy-ci:$PR_SHA $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
docker push quay.io/kata-containers/kata-deploy-ci:$PR_SHA
mkdir -p packaging/kata-deploy
ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
echo "::set-output name=PKG_SHA::${PR_SHA}"
- name: test-kata-deploy-ci-in-aks
uses: ./packaging/kata-deploy/action
with:
packaging-sha: ${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}}
env:
PKG_SHA: ${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_PASSWORD: ${{ secrets.AZ_PASSWORD }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
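The get-PR-ref steps in this workflow derive a checkout ref from the issue_comment payload rather than from a pull_request event. Walking one hypothetical API URL through the same jq/sed pipeline shows the transformation (the PR number is made up):

url='https://api.github.com/repos/kata-containers/kata-containers/pulls/1234'
echo "$url" | sed 's#^.*\/pulls#refs\/pull#' | sed 's#$#\/merge#'
# -> refs/pull/1234/merge

Note that these steps still rely on the deprecated ::set-output syntax, which predates $GITHUB_OUTPUT.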

View File

@@ -0,0 +1,82 @@
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
name: Move issues to "In progress" in backlog project when referenced by a PR
on:
pull_request_target:
types:
- opened
- reopened
jobs:
move-linked-issues-to-in-progress:
runs-on: ubuntu-latest
steps:
- name: Install hub
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
HUB_ARCH="amd64"
HUB_VER=$(curl -sL "https://api.github.com/repos/github/hub/releases/latest" |\
jq -r .tag_name | sed 's/^v//')
curl -sL \
"https://github.com/github/hub/releases/download/v${HUB_VER}/hub-linux-${HUB_ARCH}-${HUB_VER}.tgz" |\
tar xz --strip-components=2 --wildcards '*/bin/hub' && \
sudo install hub /usr/local/bin
- name: Install hub extension script
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
# Clone into a temporary directory to avoid overwriting
# any existing github directory.
pushd $(mktemp -d) &>/dev/null
git clone --single-branch --depth 1 "https://github.com/kata-containers/.github" && cd .github/scripts
sudo install hub-util.sh /usr/local/bin
popd &>/dev/null
- name: Checkout code to allow hub to communicate with the project
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v2
- name: Move issue to "In progress"
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
env:
GITHUB_TOKEN: ${{ secrets.KATA_GITHUB_ACTIONS_TOKEN }}
run: |
pr=${{ github.event.pull_request.number }}
linked_issue_urls=$(hub-util.sh \
list-issues-for-pr "$pr" |\
grep -v "^\#" |\
cut -d';' -f3 || true)
# PR doesn't have any linked issues
# (it should, but maybe a new user forgot to add a "Fixes: #XXX" commit).
[ -z "$linked_issue_urls" ] && {
echo "::error::No linked issues for PR $pr"
exit 1
}
project_name="Issue backlog"
project_type="org"
project_column="In progress"
for issue_url in $(echo "$linked_issue_urls")
do
issue=$(echo "$issue_url"| awk -F\/ '{print $NF}' || true)
[ -z "$issue" ] && {
echo "::error::Cannot determine issue number from $issue_url for PR $pr"
exit 1
}
# Move the issue to the correct column on the project board
hub-util.sh \
move-issue \
"$issue" \
"$project_name" \
"$project_type" \
"$project_column"
done
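The field extraction above assumes hub-util.sh emits one ';'-separated record per linked issue, with the issue URL in the third field; a purely hypothetical record (the format is assumed here, not taken from the script) and the parsing it implies:
# Hypothetical record:
line="kata-containers;Bug report;https://github.com/kata-containers/kata-containers/issues/1234"
issue_url=$(echo "$line" | cut -d';' -f3)
issue=$(echo "$issue_url" | awk -F/ '{print $NF}')    # -> 1234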


@@ -1,35 +0,0 @@
name: nydus-snapshotter-version-sync
on:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
nydus-snapshotter-version-check:
name: nydus-snapshotter-version-check
runs-on: ubuntu-22.04
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Ensure nydus-snapshotter-version is in sync inside our repo
run: |
dockerfile_version=$(grep "ARG NYDUS_SNAPSHOTTER_VERSION" tools/packaging/kata-deploy/Dockerfile | cut -f2 -d'=')
versions_version=$(yq ".externals.nydus-snapshotter.version | explode(.)" versions.yaml)
if [[ "${dockerfile_version}" != "${versions_version}" ]]; then
echo "nydus-snapshotter version must be the same in the following places: "
echo "- versions.yaml: ${versions_version}"
echo "- tools/packaging/kata-deploy/Dockerfile: ${dockerfile_version}"
exit 1
fi
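The check above simply compares two strings that must agree; hypothetical file contents that would satisfy it (the version value is illustrative only):
# tools/packaging/kata-deploy/Dockerfile:  ARG NYDUS_SNAPSHOTTER_VERSION=v0.13.0
# versions.yaml:                           externals.nydus-snapshotter.version: "v0.13.0"
# Matching strings let the job pass; any difference prints both values and exits 1.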


@@ -1,43 +0,0 @@
# A sample workflow which sets up periodic OSV-Scanner scanning for vulnerabilities,
# in addition to a PR check which fails if new vulnerabilities are introduced.
#
# For more examples and options, including how to ignore specific vulnerabilities,
# see https://google.github.io/osv-scanner/github-action/
name: OSV-Scanner
on:
workflow_dispatch:
pull_request:
branches: [ "main" ]
schedule:
- cron: '0 1 * * 0'
push:
branches: [ "main" ]
permissions: {}
jobs:
scan-scheduled:
permissions:
actions: read # Required to upload SARIF file to CodeQL
contents: read # Read commit contents
security-events: write # Require writing security events to upload SARIF file to security tab
if: ${{ github.event_name == 'push' || github.event_name == 'schedule' || github.event_name == 'workflow_dispatch' }}
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable.yml@b00f71e051ddddc6e46a193c31c8c0bf283bf9e6" # v2.1.0
with:
scan-args: |-
-r
./
scan-pr:
permissions:
actions: read # Required to upload SARIF file to CodeQL
contents: read # Read commit contents
security-events: write # Require writing security events to upload SARIF file to security tab
if: ${{ github.event_name == 'pull_request' }}
uses: "google/osv-scanner-action/.github/workflows/osv-scanner-reusable-pr.yml@b00f71e051ddddc6e46a193c31c8c0bf283bf9e6" # v2.1.0
with:
# Example of specifying custom arguments
scan-args: |-
-r
./


@@ -1,203 +0,0 @@
name: CI | Publish Kata Containers payload
on:
push:
branches:
- main
workflow_dispatch:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
jobs:
build-assets-amd64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
with:
commit-hash: ${{ github.sha }}
push-to-registry: yes
target-branch: ${{ github.ref_name }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-assets-arm64:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-arm64.yaml
with:
commit-hash: ${{ github.sha }}
push-to-registry: yes
target-branch: ${{ github.ref_name }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-assets-s390x:
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/build-kata-static-tarball-s390x.yaml
with:
commit-hash: ${{ github.sha }}
push-to-registry: yes
target-branch: ${{ github.ref_name }}
secrets:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-assets-ppc64le:
permissions:
contents: read
packages: write
uses: ./.github/workflows/build-kata-static-tarball-ppc64le.yaml
with:
commit-hash: ${{ github.sha }}
push-to-registry: yes
target-branch: ${{ github.ref_name }}
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-amd64:
needs: build-assets-amd64
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
commit-hash: ${{ github.sha }}
registry: quay.io
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-amd64
target-branch: ${{ github.ref_name }}
runner: ubuntu-22.04
arch: amd64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-arm64:
needs: build-assets-arm64
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
commit-hash: ${{ github.sha }}
registry: quay.io
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-arm64
target-branch: ${{ github.ref_name }}
runner: ubuntu-24.04-arm
arch: arm64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-s390x:
needs: build-assets-s390x
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
commit-hash: ${{ github.sha }}
registry: quay.io
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-s390x
target-branch: ${{ github.ref_name }}
runner: s390x
arch: s390x
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-kata-deploy-payload-ppc64le:
needs: build-assets-ppc64le
permissions:
contents: read
packages: write
uses: ./.github/workflows/publish-kata-deploy-payload.yaml
with:
commit-hash: ${{ github.sha }}
registry: quay.io
repo: kata-containers/kata-deploy-ci
tag: kata-containers-latest-ppc64le
target-branch: ${{ github.ref_name }}
runner: ubuntu-24.04-ppc64le
arch: ppc64le
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-manifest:
name: publish-manifest
runs-on: ubuntu-22.04
permissions:
contents: read
packages: write
needs: [publish-kata-deploy-payload-amd64, publish-kata-deploy-payload-arm64, publish-kata-deploy-payload-s390x, publish-kata-deploy-payload-ppc64le]
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Push multi-arch manifest
run: |
./tools/packaging/release/release.sh publish-multiarch-manifest
env:
KATA_DEPLOY_IMAGE_TAGS: "kata-containers-latest"
KATA_DEPLOY_REGISTRIES: "quay.io/kata-containers/kata-deploy-ci"
upload-helm-chart-tarball:
name: upload-helm-chart-tarball
needs: publish-manifest
runs-on: ubuntu-22.04
permissions:
packages: write # needed to push the helm chart to ghcr.io
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Install helm
uses: azure/setup-helm@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814 # v4.2.0
id: install
- name: Login to the OCI registries
env:
QUAY_DEPLOYER_USERNAME: ${{ vars.QUAY_DEPLOYER_USERNAME }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
GITHUB_TOKEN: ${{ github.token }}
run: |
echo "${QUAY_DEPLOYER_PASSWORD}" | helm registry login quay.io --username "${QUAY_DEPLOYER_USERNAME}" --password-stdin
echo "${GITHUB_TOKEN}" | helm registry login ghcr.io --username "${GITHUB_ACTOR}" --password-stdin
- name: Push helm chart to the OCI registries
run: |
echo "Adjusting the Chart.yaml and values.yaml"
yq eval '.version = "0.0.0-dev" | .appVersion = "0.0.0-dev"' -i tools/packaging/kata-deploy/helm-chart/kata-deploy/Chart.yaml
yq eval '.image.reference = "quay.io/kata-containers/kata-deploy-ci" | .image.tag = "kata-containers-latest"' -i tools/packaging/kata-deploy/helm-chart/kata-deploy/values.yaml
echo "Generating the chart package"
helm dependencies update tools/packaging/kata-deploy/helm-chart/kata-deploy
helm package tools/packaging/kata-deploy/helm-chart/kata-deploy
echo "Pushing the chart to the OCI registries"
helm push "kata-deploy-0.0.0-dev.tgz" oci://quay.io/kata-containers/kata-deploy-charts
helm push "kata-deploy-0.0.0-dev.tgz" oci://ghcr.io/kata-containers/kata-deploy-charts


@@ -1,108 +0,0 @@
name: CI | Publish kata-deploy payload
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
runner:
default: 'ubuntu-22.04'
description: The runner to execute the workflow on. Defaults to 'ubuntu-22.04'.
required: false
type: string
arch:
description: The arch of the tarball.
required: true
type: string
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
jobs:
kata-payload:
name: kata-payload
permissions:
contents: read
packages: write
runs-on: ${{ inputs.runner }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tarball for ${{ inputs.arch }}
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-${{ inputs.arch}}${{ inputs.tarball-suffix }}
- name: Login to Kata Containers quay.io
if: ${{ inputs.registry == 'quay.io' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Login to Kata Containers ghcr.io
if: ${{ inputs.registry == 'ghcr.io' }}
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: build-and-push-kata-payload for ${{ inputs.arch }}
id: build-and-push-kata-payload
env:
REGISTRY: ${{ inputs.registry }}
REPO: ${{ inputs.repo }}
TAG: ${{ inputs.tag }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)/kata-static.tar.zst" \
"${REGISTRY}/${REPO}" \
"${TAG}"


@@ -1,82 +0,0 @@
name: Publish Kata release artifacts for amd64
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-kata-static-tarball-amd64:
uses: ./.github/workflows/build-kata-static-tarball-amd64.yaml
with:
push-to-registry: yes
stage: release
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
permissions:
contents: read
packages: write
id-token: write
attestations: write
kata-deploy:
name: kata-deploy
needs: build-kata-static-tarball-amd64
permissions:
contents: read
packages: write
runs-on: ubuntu-22.04
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64
- name: build-and-push-kata-deploy-ci-amd64
id: build-and-push-kata-deploy-ci-amd64
env:
TARGET_ARCH: ${{ inputs.target-arch }}
run: |
# We need this trick because $GITHUB_REF comes in the format
# "refs/tags/<tag>"
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)
if [ "${tag}" = "main" ]; then
tag=$(./tools/packaging/release/release.sh release-version)
tags=("${tag}" "latest")
else
tags=("${tag}")
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done
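To make the branching above concrete (the version numbers are illustrative, not real releases):
# GITHUB_REF="refs/tags/3.6.0" -> tag=3.6.0 -> tags=(3.6.0)
#   pushes ghcr.io/kata-containers/kata-deploy:3.6.0-amd64 and quay.io/kata-containers/kata-deploy:3.6.0-amd64
# GITHUB_REF="refs/heads/main" -> tag=main -> tag=$(release.sh release-version), e.g. 3.7.0
#   -> tags=(3.7.0 latest), so both 3.7.0-amd64 and latest-amd64 are pushed to each registry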


@@ -1,82 +0,0 @@
name: Publish Kata release artifacts for arm64
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
KBUILD_SIGN_PIN:
required: true
permissions: {}
jobs:
build-kata-static-tarball-arm64:
uses: ./.github/workflows/build-kata-static-tarball-arm64.yaml
with:
push-to-registry: yes
stage: release
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
permissions:
contents: read
packages: write
id-token: write
attestations: write
kata-deploy:
name: kata-deploy
needs: build-kata-static-tarball-arm64
permissions:
contents: read
packages: write
runs-on: ubuntu-24.04-arm
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-arm64
- name: build-and-push-kata-deploy-ci-arm64
id: build-and-push-kata-deploy-ci-arm64
env:
TARGET_ARCH: ${{ inputs.target-arch }}
run: |
# We need this trick because $GITHUB_REF comes in the format
# "refs/tags/<tag>"
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)
if [ "${tag}" = "main" ]; then
tag=$(./tools/packaging/release/release.sh release-version)
tags=("${tag}" "latest")
else
tags=("${tag}")
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done


@@ -1,79 +0,0 @@
name: Publish Kata release artifacts for ppc64le
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
secrets:
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
jobs:
build-kata-static-tarball-ppc64le:
uses: ./.github/workflows/build-kata-static-tarball-ppc64le.yaml
with:
push-to-registry: yes
stage: release
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
permissions:
contents: read
packages: write
id-token: write
attestations: write
kata-deploy:
name: kata-deploy
needs: build-kata-static-tarball-ppc64le
permissions:
contents: read
packages: write
runs-on: ubuntu-24.04-ppc64le
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-ppc64le
- name: build-and-push-kata-deploy-ci-ppc64le
id: build-and-push-kata-deploy-ci-ppc64le
env:
TARGET_ARCH: ${{ inputs.target-arch }}
run: |
# We need this trick because $GITHUB_REF comes in the format
# "refs/tags/<tag>"
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)
if [ "${tag}" = "main" ]; then
tag=$(./tools/packaging/release/release.sh release-version)
tags=("${tag}" "latest")
else
tags=("${tag}")
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done


@@ -1,83 +0,0 @@
name: Publish Kata release artifacts for s390x
on:
workflow_call:
inputs:
target-arch:
required: true
type: string
secrets:
CI_HKD_PATH:
required: true
QUAY_DEPLOYER_PASSWORD:
required: true
permissions: {}
jobs:
build-kata-static-tarball-s390x:
uses: ./.github/workflows/build-kata-static-tarball-s390x.yaml
with:
push-to-registry: yes
stage: release
secrets:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
permissions:
contents: read
packages: write
id-token: write
attestations: write
kata-deploy:
name: kata-deploy
needs: build-kata-static-tarball-s390x
permissions:
contents: read
packages: write
runs-on: ubuntu-24.04-s390x
steps:
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x
- name: build-and-push-kata-deploy-ci-s390x
id: build-and-push-kata-deploy-ci-s390x
env:
TARGET_ARCH: ${{ inputs.target-arch }}
run: |
# We need this trick because $GITHUB_REF comes in the format
# "refs/tags/<tag>"
tag=$(echo "$GITHUB_REF" | cut -d/ -f3-)
if [ "${tag}" = "main" ]; then
tag=$(./tools/packaging/release/release.sh release-version)
tags=("${tag}" "latest")
else
tags=("${tag}")
fi
for tag in "${tags[@]}"; do
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "ghcr.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
"$(pwd)"/kata-static.tar.zst "quay.io/kata-containers/kata-deploy" \
"${tag}-${TARGET_ARCH}"
done


@@ -1,309 +1,177 @@
name: Release Kata Containers
name: Publish Kata 2.x release artifacts
on:
workflow_dispatch
permissions: {}
push:
tags:
- '2.*'
jobs:
release:
name: release
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release create` command
build-asset:
runs-on: ubuntu-latest
strategy:
matrix:
asset:
- cloud-hypervisor
- firecracker
- kernel
- qemu
- rootfs-image
- rootfs-initrd
- shim-v2
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Create a new release
- uses: actions/checkout@v2
- name: Install docker
run: |
./tools/packaging/release/release.sh create-new-release
curl -fsSL https://test.docker.com -o test-docker.sh
sh test-docker.sh
- name: Build ${{ matrix.asset }}
run: |
./tools/packaging/kata-deploy/local-build/kata-deploy-copy-yq-installer.sh
./tools/packaging/kata-deploy/local-build/kata-deploy-binaries-in-docker.sh --build="${KATA_ASSET}"
build_dir=$(readlink -f build)
# store-artifact does not work with symlinks
sudo cp -r "${build_dir}" "kata-build"
env:
GH_TOKEN: ${{ github.token }}
KATA_ASSET: ${{ matrix.asset }}
TAR_OUTPUT: ${{ matrix.asset }}.tar.gz
build-and-push-assets-amd64:
needs: release
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/release-amd64.yaml
with:
target-arch: amd64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
- name: store-artifact ${{ matrix.asset }}
uses: actions/upload-artifact@v2
with:
name: kata-artifacts
path: kata-build/kata-static-${{ matrix.asset }}.tar.xz
if-no-files-found: error
build-and-push-assets-arm64:
needs: release
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/release-arm64.yaml
with:
target-arch: arm64
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
KBUILD_SIGN_PIN: ${{ secrets.KBUILD_SIGN_PIN }}
build-and-push-assets-s390x:
needs: release
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/release-s390x.yaml
with:
target-arch: s390x
secrets:
CI_HKD_PATH: ${{ secrets.CI_HKD_PATH }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
build-and-push-assets-ppc64le:
needs: release
permissions:
contents: read
packages: write
id-token: write
attestations: write
uses: ./.github/workflows/release-ppc64le.yaml
with:
target-arch: ppc64le
secrets:
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
publish-multi-arch-images:
name: publish-multi-arch-images
runs-on: ubuntu-22.04
needs: [build-and-push-assets-amd64, build-and-push-assets-arm64, build-and-push-assets-s390x, build-and-push-assets-ppc64le]
permissions:
contents: write # needed for the `gh release` commands
packages: write # needed to push the multi-arch manifest to ghcr.io
create-kata-tarball:
runs-on: ubuntu-latest
needs: build-asset
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v2
- name: get-artifacts
uses: actions/download-artifact@v2
with:
persist-credentials: false
- name: Login to Kata Containers ghcr.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Kata Containers quay.io
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ vars.QUAY_DEPLOYER_USERNAME }}
password: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
- name: Get the image tags
name: kata-artifacts
path: kata-artifacts
- name: merge-artifacts
run: |
release_version=$(./tools/packaging/release/release.sh release-version)
echo "KATA_DEPLOY_IMAGE_TAGS=$release_version latest" >> "$GITHUB_ENV"
./tools/packaging/kata-deploy/local-build/kata-deploy-merge-builds.sh kata-artifacts
- name: store-artifacts
uses: actions/upload-artifact@v2
with:
name: kata-static-tarball
path: kata-static.tar.xz
- name: Publish multi-arch manifest on quay.io & ghcr.io
run: |
./tools/packaging/release/release.sh publish-multiarch-manifest
env:
KATA_DEPLOY_REGISTRIES: "quay.io/kata-containers/kata-deploy ghcr.io/kata-containers/kata-deploy"
upload-multi-arch-static-tarball:
name: upload-multi-arch-static-tarball
needs: [build-and-push-assets-amd64, build-and-push-assets-arm64, build-and-push-assets-s390x, build-and-push-assets-ppc64le]
permissions:
contents: write # needed for the `gh release` commands
runs-on: ubuntu-22.04
kata-deploy:
needs: create-kata-tarball
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v2
- name: get-kata-tarball
uses: actions/download-artifact@v2
with:
persist-credentials: false
- name: Set KATA_STATIC_TARBALL env var
name: kata-static-tarball
- name: build-and-push-kata-deploy-ci
id: build-and-push-kata-deploy-ci
run: |
tarball=$(pwd)/kata-static.tar.zst
echo "KATA_STATIC_TARBALL=${tarball}" >> "$GITHUB_ENV"
- name: Download amd64 artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
pushd $GITHUB_WORKSPACE
git checkout $tag
pkg_sha=$(git rev-parse HEAD)
popd
mv kata-static.tar.xz $GITHUB_WORKSPACE/tools/packaging/kata-deploy/kata-static.tar.xz
docker build --build-arg KATA_ARTIFACTS=kata-static.tar.xz -t katadocker/kata-deploy-ci:$pkg_sha -t quay.io/kata-containers/kata-deploy-ci:$pkg_sha $GITHUB_WORKSPACE/tools/packaging/kata-deploy
docker login -u ${{ secrets.DOCKER_USERNAME }} -p ${{ secrets.DOCKER_PASSWORD }}
docker push katadocker/kata-deploy-ci:$pkg_sha
docker login -u ${{ secrets.QUAY_DEPLOYER_USERNAME }} -p ${{ secrets.QUAY_DEPLOYER_PASSWORD }} quay.io
docker push quay.io/kata-containers/kata-deploy-ci:$pkg_sha
mkdir -p packaging/kata-deploy
ln -s $GITHUB_WORKSPACE/tools/packaging/kata-deploy/action packaging/kata-deploy/action
echo "::set-output name=PKG_SHA::${pkg_sha}"
- name: test-kata-deploy-ci-in-aks
uses: ./packaging/kata-deploy/action
with:
name: kata-static-tarball-amd64
- name: Upload amd64 static tarball to GitHub
run: |
./tools/packaging/release/release.sh upload-kata-static-tarball
packaging-sha: ${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}}
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: amd64
- name: Download arm64 artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-arm64
- name: Upload arm64 static tarball to GitHub
PKG_SHA: ${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}}
AZ_APPID: ${{ secrets.AZ_APPID }}
AZ_PASSWORD: ${{ secrets.AZ_PASSWORD }}
AZ_SUBSCRIPTION_ID: ${{ secrets.AZ_SUBSCRIPTION_ID }}
AZ_TENANT_ID: ${{ secrets.AZ_TENANT_ID }}
- name: push-tarball
run: |
./tools/packaging/release/release.sh upload-kata-static-tarball
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: arm64
# tag the container image we created and push to DockerHub
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tags=($tag)
tags+=($([[ "$tag" =~ "alpha"|"rc" ]] && echo "latest" || echo "stable"))
for tag in ${tags[@]}; do \
docker tag katadocker/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} katadocker/kata-deploy:${tag} && \
docker tag quay.io/kata-containers/kata-deploy-ci:${{steps.build-and-push-kata-deploy-ci.outputs.PKG_SHA}} quay.io/kata-containers/kata-deploy:${tag} && \
docker push katadocker/kata-deploy:${tag} && \
docker push quay.io/kata-containers/kata-deploy:${tag}; \
done
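The regex in the loop setup above decides the second tag from whether the pushed tag looks like a pre-release; for instance (tag values illustrative):
# tag="2.5.0-rc0" -> matches "rc" -> tags=(2.5.0-rc0 latest)
# tag="2.4.1"     -> no match     -> tags=(2.4.1 stable)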
- name: Download s390x artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-s390x
- name: Upload s390x static tarball to GitHub
run: |
./tools/packaging/release/release.sh upload-kata-static-tarball
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: s390x
- name: Download ppc64le artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-ppc64le
- name: Upload ppc64le static tarball to GitHub
run: |
./tools/packaging/release/release.sh upload-kata-static-tarball
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: ppc64le
- name: Set KATA_TOOLS_STATIC_TARBALL env var
run: |
tarball=$(pwd)/kata-tools-static.tar.zst
echo "KATA_TOOLS_STATIC_TARBALL=${tarball}" >> "$GITHUB_ENV"
- name: Download amd64 tools artifacts
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64
- name: Upload amd64 static tarball tools to GitHub
run: |
./tools/packaging/release/release.sh upload-kata-tools-static-tarball
env:
GH_TOKEN: ${{ github.token }}
ARCHITECTURE: amd64
upload-versions-yaml:
name: upload-versions-yaml
needs: release
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release` commands
upload-static-tarball:
needs: kata-deploy
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
- uses: actions/checkout@v2
- name: download-artifacts
uses: actions/download-artifact@v2
with:
persist-credentials: false
- name: Upload versions.yaml to GitHub
name: kata-static-tarball
- name: install hub
run: |
./tools/packaging/release/release.sh upload-versions-yaml-file
env:
GH_TOKEN: ${{ github.token }}
HUB_VER=$(curl -s "https://api.github.com/repos/github/hub/releases/latest" | jq -r .tag_name | sed 's/^v//')
wget -q -O- https://github.com/github/hub/releases/download/v$HUB_VER/hub-linux-amd64-$HUB_VER.tgz | \
tar xz --strip-components=2 --wildcards '*/bin/hub' && sudo mv hub /usr/local/bin/hub
- name: push static tarball to github
run: |
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="kata-static-$tag-x86_64.tar.xz"
mv kata-static.tar.xz "$GITHUB_WORKSPACE/${tarball}"
pushd $GITHUB_WORKSPACE
echo "uploading asset '${tarball}' for tag: ${tag}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd
upload-cargo-vendored-tarball:
name: upload-cargo-vendored-tarball
needs: release
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release` commands
needs: upload-static-tarball
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Generate and upload vendored code tarball
- uses: actions/checkout@v2
- name: generate-and-upload-tarball
run: |
./tools/packaging/release/release.sh upload-vendored-code-tarball
env:
GH_TOKEN: ${{ github.token }}
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
tarball="kata-containers-$tag-vendor.tar.gz"
pushd $GITHUB_WORKSPACE
bash -c "tools/packaging/release/generate_vendor.sh ${tarball}"
GITHUB_TOKEN=${{ secrets.GIT_UPLOAD_TOKEN }} hub release edit -m "" -a "${tarball}" "${tag}"
popd
upload-libseccomp-tarball:
name: upload-libseccomp-tarball
needs: release
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release` commands
needs: upload-cargo-vendored-tarball
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Download libseccomp tarball and upload it to GitHub
run: |
./tools/packaging/release/release.sh upload-libseccomp-tarball
- uses: actions/checkout@v2
- name: download-and-upload-tarball
env:
GH_TOKEN: ${{ github.token }}
upload-helm-chart-tarball:
name: upload-helm-chart-tarball
needs: release
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release` commands
packages: write # needed to push the helm chart to ghcr.io
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Install helm
uses: azure/setup-helm@fe7b79cd5ee1e45176fcad797de68ecaf3ca4814 # v4.2.0
id: install
- name: Generate and upload helm chart tarball
GITHUB_TOKEN: ${{ secrets.GIT_UPLOAD_TOKEN }}
GOPATH: ${HOME}/go
run: |
./tools/packaging/release/release.sh upload-helm-chart-tarball
env:
GH_TOKEN: ${{ github.token }}
- name: Login to the OCI registries
env:
QUAY_DEPLOYER_USERNAME: ${{ vars.QUAY_DEPLOYER_USERNAME }}
QUAY_DEPLOYER_PASSWORD: ${{ secrets.QUAY_DEPLOYER_PASSWORD }}
GITHUB_TOKEN: ${{ github.token }}
run: |
echo "${QUAY_DEPLOYER_PASSWORD}" | helm registry login quay.io --username "${QUAY_DEPLOYER_USERNAME}" --password-stdin
echo "${GITHUB_TOKEN}" | helm registry login ghcr.io --username "${GITHUB_ACTOR}" --password-stdin
- name: Push helm chart to the OCI registries
run: |
release_version=$(./tools/packaging/release/release.sh release-version)
helm push "kata-deploy-${release_version}.tgz" oci://quay.io/kata-containers/kata-deploy-charts
helm push "kata-deploy-${release_version}.tgz" oci://ghcr.io/kata-containers/kata-deploy-charts
publish-release:
name: publish-release
needs: [ build-and-push-assets-amd64, build-and-push-assets-arm64, build-and-push-assets-s390x, build-and-push-assets-ppc64le, publish-multi-arch-images, upload-multi-arch-static-tarball, upload-versions-yaml, upload-cargo-vendored-tarball, upload-libseccomp-tarball ]
runs-on: ubuntu-22.04
permissions:
contents: write # needed for the `gh release` commands
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: Publish a release
run: |
./tools/packaging/release/release.sh publish-release
env:
GH_TOKEN: ${{ github.token }}
pushd $GITHUB_WORKSPACE
./ci/install_yq.sh
tag=$(echo $GITHUB_REF | cut -d/ -f3-)
versions_yaml="versions.yaml"
version=$(${GOPATH}/bin/yq read ${versions_yaml} "externals.libseccomp.version")
repo_url=$(${GOPATH}/bin/yq read ${versions_yaml} "externals.libseccomp.url")
download_url="${repo_url}/releases/download/v${version}"
tarball="libseccomp-${version}.tar.gz"
asc="${tarball}.asc"
curl -sSLO "${download_url}/${tarball}"
curl -sSLO "${download_url}/${asc}"
# "-m" option should be empty to re-use the existing release title
# without opening a text editor.
# For the details, check https://hub.github.com/hub-release.1.html.
hub release edit -m "" -a "${tarball}" "${tag}"
hub release edit -m "" -a "${asc}" "${tag}"
popd
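The yq lookups above assemble a GitHub release download URL from versions.yaml; with illustrative values (the real version and repository URL come from versions.yaml, not from this page):
# version=2.5.4, repo_url=https://github.com/seccomp/libseccomp
# download_url=https://github.com/seccomp/libseccomp/releases/download/v2.5.4
# tarball=libseccomp-2.5.4.tar.gz, asc=libseccomp-2.5.4.tar.gz.asc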


@@ -0,0 +1,54 @@
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
name: Ensure PR has required porting labels
on:
pull_request_target:
types:
- opened
- reopened
- labeled
- unlabeled
branches:
- main
jobs:
check-pr-porting-labels:
runs-on: ubuntu-latest
steps:
- name: Install hub
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
HUB_ARCH="amd64"
HUB_VER=$(curl -sL "https://api.github.com/repos/github/hub/releases/latest" |\
jq -r .tag_name | sed 's/^v//')
curl -sL \
"https://github.com/github/hub/releases/download/v${HUB_VER}/hub-linux-${HUB_ARCH}-${HUB_VER}.tgz" |\
tar xz --strip-components=2 --wildcards '*/bin/hub' && \
sudo install hub /usr/local/bin
- name: Checkout code to allow hub to communicate with the project
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v2
- name: Install porting checker script
run: |
# Clone into a temporary directory to avoid overwriting
# any existing github directory.
pushd $(mktemp -d) &>/dev/null
git clone --single-branch --depth 1 "https://github.com/kata-containers/.github" && cd .github/scripts
sudo install pr-porting-checks.sh /usr/local/bin
popd &>/dev/null
- name: Stop PR being merged unless it has a correct set of porting labels
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
env:
GITHUB_TOKEN: ${{ secrets.KATA_GITHUB_ACTIONS_TOKEN }}
run: |
pr=${{ github.event.number }}
repo=${{ github.repository }}
pr-porting-checks.sh "$pr" "$repo"


@@ -1,73 +0,0 @@
name: CI | Run cri-containerd tests
permissions: {}
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
runner:
description: The runner to execute the workflow on.
required: true
type: string
arch:
description: The arch of the tarball.
required: true
type: string
containerd_version:
description: The version of containerd for testing.
required: true
type: string
vmm:
description: The kata hypervisor for testing.
required: true
type: string
jobs:
run-cri-containerd:
name: run-cri-containerd-${{ inputs.arch }} (${{ inputs.containerd_version }}, ${{ inputs.vmm }})
runs-on: ${{ inputs.runner }}
env:
CONTAINERD_VERSION: ${{ inputs.containerd_version }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ inputs.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
timeout-minutes: 15
run: bash tests/integration/cri-containerd/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball for ${{ inputs.arch }}
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-${{ inputs.arch }}${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/integration/cri-containerd/gha-run.sh install-kata kata-artifacts
- name: Run cri-containerd tests for ${{ inputs.arch }}
timeout-minutes: 10
run: bash tests/integration/cri-containerd/gha-run.sh run


@@ -1,159 +0,0 @@
name: CI | Run kubernetes tests on AKS
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
permissions: {}
jobs:
run-k8s-tests:
name: run-k8s-tests
strategy:
fail-fast: false
matrix:
host_os:
- ubuntu
vmm:
- clh
- dragonball
- qemu
- qemu-runtime-rs
- cloud-hypervisor
instance-type:
- small
- normal
include:
- host_os: cbl-mariner
vmm: clh
instance-type: small
genpolicy-pull-method: oci-distribution
- host_os: cbl-mariner
vmm: clh
instance-type: small
genpolicy-pull-method: containerd
- host_os: cbl-mariner
vmm: clh
instance-type: normal
runs-on: ubuntu-22.04
permissions:
contents: read
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HOST_OS: ${{ matrix.host_os }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
K8S_TEST_HOST_TYPE: ${{ matrix.instance-type }}
GENPOLICY_PULL_METHOD: ${{ matrix.genpolicy-pull-method }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Download Azure CLI
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Create AKS cluster
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
with:
timeout_minutes: 15
max_attempts: 20
retry_on: error
retry_wait_seconds: 10
command: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Install `kubectl`
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
- name: Run tests
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Delete AKS cluster
if: always()
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster


@@ -1,91 +0,0 @@
name: CI | Run kubernetes tests on arm64
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-k8s-tests-on-arm64:
name: run-k8s-tests-on-arm64
strategy:
fail-fast: false
matrix:
vmm:
- qemu
- qemu-runtime-rs
k8s:
- kubeadm
runs-on: arm64-k8s
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
K8S_TEST_HOST_TYPE: all
TARGET_ARCH: "aarch64"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: k8s-tests-${{ matrix.vmm }}-${{ matrix.k8s }}-${{ inputs.tag }}
path: /tmp/artifacts
retention-days: 1
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup


@@ -1,131 +0,0 @@
name: CI | Run NVIDIA GPU kubernetes tests on amd64
on:
workflow_call:
inputs:
tarball-suffix:
required: true
type: string
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
NGC_API_KEY:
required: true
permissions: {}
jobs:
run-nvidia-gpu-tests-on-amd64:
name: run-${{ matrix.environment.name }}-tests-on-amd64
strategy:
fail-fast: false
matrix:
environment: [
{ name: nvidia-gpu, vmm: qemu-nvidia-gpu, runner: amd64-nvidia-a100 },
{ name: nvidia-gpu-snp, vmm: qemu-nvidia-gpu-snp, runner: amd64-nvidia-h100-snp },
]
runs-on: ${{ matrix.environment.runner }}
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.environment.vmm }}
KUBERNETES: kubeadm
KBS: ${{ matrix.environment.name == 'nvidia-gpu-snp' && 'true' || 'false' }}
K8S_TEST_HOST_TYPE: baremetal
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Uninstall previous `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh uninstall-kbs-client
- name: Deploy CoCo KBS
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
env:
NVIDIA_VERIFIER_MODE: remote
KBS_INGRESS: nodeport
- name: Install `kbs-client`
if: matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Run tests ${{ matrix.environment.vmm }}
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-nv-tests
env:
NGC_API_KEY: ${{ secrets.NGC_API_KEY }}
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Collect artifacts ${{ matrix.environment.vmm }}
if: always()
run: bash tests/integration/kubernetes/gha-run.sh collect-artifacts
continue-on-error: true
- name: Archive artifacts ${{ matrix.environment.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: k8s-tests-${{ matrix.environment.vmm }}-kubeadm-${{ inputs.tag }}
path: /tmp/artifacts
retention-days: 1
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always() && matrix.environment.name != 'nvidia-gpu'
timeout-minutes: 10
run: |
bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs


@@ -1,81 +0,0 @@
name: CI | Run kubernetes tests on Power(ppc64le)
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-k8s-tests:
name: run-k8s-tests
strategy:
fail-fast: false
matrix:
vmm:
- qemu
k8s:
- kubeadm
runs-on: ppc64le-k8s
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
TARGET_ARCH: "ppc64le"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install golang
run: |
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
- name: Prepare the runner for k8s test suite
run: bash "${HOME}/scripts/k8s_cluster_prepare.sh"
- name: Check if cluster is healthy to run the tests
run: bash "${HOME}/scripts/k8s_cluster_check.sh"
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-kubeadm
- name: Run tests
timeout-minutes: 30
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests


@@ -1,149 +0,0 @@
name: CI | Run kubernetes tests on IBM Cloud Z virtual server instance (zVSI)
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
permissions: {}
jobs:
run-k8s-tests:
name: run-k8s-tests
strategy:
fail-fast: false
matrix:
snapshotter:
- overlayfs
- devmapper
- nydus
vmm:
- qemu
- qemu-runtime-rs
- qemu-coco-dev
k8s:
- kubeadm
include:
- snapshotter: devmapper
pull-type: default
deploy-cmd: configure-snapshotter
- snapshotter: nydus
pull-type: guest-pull
deploy-cmd: deploy-snapshotter
exclude:
- snapshotter: overlayfs
vmm: qemu
- snapshotter: overlayfs
vmm: qemu-coco-dev
- snapshotter: devmapper
vmm: qemu-runtime-rs
- snapshotter: devmapper
vmm: qemu-coco-dev
- snapshotter: nydus
vmm: qemu
- snapshotter: nydus
vmm: qemu-runtime-rs
runs-on: s390x-large
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HOST_OS: "ubuntu"
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
PULL_TYPE: ${{ matrix.pull-type }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
TARGET_ARCH: "s390x"
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Set SNAPSHOTTER to empty if overlayfs
run: echo "SNAPSHOTTER=" >> "$GITHUB_ENV"
if: ${{ matrix.snapshotter == 'overlayfs' }}
- name: Set KBS and KBS_INGRESS if qemu-coco-dev
run: |
echo "KBS=true" >> "$GITHUB_ENV"
echo "KBS_INGRESS=nodeport" >> "$GITHUB_ENV"
if: ${{ matrix.vmm == 'qemu-coco-dev' }}
# qemu-runtime-rs only works with overlayfs
# See: https://github.com/kata-containers/kata-containers/issues/10066
- name: Configure the ${{ matrix.snapshotter }} snapshotter
env:
DEPLOY_CMD: ${{ matrix.deploy-cmd }}
run: bash tests/integration/kubernetes/gha-run.sh "${DEPLOY_CMD}"
if: ${{ matrix.snapshotter != 'overlayfs' }}
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-zvsi
- name: Uninstall previous `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh uninstall-kbs-client
if: ${{ matrix.vmm == 'qemu-coco-dev' }}
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
if: ${{ matrix.vmm == 'qemu-coco-dev' }}
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
if: ${{ matrix.vmm == 'qemu-coco-dev' }}
- name: Run tests
timeout-minutes: 60
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh cleanup-zvsi
- name: Delete CoCo KBS
if: always()
timeout-minutes: 10
run: |
if [ "${KBS}" == "true" ]; then
bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs
fi
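After the exclude rules in the matrix above, only three snapshotter/VMM pairs actually run on the s390x-large runner (derived from the matrix definition, not from a CI run):
# overlayfs + qemu-runtime-rs  (SNAPSHOTTER reset to "" by the overlayfs step)
# devmapper + qemu             (pull-type: default,    deploy-cmd: configure-snapshotter)
# nydus     + qemu-coco-dev    (pull-type: guest-pull, deploy-cmd: deploy-snapshotter, KBS/KBS_INGRESS set)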


@@ -1,157 +0,0 @@
name: CI | Run Kata CoCo k8s Stability Tests
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
tarball-suffix:
required: false
type: string
secrets:
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
AUTHENTICATED_IMAGE_PASSWORD:
required: true
permissions: {}
jobs:
# Generate jobs for testing CoCo on non-TEE environments
run-stability-k8s-tests-coco-nontee:
name: run-stability-k8s-tests-coco-nontee
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
- guest-pull
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
# Some tests rely on this variable to decide whether they run
KBS: "true"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: "aks"
KUBERNETES: "vanilla"
PULL_TYPE: ${{ matrix.pull-type }}
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Create AKS cluster
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
with:
timeout_minutes: 15
max_attempts: 20
retry_on: error
retry_wait_seconds: 10
command: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Install `kubectl`
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Snapshotter
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-snapshotter
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Run stability tests
timeout-minutes: 300
run: bash tests/stability/gha-stability-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Delete AKS cluster
if: always()
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster

@@ -1,365 +0,0 @@
name: CI | Run kata coco tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
AUTHENTICATED_IMAGE_PASSWORD:
required: true
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
ITA_KEY:
required: true
permissions: {}
jobs:
run-k8s-tests-on-tee:
name: run-k8s-tests-on-tee
strategy:
fail-fast: false
matrix:
include:
- runner: tdx
vmm: qemu-tdx
- runner: sev-snp
vmm: qemu-snp
runs-on: ${{ matrix.runner }}
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
KBS: "true"
K8S_TEST_HOST_TYPE: "baremetal"
KBS_INGRESS: "nodeport"
SNAPSHOTTER: "nydus"
PULL_TYPE: "guest-pull"
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
GH_ITA_KEY: ${{ secrets.ITA_KEY }}
AUTO_GENERATE_POLICY: "yes"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Uninstall previous `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh uninstall-kbs-client
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
env:
ITA_KEY: ${{ env.KATA_HYPERVISOR == 'qemu-tdx' && env.GH_ITA_KEY || '' }}
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 100
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Delete kata-deploy
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh cleanup
- name: Delete CoCo KBS
if: always()
timeout-minutes: 10
run: |
[[ "${KATA_HYPERVISOR}" == "qemu-tdx" ]] && echo "ITA_KEY=${GH_ITA_KEY}" >> "${GITHUB_ENV}"
bash tests/integration/kubernetes/gha-run.sh delete-coco-kbs
- name: Delete CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh delete-csi-driver
# Generate jobs for testing CoCo on non-TEE environments
run-k8s-tests-coco-nontee:
name: run-k8s-tests-coco-nontee
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
- qemu-coco-dev-runtime-rs
snapshotter:
- nydus
pull-type:
- guest-pull
include:
- pull-type: experimental-force-guest-pull
vmm: qemu-coco-dev
snapshotter: ""
runs-on: ubuntu-22.04
permissions:
id-token: write # Used for OIDC access to log into Azure
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
# Some tests rely on that variable to run (or not)
KBS: "true"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: "aks"
KUBERNETES: "vanilla"
PULL_TYPE: ${{ matrix.pull-type }}
AUTHENTICATED_IMAGE_USER: ${{ vars.AUTHENTICATED_IMAGE_USER }}
AUTHENTICATED_IMAGE_PASSWORD: ${{ secrets.AUTHENTICATED_IMAGE_PASSWORD }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
EXPERIMENTAL_FORCE_GUEST_PULL: ${{ matrix.pull-type == 'experimental-force-guest-pull' && matrix.vmm || '' }}
# Caution: the ingress controller currently used to expose the KBS service
# requires many vCPUs, leaving only a few for the tests. Depending on the
# host type chosen, this can result in the creation of a cluster with
# insufficient resources.
K8S_TEST_HOST_TYPE: "all"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Create AKS cluster
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
with:
timeout_minutes: 15
max_attempts: 20
retry_on: error
retry_wait_seconds: 10
command: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Install `kubectl`
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/integration/kubernetes/gha-run.sh get-cluster-credentials
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-aks
env:
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: ${{ env.SNAPSHOTTER == 'nydus' }}
AUTO_GENERATE_POLICY: ${{ env.PULL_TYPE == 'experimental-force-guest-pull' && 'no' || 'yes' }}
- name: Deploy CoCo KBS
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-coco-kbs
- name: Install `kbs-client`
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh install-kbs-client
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Delete AKS cluster
if: always()
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh delete-cluster
# Generate jobs for testing CoCo on non-TEE environments with erofs-snapshotter
run-k8s-tests-coco-nontee-with-erofs-snapshotter:
name: run-k8s-tests-coco-nontee-with-erofs-snapshotter
strategy:
fail-fast: false
matrix:
vmm:
- qemu-coco-dev
snapshotter:
- erofs
pull-type:
- default
runs-on: ubuntu-24.04
environment: ci
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
# Some tests rely on that variable to run (or not)
KBS: "false"
# Set the KBS ingress handler (empty string disables handling)
KBS_INGRESS: ""
KUBERNETES: "vanilla"
CONTAINER_ENGINE: "containerd"
CONTAINER_ENGINE_VERSION: "v2.2"
PULL_TYPE: ${{ matrix.pull-type }}
SNAPSHOTTER: ${{ matrix.snapshotter }}
USE_EXPERIMENTAL_SETUP_SNAPSHOTTER: "true"
K8S_TEST_HOST_TYPE: "all"
# We are skipping the auto-generated policy tests for now,
# but they should be enabled as soon as that work is done.
AUTO_GENERATE_POLICY: "no"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: get-kata-tools-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-tools-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-tools-artifacts
- name: Install kata-tools
run: bash tests/integration/kubernetes/gha-run.sh install-kata-tools kata-tools-artifacts
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Deploy kubernetes
timeout-minutes: 15
run: bash tests/integration/kubernetes/gha-run.sh deploy-k8s
env:
GH_TOKEN: ${{ github.token }}
- name: Install `bats`
run: bash tests/integration/kubernetes/gha-run.sh install-bats
- name: Deploy Kata
timeout-minutes: 20
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata
- name: Deploy CSI driver
timeout-minutes: 5
run: bash tests/integration/kubernetes/gha-run.sh deploy-csi-driver
- name: Run tests
timeout-minutes: 80
run: bash tests/integration/kubernetes/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests

@@ -1,119 +0,0 @@
name: CI | Run kata-deploy tests on AKS
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
secrets:
AZ_APPID:
required: true
AZ_TENANT_ID:
required: true
AZ_SUBSCRIPTION_ID:
required: true
permissions: {}
jobs:
run-kata-deploy-tests:
name: run-kata-deploy-tests
strategy:
fail-fast: false
matrix:
host_os:
- ubuntu
vmm:
- clh
- dragonball
- qemu
- qemu-runtime-rs
include:
- host_os: cbl-mariner
vmm: clh
runs-on: ubuntu-22.04
environment: ci
permissions:
id-token: write # Used for OIDC access to log into Azure
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HOST_OS: ${{ matrix.host_os }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: "vanilla"
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Log into the Azure account
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Create AKS cluster
uses: nick-fields/retry@ce71cc2ab81d554ebbe88c79ab5975992d79ba08 # v3.0.2
with:
timeout_minutes: 15
max_attempts: 20
retry_on: error
retry_wait_seconds: 10
command: bash tests/integration/kubernetes/gha-run.sh create-cluster
- name: Install `bats`
run: bash tests/functional/kata-deploy/gha-run.sh install-bats
- name: Install `kubectl`
uses: azure/setup-kubectl@776406bce94f63e41d621b960d78ee25c8b76ede # v4.0.1
with:
version: 'latest'
- name: Download credentials for the Kubernetes CLI to use them
run: bash tests/functional/kata-deploy/gha-run.sh get-cluster-credentials
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/integration/kubernetes/gha-run.sh report-tests
- name: Refresh OIDC token in case access token expired
if: always()
uses: azure/login@a457da9ea143d694b1b9c7c869ebb04ebe844ef5 # v2.3.0
with:
client-id: ${{ secrets.AZ_APPID }}
tenant-id: ${{ secrets.AZ_TENANT_ID }}
subscription-id: ${{ secrets.AZ_SUBSCRIPTION_ID }}
- name: Delete AKS cluster
if: always()
run: bash tests/functional/kata-deploy/gha-run.sh delete-cluster

@@ -1,90 +0,0 @@
name: CI | Run kata-deploy tests
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-kata-deploy-tests:
name: run-kata-deploy-tests
strategy:
fail-fast: false
matrix:
vmm:
- qemu
k8s:
- k0s
- k3s
- rke2
- microk8s
runs-on: ubuntu-22.04
env:
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
KUBERNETES: ${{ matrix.k8s }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Remove unnecessary directories to free up space
run: |
sudo rm -rf /usr/local/.ghcup
sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo rm -rf /usr/local/lib/android
sudo rm -rf /usr/share/dotnet
sudo rm -rf /opt/ghc
sudo rm -rf /usr/local/share/boost
sudo rm -rf /usr/lib/jvm
sudo rm -rf /usr/share/swift
sudo rm -rf /usr/local/share/powershell
sudo rm -rf /usr/local/julia*
sudo rm -rf /opt/az
sudo rm -rf /usr/local/share/chromium
sudo rm -rf /opt/microsoft
sudo rm -rf /opt/google
sudo rm -rf /usr/lib/firefox
- name: Deploy ${{ matrix.k8s }}
run: bash tests/functional/kata-deploy/gha-run.sh deploy-k8s
- name: Install `bats`
run: bash tests/functional/kata-deploy/gha-run.sh install-bats
- name: Run tests
run: bash tests/functional/kata-deploy/gha-run.sh run-tests
- name: Report tests
if: always()
run: bash tests/functional/kata-deploy/gha-run.sh report-tests

@@ -1,70 +0,0 @@
name: CI | Run kata-monitor tests
on:
workflow_call:
inputs:
tarball-suffix:
required: false
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-monitor:
name: run-monitor
strategy:
fail-fast: false
matrix:
vmm:
- qemu
container_engine:
- crio
- containerd
# TODO: enable when https://github.com/kata-containers/kata-containers/issues/9853 is fixed
#include:
# - container_engine: containerd
# containerd_version: lts
exclude:
# TODO: enable with containerd when https://github.com/kata-containers/kata-containers/issues/9761 is fixed
- container_engine: containerd
vmm: qemu
runs-on: ubuntu-22.04
env:
CONTAINER_ENGINE: ${{ matrix.container_engine }}
#CONTAINERD_VERSION: ${{ matrix.containerd_version }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Install dependencies
run: bash tests/functional/kata-monitor/gha-run.sh install-dependencies
env:
GH_TOKEN: ${{ github.token }}
- name: get-kata-tarball
uses: actions/download-artifact@d3f86a106a0bac45b974a628896c90dbdf5c8093 # v4.3.0
with:
name: kata-static-tarball-amd64${{ inputs.tarball-suffix }}
path: kata-artifacts
- name: Install kata
run: bash tests/functional/kata-monitor/gha-run.sh install-kata kata-artifacts
- name: Run kata-monitor tests
run: bash tests/functional/kata-monitor/gha-run.sh run

@@ -1,128 +0,0 @@
name: CI | Run test metrics
on:
workflow_call:
inputs:
registry:
required: true
type: string
repo:
required: true
type: string
tag:
required: true
type: string
pr-number:
required: true
type: string
commit-hash:
required: false
type: string
target-branch:
required: false
type: string
default: ""
permissions: {}
jobs:
run-metrics:
name: run-metrics
strategy:
# We can set this to true whenever we're 100% sure that
# none of the tests are flaky, otherwise we'll fail
# all the tests due to a single flaky instance.
fail-fast: false
matrix:
vmm: ['clh', 'qemu']
max-parallel: 1
runs-on: metrics
env:
GOPATH: ${{ github.workspace }}
KATA_HYPERVISOR: ${{ matrix.vmm }}
DOCKER_REGISTRY: ${{ inputs.registry }}
DOCKER_REPO: ${{ inputs.repo }}
DOCKER_TAG: ${{ inputs.tag }}
GH_PR_NUMBER: ${{ inputs.pr-number }}
K8S_TEST_HOST_TYPE: "baremetal"
KUBERNETES: kubeadm
steps:
- uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
ref: ${{ inputs.commit-hash }}
fetch-depth: 0
persist-credentials: false
- name: Rebase atop of the latest target branch
run: |
./tests/git-helper.sh "rebase-atop-of-the-latest-target-branch"
env:
TARGET_BRANCH: ${{ inputs.target-branch }}
- name: Deploy Kata
timeout-minutes: 10
run: bash tests/integration/kubernetes/gha-run.sh deploy-kata-kubeadm
- name: Install check metrics
run: bash tests/metrics/gha-run.sh install-checkmetrics
- name: enabling the hypervisor
run: bash tests/metrics/gha-run.sh enabling-hypervisor
- name: run launch times test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-launchtimes
- name: run memory foot print test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-memory-usage
- name: run memory usage inside container test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-memory-usage-inside-container
- name: run blogbench test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-blogbench
- name: run tensorflow test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-tensorflow
- name: run fio test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-fio
- name: run iperf test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-iperf
- name: run latency test
timeout-minutes: 15
continue-on-error: true
run: bash tests/metrics/gha-run.sh run-test-latency
- name: check metrics
run: bash tests/metrics/gha-run.sh check-metrics
- name: make metrics tarball ${{ matrix.vmm }}
run: bash tests/metrics/gha-run.sh make-tarball-results
- name: archive metrics results ${{ matrix.vmm }}
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: metrics-artifacts-${{ matrix.vmm }}
path: results-${{ matrix.vmm }}.tar.gz
retention-days: 1
if-no-files-found: error
- name: Delete kata-deploy
timeout-minutes: 10
if: always()
run: bash tests/integration/kubernetes/gha-run.sh cleanup-kubeadm

@@ -1,60 +0,0 @@
# This workflow uses actions that are not certified by GitHub. They are provided
# by a third-party and are governed by separate terms of service, privacy
# policy, and support documentation.
name: Scorecard supply-chain security
on:
# For Branch-Protection check. Only the default branch is supported. See
# https://github.com/ossf/scorecard/blob/main/docs/checks.md#branch-protection
branch_protection_rule:
push:
branches: [ "main" ]
workflow_dispatch:
permissions: {}
jobs:
analysis:
name: Scorecard analysis
runs-on: ubuntu-latest
# `publish_results: true` only works when run from the default branch. The conditional can be removed if publishing is disabled.
if: github.event.repository.default_branch == github.ref_name || github.event_name == 'pull_request'
permissions:
# Needed to upload the results to code-scanning dashboard.
security-events: write
# Needed to publish results and get a badge (see publish_results below).
id-token: write
steps:
- name: "Checkout code"
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
persist-credentials: false
- name: "Run analysis"
uses: ossf/scorecard-action@f49aabe0b5af0936a0987cfb85d86b75731b0186 # v2.4.1
with:
results_file: results.sarif
results_format: sarif
# Public repositories:
# - Publish results to OpenSSF REST API for easy access by consumers
# - Allows the repository to include the Scorecard badge.
# - See https://github.com/ossf/scorecard-action#publishing-results.
publish_results: true
# Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF
# format to the repository Actions tab.
- name: "Upload artifact"
uses: actions/upload-artifact@4cec3d8aa04e39d1a68397de0c4cd6fb9dce8ec1 # v4.6.1
with:
name: SARIF file
path: results.sarif
retention-days: 5
# Upload the results to GitHub's code scanning dashboard (optional).
# Commenting out will disable upload of results to your repo's Code Scanning dashboard
- name: "Upload to code-scanning"
uses: github/codeql-action/upload-sarif@4bdb89f48054571735e3792627da6195c57459e2 # v3.31.10
with:
sarif_file: results.sarif

@@ -1,32 +0,0 @@
# https://github.com/marketplace/actions/shellcheck
name: Check shell scripts
on:
workflow_dispatch:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
shellcheck:
name: shellcheck
runs-on: ubuntu-24.04
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@00cae500b08a931fb5698e11e79bfbd38e612a38 # v2.0.0
with:
ignore_paths: "**/vendor/**"

@@ -1,35 +0,0 @@
# https://github.com/marketplace/actions/shellcheck
name: Shellcheck required
on:
workflow_dispatch:
pull_request:
types:
- opened
- edited
- reopened
- synchronize
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
shellcheck-required:
name: shellcheck-required
runs-on: ubuntu-24.04
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Run ShellCheck
uses: ludeeus/action-shellcheck@00cae500b08a931fb5698e11e79bfbd38e612a38 # v2.0.0
with:
severity: error
ignore_paths: "**/vendor/**"

.github/workflows/snap-release.yaml (vendored, new file)

@@ -0,0 +1,39 @@
name: Release Kata 2.x in snapcraft store
on:
push:
tags:
- '2.*'
jobs:
release-snap:
runs-on: ubuntu-20.04
steps:
- name: Check out Git repository
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install Snapcraft
uses: samuelmeuli/action-snapcraft@v1
with:
snapcraft_token: ${{ secrets.snapcraft_token }}
- name: Build snap
run: |
sudo apt-get install -y git git-extras
kata_url="https://github.com/kata-containers/kata-containers"
latest_version=$(git ls-remote --tags ${kata_url} | egrep -o "refs.*" | egrep -v "\-alpha|\-rc|{}" | egrep -o "[[:digit:]]+\.[[:digit:]]+\.[[:digit:]]+" | sort -V -r | head -1)
current_version="$(echo ${GITHUB_REF} | cut -d/ -f3)"
# Check semantic versioning format (x.y.z) and if the current tag is the latest tag
if echo "${current_version}" | grep -q "^[[:digit:]]\+\.[[:digit:]]\+\.[[:digit:]]\+$" && echo -e "$latest_version\n$current_version" | sort -C -V; then
# Current version is the latest version, build it
snapcraft -d snap --destructive-mode
fi
- name: Upload snap
run: |
snap_version="$(echo ${GITHUB_REF} | cut -d/ -f3)"
snap_file="kata-containers_${snap_version}_amd64.snap"
# Upload the snap if it exists
if [ -f ${snap_file} ]; then
snapcraft upload --release=stable ${snap_file}
fi

.github/workflows/snap.yaml (vendored, new file)

@@ -0,0 +1,27 @@
name: snap CI
on:
pull_request:
types:
- opened
- synchronize
- reopened
- edited
jobs:
test:
runs-on: ubuntu-20.04
steps:
- name: Check out
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v2
with:
fetch-depth: 0
- name: Install Snapcraft
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: samuelmeuli/action-snapcraft@v1
- name: Build snap
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
snapcraft -d snap --destructive-mode

@@ -1,27 +0,0 @@
name: 'Automatically close stale PRs'
on:
schedule:
- cron: '0 0 * * *'
workflow_dispatch:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
stale:
name: stale
runs-on: ubuntu-22.04
permissions:
actions: write # Needed to manage caches for state persistence across runs
pull-requests: write # Needed to add/remove labels, post comments, or close PRs
steps:
- uses: actions/stale@5bef64f19d7facfb25b37b414482c7164d639639 # v9.1.0
with:
stale-pr-message: 'This PR has been opened without activity for 180 days. Please comment on the issue or it will be closed in 7 days.'
days-before-pr-stale: 180
days-before-pr-close: 7
days-before-issue-stale: -1
days-before-issue-close: -1

@@ -1,36 +0,0 @@
on:
pull_request:
types:
- opened
- synchronize
- reopened
- labeled # a workflow runs only when the 'ok-to-test' label is added
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Static checks self-hosted
jobs:
skipper:
if: ${{ contains(github.event.pull_request.labels.*.name, 'ok-to-test') }}
uses: ./.github/workflows/gatekeeper-skipper.yaml
with:
commit-hash: ${{ github.event.pull_request.head.sha }}
target-branch: ${{ github.event.pull_request.base.ref }}
build-checks:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
strategy:
fail-fast: false
matrix:
instance:
- "ubuntu-24.04-arm"
- "ubuntu-24.04-s390x"
- "ubuntu-24.04-ppc64le"
uses: ./.github/workflows/build-checks.yaml
with:
instance: ${{ matrix.instance }}

@@ -5,188 +5,92 @@ on:
- edited
- reopened
- synchronize
workflow_dispatch:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
name: Static checks
jobs:
skipper:
uses: ./.github/workflows/gatekeeper-skipper.yaml
with:
commit-hash: ${{ github.event.pull_request.head.sha }}
target-branch: ${{ github.event.pull_request.base.ref }}
check-kernel-config-version:
name: check-kernel-config-version
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
runs-on: ubuntu-22.04
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Ensure the kernel config version has been updated
run: |
kernel_dir="tools/packaging/kernel/"
kernel_version_file="${kernel_dir}kata_config_version"
modified_files=$(git diff --name-only origin/"$GITHUB_BASE_REF"..HEAD)
if git diff --name-only origin/"$GITHUB_BASE_REF"..HEAD "${kernel_dir}" | grep "${kernel_dir}"; then
echo "Kernel directory has changed, checking if $kernel_version_file has been updated"
if echo "$modified_files" | grep -v "README.md" | grep "${kernel_dir}" >>"/dev/null"; then
echo "$modified_files" | grep "$kernel_version_file" >>/dev/null || ( echo "Please bump version in $kernel_version_file" && exit 1)
else
echo "Readme file changed, no need for kernel config version update."
fi
echo "Check passed"
fi
build-checks:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
uses: ./.github/workflows/build-checks.yaml
with:
instance: ubuntu-22.04
build-checks-depending-on-kvm:
name: build-checks-depending-on-kvm
runs-on: ubuntu-22.04
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
test:
strategy:
fail-fast: false
matrix:
component:
- runtime-rs
include:
- component: runtime-rs
command: "sudo -E env PATH=$PATH LIBC=gnu SUPPORT_VIRTUALIZATION=true make test"
- component: runtime-rs
component-path: src/dragonball
steps:
- name: Checkout the code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Install system deps
run: |
sudo apt-get update && sudo apt-get install -y build-essential musl-tools
- name: Install yq
run: |
sudo -E ./ci/install_yq.sh
env:
INSTALL_IN_GOPATH: false
- name: Install rust
run: |
export PATH="$PATH:/usr/local/bin"
./tests/install_rust.sh
- name: Running `${{ matrix.command }}` for ${{ matrix.component }}
run: |
export PATH="$PATH:${HOME}/.cargo/bin"
cd "${COMPONENT_PATH}"
eval "${COMMAND}"
env:
COMMAND: ${{ matrix.command }}
COMPONENT_PATH: ${{ matrix.component-path }}
RUST_BACKTRACE: "1"
RUST_LIB_BACKTRACE: "0"
static-checks:
name: static-checks
runs-on: ubuntu-22.04
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
strategy:
fail-fast: false
matrix:
cmd:
- "make static-checks"
go-version: [1.16.x, 1.17.x]
os: [ubuntu-20.04]
runs-on: ${{ matrix.os }}
env:
GOPATH: ${{ github.workspace }}
permissions:
contents: read # for checkout
packages: write # for push to ghcr.io
TRAVIS: "true"
TRAVIS_BRANCH: ${{ github.base_ref }}
TRAVIS_PULL_REQUEST_BRANCH: ${{ github.head_ref }}
TRAVIS_PULL_REQUEST_SHA : ${{ github.event.pull_request.head.sha }}
RUST_BACKTRACE: "1"
target_branch: ${{ github.base_ref }}
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
path: ./src/github.com/${{ github.repository }}
- name: Install yq
run: |
cd "${GOPATH}/src/github.com/${GITHUB_REPOSITORY}"
./ci/install_yq.sh
env:
INSTALL_IN_GOPATH: false
- name: Install golang
run: |
cd "${GOPATH}/src/github.com/${GITHUB_REPOSITORY}"
./tests/install_go.sh -f -p
echo "/usr/local/go/bin" >> "$GITHUB_PATH"
- name: Install system dependencies
run: |
sudo apt-get update && sudo apt-get -y install moreutils hunspell hunspell-en-gb hunspell-en-us pandoc
- name: Install open-policy-agent
run: |
cd "${GOPATH}/src/github.com/${GITHUB_REPOSITORY}"
./tests/install_opa.sh
- name: Install regorus
env:
ARTEFACT_REPOSITORY: "${{ github.repository }}"
ARTEFACT_REGISTRY_USERNAME: "${{ github.actor }}"
ARTEFACT_REGISTRY_PASSWORD: "${{ secrets.GITHUB_TOKEN }}"
run: |
"${GOPATH}/src/github.com/${GITHUB_REPOSITORY}/tests/install_regorus.sh"
- name: Run check
env:
CMD: ${{ matrix.cmd }}
run: |
export PATH="${PATH}:${GOPATH}/bin"
cd "${GOPATH}/src/github.com/${GITHUB_REPOSITORY}" && ${CMD}
govulncheck:
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
uses: ./.github/workflows/govulncheck.yaml
codegen:
name: codegen
runs-on: ubuntu-22.04
needs: skipper
if: ${{ needs.skipper.outputs.skip_static != 'yes' }}
permissions:
contents: read # for checkout
steps:
- name: Checkout code
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: generate
run: make -C src/agent generate-protocols
- name: check for diff
run: |
diff=$(git diff)
if [[ -z "${diff}" ]]; then
echo "No diff detected."
exit 0
fi
cat << EOF >> "${GITHUB_STEP_SUMMARY}"
Run \`make -C src/agent generate-protocols\` to update protobuf bindings.
\`\`\`diff
${diff}
\`\`\`
EOF
echo "::error::Golang protobuf bindings need to be regenerated (see Github step summary for diff)."
exit 1
- name: Install Go
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/setup-go@v2
with:
go-version: ${{ matrix.go-version }}
env:
GOPATH: ${{ runner.workspace }}/kata-containers
- name: Setup GOPATH
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
echo "TRAVIS_BRANCH: ${TRAVIS_BRANCH}"
echo "TRAVIS_PULL_REQUEST_BRANCH: ${TRAVIS_PULL_REQUEST_BRANCH}"
echo "TRAVIS_PULL_REQUEST_SHA: ${TRAVIS_PULL_REQUEST_SHA}"
echo "TRAVIS: ${TRAVIS}"
- name: Set env
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
echo "GOPATH=${{ github.workspace }}" >> $GITHUB_ENV
echo "${{ github.workspace }}/bin" >> $GITHUB_PATH
- name: Checkout code
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
uses: actions/checkout@v2
with:
fetch-depth: 0
path: ./src/github.com/${{ github.repository }}
- name: Setup travis references
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
echo "TRAVIS_BRANCH=${TRAVIS_BRANCH:-$(echo $GITHUB_REF | awk 'BEGIN { FS = \"/\" } ; { print $3 }')}"
target_branch=${TRAVIS_BRANCH}
- name: Setup
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && ./ci/setup.sh
env:
GOPATH: ${{ runner.workspace }}/kata-containers
- name: Installing rust
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && ./ci/install_rust.sh
PATH=$PATH:"$HOME/.cargo/bin"
rustup target add x86_64-unknown-linux-musl
rustup component add rustfmt clippy
- name: Setup seccomp
run: |
libseccomp_install_dir=$(mktemp -d -t libseccomp.XXXXXXXXXX)
gperf_install_dir=$(mktemp -d -t gperf.XXXXXXXXXX)
cd ${GOPATH}/src/github.com/${{ github.repository }} && ./ci/install_libseccomp.sh "${libseccomp_install_dir}" "${gperf_install_dir}"
echo "Set environment variables for the libseccomp crate to link the libseccomp library statically"
echo "LIBSECCOMP_LINK_TYPE=static" >> $GITHUB_ENV
echo "LIBSECCOMP_LIB_PATH=${libseccomp_install_dir}/lib" >> $GITHUB_ENV
# Check whether the vendored code is up-to-date & working as the first thing
- name: Check vendored code
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make vendor
- name: Static Checks
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make static-checks
- name: Run Compiler Checks
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make check
- name: Run Unit Tests
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && make test
- name: Run Unit Tests As Root User
if: ${{ !contains(github.event.pull_request.labels.*.name, 'force-skip-ci') }}
run: |
cd ${GOPATH}/src/github.com/${{ github.repository }} && sudo -E PATH="$PATH" make test

@@ -1,29 +0,0 @@
name: GHA security analysis
on:
pull_request:
permissions: {}
concurrency:
group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true
jobs:
zizmor:
name: zizmor
runs-on: ubuntu-22.04
steps:
- name: Checkout repository
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
with:
fetch-depth: 0
persist-credentials: false
- name: Run zizmor
uses: zizmorcore/zizmor-action@135698455da5c3b3e55f73f4419e481ab68cdd95 # v0.4.1
with:
advanced-security: false
annotations: true
persona: auditor
version: v1.13.0

.github/zizmor.yml (vendored)

@@ -1,3 +0,0 @@
rules:
undocumented-permissions:
disable: true

.gitignore (vendored)

@@ -4,19 +4,10 @@
**/*.rej
**/target
**/.vscode
**/.idea
**/.fleet
**/*.swp
**/*.swo
pkg/logging/Cargo.lock
src/agent/src/version.rs
src/agent/kata-agent.service
src/agent/protocols/src/*.rs
!src/agent/protocols/src/lib.rs
build
src/tools/log-parser/kata-log-parser
tools/packaging/static-build/agent/install_libseccomp.sh
.envrc
.direnv
**/.DS_Store
site/

@@ -1,4 +1,4 @@
# Copyright (c) 2019-2023 Intel Corporation
# Copyright (c) 2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -9,83 +9,4 @@
# Order in this file is important. Only the last match will be
# used. See https://help.github.com/articles/about-code-owners/
/CODEOWNERS @kata-containers/codeowners
VERSION @kata-containers/release
# The versions database needs careful handling
versions.yaml @kata-containers/release @kata-containers/ci @kata-containers/tests
Makefile* @kata-containers/build
*.mak @kata-containers/build
*.mk @kata-containers/build
# Documentation related files could also appear anywhere
# else in the repo.
*.md @kata-containers/documentation
*.drawio @kata-containers/documentation
*.jpg @kata-containers/documentation
*.png @kata-containers/documentation
*.svg @kata-containers/documentation
*.bash @kata-containers/shell
*.sh @kata-containers/shell
**/completions/ @kata-containers/shell
Dockerfile* @kata-containers/docker
/ci/ @kata-containers/ci
*.bats @kata-containers/tests
/tests/ @kata-containers/tests
*.rs @kata-containers/rust
*.go @kata-containers/golang
/utils/ @kata-containers/utils
# FIXME: Maybe a new "protocol" team would be better?
#
# All protocol changes must be reviewed.
# Note, we include all subdirs, including the vendor dir, as at present there are no .proto files
# in the vendor dir. Later we may have to extend this matching rule if that changes.
/src/libs/protocols/*.proto @kata-containers/architecture-committee @kata-containers/builder @kata-containers/packaging
# GitHub Actions
/.github/workflows/ @kata-containers/action-admins @kata-containers/ci
/ci/ @kata-containers/ci @kata-containers/tests
/docs/ @kata-containers/documentation
/src/agent/ @kata-containers/agent
/src/runtime*/ @kata-containers/runtime
/src/runtime/ @kata-containers/golang
src/runtime-rs/ @kata-containers/rust
src/libs/ @kata-containers/rust
src/dragonball/ @kata-containers/dragonball
/tools/osbuilder/ @kata-containers/builder
/tools/packaging/ @kata-containers/packaging
/tools/packaging/kernel/ @kata-containers/kernel
/tools/packaging/kata-deploy/ @kata-containers/kata-deploy
/tools/packaging/qemu/ @kata-containers/qemu
/tools/packaging/release/ @kata-containers/release
**/vendor/ @kata-containers/vendoring
# Handle arch specific files last so they match more specifically than
# the kernel packaging files.
**/*aarch64* @kata-containers/arch-aarch64
**/*arm64* @kata-containers/arch-aarch64
**/*amd64* @kata-containers/arch-amd64
**/*x86-64* @kata-containers/arch-amd64
**/*x86_64* @kata-containers/arch-amd64
**/*ppc64* @kata-containers/arch-ppc64le
**/*s390x* @kata-containers/arch-s390x
*.md @kata-containers/documentation

Cargo.lock (generated): file diff suppressed because it is too large

@@ -1,140 +0,0 @@
[workspace.package]
authors = ["The Kata Containers community <kata-dev@lists.katacontainers.io>"]
edition = "2018"
license = "Apache-2.0"
rust-version = "1.88"
[workspace]
members = [
# Dragonball
"src/dragonball",
"src/dragonball/dbs_acpi",
"src/dragonball/dbs_address_space",
"src/dragonball/dbs_allocator",
"src/dragonball/dbs_arch",
"src/dragonball/dbs_boot",
"src/dragonball/dbs_device",
"src/dragonball/dbs_interrupt",
"src/dragonball/dbs_legacy_devices",
"src/dragonball/dbs_pci",
"src/dragonball/dbs_tdx",
"src/dragonball/dbs_upcall",
"src/dragonball/dbs_utils",
"src/dragonball/dbs_virtio_devices",
# runtime-rs
"src/runtime-rs",
"src/runtime-rs/crates/agent",
"src/runtime-rs/crates/hypervisor",
"src/runtime-rs/crates/persist",
"src/runtime-rs/crates/resource",
"src/runtime-rs/crates/runtimes",
"src/runtime-rs/crates/service",
"src/runtime-rs/crates/shim",
"src/runtime-rs/crates/shim-ctl",
"src/runtime-rs/tests/utils",
]
resolver = "2"
# TODO: Add all excluded crates to root workspace
exclude = [
"src/agent",
"src/tools",
"src/libs",
# kata-deploy binary is standalone and has its own Cargo.toml for now
"tools/packaging/kata-deploy/binary",
# We are cloning and building rust packages under
# "tools/packaging/kata-deploy/local-build/build" folder, which may mislead
# those packages to think they are part of the kata root workspace
"tools/packaging/kata-deploy/local-build/build",
]
[workspace.dependencies]
# Rust-VMM crates
event-manager = "0.2.1"
kvm-bindings = "0.6.0"
kvm-ioctls = "=0.12.1"
linux-loader = "0.8.0"
seccompiler = "0.5.0"
vfio-bindings = "0.3.0"
vfio-ioctls = "0.1.0"
virtio-bindings = "0.1.0"
virtio-queue = "0.7.0"
vm-fdt = "0.2.0"
vm-memory = "0.10.0"
vm-superio = "0.5.0"
vmm-sys-util = "0.11.0"
# Local dependencies from Dragonball Sandbox crates
dragonball = { path = "src/dragonball" }
dbs-acpi = { path = "src/dragonball/dbs_acpi" }
dbs-address-space = { path = "src/dragonball/dbs_address_space" }
dbs-allocator = { path = "src/dragonball/dbs_allocator" }
dbs-arch = { path = "src/dragonball/dbs_arch" }
dbs-boot = { path = "src/dragonball/dbs_boot" }
dbs-device = { path = "src/dragonball/dbs_device" }
dbs-interrupt = { path = "src/dragonball/dbs_interrupt" }
dbs-legacy-devices = { path = "src/dragonball/dbs_legacy_devices" }
dbs-pci = { path = "src/dragonball/dbs_pci" }
dbs-tdx = { path = "src/dragonball/dbs_tdx" }
dbs-upcall = { path = "src/dragonball/dbs_upcall" }
dbs-utils = { path = "src/dragonball/dbs_utils" }
dbs-virtio-devices = { path = "src/dragonball/dbs_virtio_devices" }
# Local dependencies from runtime-rs
agent = { path = "src/runtime-rs/crates/agent" }
hypervisor = { path = "src/runtime-rs/crates/hypervisor" }
persist = { path = "src/runtime-rs/crates/persist" }
resource = { path = "src/runtime-rs/crates/resource" }
runtimes = { path = "src/runtime-rs/crates/runtimes" }
service = { path = "src/runtime-rs/crates/service" }
tests_utils = { path = "src/runtime-rs/tests/utils" }
ch-config = { path = "src/runtime-rs/crates/hypervisor/ch-config" }
common = { path = "src/runtime-rs/crates/runtimes/common" }
linux_container = { path = "src/runtime-rs/crates/runtimes/linux_container" }
virt_container = { path = "src/runtime-rs/crates/runtimes/virt_container" }
wasm_container = { path = "src/runtime-rs/crates/runtimes/wasm_container" }
# Local dependencies from `src/lib`
kata-sys-util = { path = "src/libs/kata-sys-util" }
kata-types = { path = "src/libs/kata-types", features = ["safe-path"] }
logging = { path = "src/libs/logging" }
protocols = { path = "src/libs/protocols", features = ["async"] }
runtime-spec = { path = "src/libs/runtime-spec" }
safe-path = { path = "src/libs/safe-path" }
shim-interface = { path = "src/libs/shim-interface" }
test-utils = { path = "src/libs/test-utils" }
# Outside dependencies
actix-rt = "2.7.0"
anyhow = "1.0"
async-trait = "0.1.48"
containerd-shim = { version = "0.10.0", features = ["async"] }
containerd-shim-protos = { version = "0.10.0", features = ["async"] }
go-flag = "0.1.0"
hyper = "0.14.20"
hyperlocal = "0.8.0"
lazy_static = "1.4"
libc = "0.2"
log = "0.4.14"
netns-rs = "0.1.0"
# Note: nix needs to stay sync'd with libs versions
nix = "0.26.4"
oci-spec = { version = "0.8.1", features = ["runtime"] }
protobuf = "3.7.2"
rand = "0.8.4"
serde = { version = "1.0.145", features = ["derive"] }
serde_json = "1.0.91"
sha2 = "0.10.9"
slog = "2.5.2"
slog-scope = "4.4.0"
strum = { version = "0.24.0", features = ["derive"] }
tempfile = "3.19.1"
thiserror = "1.0"
tokio = "1.46.1"
tracing = "0.1.41"
tracing-opentelemetry = "0.18.0"
ttrpc = "0.8.4"
url = "2.5.4"

@@ -1,4 +1,4 @@
# Copyright (c) 2020-2023 Intel Corporation
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
#
@@ -6,28 +6,24 @@
# List of available components
COMPONENTS =
COMPONENTS += libs
COMPONENTS += agent
COMPONENTS += dragonball
COMPONENTS += runtime
COMPONENTS += runtime-rs
# List of available tools
TOOLS =
TOOLS += agent-ctl
TOOLS += kata-ctl
TOOLS += log-parser
TOOLS += trace-forwarder
STANDARD_TARGETS = build check clean install static-checks-build test vendor
# Variables for the build-and-publish-kata-debug target
KATA_DEBUG_REGISTRY ?= ""
KATA_DEBUG_TAG ?= ""
STANDARD_TARGETS = build check clean install test vendor
default: all
all: logging-crate-tests build
logging-crate-tests:
make -C src/libs/logging
include utils.mk
include ./tools/packaging/kata-deploy/local-build/Makefile
@@ -40,23 +36,13 @@ generate-protocols:
make -C src/agent generate-protocols
# Some static checks rely on generated source files of components.
static-checks: static-checks-build
bash tests/static-checks.sh
docs-url-alive-check:
bash ci/docs-url-alive-check.sh
build-and-publish-kata-debug:
bash tools/packaging/kata-debug/kata-debug-build-and-upload-payload.sh ${KATA_DEBUG_REGISTRY} ${KATA_DEBUG_TAG}
docs-serve:
docker run --rm -p 8000:8000 -v ./docs:/docs:ro -v ${PWD}/zensical.toml:/zensical.toml:ro zensical/zensical serve --config-file /zensical.toml -a 0.0.0.0:8000
static-checks: build
bash ci/static-checks.sh
.PHONY: \
all \
kata-tarball \
install-tarball \
binary-tarball \
default \
static-checks \
docs-url-alive-check \
docs-serve
install-binary-tarball \
logging-crate-tests \
static-checks

@@ -1,7 +1,4 @@
<img src="https://object-storage-ca-ymq-1.vexxhost.net/swift/v1/6e4619c416ff4bd19e1c087f27a43eea/www-images-prod/openstack-logo/kata/SVG/kata-1.svg" width="900">
[![CI | Publish Kata Containers payload](https://github.com/kata-containers/kata-containers/actions/workflows/payload-after-push.yaml/badge.svg)](https://github.com/kata-containers/kata-containers/actions/workflows/payload-after-push.yaml) [![Kata Containers Nightly CI](https://github.com/kata-containers/kata-containers/actions/workflows/ci-nightly.yaml/badge.svg)](https://github.com/kata-containers/kata-containers/actions/workflows/ci-nightly.yaml)
[![OpenSSF Scorecard](https://api.scorecard.dev/projects/github.com/kata-containers/kata-containers/badge)](https://scorecard.dev/viewer/?uri=github.com/kata-containers/kata-containers)
<img src="https://www.openstack.org/assets/kata/kata-vertical-on-white.png" width="150">
# Kata Containers
@@ -74,7 +71,6 @@ See the [official documentation](docs) including:
- [Developer guide](docs/Developer-Guide.md)
- [Design documents](docs/design)
- [Architecture overview](docs/design/architecture)
- [Architecture 3.0 overview](docs/design/architecture_3.0/)
## Configuration
@@ -120,11 +116,9 @@ The table below lists the core parts of the project:
| Component | Type | Description |
|-|-|-|
| [runtime](src/runtime) | core | Main component run by a container manager and providing a containerd shimv2 runtime implementation. |
| [runtime-rs](src/runtime-rs) | core | The Rust version runtime. |
| [agent](src/agent) | core | Management process running inside the virtual machine / POD that sets up the container environment. |
| [`dragonball`](src/dragonball) | core | An optional built-in VMM brings out-of-the-box Kata Containers experience with optimizations on container workloads |
| [documentation](docs) | documentation | Documentation common to all components (such as design and install documentation). |
| [tests](tests) | tests | Excludes unit tests which live with the main code. |
| [tests](https://github.com/kata-containers/tests) | tests | Excludes unit tests which live with the main code. |
### Additional components
@@ -135,27 +129,17 @@ The table below lists the remaining parts of the project:
| [packaging](tools/packaging) | infrastructure | Scripts and metadata for producing packaged binaries<br/>(components, hypervisors, kernel and rootfs). |
| [kernel](https://www.kernel.org) | kernel | Linux kernel used by the hypervisor to boot the guest image. Patches are stored [here](tools/packaging/kernel). |
| [osbuilder](tools/osbuilder) | infrastructure | Tool to create "mini O/S" rootfs and initrd images and kernel for the hypervisor. |
| [kata-debug](tools/packaging/kata-debug/README.md) | infrastructure | Utility tool to gather Kata Containers debug information from Kubernetes clusters. |
| [`agent-ctl`](src/tools/agent-ctl) | utility | Tool that provides low-level access for testing the agent. |
| [`kata-ctl`](src/tools/kata-ctl) | utility | Tool that provides advanced commands and debug facilities. |
| [`trace-forwarder`](src/tools/trace-forwarder) | utility | Agent tracing helper. |
| [`ci`](.github/workflows) | CI | Continuous Integration configuration files and scripts. |
| [`ocp-ci`](ci/openshift-ci/README.md) | CI | Continuous Integration configuration for the OpenShift pipelines. |
| [`ci`](https://github.com/kata-containers/ci) | CI | Continuous Integration configuration files and scripts. |
| [`katacontainers.io`](https://github.com/kata-containers/www.katacontainers.io) | Source for the [`katacontainers.io`](https://www.katacontainers.io) site. |
| [`Webhook`](tools/testing/kata-webhook/README.md) | utility | Example of a simple admission controller webhook to annotate pods with the Kata runtime class |
### Packaging and releases
Kata Containers is now
[available natively for most distributions](docs/install/README.md#packaged-installation-methods).
## General tests
See the [tests documentation](tests/README.md).
## Metrics tests
See the [metrics documentation](tests/metrics/README.md).
However, packaging scripts and metadata are still used to generate snap and GitHub releases. See
the [components](#components) section for further details.
## Glossary of Terms

@@ -1 +1 @@
3.26.0
2.4.1

@@ -1,416 +0,0 @@
# Kata Containers CI
> [!WARNING]
> While this project's CI has several areas for improvement, it is constantly
> evolving. This document attempts to describe its current state, but due to
> ongoing changes, you may notice some outdated information here. Feel free to
> modify/improve this document as you use the CI and notice anything odd. The
> community appreciates it!
## Introduction
The Kata Containers CI relies on [GitHub Actions][gh-actions]. The workflows
themselves can be found in the `.github/workflows` directory, and they may call
helper scripts, located under the `tests` directory, to actually perform the
tasks required for each test case.
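As a rough illustration of that pattern, a job in one of these workflows typically delegates the actual work to a `gha-run.sh` helper. The sketch below is simplified (real workflows pin action versions by commit SHA and set many more inputs and environment variables), and the job name is made up for the example:

```yaml
jobs:
  example-k8s-tests:        # hypothetical job name, for illustration only
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/checkout@v4
      - name: Run tests
        # The workflow only dispatches to a helper script under tests/
        run: bash tests/integration/kubernetes/gha-run.sh run-tests
```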
## The different workflows
There are a few different sets of workflows running as part of our CI, and here
we're going to cover the ones that are least likely to become outdated. With
this said, if the reader finds something that has become outdated, opening an
issue pointing to the problem is a nice way to help, and providing a fix for
the issue is an even better one.
### Jobs that run automatically when a PR is raised
These are a bunch of tests that run automatically as soon as a PR is opened.
They mostly run on "cost free" runners and perform some pre-checks to evaluate
whether your PR is ready to start getting reviewed.
Mind, though, that the community expects contributors to at least build their
code before submitting a PR, which the community sees as a very fair request.
Without getting into the weeds, those jobs are responsible for ensuring that:
- The commit message is in the expected format
- There's no missing Developer's Certificate of Origin (see the example sign-off below)
- Static checks are passing
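For instance, the Developer's Certificate of Origin check expects every commit to carry a `Signed-off-by` trailer, which `git commit -s` adds automatically. The commit message below is purely illustrative (subsystem, text, and author are made up); the exact format rules are enforced by the checks themselves:

```text
runtime: fix example subsystem behaviour

Short explanation of what the change does and why it is needed.

Signed-off-by: Jane Doe <jane.doe@example.com>
```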
### Jobs that require a maintainer's approval to run
Then there are the rest of the tests, our so-called "CI". These require a
maintainer's approval to run, as parts of those jobs run on "paid runners",
which currently use Azure infrastructure.
Once a maintainer of the project gives "the green light" (currently by adding an
`ok-to-test` label to the PR, soon to be changed to commenting "/test" as part
of a PR review), the following tests will be executed:
- Build all the components (runs on cost free runners, or bare-metal depending on the architecture)
- Create a tarball with all the components (runs on cost free runners, or bare-metal depending on the architecture)
- Create a kata-deploy payload with the tarball generated in the previous step (runs on cost free runners, or bare-metal depending on the architecture)
- Run the following tests:
- Tests depending on the generated tarball
- Metrics (runs on bare-metal)
- `docker` (runs on cost free runners)
- `nerdctl` (runs on cost free runners)
- `kata-monitor` (runs on cost free runners)
- `cri-containerd` (runs on cost free runners)
- `nydus` (runs on cost free runners)
- `vfio` (runs on cost free runners)
- Tests depending on the generated kata-deploy payload
- kata-deploy (runs on cost free runners)
- Tests are performed using different "Kubernetes flavors", such as k0s, k3s, rke2, and Azure Kubernetes Service (AKS).
- Kubernetes (runs in Azure small and medium instances depending on what's required by each test, and on TEE bare-metal machines)
- Tests are performed with different runtime engines, such as CRI-O and containerd.
- Tests are performed with different snapshotters for containerd, namely OverlayFS and devmapper.
- Tests are performed with all the supported hypervisors, which are Cloud Hypervisor, Dragonball, Firecracker, and QEMU.
For all the tests relying on Azure instances, real money is being spent, so the
community asks maintainers to be mindful about those and to avoid abusing them
merely to debug issues.
## The different runners
In the previous section we mentioned using different runners; in this section we'll go through each type of runner used.
- Cost free runners: Those are the runners provided by GitHub itself, which are fairly small machines with virtualization capabilities enabled.
- Azure small instances: Those are runners with virtualization capabilities enabled, 2 CPUs, and 8GB of RAM. These runners have a "-smaller" suffix in their name.
- Azure normal instances: Those are runners with virtualization capabilities enabled, 4 CPUs, and 16GB of RAM. These runners are usually `garm` ones with no "-smaller" suffix.
- Bare-metal runners: Those are runners provided by community contributors, and they may vary in architecture, size, and virtualization capabilities. Builder runners don't actually require any virtualization capabilities, while runners which will actually be performing the tests must have virtualization capabilities and a reasonable amount of CPU and RAM available (at least matching the Azure normal instances).
## Adding new tests
Before deciding to add a new test, we strongly recommend going through the
[GitHub Actions Documentation][gh-actions], which will give you a very good
background for reading and understanding the tests we currently have, and for
becoming familiar with how to write a new test.
In Kata Containers land, there are basically two sets of tests: "standalone"
and "part of something bigger".
The "standalone" tests, for example the commit message check, won't be covered
here as they're better covered by the GitHub Actions documentation linked above.
The "part of something bigger" tests are the more complicated ones and not so
straightforward to add, so we'll focus our efforts on describing how to add
those.
> [!NOTE]
> TODO: Currently, this document refers to "tests" when it actually means the
> jobs (or workflows) of GitHub. In an ideal world, except in some specific cases,
> new tests should be added without the need to add new workflows. In the
> not-too-distant future (hopefully), we will improve the workflows to support
> this.
### Adding a new test that's "part of something bigger"
The first important thing here is to align expectations, and we must say that
the community strongly prefers receiving tests that already come with:
- Instructions on how to run them
- A proven run where it's passing
There are several ways to achieve those two requirements, and an example of that
can be seen in PR #8115.
With the expectations aligned, adding a test consists of:
- Adding a new yaml file for your test, and ensuring it's called from the
"bigger" yaml. See the [Kata Monitor test example][monitor-ex01].
- Adding the helper scripts needed for your test to run. Again, use the [Kata Monitor script as an example][monitor-ex02]; a rough sketch of how such a helper is usually structured is shown right after this list.
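Purely as an illustration, here's a minimal, hypothetical sketch of how such a helper script is often structured, with one entry point per CI step. The function bodies, paths, and packages below are assumptions, so treat the Kata Monitor script linked above as the authoritative reference.
```bash
#!/usr/bin/env bash
# Hypothetical gha-run.sh-style helper; every path and package here is an
# assumption, used only to show the overall shape of such a script.
set -o errexit
set -o nounset
set -o pipefail

script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

install_dependencies() {
	# Install whatever tools your test needs.
	sudo apt-get update && sudo apt-get -y install jq
}

install_kata() {
	# Unpack the kata-static tarball produced by the build step
	# (the relative location used here is an assumption).
	sudo tar -xvf "${script_dir}/../../../kata-artifacts/kata-static.tar.zst" -C /
}

run() {
	echo "running my new test against KATA_HYPERVISOR=${KATA_HYPERVISOR:-qemu}"
	# ... the actual test logic goes here ...
}

case "${1:-}" in
	install-dependencies) install_dependencies ;;
	install-kata) install_kata ;;
	run) run ;;
	*) echo "Usage: $0 install-dependencies|install-kata|run" >&2; exit 1 ;;
esac
```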
Following those examples, taking the community's advice during review, and even
asking the community directly on Slack are the best ways to get your test
accepted.
## Required tests
In our CI we have two categories of jobs - required and non-required:
- Required jobs must all pass for a PR to be merged normally, and
should cover all the core features of Kata Containers that we want to
ensure don't regress.
- The non-required jobs are for unstable tests, or for features that
are experimental and not fully supported. Ideally we'd like those tests to
also pass on all PRs, but they don't block merging when they fail, as a
failure is not necessarily an indication of the PR code causing regressions.
### Transitioning between required and non-required status
Required jobs that fail block merging of PRs, so we want to ensure that
jobs are stable and maintained before we make them required.
The [Kata Containers CI Dashboard](https://kata-containers.github.io/)
is a useful resource to check when collecting evidence of job stability.
At the time of writing it reports the last ten days of Kata CI nightly test
results for each job. This isn't perfect, as it doesn't currently capture
results on PRs, but it is a good guideline for stability.
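If you prefer the command line over the dashboard, the gh CLI can also list recent runs of the workflow backing a job; the workflow file name below is only an illustration, so replace it with the one that actually backs the job you care about.
```bash
# Show the ten most recent runs of a given workflow (the workflow name is an assumption).
$ gh run list --repo kata-containers/kata-containers --workflow ci-nightly.yaml --limit 10
```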
> [!NOTE]
> Below are general guidelines about jobs being marked as
> required/non-required, but they are subject to change and the Kata
> Architecture Committee may overrule these guidelines at their
> discretion.
#### Initial marking as required
For new jobs, or jobs that haven't been marked as required recently,
the criteria to be initially marked as required is ten days
of passing tests, with no relevant PR failures reported in that time.
Required jobs also need one or more nominated maintainers who are
responsible for the stability of their jobs. Maintainers can be registered
in [`maintainers.yml`](https://github.com/kata-containers/kata-containers.github.io/blob/main/maintainers.yml)
and will then show on the CI Dashboard.
To add transparency to making jobs required/non-required and to keep the
GitHub UI in sync with the [Gatekeeper job](../tools/testing/gatekeeper),
the process to update a job's required state is as follows:
1. Create a PR to update `maintainers.yml`, if new maintainers are being
declared on a CI job.
1. Create a PR which updates
[`required-tests.yaml`](../tools/testing/gatekeeper/required-tests.yaml)
adding the new job and listing the evidence that the job meets the
requirements above. Ensure that all maintainers and
@kata-containers/architecture-committee are notified to give them the
opportunity to review the PR. See
[#11015](https://github.com/kata-containers/kata-containers/pull/11015)
as an example.
1. The maintainers and Architecture Committee get a chance to review the PR.
It can be discussed in an AC meeting to get broader input.
1. Once the PR has been merged, a Kata Containers admin should be notified
to ensure that the GitHub UI is updated to reflect the change in
`required-tests.yaml`.
#### Expectation of required job maintainers
Due to the Kata Containers community having contributors spread around the
world, required jobs being blocked due to infrastructure or test issues can
have a big impact on work. As such, the expectation is that when a problem
with a required job is noticed/reported, the maintainers have one working day
to acknowledge the issue, perform an initial investigation, and then either fix
it or get the job marked as non-required while the investigation and/or fix is
done.
### Re-marking of required status
Once a job has been removed from the required list, it requires two
consecutive successful nightly test runs before being made required
again.
## Running tests
### Running the tests as part of the CI
If you're a maintainer of the project, you'll be able to kick off the tests by
yourself. With the current approach, you just need to add the `ok-to-test`
label and the tests will automatically start. We're moving, though, to using a
`/test` command as part of a GitHub review comment, which will simplify this
process.
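If you have the gh CLI configured and the necessary permissions, adding the label can also be done from a terminal; this is just a convenience, as the GitHub web UI works equally well.
```bash
# Add the ok-to-test label to a PR (replace 8070 with your PR number).
$ gh pr edit 8070 --repo kata-containers/kata-containers --add-label ok-to-test
```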
If you're not a maintainer, please send a message on Slack or wait until one of
the maintainers reviews your PR. Maintainers will then kick off the tests on
your behalf.
In case a test fails and you suspect the failure is due to flakiness in the
test itself, please create an issue for us, and then re-run (or ask the
maintainers to re-run) the tests following these steps:
- Locate which test is failing
- Click on "Details"
- In the top right corner, click on "Re-run jobs"
- Then on "Re-run failed jobs"
- And finally click on the green "Re-run jobs" button
> [!NOTE]
> TODO: We need figures here
### Running the tests locally
In this section, aligning expectations is also very important: you will not be
able to run the tests in exactly the same way they run in the CI, as you most
likely won't have access to an Azure subscription.
However, we're trying our best here to provide you with instructions on how to
run the tests in an environment that's "close enough", which will help you debug
issues you find with the current tests, or even provide a proof-of-concept for
the new test you're trying to add.
The basic steps, which we will cover in detail below, are:
1. Create a VM matching the configuration of the target runner
2. Generate the artifacts you'll need for the test, or download them from a
current failed run
3. Follow the steps provided in the action itself to run the tests.
Although the general overview looks easy, we know that some tricks need to be
shared, and we'll go through the general process of debugging one non-Kubernetes
and one Kubernetes-specific test for educational purposes.
One important thing to note is that "Create a VM" can be done in innumerable
different ways, using the tools of your choice. For the sake of simplicity in
this guide, we'll be using `kcli`, which we strongly recommend if you're an
inexperienced user and happen to be developing on a Linux box.
For both the non-Kubernetes and Kubernetes cases, we'll be using PR #8070 as an
example, which at the time of writing serves the purpose very well, as you can
see that it has both `nerdctl` and Kubernetes tests failing.
## Debugging tests
### Debugging a non-Kubernetes test
As shown above, the `nerdctl` test is failing.
As a developer you can go to the details of the job, and expand the job
that's failing in order to gather more information.
But when that doesn't help, we need to set up our own environment to debug
what's going on.
Taking a look at the `nerdctl` test, which is located here, you can easily see
that it runs on a `garm-ubuntu-2304-smaller` virtual machine.
The important parts to understand are `ubuntu-2304`, which is the OS the test
runs on; and "smaller", which means we're running it on a machine with 2 CPUs
and 8GB of RAM.
With this information, we can go ahead and create a similar VM locally using `kcli`.
```bash
$ sudo kcli create vm -i ubuntu2304 -P disks=[60] -P numcpus=2 -P memory=8192 -P cpumodel=host-passthrough debug-nerdctl-pr8070
```
In order to run the tests, you'll need the "kata-tarball" artifacts, which you
can build yourself using `make kata-tarball` (see below), or simply get from
the PR where the tests failed. To download them, click on the "Summary" button
in the top left corner, and then scroll down until you see the artifacts, as
shown below.
Unfortunately, GitHub doesn't give us a link we can use to download those from
inside the VM, but we can download them on our local box and then `scp` the
tarball to the newly created VM that will be used for debugging.
> [!NOTE]
> Those artifacts are only available (for 15 days) when all jobs are finished.
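For example, assuming the tarball was downloaded (or built with `make kata-tarball`) into your current working directory on the local box, copying it into the VM could look like the sketch below; `kcli scp` is used here for convenience, but plain `scp` against the VM's IP address works just as well.
```bash
# Copy the tarball from the local box into the debug VM's home directory.
$ kcli scp kata-static.tar.zst debug-nerdctl-pr8070:.
```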
Once you have the `kata-static.tar.zst` in your VM, you can log in to the VM with
`kcli ssh debug-nerdctl-pr8070` and then clone your development branch
```bash
$ git clone --branch feat_add-fc-runtime-rs https://github.com/nubificus/kata-containers
```
Add the upstream as a remote, set up your git identity, and rebase your branch atop the upstream main branch
```bash
$ git remote add upstream https://github.com/kata-containers/kata-containers
$ git remote update
$ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
$ git rebase upstream/main
```
Now copy the `kata-static.tar.zst` into your `kata-containers/kata-artifacts` directory
```bash
$ mkdir kata-artifacts
$ cp ../kata-static.tar.zst kata-artifacts/
```
> [!NOTE]
> If you downloaded the .zip from GitHub, you need to uncompress it first to see `kata-static.tar.zst`.
And finally run the tests following what's in the yaml file for the test you're
debugging.
In our case, that's `run-nerdctl-tests-on-garm.yaml`.
When looking at the file you'll notice that some environment variables are set,
such as `KATA_HYPERVISOR`. Be aware that, for this particular example, the
important steps to follow are:
- Install the dependencies
- Install kata
- Run the tests
Let's now run the steps mentioned above, exporting the expected environment variables
```bash
$ export KATA_HYPERVISOR=dragonball
$ bash ./tests/integration/nerdctl/gha-run.sh install-dependencies
$ bash ./tests/integration/nerdctl/gha-run.sh install-kata
$ bash tests/integration/nerdctl/gha-run.sh run
```
With this you should be able to reproduce exactly the same issue found
in the CI, and from here on you can build your own code, use your own binaries,
and have fun debugging and hacking!
### Debugging a Kubernetes test
Steps for debugging the Kubernetes tests are very similar to the ones for
debugging non-Kubernetes tests, with the caveat that what you'll need, this
time, is not the `kata-static.tar.zst` tarball, but rather a payload to be used
with kata-deploy.
In order to generate your own kata-deploy image, you can build your own
`kata-static.tar.zst` and then take advantage of the following script. Be aware
that the image generated and uploaded must be accessible from the VM where you'll
be performing your tests.
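As a rough sketch only (the helper script path, registry, and tag below are assumptions; double-check the script referred to above in your checkout), building and pushing a payload image from your own tarball could look like:
```bash
# Registry, tag, and script path are assumptions used purely for illustration.
$ export REGISTRY=quay.io/<your-user>/kata-deploy-ci
$ ./tools/packaging/kata-deploy/local-build/kata-deploy-build-and-upload-payload.sh \
      "$(pwd)/kata-static.tar.zst" "${REGISTRY}" "debug-pr8070"
```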
In case you want to take advantage of the payload that was already generated
when you faced the CI failure, which is considerably easier, take a look at the
failed job, then click on "Deploy Kata" and expand the "Final kata-deploy.yaml
that is used in the test" section. From there you can see exactly what you'll
have to use when deploying kata-deploy in your local cluster.
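Assuming you saved the contents shown there locally as `kata-deploy.yaml`, one straightforward way to deploy it in your local cluster is:
```bash
# Apply the kata-deploy manifest captured from the failed CI job.
$ kubectl apply -f kata-deploy.yaml
```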
> [!NOTE]
> TODO: WAINER TO FINISH THIS PART BASED ON HIS PR TO RUN A LOCAL CI
## Adding new runners
Any admin of the project is able to add or remove GitHub runners, and those are
the folks you should rely on.
If you need a new runner added, please tag @ac in the Kata Containers Slack,
and someone from that group will be able to help you.
If you're part of that group and you're looking for information on how to help
someone, the process is simple, and must be done in private. Basically, what
you have to do is:
- Go to the kata-containers/kata-containers repo
- Click on the Settings button, located in the top right corner
- On the left panel, under "Code and automation", click on "Actions"
- Click on "Runners"
If you want to add a new self-hosted runner:
- In the top right corner there's a green button called "New self-hosted runner"
If you want to remove a current self-hosted runner:
- For each runner there's a "..." menu; click it and the
"Remove runner" option will show up
## Known limitations
As the GitHub Actions are structured right now, we cannot test the addition of a
GitHub Action that's not triggered by a `pull_request` event as part of the PR itself.
[gh-actions]: https://docs.github.com/en/actions
[monitor-ex01]: https://github.com/kata-containers/kata-containers/commit/a3fb067f1bccde0cbd3fd4d5de12dfb3d8c28b60
[monitor-ex02]: https://github.com/kata-containers/kata-containers/commit/489caf1ad0fae27cfd00ba3c9ed40e3d512fa492


@@ -7,17 +7,16 @@
set -e
cidir=$(dirname "$0")
runtimedir=${cidir}/../src/runtime
genpolicydir=${cidir}/../src/tools/genpolicy
runtimedir=$cidir/../src/runtime
build_working_packages() {
# working packages:
device_api=${runtimedir}/pkg/device/api
device_config=${runtimedir}/pkg/device/config
device_drivers=${runtimedir}/pkg/device/drivers
device_manager=${runtimedir}/pkg/device/manager
rc_pkg_dir=${runtimedir}/pkg/resourcecontrol/
utils_pkg_dir=${runtimedir}/virtcontainers/utils
device_api=$runtimedir/virtcontainers/device/api
device_config=$runtimedir/virtcontainers/device/config
device_drivers=$runtimedir/virtcontainers/device/drivers
device_manager=$runtimedir/virtcontainers/device/manager
rc_pkg_dir=$runtimedir/pkg/resourcecontrol/
utils_pkg_dir=$runtimedir/virtcontainers/utils
# broken packages :( :
#katautils=$runtimedir/pkg/katautils
@@ -25,15 +24,15 @@ build_working_packages() {
#vc=$runtimedir/virtcontainers
pkgs=(
"${device_api}"
"${device_config}"
"${device_drivers}"
"${device_manager}"
"${utils_pkg_dir}"
"${rc_pkg_dir}")
"$device_api"
"$device_config"
"$device_drivers"
"$device_manager"
"$utils_pkg_dir"
"$rc_pkg_dir")
for pkg in "${pkgs[@]}"; do
echo building "${pkg}"
pushd "${pkg}" &>/dev/null
echo building "$pkg"
pushd "$pkg" &>/dev/null
go build
go test
popd &>/dev/null
@@ -41,11 +40,3 @@ build_working_packages() {
}
build_working_packages
build_genpolicy() {
echo "building genpolicy"
pushd "${genpolicydir}" &>/dev/null
make TRIPLE=aarch64-apple-darwin build
}
build_genpolicy


@@ -1,12 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2021 Easystack Inc.
#
# SPDX-License-Identifier: Apache-2.0
set -e
cidir=$(dirname "$0")
source "${cidir}/../tests/common.bash"
run_docs_url_alive_check


@@ -1,184 +0,0 @@
#!/bin/bash
# Copyright (c) 2020 Intel Corporation
# Copyright (c) 2024 IBM Corporation
#
# SPDX-License-Identifier: Apache-2.0
set -o errexit
set -o errtrace
set -o nounset
set -o pipefail
[[ -n "${DEBUG:-}" ]] && set -o xtrace
script_name=${0##*/}
#---------------------------------------------------------------------
die()
{
echo >&2 "$*"
exit 1
}
usage()
{
cat <<EOF
Usage: ${script_name} [OPTIONS] [command] [arguments]
Description: Utility to expand the abilities of the GitHub CLI tool, gh.
Command descriptions:
list-issues-for-pr List issues linked to a PR.
list-labels-for-issue List labels, in json format for an issue
Commands and arguments:
list-issues-for-pr <pr>
list-labels-for-issue <issue>
Options:
-h Show this help statement.
-r <owner/repo> Optional <org/repo> specification. Default: 'kata-containers/kata-containers'
Examples:
- List issues for a Pull Request 123 in kata-containers/kata-containers repo
$ ${script_name} list-issues-for-pr 123
EOF
}
list_issues_for_pr()
{
local pr="${1:-}"
local repo="${2:-kata-containers/kata-containers}"
[[ -z "${pr}" ]] && die "need PR"
local commits
commits=$(gh pr view "${pr}" --repo "${repo}" --json commits --jq .commits[].messageBody)
[[ -z "${commits}" ]] && die "cannot determine commits for PR ${pr}"
# Extract the issue number(s) from the commits.
#
# This needs to be careful to take account of lines like this:
#
# fixes 99
# fixes: 77
# fixes #123.
# Fixes: #1, #234, #5678.
#
# Note the exclusion of lines starting with whitespace which is
# specifically to ignore vendored git log comments, which are whitespace
# indented and in the format:
#
# "<git-commit> <git-commit-msg>"
#
local issues
issues=$(echo "${commits}" |\
grep -v -E "^( | )" |\
grep -i -E "fixes:* *(#*[0-9][0-9]*)" |\
tr ' ' '\n' |\
grep "[0-9][0-9]*" |\
sed 's/[.,\#]//g' |\
sort -nu || true)
[[ -z "${issues}" ]] && die "cannot determine issues for PR ${pr}"
echo "# Issues linked to PR"
echo "#"
echo "# Fields: issue_number"
local issue
echo "${issues}" | while read -r issue
do
printf "%s\n" "${issue}"
done
}
list_labels_for_issue()
{
local issue="${1:-}"
[[ -z "${issue}" ]] && die "need issue number"
local labels
labels=$(gh issue view "${issue}" --repo kata-containers/kata-containers --json labels)
[[ -z "${labels}" ]] && die "cannot determine labels for issue ${issue}"
echo "${labels}"
}
setup()
{
for cmd in gh jq
do
command -v "${cmd}" &>/dev/null || die "need command: ${cmd}"
done
}
handle_args()
{
setup
local opt
while getopts "hr:" opt "$@"
do
case "${opt}" in
h) usage && exit 0 ;;
r) repo="${OPTARG}" ;;
*) echo "use '-h' to get list of supprted aruments" && exit 1 ;;
esac
done
shift $((OPTIND - 1))
local repo="${repo:-kata-containers/kata-containers}"
local cmd="${1:-}"
case "${cmd}" in
list-issues-for-pr) ;;
list-labels-for-issue) ;;
"") usage && exit 0 ;;
*) die "invalid command: '${cmd}'" ;;
esac
# Consume the command name
shift
local issue=""
local pr=""
case "${cmd}" in
list-issues-for-pr)
pr="${1:-}"
list_issues_for_pr "${pr}" "${repo}"
;;
list-labels-for-issue)
issue="${1:-}"
list_labels_for_issue "${issue}"
;;
*) die "impossible situation: cmd: '${cmd}'" ;;
esac
exit 0
}
main()
{
handle_args "$@"
}
main "$@"

ci/go-test.sh Executable file

@@ -0,0 +1,12 @@
#!/usr/bin/env bash
#
# Copyright (c) 2020 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
set -e
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
run_go_test

ci/install_go.sh Executable file

@@ -0,0 +1,22 @@
#!/usr/bin/env bash
#
# Copyright (c) 2019 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
set -e
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
clone_tests_repo
new_goroot=/usr/local/go
pushd "${tests_repo_dir}"
# Force overwrite the current version of golang
[ -z "${GOROOT}" ] || rm -rf "${GOROOT}"
.ci/install_go.sh -p -f -d "$(dirname ${new_goroot})"
[ -z "${GOROOT}" ] || sudo ln -sf "${new_goroot}" "${GOROOT}"
go version
popd


@@ -7,129 +7,103 @@
set -o errexit
script_dir="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
source "${script_dir}/../tests/common.bash"
clone_tests_repo
# Path to the ORAS cache helper for downloading tarballs (sourced when needed)
# Use ORAS_CACHE_HELPER env var (set by build.sh in Docker) or fallback to repo path
oras_cache_helper="${ORAS_CACHE_HELPER:-${script_dir}/../tools/packaging/scripts/download-with-oras-cache.sh}"
source "${tests_repo_dir}/.ci/lib.sh"
# The following variables if set on the environment will change the behavior
# of gperf and libseccomp configure scripts, that may lead this script to
# fail. So let's ensure they are unset here.
unset PREFIX DESTDIR
arch=${ARCH:-$(uname -m)}
arch=$(uname -m)
workdir="$(mktemp -d --tmpdir build-libseccomp.XXXXX)"
# Variables for libseccomp
libseccomp_version="${LIBSECCOMP_VERSION:-""}"
if [[ -z "${libseccomp_version}" ]]; then
libseccomp_version=$(get_from_kata_deps ".externals.libseccomp.version")
fi
libseccomp_url="${LIBSECCOMP_URL:-""}"
if [[ -z "${libseccomp_url}" ]]; then
libseccomp_url=$(get_from_kata_deps ".externals.libseccomp.url")
fi
# Currently, specify the libseccomp version directly without using `versions.yaml`
# because the current Snap workflow is incomplete.
# After solving the issue, replace this code by using the `versions.yaml`.
# libseccomp_version=$(get_version "externals.libseccomp.version")
# libseccomp_url=$(get_version "externals.libseccomp.url")
libseccomp_version="2.5.1"
libseccomp_url="https://github.com/seccomp/libseccomp"
libseccomp_tarball="libseccomp-${libseccomp_version}.tar.gz"
libseccomp_tarball_url="${libseccomp_url}/releases/download/v${libseccomp_version}/${libseccomp_tarball}"
cflags="-O2"
# Variables for gperf
gperf_version="${GPERF_VERSION:-""}"
if [[ -z "${gperf_version}" ]]; then
gperf_version=$(get_from_kata_deps ".externals.gperf.version")
fi
gperf_url="${GPERF_URL:-""}"
if [[ -z "${gperf_url}" ]]; then
gperf_url=$(get_from_kata_deps ".externals.gperf.url")
fi
# Currently, specify the gperf version directly without using `versions.yaml`
# because the current Snap workflow is incomplete.
# After solving the issue, replace this code by using the `versions.yaml`.
# gperf_version=$(get_version "externals.gperf.version")
# gperf_url=$(get_version "externals.gperf.url")
gperf_version="3.1"
gperf_url="https://ftp.gnu.org/gnu/gperf"
gperf_tarball="gperf-${gperf_version}.tar.gz"
gperf_tarball_url="${gperf_url}/${gperf_tarball}"
# Use ORAS cache for gperf downloads (gperf upstream can be unreliable)
USE_ORAS_CACHE="${USE_ORAS_CACHE:-yes}"
# We need to build the libseccomp library from sources to create a static
# library for the musl libc.
# However, ppc64le, riscv64 and s390x have no musl targets in Rust. Hence, we do
# not set cflags for the musl libc.
if [[ "${arch}" != "ppc64le" ]] && [[ "${arch}" != "riscv64" ]] && [[ "${arch}" != "s390x" ]]; then
# Set FORTIFY_SOURCE=1 because the musl-libc does not have some functions about FORTIFY_SOURCE=2
cflags="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2"
# We need to build the libseccomp library from sources to create a static library for the musl libc.
# However, ppc64le and s390x have no musl targets in Rust. Hence, we do not set cflags for the musl libc.
if ([ "${arch}" != "ppc64le" ] && [ "${arch}" != "s390x" ]); then
# Set FORTIFY_SOURCE=1 because the musl-libc does not have some functions about FORTIFY_SOURCE=2
cflags="-U_FORTIFY_SOURCE -D_FORTIFY_SOURCE=1 -O2"
fi
die() {
msg="$*"
echo "[Error] ${msg}" >&2
exit 1
msg="$*"
echo "[Error] ${msg}" >&2
exit 1
}
finish() {
rm -rf "${workdir}"
rm -rf "${workdir}"
}
trap finish EXIT
build_and_install_gperf() {
echo "Build and install gperf version ${gperf_version}"
mkdir -p "${gperf_install_dir}"
# Use ORAS cache if available and enabled
if [[ "${USE_ORAS_CACHE}" == "yes" ]] && [[ -f "${oras_cache_helper}" ]]; then
echo "Using ORAS cache for gperf download"
source "${oras_cache_helper}"
local cached_tarball
cached_tarball=$(download_component gperf "$(pwd)")
if [[ -f "${cached_tarball}" ]]; then
gperf_tarball="${cached_tarball}"
else
echo "ORAS cache download failed, falling back to direct download"
curl -sLO "${gperf_tarball_url}"
fi
else
curl -sLO "${gperf_tarball_url}"
fi
tar -xf "${gperf_tarball}"
pushd "gperf-${gperf_version}"
# Unset $CC for configure, we will always use native for gperf
CC="" ./configure --prefix="${gperf_install_dir}"
make
make install
export PATH=${PATH}:"${gperf_install_dir}"/bin
popd
echo "Gperf installed successfully"
echo "Build and install gperf version ${gperf_version}"
mkdir -p "${gperf_install_dir}"
curl -sLO "${gperf_tarball_url}"
tar -xf "${gperf_tarball}"
pushd "gperf-${gperf_version}"
./configure --prefix="${gperf_install_dir}"
make
make install
export PATH=$PATH:"${gperf_install_dir}"/bin
popd
echo "Gperf installed successfully"
}
build_and_install_libseccomp() {
echo "Build and install libseccomp version ${libseccomp_version}"
mkdir -p "${libseccomp_install_dir}"
curl -sLO "${libseccomp_tarball_url}"
tar -xf "${libseccomp_tarball}"
pushd "libseccomp-${libseccomp_version}"
[[ "${arch}" == $(uname -m) ]] && cc_name="" || cc_name="${arch}-linux-gnu-gcc"
CC=${cc_name} ./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static --host="${arch}"
make
make install
popd
echo "Libseccomp installed successfully"
echo "Build and install libseccomp version ${libseccomp_version}"
mkdir -p "${libseccomp_install_dir}"
curl -sLO "${libseccomp_tarball_url}"
tar -xf "${libseccomp_tarball}"
pushd "libseccomp-${libseccomp_version}"
./configure --prefix="${libseccomp_install_dir}" CFLAGS="${cflags}" --enable-static
make
make install
popd
echo "Libseccomp installed successfully"
}
main() {
local libseccomp_install_dir="${1:-}"
local gperf_install_dir="${2:-}"
local libseccomp_install_dir="${1:-}"
local gperf_install_dir="${2:-}"
if [[ -z "${libseccomp_install_dir}" ]] || [[ -z "${gperf_install_dir}" ]]; then
die "Usage: ${0} <libseccomp-install-dir> <gperf-install-dir>"
fi
if [ -z "${libseccomp_install_dir}" ] || [ -z "${gperf_install_dir}" ]; then
die "Usage: ${0} <libseccomp-install-dir> <gperf-install-dir>"
fi
pushd "${workdir}"
# gperf is required for building the libseccomp.
build_and_install_gperf
build_and_install_libseccomp
popd
pushd "$workdir"
# gperf is required for building the libseccomp.
build_and_install_gperf
build_and_install_libseccomp
popd
}
main "$@"

ci/install_musl.sh Executable file

@@ -0,0 +1,24 @@
#!/usr/bin/env bash
# Copyright (c) 2020 Ant Group
#
# SPDX-License-Identifier: Apache-2.0
#
set -e
install_aarch64_musl() {
local arch=$(uname -m)
if [ "${arch}" == "aarch64" ]; then
local musl_tar="${arch}-linux-musl-native.tgz"
local musl_dir="${arch}-linux-musl-native"
pushd /tmp
if curl -sLO --fail https://musl.cc/${musl_tar}; then
tar -zxf ${musl_tar}
mkdir -p /usr/local/musl/
cp -r ${musl_dir}/* /usr/local/musl/
fi
popd
fi
}
install_aarch64_musl

ci/install_rust.sh Executable file

@@ -0,0 +1,16 @@
#!/usr/bin/env bash
# Copyright (c) 2019 Ant Financial
#
# SPDX-License-Identifier: Apache-2.0
#
set -e
cidir=$(dirname "$0")
source "${cidir}/lib.sh"
clone_tests_repo
pushd ${tests_repo_dir}
.ci/install_rust.sh ${1:-}
popd

ci/install_vc.sh Executable file

@@ -0,0 +1,19 @@
#!/usr/bin/env bash
#
# Copyright (c) 2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
set -e
cidir=$(dirname "$0")
vcdir="${cidir}/../src/runtime/virtcontainers/"
source "${cidir}/lib.sh"
export CI_JOB="${CI_JOB:-default}"
clone_tests_repo
if [ "${CI_JOB}" != "PODMAN" ]; then
echo "Install virtcontainers"
make -C "${vcdir}" && chronic sudo make -C "${vcdir}" install
fi


@@ -5,57 +5,28 @@
# SPDX-License-Identifier: Apache-2.0
#
[[ -n "${DEBUG}" ]] && set -o xtrace
# If we fail for any reason a message will be displayed
die() {
msg="$*"
echo "ERROR: ${msg}" >&2
echo "ERROR: $msg" >&2
exit 1
}
function verify_yq_exists() {
local yq_path=$1
local yq_version=$2
local expected="yq (https://github.com/mikefarah/yq/) version ${yq_version}"
if [[ -x "${yq_path}" ]] && [[ "$(${yq_path} --version)"X == "${expected}"X ]]; then
return 0
else
return 1
fi
}
# Install the yq yaml query package from the mikefarah github repo
# Install via binary download, as we may not have golang installed at this point
function install_yq() {
local yq_pkg="github.com/mikefarah/yq"
local yq_version=v4.44.5
local precmd=""
local yq_path=""
local yq_version=3.4.1
INSTALL_IN_GOPATH=${INSTALL_IN_GOPATH:-true}
if [[ "${INSTALL_IN_GOPATH}" == "true" ]]; then
if [ "${INSTALL_IN_GOPATH}" == "true" ];then
GOPATH=${GOPATH:-${HOME}/go}
mkdir -p "${GOPATH}/bin"
yq_path="${GOPATH}/bin/yq"
local yq_path="${GOPATH}/bin/yq"
else
yq_path="/usr/local/bin/yq"
fi
if verify_yq_exists "${yq_path}" "${yq_version}"; then
echo "yq is already installed in correct version"
return
fi
if [[ "${yq_path}" == "/usr/local/bin/yq" ]]; then
# Check if we need sudo to install yq
if [[ ! -w "/usr/local/bin" ]]; then
# Check if we have sudo privileges
if ! sudo -n true 2>/dev/null; then
die "Please provide sudo privileges to install yq"
else
precmd="sudo"
fi
fi
fi
[ -x "${yq_path}" ] && [ "`${yq_path} --version`"X == "yq version ${yq_version}"X ] && return
read -r -a sysInfo <<< "$(uname -sm)"
@@ -72,19 +43,6 @@ function install_yq() {
"aarch64")
goarch=arm64
;;
"arm64")
# If we're on an apple silicon machine, just assign amd64.
# The version of yq we use doesn't have a darwin arm build,
# but Rosetta can come to the rescue here.
if [[ ${goos} == "Darwin" ]]; then
goarch=amd64
else
goarch=arm64
fi
;;
"riscv64")
goarch=riscv64
;;
"ppc64le")
goarch=ppc64le
;;
@@ -106,9 +64,10 @@ function install_yq() {
fi
## NOTE: ${var,,} => gives lowercase value of var
local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos}_${goarch}"
${precmd} curl -o "${yq_path}" -LSsf "${yq_url}" || die "Download ${yq_url} failed"
${precmd} chmod +x "${yq_path}"
local yq_url="https://${yq_pkg}/releases/download/${yq_version}/yq_${goos,,}_${goarch}"
curl -o "${yq_path}" -LSsf "${yq_url}"
[ $? -ne 0 ] && die "Download ${yq_url} failed"
chmod +x "${yq_path}"
if ! command -v "${yq_path}" >/dev/null; then
die "Cannot not get ${yq_path} executable"

ci/lib.sh Normal file

@@ -0,0 +1,46 @@
#
# Copyright (c) 2018 Intel Corporation
#
# SPDX-License-Identifier: Apache-2.0
set -o nounset
export tests_repo="${tests_repo:-github.com/kata-containers/tests}"
export tests_repo_dir="$GOPATH/src/$tests_repo"
export branch="${target_branch:-main}"
# Clones the tests repository and checkout to the branch pointed out by
# the global $branch variable.
# If the clone exists and `CI` is exported then it does nothing. Otherwise
# it will clone the repository or `git pull` the latest code.
#
clone_tests_repo()
{
if [ -d "$tests_repo_dir" ]; then
[ -n "${CI:-}" ] && return
pushd "${tests_repo_dir}"
git checkout "${branch}"
git pull
popd
else
git clone -q "https://${tests_repo}" "$tests_repo_dir"
pushd "${tests_repo_dir}"
git checkout "${branch}"
popd
fi
}
run_static_checks()
{
clone_tests_repo
# Make sure we have the targeting branch
git remote set-branches --add origin "${branch}"
git fetch -a
bash "$tests_repo_dir/.ci/static-checks.sh" "$@"
}
run_go_test()
{
clone_tests_repo
bash "$tests_repo_dir/.ci/go-test.sh"
}


@@ -1,157 +0,0 @@
OpenShift CI
============
This directory contains scripts used by
[the OpenShift CI](https://github.com/openshift/release/tree/master/ci-operator/config/kata-containers/kata-containers)
pipelines to monitor selected functional tests on OpenShift.
There are 2 pipelines, history and logs can be accessed here:
* [main - currently supported OCP](https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ci-kata-containers-kata-containers-main-e2e-tests)
* [next - currently under development OCP](https://prow.ci.openshift.org/job-history/gs/origin-ci-test/logs/periodic-ci-kata-containers-kata-containers-main-next-e2e-tests)
Running openshift-tests on OCP with kata-containers manually
============================================================
To run openshift-tests (or other suites) with kata-containers one can use
the kata-webhook. To deploy everything you can mimic the CI pipeline by:
```bash
#!/bin/bash -e
# Setup your kubectl and check it's accessible by
kubectl get nodes
# Deploy kata (set KATA_DEPLOY_IMAGE to override the default kata-deploy-ci:latest image)
./test.sh
# Deploy the webhook
KATA_RUNTIME=kata-qemu cluster/deploy_webhook.sh
```
This should ensure kata-containers as well as kata-webhook are installed and
working. Before running the openshift-tests it's (currently) recommended to
ignore some security features by:
```bash
#!/bin/bash -e
oc adm policy add-scc-to-group privileged system:authenticated system:serviceaccounts
oc adm policy add-scc-to-group anyuid system:authenticated system:serviceaccounts
oc label --overwrite ns default pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/warn=baseline pod-security.kubernetes.io/audit=baseline
```
Now you should be ready to run the openshift-tests. Our CI only uses a subset
of tests, to get the current ``TEST_SKIPS`` see
[the pipeline config](https://github.com/openshift/release/tree/master/ci-operator/config/kata-containers/kata-containers).
Following steps require the [openshift tests](https://github.com/openshift/origin)
being cloned and built in the current directory:
```bash
#!/bin/bash -e
# Define tests to be skipped (see the pipeline config for the current version)
TEST_SKIPS="\[sig-node\] Security Context should support seccomp runtime/default\|\[sig-node\] Variable Expansion should allow substituting values in a volume subpath\|\[k8s.io\] Probing container should be restarted with a docker exec liveness probe with timeout\|\[sig-node\] Pods Extended Pod Container lifecycle evicted pods should be terminal\|\[sig-node\] PodOSRejection \[NodeConformance\] Kubelet should reject pod when the node OS doesn't match pod's OS\|\[sig-network\].*for evicted pods\|\[sig-network\].*HAProxy router should override the route\|\[sig-network\].*HAProxy router should serve a route\|\[sig-network\].*HAProxy router should serve the correct\|\[sig-network\].*HAProxy router should run\|\[sig-network\].*when FIPS.*the HAProxy router\|\[sig-network\].*bond\|\[sig-network\].*all sysctl on whitelist\|\[sig-network\].*sysctls should not affect\|\[sig-network\] pods should successfully create sandboxes by adding pod to network"
# Get the list of tests to be executed
TESTS="$(./openshift-tests run --dry-run --provider "${TEST_PROVIDER}" "${TEST_SUITE}")"
# Store the list of tests in /tmp/tsts file
echo "${TESTS}" | grep -v "$TEST_SKIPS" > /tmp/tsts
# Remove previously-existing temporarily files as well as previous results
OUT=RESULTS/tmp
rm -Rf /tmp/*test* /tmp/e2e-*
rm -R $OUT
mkdir -p $OUT
# Run the tests ignoring the monitor health checks
./openshift-tests run --provider azure -o "$OUT/job.log" --junit-dir "$OUT" --file /tmp/tsts --max-parallel-tests 5 --cluster-stability Disruptive --run '^\[sig-node\].*|^\[sig-network\]'
```
[!NOTE]
Note we are ignoring the cluster stability checks because our public cloud is
not that stable and running with VMs instead of containers results in minor
stability issues. Some of the old monitor stability tests do not reflect
the ``--cluster-stability`` setting, one should simply ignore these. If you
get a message like ``invariant was violated`` or ``error: failed due to a
MonitorTest failure``, it's usually an indication that only those kind of
tests failed but the real tests passed. See
[wrapped-openshift-tests.sh](https://github.com/openshift/release/blob/master/ci-operator/config/kata-containers/kata-containers/wrapped-openshift-tests.sh)
for details how our pipeline deals with that.
[!TIP]
To compare multiple results locally one can use
[junit2html](https://github.com/inorton/junit2html) tool.
Best-effort kata-containers cleanup
===================================
If you need to cleanup the cluster after testing, you can use the
``cleanup.sh`` script from the current directory. It tries to delete all
resources created by ``test.sh`` as well as ``cluster/deploy_webhook.sh``
ignoring all failures. The primary purpose of this script is to allow
soft-cleanup after deployment to test different versions without
re-provisioning everything.
[!WARNING]
Do not rely on this script in production, return codes are not checked!
Bisecting e2e tests failures
============================
Let's say the OCP pipeline passed running with
``quay.io/kata-containers/kata-deploy-ci:kata-containers-d7afd31fd40e37a675b25c53618904ab57e74ccd-amd64``
but failed running with
``quay.io/kata-containers/kata-deploy-ci:kata-containers-9f512c016e75599a4a921bd84ea47559fe610057-amd64``
and you'd like to know which PR caused the regression. You can either run with
all the 60 tags between or you can utilize the [bisecter](https://github.com/ldoktor/bisecter)
to optimize the number of steps in between.
Before running the bisection you need a reproducer script. Sample one called
``sample-test-reproducer.sh`` is provided in this directory but you might
want to copy and modify it, especially:
* ``OCP_DIR`` - directory where your openshift/release is located (can be exported)
* ``E2E_TEST`` - openshift-test(s) to be executed (can be exported)
* behaviour of SETUP (returning 125 skips the current image tag, returning
>=128 interrupts the execution, everything else reports the tag as failure)
* what should be executed (perhaps running the setup is enough for you or
you might want to be looking for specific failures...)
* use ``timeout`` to interrupt execution in case you know things should be faster
Executing that script with the GOOD commit should pass
``./sample-test-reproducer.sh quay.io/kata-containers/kata-deploy-ci:kata-containers-d7afd31fd40e37a675b25c53618904ab57e74ccd-amd64``
and fail when executed with the BAD commit
``./sample-test-reproducer.sh quay.io/kata-containers/kata-deploy-ci:kata-containers-9f512c016e75599a4a921bd84ea47559fe610057-amd64``.
To get the list of all tags in between those two PRs you can use the
``bisect-range.sh`` script
```bash
./bisect-range.sh d7afd31fd40e37a675b25c53618904ab57e74ccd 9f512c016e75599a4a921bd84ea47559fe610057
```
[!NOTE]
The tagged images are only built per PR, not for individual commits. See
[kata-deploy-ci](https://quay.io/kata-containers/kata-deploy-ci) to see the
available images.
To find out which PR caused this regression, you can either manually try the
individual commits or you can simply execute:
```bash
bisecter start "$(./bisect-range.sh d7afd31fd40 9f512c016)"
OCP_DIR=/path/to/openshift/release bisecter run ./sample-test-reproducer.sh
```
[!NOTE]
If you use ``KATA_WITH_SYSTEM_QEMU=yes`` you might want to deploy once with
it and skip it for the cleanup. That way you might (in most cases) test
all images with a single MCP update instead of per-image MCP update.
[!TIP]
You can check the bisection progress during/after execution by running
``bisecter log`` from the current directory. Before starting a new
bisection you need to execute ``bisecter reset``.
Peer pods
=========
It's possible to run similar testing on peer-pods using cloud-api-adaptor.
Our CI configuration to run inside azure's OCP is in ``peer-pods-azure.sh``
and can be used to replace the `test.sh` step in snippets above.


@@ -1,30 +0,0 @@
#!/bin/bash
# Copyright (c) 2024 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
if [[ "$#" -gt 2 ]] || [[ "$#" -lt 1 ]] ; then
echo "Usage: $0 GOOD [BAD]"
echo "Prints list of available kata-deploy-ci tags between GOOD and BAD commits (by default BAD is the latest available tag)"
exit 255
fi
GOOD="$1"
[[ -n "$2" ]] && BAD="$2"
ARCH=amd64
REPO="quay.io/kata-containers/kata-deploy-ci"
TAGS=$(skopeo list-tags "docker://${REPO}")
# For testing
#echo "$TAGS" > tags
#TAGS=$(cat tags)
# Only amd64
TAGS=$(echo "${TAGS}" | jq '.Tags' | jq "map(select(endswith(\"${ARCH}\")))" | jq -r '.[]')
# Sort by git
SORTED=""
[[ -n "${BAD}" ]] && LOG_ARGS="${GOOD}~1..${BAD}" || LOG_ARGS="${GOOD}~1.."
for TAG in $(git log --merges --pretty=format:%H --reverse "${LOG_ARGS}"); do
[[ "${TAGS}" =~ ${TAG} ]] && SORTED+="
kata-containers-${TAG}-${ARCH}"
done
# Comma separated tags with repo
echo "${SORTED}" | tail -n +2 | sed -e "s@^@${REPO}:@" | paste -s -d, -


@@ -1,57 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2024 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# This script tries to remove most of the resources added by the `test.sh` script
# from the cluster.
scripts_dir=$(dirname "$0")
deployments_dir=${scripts_dir}/cluster/deployments
# shellcheck disable=SC1091 # import based on variable
source "${scripts_dir}/lib.sh"
# Set your katacontainers repo dir location
[[ -z "${katacontainers_repo_dir}" ]] && echo "Please set katacontainers_repo_dir variable to your kata repo"
# Set to 'yes' if you want to configure SELinux to permissive on the cluster
# workers.
#
SELINUX_PERMISSIVE=${SELINUX_PERMISSIVE:-no}
# Enable workaround for OCP 4.13 https://github.com/kata-containers/kata-containers/pull/9206
#
WORKAROUND_9206_CRIO=${WORKAROUND_9206_CRIO:-no}
# Ignore errors as we want best-effort-approach here
trap - ERR
# Delete webhook resources
oc delete -f "${scripts_dir}/../../tools/testing/kata-webhook/deploy"
oc delete -f "${scripts_dir}/cluster/deployments/configmap_kata-webhook.yaml.in"
# Delete potential smoke-test resources
oc delete -f "${scripts_dir}/smoke/service.yaml"
oc delete -f "${scripts_dir}/smoke/service_kubernetes.yaml"
oc delete -f "${scripts_dir}/smoke/http-server.yaml"
# Delete test.sh resources
oc delete -f "${deployments_dir}/relabel_selinux.yaml"
if [[ "${WORKAROUND_9206_CRIO}" == "yes" ]]; then
oc delete -f "${deployments_dir}/workaround-9206-crio-ds.yaml"
oc delete -f "${deployments_dir}/workaround-9206-crio.yaml"
fi
[[ ${SELINUX_PERMISSIVE} == "yes" ]] && oc delete -f "${deployments_dir}/machineconfig_selinux.yaml.in"
# Delete kata-containers
helm uninstall kata-deploy --wait --namespace kube-system
oc -n kube-system wait --timeout=10m --for=delete -l name=kata-deploy pod
echo "Wait for all related pods to be gone"
( repeats=1; for _ in $(seq 1 600); do
oc get pods -l name="kubelet-kata-cleanup" --no-headers=true -n kube-system 2>&1 | grep "No resources found" -q && ((repeats++)) || repeats=1
[[ "${repeats}" -gt 5 ]] && echo kata-cleanup finished && break
sleep 1
done) || { echo "There are still some kata-cleanup related pods after 600 iterations"; oc get all -n kube-system; exit 1; }
oc delete -f runtimeclasses/kata-runtimeClasses.yaml


@@ -1,6 +0,0 @@
# Copyright (c) 2020 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
SELINUX=permissive
SELINUXTYPE=targeted


@@ -1,34 +0,0 @@
#!/bin/bash
#
# Copyright (c) 2021 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# This script builds the kata-webhook and deploys it in the test cluster.
#
# You should export the KATA_RUNTIME variable with the runtimeclass name
# configured in your cluster in case it is not the default "kata-ci".
#
set -e
set -o nounset
set -o pipefail
script_dir="$(realpath "$(dirname "$0")")"
webhook_dir="${script_dir}/../../../tools/testing/kata-webhook"
# shellcheck disable=SC1091 # import based on variable
source "${script_dir}/../lib.sh"
KATA_RUNTIME=${KATA_RUNTIME:-kata-ci}
pushd "${webhook_dir}" >/dev/null
# Build and deploy the webhook
#
info "Builds the kata-webhook"
./create-certs.sh
info "Override our KATA_RUNTIME ConfigMap"
sed -i deploy/webhook.yaml -e "s/runtime_class: .*$/runtime_class: ${KATA_RUNTIME}/g"
info "Deploys the kata-webhook"
oc apply -f deploy/
# Check the webhook was deployed and is working.
RUNTIME_CLASS="${KATA_RUNTIME}" ./webhook-check.sh
popd >/dev/null


@@ -1,13 +0,0 @@
# Copyright (c) 2021 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Instruct the daemonset installer to configure Kata Containers to use the
# host kernel.
#
apiVersion: v1
kind: ConfigMap
metadata:
name: ci.kata.installer.kernel
data:
host_kernel: "yes"


@@ -1,14 +0,0 @@
# Copyright (c) 2021 Red Hat, Inc.
#
# SPDX-License-Identifier: Apache-2.0
#
# Instruct the daemonset installer to configure Kata Containers to use the
# system QEMU.
#
apiVersion: v1
kind: ConfigMap
metadata:
name: ci.kata.installer.qemu
data:
qemu_path: /usr/libexec/qemu-kvm
host_kernel: "yes"


@@ -1,9 +0,0 @@
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
labels:
machineconfiguration.openshift.io/role: worker
name: 50-enable-sandboxed-containers-extension
spec:
extensions:
- sandboxed-containers

Some files were not shown because too many files have changed in this diff.