As the CCv0 effort is not using the runtime-rs, let's add a mechanism to
avoid building it.
The easiest way to do so is to simply *not* build the runtime-rs when
the RUST_VERSION is not provided, and then to omit the RUST_VERSION
from the cc-shim-v2-tarball target.
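Roughly, the guard looks like this (a minimal sketch, assuming a
shell-based build script; the exact wiring in the repo differs):
```
# Skip the rust runtime entirely when RUST_VERSION is not set.
if [ -n "${RUST_VERSION:-}" ]; then
    make -C src/runtime-rs
else
    echo "RUST_VERSION not set, skipping the runtime-rs build"
fi
```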
Fixes: #5462
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
There was a typo in the registry name: it should be
quay.io/confidential-containers/runtime-payload-ci instead of
quay.io/repository/confidential-containers/runtime-payload-ci.
Fixes: #5469
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Unlike every other bit that's part of our repo, QEMU has been using a
single Dockerfile that prepares an environment where the project can
be built, but *also* builds the project as part of that very same
Dockerfile.
This is a problem for several reasons, including:
* It's very hard to have a reproducible build if you don't have an
  archived image of the builder
* One cannot cache / upload the builder image, as it already contains
  a specific version of QEMU
* On every single CI run we end up rebuilding the builder image, which
  includes building dependencies (such as liburing)
Let's split the logic into a new build script, and pass that script to
be executed inside the builder image, which will only be responsible
for providing an environment where QEMU can be built.
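The flow then becomes something like the following (a sketch with
illustrative image and script names):
```
# The Dockerfile now only prepares the build environment, so the
# resulting image is generic and cacheable.
docker build -t qemu-builder .
# The actual QEMU build runs as a script inside that image.
docker run --rm -v "$(pwd):/root/kata" -w /root/kata \
    qemu-builder ./build-qemu.sh
```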
Fixes: #5464
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's create a GitHub action to automate the Kata Containers payload
generation for the Confidential Containers project.
This GitHub action builds the artefacts (in parallel), merges them into
a single tarball, generates the payload with the resulting tarball, and
uploads the payload to the Confidential Containers quay.io registry.
It expects the tags used to be in the `CC-x.y.z` format, where x, y,
and z are numbers.
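A hypothetical check for that convention (the workflow's actual
validation may differ):
```
# GITHUB_REF_NAME holds the tag name on tag-triggered runs.
tag="${GITHUB_REF_NAME:-}"
if [[ "${tag}" =~ ^CC-[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
    echo "valid payload tag: ${tag}"
else
    echo "unexpected tag format: ${tag}" >&2
    exit 1
fi
```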
Fixes: #5330
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's have a GitHub action to publish the Kata Containers payload, after
every push to the CCv0 branch, to the Confidential Containers
`runtime-payload-ci` registry.
The intention of this action is to allow developers to test new
features, and easily bisect breakages that could've happened during the
development process. Ideally we'd have a CI/CD pipeline where every
single change would be tested with the operator, but we're not yet
there. In any case, this work would still be needed. :-)
It's very important to mention that whether this should be merged back
to `main` needs careful consideration, as the flow of PRs there is way
higher than what we currently have on the CCv0 branch.
Fixes: #5460
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's modify the script to allow passing an extra tag, which will be
used as part of the Kata Containers payload for the Confidential
Containers CI GitHub action.
With this we can pass a `latest` tag, which will make things easier for
the integration on the operator side.
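A minimal sketch of the change (argument positions and variable names
are assumptions, not the script's exact interface):
```
registry="${1:?registry required}"
tag="${2:?tag required}"
extra_tag="${3:-}"

docker push "${registry}:${tag}"
if [ -n "${extra_tag}" ]; then
    # Push the same payload under the extra tag, e.g. "latest".
    docker tag "${registry}:${tag}" "${registry}:${extra_tag}"
    docker push "${registry}:${extra_tag}"
fi
```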
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's make the registry an optional argument to be passed to the
`kata-deploy-build-and-upload-payload.sh` script, defaulting to the
official Confidential Containers payload registry.
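In shell terms, something like this (a sketch; the exact default value
is an assumption):
```
# Fall back to the official payload registry when none is given.
registry="${1:-quay.io/confidential-containers/runtime-payload}"
```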
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
before setting a limit; otherwise paths may not be found.
A guest supporting a different hugepage size is more likely with
peer-pods, where the podvm may use a different flavor.
Fixes: #5191
Signed-off-by: Snir Sheriber <ssheribe@redhat.com>
In the runtime-rs makefile, we use specially formatted help comments
to let `make help` print out help information for variables and
targets, but later commits forgot this rule.
So we need to follow the previous rule and adjust the current comments.
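For illustration, this is how such a help rule typically extracts the
annotations (a hypothetical sketch; the real marker format in the
runtime-rs makefile may differ):
```
# Print every "##"-prefixed help comment, stripping the marker.
grep -E '^##' Makefile | sed -e 's/^##[A-Z]*[[:space:]]*//'
```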
Fixes: #5413
Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
Instead of bailing out whenever the docker group doesn't exist, just
assume podman is being used, and set the docker_gid to the user's gid.
Also, let's make sure we pass `--privileged` to the container, so
`/run/podman/podman.sock` (which is what `/var/run/docker.sock` points
to) can be passed to the container.
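The fallback looks roughly like this (a sketch with illustrative
names):
```
docker_gid=$(getent group docker | cut -d: -f3 || true)
if [ -z "${docker_gid}" ]; then
    # No docker group: assume podman and use the user's gid instead.
    docker_gid=$(id -g)
fi
docker run --rm --privileged --group-add "${docker_gid}" \
    -v /var/run/docker.sock:/var/run/docker.sock \
    builder-image  # image name is illustrative
```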
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
We need to pass the RUST_VERSION, in the same way as is done for
GO_VERSION, as nowadays both the go and the rust runtimes are built.
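That is, something along these lines (the surrounding invocation is
illustrative):
```
docker run --rm \
    --env GO_VERSION="${GO_VERSION}" \
    --env RUST_VERSION="${RUST_VERSION}" \
    builder-image ./build.sh  # image and script names are illustrative
```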
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
If the needed libraries (for virtfs) are installed on the host, QEMU
will pick them up and enable virtfs. If they are not installed and you
do not enable the flag, QEMU will just ignore it, and you end up
without 9p support. Enabling it explicitly will fail if the needed
libs are not installed, so this way we can be sure that it gets built.
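In other words (a sketch; `--enable-virtfs` is QEMU's configure flag
for this feature):
```
# Fail at configure time if the virtfs/9p dependencies are missing,
# instead of silently producing a QEMU without 9p support.
./configure --enable-virtfs
make -j"$(nproc)"
```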
Fixes: #5418
Signed-off-by: Zvonko Kaiser <zkaiser@nvidia.com>
s390x apparently does not support `rustls`, which is required by the
network check (due to the `reqwest` crate dependency).
Disable the network check on s390x until we can find a solution to the
problem.
> **Note:**
>
> This fix is assumed to be a temporary one until we find a solution.
> Hence, I have not moved the network check code (which should be entirely
> generic) into an architecture-specific module.
Fixes: #5435.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Rework the architecture-specific `check()` call by moving all the
conditional logic out of the function.
Fixes: #5402.
Signed-off-by: James O. D. Hunt <james.o.hunt@intel.com>
Let's build virtiofsd using the kata-deploy build scripts, which
simplifies and unifies the way we build our components.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 0bc5baafb9)
Let's have the docker installation / configuration as part of its own
task, which can be set as a dependency of other tasks which may or may
not depend on docker.
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit cb4ef4734f)
When moving to building the CI artefacts using the kata-deploy scripts,
we've noticed that the build would fail on any machine where the tarball
wasn't officially provided.
This happens because rust is missing from the 1st layer container.
However, it's a very common practice to leave the 1st layer container
with the minimum possible dependencies and install whatever is needed
for building a specific component in a 2nd layer container, which
virtiofsd never had.
In this commit we introduce the second layer containers (yes,
containers, plural), one for building virtiofsd using musl, and one
for building virtiofsd using glibc. The reason for taking this
approach was to actually simplify the scripts and avoid building the
dependencies (libseccomp, libcap-ng) using musl libc.
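A minimal sketch of the resulting flow (Dockerfile and script names
are illustrative, not the repo's exact layout):
```
for libc in musl glibc; do
    docker build -t "virtiofsd-builder-${libc}" -f "Dockerfile.${libc}" .
    docker run --rm -v "$(pwd):/src" -w /src \
        "virtiofsd-builder-${libc}" ./build-virtiofsd.sh
done
```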
Fixes: #5425
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 7e5941c578)
The previously used repo will be removed by Intel, as was done with
the one used for the TDX kernel. The TDX team has already provided the
patches that were hosted atop QEMU commit
4c127fdbe81d66e7cafed90908d0fd1f6f2a6cd0 as a tarball in the
https://github.com/intel/tdx-tools repo, see
https://github.com/intel/tdx-tools/pull/162.
On the Kata Containers side, in order to simplify the process and to
avoid adding hundreds of patches to our repo, we've revived the
https://github.com/kata-containers/qemu repo, and created a branch and
a tag with those hundreds of patches atop QEMU commit
4c127fdbe81d66e7cafed90908d0fd1f6f2a6cd0. The branch is called
4c127fdbe81d66e7cafed90908d0fd1f6f2a6cd0-plus-TDX-v3.1 and the tag is
called TDX-v3.1.
With all that background, let's switch the repo we're getting the TDX
QEMU from.
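In practice the fetch boils down to the following (a sketch; the repo
and tag are the ones described above):
```
git clone --depth 1 --branch TDX-v3.1 \
    https://github.com/kata-containers/qemu.git
```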
Fixes: #5419
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 35d52d30fd)
The previously used repo has been removed by Intel. As this happened,
the TDX team provided the patches that were hosted atop the v5.15
kernel as a tarball in the https://github.com/intel/tdx-tools repo,
see https://github.com/intel/tdx-tools/pull/161.
On the Kata Containers side, in order to simplify the process and to
avoid adding ~1400 kernel patches to our repo, we've revived the
https://github.com/kata-containers/linux repo, and created a branch
and a tag with those ~1400 patches atop v5.15. The branch is called
v5.15-plus-TDX, and the tag is called 5.15-plus-TDX (in order to avoid
having to change how the kernel builder script deals with versioning).
With all that background, let's switch the repo we're getting the TDX
kernel from.
Fixes: #5326
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
(cherry picked from commit 9eb73d543a)