The agent configuration file, which is part of the docs, is used by the
Confidential Containers CIs and, right now, cannot be used behind a
firewall, which is exactly how the TDX CIs are running, as https_proxy
is not set there.
Fixes: #5020
Depends-on: github.com/kata-containers/tests#5080
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
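For context, Go tooling (and anything using Go's default HTTP transport) picks up the proxy from the environment, which is why exporting https_proxy in the CI environment is enough; a minimal sketch:

```go
package main

import (
	"fmt"
	"net/http"
)

// Go's default transport resolves the proxy via the environment
// (https_proxy / HTTPS_PROXY), so exporting the variable is enough
// for such tooling to reach out from behind a firewall.
func main() {
	req, _ := http.NewRequest("GET", "https://example.com", nil)
	proxyURL, err := http.ProxyFromEnvironment(req)
	if err != nil {
		fmt.Println("proxy lookup failed:", err)
		return
	}
	fmt.Println("proxy in use:", proxyURL) // <nil> when no proxy is set
}
```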
The guest log shows a hang when systemd starts getty.
Adding a symlink for /dev/ttyS0 resolves the issue.
Fixes: #4932
Signed-off-by: Ryan Savino <ryan.savino@amd.com>
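For illustration only (the real fix lives in the guest image, and the link target here is an assumption), the kind of symlink meant above looks like:

```go
package main

import (
	"fmt"
	"os"
)

// Point /dev/ttyS0 at the actual console device so that systemd's
// serial-getty@ttyS0 unit finds something to attach to instead of
// hanging. "/dev/console" as the target is an assumption.
func main() {
	if err := os.Symlink("/dev/console", "/dev/ttyS0"); err != nil && !os.IsExist(err) {
		fmt.Fprintln(os.Stderr, "symlink failed:", err)
		os.Exit(1)
	}
}
```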
Some error paths lack log information. Adding more details about the
storage to the logs when an error occurs will help us understand
what happened.
Fixes: #4962
Signed-off-by: Bin Liu <bin@hyper.sh>
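A sketch of the kind of contextual error logging meant here, using logrus as the Go runtime does (the field names are illustrative, not Kata's):

```go
package main

import (
	"errors"

	"github.com/sirupsen/logrus"
)

// handleStorageError attaches the storage details to the log entry
// instead of emitting a bare error, so failures are easier to debug.
func handleStorageError(driver, mountPoint string, err error) {
	logrus.WithFields(logrus.Fields{
		"storage-driver": driver,
		"mount-point":    mountPoint,
	}).WithError(err).Error("failed to handle storage")
}

func main() {
	handleStorageError("virtio-fs", "/run/kata/shared", errors.New("mount failed"))
}
```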
As the previous commit added a new runtime class to be used with TDX,
let's make sure this gets shipped and configured as part of the
kata-deploy-cc script, which is used by the Confidential Containers
Operator.
This commit also cleans up all the extra artefacts that will be
installed in order to run the CLH TDX workloads.
Fixes: #4833
Depends-on: github.com/kata-containers/tests#5070
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Let's add a new configuration file for using a TDX-capable Cloud
Hypervisor (and all the needed artefacts).
This PR extends the Makefile to provide build-time variables needed
for the proper configuration of the VMM, such as:
* Specific kernel parameters to be used with TDX
* Specific kernel features to be enabled when using TDX
* Paths for the artefacts built to be used with TDX:
  * Kernel
  * TD-Shim
The reason we don't hack into the current Cloud Hypervisor configuration
file is that we want to ship both configurations: one for the
non-TEE use case and one for the TDX use case.
It's important to note that the Cloud Hypervisor used upstream is
already built with TDX support.
Fixes: #4831
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
The TDX kernel is based on a kernel version which doesn't have the
CONFIG_SPECULATION_MITIGATIONS option.
Having this in the allow list for missing configs avoids a breakage in
the TDX CI.
Fixes: #4998
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
With the current TDX kernel used with Kata Containers, `tdx_guest` is
not needed, as TDX_GUEST is now a kernel configuration.
With this in mind, let's just drop the kernel parameter.
Fixes: #4981
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
As the TDX guest kernel doesn't currently support a "serial" console,
let's switch to using HVC in this case.
Fixes: #4980
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
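A minimal sketch of the switch, with a hypothetical Param type standing in for the runtime's kernel parameter representation:

```go
package main

import "fmt"

// Param is a hypothetical stand-in for the runtime's kernel
// parameter key/value type.
type Param struct{ Key, Value string }

// consoleParam returns the guest console parameter: TDX guest kernels
// currently lack "serial" console support, so confidential guests get
// HVC instead.
func consoleParam(confidentialGuest bool) Param {
	if confidentialGuest {
		return Param{"console", "hvc0"}
	}
	return Param{"console", "ttyS0"}
}

func main() {
	fmt.Println(consoleParam(true))  // {console hvc0}
	fmt.Println(consoleParam(false)) // {console ttyS0}
}
```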
The runtime will crash when trying to resize memory while memory hotplug
is not allowed.
This happens because we cannot simply set the hotplug amount to zero,
which leads us to not set memory hotplug at all, and to later try to
access the value of a nil pointer.
Fixes: #4979
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
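A sketch of the guard that avoids the panic; names and types are illustrative, not the runtime's:

```go
package main

import (
	"errors"
	"fmt"
)

// resizeMemory refuses the resize when hotplug is disabled instead of
// dereferencing the nil hotplug size, which is what caused the crash.
func resizeMemory(requestMB uint64, hotplugSizeMB *uint64) (uint64, error) {
	if hotplugSizeMB == nil {
		return 0, errors.New("memory hotplug is not allowed")
	}
	if requestMB > *hotplugSizeMB {
		requestMB = *hotplugSizeMB // clamp to what can be hotplugged
	}
	return requestMB, nil
}

func main() {
	if _, err := resizeMemory(2048, nil); err != nil {
		fmt.Println(err) // memory hotplug is not allowed
	}
}
```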
While doing tests using `ctr`, I've noticed that I've been hitting those
timeouts more frequently than expected.
Until we find the root cause of the issue (which is *not* in Kata
Containers), let's increase the timeouts when dealing with a
Confidential Guest.
Fixes: #4978
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
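A sketch of the workaround, with assumed values (the helper and the exact timeouts are not Kata's):

```go
package main

import (
	"fmt"
	"time"
)

// requestTimeout picks a longer timeout for confidential guests until
// the root cause of the slow responses is found.
func requestTimeout(confidentialGuest bool) time.Duration {
	if confidentialGuest {
		return 60 * time.Second // bumped for Confidential Guests
	}
	return 30 * time.Second
}

func main() {
	fmt.Println(requestTimeout(true), requestTimeout(false))
}
```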
When booting the TDX kernel with `tdx_disable_filter`, as has been done
for QEMU, VirtioFS can work without any issues.
Whether this will be part of the upstream kernel or not is a different
story, but it easily could make it there as Cloud Hypervisor relies on
the VIRTIO_F_IOMMU_PLATFORM feature, which forces the guest to use the
DMA API, making these devices compatible with TDX.
See Sebastien Boeuf's explanation of this in the
3c973fa7ce208e7113f69424b7574b83f584885d commit:
"""
By using DMA API, the guest triggers the TDX codepath to share some of
the guest memory, in particular the virtqueues and associated buffers so
that the VMM and vhost-user backends/processes can access this memory.
"""
Fixes: #4977
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Users can specify the kernel modules to be loaded through the agent
configuration, either in the Kata configuration file or via a pod
annotation. The information about those modules will be sent to the
kata-agent when the sandbox is created.
Fixes: #4894
Signed-off-by: Yushuo <y-shuo@linux.alibaba.com>
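A sketch of how such an annotation could be parsed before the module list is forwarded to the agent; the annotation key and the semicolon separator are assumptions for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

// kernelModulesAnnotation is assumed here; check the Kata annotation
// documentation for the exact key.
const kernelModulesAnnotation = "io.katacontainers.config.agent.kernel_modules"

// parseKernelModules turns the annotation value into the module list
// (module name plus optional parameters, one entry per semicolon).
func parseKernelModules(annotations map[string]string) []string {
	var modules []string
	for _, m := range strings.Split(annotations[kernelModulesAnnotation], ";") {
		if m = strings.TrimSpace(m); m != "" {
			modules = append(modules, m)
		}
	}
	return modules
}

func main() {
	ann := map[string]string{
		kernelModulesAnnotation: "i915 enable_guc=3;e1000e",
	}
	fmt.Println(parseKernelModules(ann)) // [i915 enable_guc=3 e1000e]
}
```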
The `force_tdx_guest` kernel parameter was only needed in the early
development stages of the TDX kernel driver. We can safely drop it with
the kernel version we're currently using.
Fixes: #4985
Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
Add a CLI message for the init command to tell the user
not to run this command directly.
Fixes: #4367
Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
Move the delete logic into the `libcontainer` crate to keep the code
clean, as with the other commands.
Fixes: #4975
Signed-off-by: Manabu Sugimoto <Manabu.Sugimoto@sony.com>
Let's switch to depending on podman, which also simplifies the indirect
dependency on kubernetes components. It helps us avoid cri-o
security issues like CVE-2022-1708 as well.
Fixes: #4972
Signed-off-by: Peng Tao <bergwolf@hyper.sh>