18 Commits

Author SHA1 Message Date
Tobin Feldman-Fitzthum
d4668f800c docs: add release notes for v0.11.0
Not a ton of new features since we didn't bump guest-components or
trustee in this release, but we do pick up some really nice changes from
Kata.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-11-25 14:23:18 -05:00
Ariel Adam
5ae29d6da1 Merge pull request #264 from magowan/UpdateRoadmap
Update roadmap.md
2024-11-20 13:50:07 +02:00
James Magowan
92e87a443b Update roadmap.md
Bringing roadmap.md into line with the current roadmap and processes.

Signed-off-by: James Magowan <magowan@uk.ibm.com>
2024-11-19 14:57:35 +00:00
Chris Porter
89933dd404 Add coco threat model diagram
Insert the diagram into the existing trust-model doc.
Add some supporting text around it.
Also add the diagram to the architecture diagrams slide deck.

Signed-off-by: Chris Porter <cporterbox@gmail.com>
2024-11-18 10:32:53 -06:00
Chris Porter
6cf0c51e58 Increase header for out of scope and summary
These two headings do not seem to belong under Required Documentation,
so move them out

Signed-off-by: Chris Porter <porter@ibm.com>
2024-11-18 10:32:53 -06:00
Arvind Kumar
802e66cb5c docs: Updating SNP and SEV and quickstart guides
Updating the SEV and SNP guides to include instructions on launching CoCo with SEV and SNP memory encryption.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-15 14:47:07 -05:00
stevenhorsman
5035fbae1a doc: Add IBM Z to OSC list
Add statement agreed by IBM product manager

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2024-11-14 09:16:45 +00:00
Ariel Adam
9748f9dd5e Update ADOPTERS.md
Updating the adopters list with RH

Signed-off-by: Ariel Adam <aadam@redhat.com>
2024-11-13 13:37:34 -05:00
Pradipta Banerjee
31c7ab6a9d Merge pull request #259 from nyrahul/main
adding kubearmor/5gsec as adopter
2024-11-11 14:56:15 +05:30
Rahul Jadhav
930165f19e adding kubearmor/5gsec as adopter
Signed-off-by: Rahul Jadhav <nyrahul@gmail.com>
2024-11-11 13:55:59 +05:30
Arvind Kumar
19fb57f3ed Docs: update quickstart
Reorganizing the quickstart guide and adding a new guide page with CoCo-dev instructions for testing CoCo without memory encryption or attestation.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-07 09:27:49 -05:00
Mikko Ylinen
4a357bdd5a MAINTAINERS: update to match with governance.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Mikko Ylinen
1af7e78194 MAINTAINERS: update Intel rep
Same as in commit a57f058b.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Xynnn007
bb7ef72658 ADOPTERS: add alibaba cloud case
Signed-off-by: Xynnn007 <xynnn@linux.alibaba.com>
2024-10-24 12:39:29 -04:00
Mikko Ylinen
4679bfb055 README: add a link to our project board
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Mikko Ylinen
9be365e507 drop orphan images that are leftovers from CONTRIBUTING.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Tobin Feldman-Fitzthum
54a9abf965 governance: add process for removing maintainers
Add a formal process for cleaning up our maintainer teams.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
Tobin Feldman-Fitzthum
f515a232cb governance: mention GitHub teams
No changes to policies.

Update the wording to clarify that we manage maintainers
with GitHub teams rather than by putting everyone in the
codeowners file.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
20 changed files with 651 additions and 737 deletions

.lycheeignore Normal file

@@ -0,0 +1 @@
https://sigs.centos.org/virt/tdx/


@@ -9,9 +9,12 @@ See list of adopter types at the bottom of this page.
| Organization/Company | Project/Product | Usage level | Adopter type | Details |
|-------------------------------------------------------------------|---------------------------------------------------------------|--------------------------|----------------------------------|---------------------------------------------------------------------------|
|[Alibaba Cloud (Aliyun)](https://www.alibabacloud.com/)| [Elastic Algorithm Service](https://www.alibabacloud.com/help/en/pai/user-guide/eas-model-serving/?spm=a2c63.p38356.0.0.2b2b6679Pjozxy) and [Elastic GPU Service](https://www.alibabacloud.com/help/en/egs/) | Beta | Service Provider | Both services use sub-projects of Confidential Containers to protect the user data and AI model from being exposed to the CSP (for details, contact mading.ma@alibaba-inc.com) |
| [Edgeless Systems](https://www.edgeless.systems/) | [Contrast](https://github.com/edgelesssys/contrast) | Beta | Service Provider / Consultancy | Contrast runs confidential container deployments on Kubernetes at scale. |
| [IBM](https://www.ibm.com/z) | [IBM LinuxONE](https://www.ibm.com/linuxone) | Beta | Service Provider | Confidential Containers with Red Hat OpenShift Container Platform and IBM® Secure Execution for Linux (see [details](https://www.ibm.com/blog/confidential-containers-with-red-hat-openshift-container-platform-and-ibm-secure-execution-for-linux/)) |
|NanhuLab|Trusted Big Data Sharing System |Beta |Service Provider |The system uses confidential containers to ensure that data users can utilize the data without being able to view the raw data. (No official website yet. For details: yzc@nanhulab.ac.cn) |
| [KubeArmor](https://www.kubearmor.io/) | Runtime Security | Beta | Another project | An open source project that leverages CoCo as part of its solution, integrates with it for compatibility and interoperability, or is used in the supply chain of another project [(5GSEC)](https://github.com/5GSEC/nimbus/blob/main/examples/clusterscoped/coco-workload-si-sib.yaml). |
| [Red Hat](https://www.redhat.com/en) | [OpenShift confidential containers](https://www.redhat.com/en/blog/learn-about-confidential-containers) | Beta | Service Provider | Confidential Containers are available from [OpenShift sandboxed containers release version 1.7.0](https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.7/) as a tech preview on Azure cloud for both Intel TDX and AMD SEV-SNP. The tech preview also includes support for confidential containers on IBM Z and LinuxONE using Secure Execution for Linux (IBM SEL).|
|TBD| | | | |
|TBD| | | | |


@@ -1,15 +1,18 @@
# CoCo Steering Committee / Maintainers
#
# Github ID, Name, Email Address
ariel-adam, Ariel Adam, aadam@redhat.com
bpradipt, Pradipta Banerjee, prbanerj@redhat.com
dcmiddle, Dan Middleton, dan.middleton@intel.com
fitzthum, Tobin Feldman-Fitzthum, tobin@ibm.com
jiazhang0, Zhang Jia, zhang.jia@linux.alibaba.com
jiangliu, Jiang Liu, gerry@linux.alibaba.com
larrydewey, Larry Dewey, Larry.Dewey@amd.com
magowan, James Magowan, magowan@uk.ibm.com
peterzcst, Peter Zhu, peter.j.zhu@intel.com
sameo, Samuel Ortiz, samuel.e.ortiz@protonmail.com
# Github ID, Name, Affiliation
ariel-adam, Ariel Adam, Redhat
bpradipt, Pradipta Banerjee, Redhat
peterzcst, Peter Zhu, Intel
mythi, Mikko Ylinen, Intel
magowan, James Magowan, IBM
fitzthum, Tobin Feldman-Fitzthum, IBM
jiazhang0, Zhang Jia, Alibaba
jiangliu, Jiang Liu, Alibaba
larrydewey, Larry Dewey, AMD
ryansavino, Ryan Savino, AMD
sameo, Samuel Ortiz, Rivos
zvonkok, Zvonko Kaiser, NVIDIA
vbatts, Vincent Batts, Microsoft
danmihai1, Dan Mihai, Microsoft


@@ -24,7 +24,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
### Get started quickly...
- [Kubernetes Operator for Confidential
Computing](https://github.com/confidential-containers/confidential-containers-operator) : An
Computing](https://github.com/confidential-containers/operator) : An
operator to deploy confidential containers runtime (and required configs) on a Kubernetes cluster
@@ -35,6 +35,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
- [Project Overview](./overview.md)
- [Project Architecture](./architecture.md)
- [Our Roadmap](./roadmap.md)
- [Our Release Content Planning](https://github.com/orgs/confidential-containers/projects/6)
- [Alignment with other Projects](alignment.md)



@@ -40,15 +40,25 @@ Project maintainers are first and foremost active *Contributors* to the project
* Creating and assigning project issues.
* Enforcing the [Code of Conduct](https://github.com/confidential-containers/community/blob/main/CODE_OF_CONDUCT.md).
The list of maintainers for a project is defined by the project `CODEOWNERS` file placed at the top-level of each project's repository.
Project maintainers are managed via GitHub teams. The maintainer team for a project is referenced in the `CODEOWNERS` file
at the top level of each project repository.
### Becoming a project maintainer
Existing maintainers may decide to elevate a *Contributor* to the *Maintainer* role based on the contributor's established trust and the relevance of their contributions.
This decision process is not formally defined and is based on lazy consensus from the existing maintainers.
Any contributor may request to become a project maintainer by opening a pull request (PR) against the `CODEOWNERS` file and adding *all* current maintainers as reviewers of the PR.
Maintainers may also pro-actively promote contributors based on their contributions and leadership track record.
A contributor can propose themselves or someone else as a maintainer by opening an issue in the repository for the project in question.
### Removing project maintainers
Inactive maintainers can be removed by the Steering Committee.
Maintainers are considered inactive if they have made no GitHub contributions relating to the project they maintain
for more than six months.
Before removing a maintainer, the Steering Committee should notify the maintainer of their status.
Not all inactive maintainers must be removed.
This process should mainly be used to remove maintainers that have permanently moved on from the project.
## Steering Committee Member

guides/coco-dev.md Normal file

@@ -0,0 +1,252 @@
# Running a workload
## Creating a sample CoCo workload
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata-qemu-coco-dev` runtime class, which runs CoCo without confidential hardware support.
Initially we will try this with an unencrypted container image.
In this example, we will be using the bitnami/nginx image as described in the following yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata-qemu-coco-dev
```
Setting the `runtimeClassName` is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](../guides) for
more details.
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and check that the following command returns an empty result:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```shell
kubectl apply -f nginx.yaml
```
Output:
```shell
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```shell
kubectl get pods
```
Output:
```shell
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you don't have any bitnami/nginx images on it:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted,
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](../guides/sev.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has one runtime class, `kata-qemu-tdx`.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](../guides/sev.md).
For SNP, use the `kata-qemu-snp` runtime class and follow the [SNP guide](../guides/snp.md).
For `enclave-cc` follow the [enclave-cc guide](../guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` YAML file is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/trustee.git
cd trustee/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
cd ..
# Start KBS cluster
docker compose up -d
```
If further configuration of the KBS cluster is required, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.toml`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification. See [details](https://github.com/confidential-containers/trustee/blob/main/attestation-service/docs/grpc-as.md#quick-start).
While the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/default.rego`: Policy file for evidence verification by the AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
If building on Ubuntu 22.04, make sure to follow the instructions to build skopeo from source; otherwise
there will be errors due to version incompatibility between ocicrypt and skopeo. Make sure the installed
skopeo version is at least 1.16.0. Ubuntu 22.04 builds skopeo with an outdated ocicrypt version, which does
not support the keyprovider protocol we depend on.
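A quick way to check the installed version against that minimum is a `sort -V` comparison. This is a sketch: the `have` value is a stand-in for whatever `skopeo --version` reports on your machine.

```shell
# Sketch: version-compare the installed skopeo against the 1.16.0 minimum.
# "have" is a placeholder; on a real host use: have=$(skopeo --version | awk '{print $3}')
have="1.16.1"
need="1.16.0"
if [ "$(printf '%s\n' "$need" "$have" | sort -V | head -n1)" = "$need" ]; then
  echo "skopeo $have is new enough"
else
  echo "skopeo $have is too old; need >= $need"
fi
```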
Use `skopeo` to encrypt an image on the same node as the KBS cluster (using busybox:latest as an example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. Behind the scenes, the KBS cluster does the following:
- CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- CoCo Keyprovider registers the key and key-id with the KBS.
Then push the image to a registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with registry scheme type like `docker`, replace `[REGISTRY_URL]` with the desired registry URL like `docker.io/encrypt_test/busybox`.
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image.
To install cosign, download and unpack the appropriate package for your machine from the [release page](https://github.com/sigstore/cosign/releases).
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with the cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
Then create an image-pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "[REGISTRY_URL]": [
        {
          "type": "sigstoreSigned",
          "keyPath": "kbs:///default/cosign-key/1"
        }
      ]
    }
  }
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy an Encrypted Image as a CoCo workload on CC HW
Here is a sample YAML file for deploying the encrypted image:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: encrypted-image-test-busybox
  name: encrypted-image-test-busybox
  annotations:
    io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
  containers:
  - image: [REGISTRY_URL]:encrypted
    name: busybox
  dnsPolicy: ClusterFirst
  runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the registry URL of the encrypted image generated in the previous steps, and replace `[RUNTIME_CLASS]` with the Kata runtime class for the CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-coco-dev`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, like `http://123.123.123.123:8080`.
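As an illustration of that edit, the `sed` below appends the parameter to a `kernel_params` line in a scratch file. The KBS address and the scratch file's contents are hypothetical; on a real node you would apply the same edit (with `sudo`) to the configuration file above.

```shell
# Illustration only: append aa_kbc_params to a kernel_params line in a scratch copy.
# On a real node the target would be something like
# /opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
KBS_URI="http://123.123.123.123:8080"
conf=$(mktemp)
echo 'kernel_params = "agent.log=debug"' > "$conf"
sed -i "s|^kernel_params = \"\(.*\)\"|kernel_params = \"\1 agent.aa_kbc_params=cc_kbc::${KBS_URI}\"|" "$conf"
cat "$conf"
# -> kernel_params = "agent.log=debug agent.aa_kbc_params=cc_kbc::http://123.123.123.123:8080"
```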
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```


@@ -1,54 +0,0 @@
# EAA Verdictd Guide
**EAA/Verdictd support has been deprecated in Confidential Containers**
EAA is used to perform attestation at runtime and provide guest with confidential resources such as keys.
It is based on [rats-tls](https://github.com/inclavare-containers/rats-tls).
[Verdictd](https://github.com/inclavare-containers/verdictd) is the Key Broker Service and Attestation Service of EAA.
The EAA KBC is an optional module in the attestation-agent at compile time,
which can be used to communicate with Verdictd.
The communication is established on the encrypted channel provided by rats-tls.
EAA can now be used on Intel TDX and Intel SGX platforms.
## Create encrypted image
Before building an encrypted image, you need to make sure Skopeo and Verdictd (EAA KBS) have been installed:
- [Skopeo](https://github.com/containers/skopeo): the command line utility to perform encryption operations.
- [Verdictd](https://github.com/inclavare-containers/verdictd): EAA Key Broker Service and Attestation Service.
1. Pull the unencrypted image.
Here we use `alpine:latest` as an example:
```sh
${SKOPEO_HOME}/bin/skopeo copy --insecure-policy docker://docker.io/library/alpine:latest oci:busybox
```
2. Follow the [Verdictd README #Generate encrypted container image](https://github.com/inclavare-containers/verdictd#generate-encrypted-container-image) to encrypt the image.
3. Publish the encrypted image to your registry.
## Deploy encrypted image
1. Build rootfs with EAA component:
Specify `AA_KBC=eaa_kbc` parameters when using kata-containers `rootfs.sh` scripts to create rootfs.
2. Launch Verdictd
Verdictd performs remote attestation at runtime and provides the key needed to decrypt the image.
It is actually both the Key Broker Service and the Attestation Service of EAA.
So when deploying the encrypted image, Verdictd needs to be launched:
```sh
verdictd --listen <$ip>:<$port> --mutual
```
> **Note** The communication between Verdictd and EAA KBC is based on rats-tls,
so you need to confirm that [rats-tls](https://github.com/inclavare-containers/rats-tls) has been correctly installed in your running environment.
3. Agent Configuration
Add the configuration `aa_kbc_params = 'eaa_kbc::<$IP>:<$PORT>'` to the agent config file; the IP and PORT should be consistent with Verdictd.


@@ -1,201 +1,109 @@
# SEV-ES Guide
Confidential Containers supports SEV(-ES) with pre-attestation using
[simple-kbs](https://github.com/confidential-containers/simple-kbs).
This guide covers platform-specific setup for SEV(-ES) and walks through
complete flows for attestation and encrypted images.
## Creating a CoCo workload using a pre-existing encrypted image on SEV
### Platform Setup
To enable SEV on the host platform, first ensure that it is supported. Then follow these instructions to enable SEV:
[AMD SEV - Prepare Host OS](https://github.com/AMDESE/AMDSEV#prepare-host-os)
### Install sevctl and Export SEV Certificate Chain
[sevctl](https://github.com/virtee/sevctl) is the SEV command line utility and is needed to export the SEV certificate chain.
Follow these steps to install `sevctl`:
* Debian / Ubuntu:
```
# Rust must be installed to build
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
sudo apt install -y musl-dev musl-tools
# Additional packages are required to build
sudo apt install -y pkg-config libssl-dev asciidoctor
# Clone the repository
git clone https://github.com/virtee/sevctl.git
# Build
(cd sevctl && cargo build)
```
* CentOS / Fedora / RHEL:
```
sudo dnf install sevctl
```
> **Note** Due to [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=2037963) on sevctl for RHEL and Fedora you might need to build the tool from sources to pick the fix up.
If using the SEV kata configuration template file, the SEV certificate chain must be placed in `/opt/sev`. Export the SEV certificate chain using the following commands:
```
sudo mkdir -p /opt/sev
sudo ./sevctl/target/debug/sevctl export --full /opt/sev/cert_chain.cert
```
### Setup and Run the simple-kbs
## Platform Setup
By default, the `kata-qemu-sev` runtime class uses pre-attestation with the
`online-sev-kbc` and [simple-kbs](https://github.com/confidential-containers/simple-kbs) to attest the guest and provision secrets.
`simple-kbs` is a basic prototype key broker which can validate a guest measurement according to a specified policy and conditionally release secrets.
To use encrypted images, signed images, or authenticated registries with SEV, you should setup `simple-kbs`.
If you simply want to run an unencrypted container image, you can disable pre-attestation by adding the following annotation
`io.katacontainers.config.pre_attestation.enabled: "false"` to your pod.
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
If you are using pre-attestation, you will need to add an annotation to your pod configuration which contains the URI of a `simple-kbs` instance.
This annotation should be of the form `io.katacontainers.config.pre_attestation.uri: "<KBS IP>:44444"`.
Port 44444 is the default port per the directions below, but it may be configured to use another port.
The KBS IP must be accessible from inside the guest.
Usually it should be the public IP of the node where `simple-kbs` runs.
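Put together, the pod metadata for a pre-attestation setup might look like the fragment below (the IP address is a placeholder for your `simple-kbs` host):

```yaml
metadata:
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-sev
    io.katacontainers.config.pre_attestation.enabled: "true"
    io.katacontainers.config.pre_attestation.uri: "10.0.0.5:44444"
```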
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will unnecessarily build AMD compatible guest kernel, OVMF, and QEMU components. The additional components can be used with the script utility to test launch and attest a base QEMU SNP guest. However, for the CoCo use case, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
The SEV policy can also be set by adding `io.katacontainers.config.sev.policy: "<SEV POLICY>"` to your pod configuration. The default policies for SEV and SEV-ES are "3" and "7", respectively, where the following bits are enabled:
## Getting Started
| Bit| Name| Description |
| --- | --- | --- |
|0|NODBG| Debugging of the guest is disallowed |
|1|NOKS| Sharing keys with other guests is disallowed |
|2|ES| SEV-ES is required |
This guide covers platform-specific setup for SEV and walks through the complete flows for the different CoCo use cases:
For more information about SEV policy, see chapter 3 of the [Secure Encrypted Virtualization API](https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf) (PDF).
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
- [Pre-Attestation Utilizing Signed and Encrypted Images](#pre-attestation-utilizing-signed-and-encrypted-images)
>Note: the SEV policy is not the same as the policies that drive `simple-kbs`.
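The policy value is simply the bitwise OR of the flag bits in the table above; a minimal sketch of how the defaults come about:

```shell
# Sketch: the SEV policy value is a bitwise OR of the flag bits above.
NODBG=1  # bit 0: guest debugging disallowed
NOKS=2   # bit 1: key sharing with other guests disallowed
ES=4     # bit 2: SEV-ES required
echo "SEV default:    $((NODBG | NOKS))"      # -> 3
echo "SEV-ES default: $((NODBG | NOKS | ES))" # -> 7
```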
## Container Launch With Memory Encryption
The CoCo project has created a sample encrypted container image ([ghcr.io/confidential-containers/test-container:encrypted](https://github.com/orgs/confidential-containers/packages/container/test-container/82546314?tag=encrypted)). This image is encrypted using a key that comes already provisioned inside the `simple-kbs` for ease of testing. No `simple-kbs` policy is required to get things running.
### Launch a Confidential Service
The image encryption key and key for SSH access have been attached to the CoCo sample encrypted container image as docker labels. This image is meant for TEST purposes only as these keys are published publicly. In a production use case, these keys would be generated by the workload administrator and kept secret. For further details, see the section how to [Create an Encrypted Image](#create-an-encrypted-image).
To launch a container with SEV memory encryption, the SEV runtime class (`kata-qemu-sev`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-firmware-measurement).
Here is a sample service yaml specifying the SEV runtime class:
`docker compose` is required to run the `simple-kbs` and its database in docker containers. Installation instructions are available on [Docker's website](https://docs.docker.com/compose/install/linux/).
Clone the repository for specified tag:
```
simple_kbs_tag="0.1.1"
git clone https://github.com/confidential-containers/simple-kbs.git
(cd simple-kbs && git checkout -b "branch_${simple_kbs_tag}" "${simple_kbs_tag}")
```
Run the service with `docker compose`:
```
(cd simple-kbs && sudo docker compose up -d)
```
### Launch the Pod and Verify SEV Encryption
Here is a sample kubernetes service yaml for an encrypted image:
```
```yaml
kind: Service
apiVersion: v1
metadata:
name: encrypted-image-tests
name: "confidential-unencrypted"
spec:
selector:
app: encrypted-image-tests
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: encrypted-image-tests
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: encrypted-image-tests
app: "confidential-unencrypted"
template:
metadata:
labels:
app: encrypted-image-tests
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-sev
spec:
runtimeClassName: kata-qemu-sev
containers:
- name: encrypted-image-tests
image: ghcr.io/fitzthum/encrypted-image-tests:encrypted
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
```
Save this service yaml to a file named `encrypted-image-tests.yaml`. Notice the image URL specified points to the previously described CoCo sample encrypted container image. `kata-qemu-sev` must also be specified as the `runtimeClassName`.
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```
kubectl apply -f encrypted-image-tests.yaml
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for pod errors:
Check for errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors, a CoCo encrypted container with SEV has been successfully launched!
If there are no errors in the Events section, then the container has been successfully created with SEV memory encryption.
### Verify SEV Memory Encryption
### Validate SEV Memory Encryption
The container `dmesg` report can be parsed to verify SEV memory encryption.
The container dmesg log can be parsed to indicate that SEV memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get pod IP:
Get the pod IP:
```
pod_ip=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $6;}')
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Get the CoCo sample encrypted container image SSH access key from docker image label and save it to a file.
Currently the docker client cannot pull encrypted images. We can inspect the unencrypted image instead,
which has the same labels. You could also use `skopeo inspect` to get the labels from the encrypted image.
Download and save the [SSH private key](https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted) and set the permissions.
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SEV memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
```
If SEV is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SEV
```
If SSH access to the container is desired, create a keypair:
```shell
ssh-keygen -t ed25519 -f encrypted-image-tests -P "" -C "" <<< y
```
The above command will save the keypair in a file named `encrypted-image-tests`.
Here is a sample Dockerfile to create a docker image:
```Dockerfile
FROM alpine:3.16
# Update and install openssh-server
# ...
```
Store this `Dockerfile` in the same directory as the `encrypted-image-tests` ssh keys.
Build image:
```shell
docker build -t encrypted-image-tests .
```
Tag and upload this unencrypted docker image to a registry:
```shell
docker tag encrypted-image-tests:latest [REGISTRY_URL]:unencrypted
docker push [REGISTRY_URL]:unencrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL.
The Attestation Agent hosts a grpc service to support encrypting the image. Clone the repository:
```shell
attestation_agent_tag="v0.1.0"
git clone https://github.com/confidential-containers/attestation-agent.git
(cd attestation-agent && git checkout -b "branch_${attestation_agent_tag}" "${attestation_agent_tag}")
Run the offline_fs_kbs:
```shell
(cd attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs \
&& cargo run --release --features offline_fs_kbs -- --keyprovider_sock 127.0.0.1:50001 &)
```
Create the Attestation Agent keyprovider:
```shell
cat > attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf <<EOF
{
    "key-providers": {
        "attestation-agent": {
            "grpc": "127.0.0.1:50001"
        }
    }
}
EOF
```
Set a desired value for the encryption key. It should be a 32-byte, base64-encoded value:
```shell
enc_key="RcHGava52DPvj1uoIk/NVDYlwxi0A6yyIZ8ilhEX3X4="
```
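Rather than reusing the sample value above, a key of the required shape can be generated locally. This is a sketch (not part of the original guide) using `openssl`:

```shell
# Generate a random 32-byte key and base64-encode it, then confirm
# that the decoded length is exactly 32 bytes.
enc_key="$(openssl rand -base64 32)"
echo -n "${enc_key}" | base64 -d | wc -c
```

Any random 32-byte value works; the base64 encoding is the form expected by `keys.json` and the `simple-kbs`.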
Create a Key file:
```shell
cat > keys.json <<EOF
{
"key_id1":"${enc_key}"
}
EOF
```
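Before encrypting, it can be worth checking that `keys.json` parses and actually contains the key id. A quick check with `jq` (an extra step, not from the original guide):

```shell
# Verify keys.json is valid JSON and key_id1 holds a non-empty value;
# jq -e exits non-zero if the expression is false or the file is invalid.
jq -e '.key_id1 | length > 0' keys.json
```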
Run skopeo to encrypt the image created in the previous section:
```shell
sudo OCICRYPT_KEYPROVIDER_CONFIG=$(pwd)/attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf \
skopeo copy --insecure-policy \
docker:[REGISTRY_URL]:unencrypted \
docker-daemon:[REGISTRY_URL]:encrypted \
--encryption-key provider:attestation-agent
```
A remote registry known to support encrypted images, like GitHub Container Registry, should be used.
At this point it is a good idea to verify that the image was really encrypted, as skopeo can silently leave it unencrypted. Use
`skopeo inspect` as shown below to check that the layers' MIME types are **application/vnd.oci.image.layer.v1.tar+gzip+encrypted**:
```shell
skopeo inspect docker-daemon:[REGISTRY_URL]:encrypted
```
Push the encrypted image to the registry:
```shell
docker push [REGISTRY_URL]:encrypted
```
* Debian / Ubuntu:
```shell
sudo apt install mysql-client jq
```
* CentOS / Fedora / RHEL:
```shell
sudo dnf install [ mysql | mariadb | community-mysql ] jq
```
The `mysql-client` package name may differ depending on OS flavor and version.
The `simple-kbs` uses default settings and credentials for the MySQL database. These settings can be changed by the `simple-kbs` administrator and saved into a credential file. For the purposes of this quick start, set them in the environment for use with the MySQL client command line:
```shell
KBS_DB_USER="kbsuser"
KBS_DB_PW="kbspassword"
KBS_DB="simple_kbs"
KBS_DB_TYPE="mysql"
```
Retrieve the host address of the MySQL database container:
```shell
KBS_DB_HOST=$(docker network inspect simple-kbs_default \
| jq -r '.[].Containers[] | select(.Name | test("simple-kbs[_-]db.*")).IPv4Address' \
| sed "s|/.*$||g")
```
Add the key to the `simple-kbs` database without any verification policy:
```shell
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', NULL);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', NULL);
EOF
```
Return to step [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption).
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-guest-firmware-measurement).
## Creating a simple-kbs Policy to Verify the SEV Guest Firmware Measurement
The `simple-kbs` can be configured with a policy that requires the kata shim to provide a matching SEV guest firmware measurement to release the key for decrypting the image. At launch time, the kata shim will collect the SEV guest firmware measurement and forward it in a key request to the `simple-kbs`.
These steps will use the CoCo sample encrypted container image, but the image URL can be replaced with a user created image registry URL.
To create the policy, the value of the SEV guest firmware measurement must be calculated.
`pip` is required to install the `sev-snp-measure` utility.
* Debian / Ubuntu:
```
sudo apt install python3-pip
```
* CentOS / Fedora / RHEL:
```
sudo dnf install python3
```
[sev-snp-measure](https://github.com/IBM/sev-snp-measure) is a utility used to calculate the SEV guest firmware measurement with provided ovmf, initrd, kernel and kernel append input parameters. Install it using the following command:
```
sudo pip install sev-snp-measure
```
The paths to the guest binaries required for measurement are specified in the kata configuration. Set them:
```
ovmf_path="/opt/confidential-containers/share/ovmf/OVMF.fd"
kernel_path="/opt/confidential-containers/share/kata-containers/vmlinuz-sev.container"
initrd_path="/opt/confidential-containers/share/kata-containers/kata-containers-initrd.img"
```
The kernel append line parameters are included in the SEV guest firmware measurement. A placeholder will be initially set, and the actual value will be retrieved later from the qemu command line:
```
append="PLACEHOLDER"
```
Use the `sev-snp-measure` utility to calculate the SEV guest firmware measurement using the binary variables previously set:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
```
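As an optional sanity check (assuming, as with SEV launch digests generally, that the measurement is a SHA-256 digest), the base64-decoded value should be exactly 32 bytes:

```shell
# The SEV launch measurement is expected to be a SHA-256 digest,
# so the decoded value should be 32 bytes long.
echo -n "${measurement}" | base64 -d | wc -c
```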
If the container image is not already present, pull it:
```
encrypted_image_url="ghcr.io/fitzthum/encrypted-image-tests:unencrypted"
docker pull "${encrypted_image_url}"
```
Retrieve the encryption key from docker image label:
```
enc_key=$(docker inspect ${encrypted_image_url} \
| jq -r '.[0].Config.Labels.enc_key')
```
Add the key, keyset and policy with measurement to the `simple-kbs` database:
```
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
Using the same service yaml from the section on [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption), launch the service:
```
kubectl apply -f encrypted-image-tests.yaml
```
Check for pod errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```
The pod will error out on the key retrieval request to the `simple-kbs` because the policy verification failed due to a mismatch in the SEV guest firmware measurement. This is the error message that should display:
```
Policy validation failed: fw digest not valid
```
The `PLACEHOLDER` value that was set for the kernel append line when the SEV guest firmware measurement was calculated does not match what was measured by the kata shim. The kernel append line parameters can be retrieved from the qemu command line using the following scripting commands, as long as kubernetes is still trying to launch the pod:
```
duration=$((SECONDS+30))
append=""
while [ $SECONDS -lt $duration ]; do
qemu_process=$(ps aux | grep qemu | grep append || true)
if [ -n "${qemu_process}" ]; then
append=$(echo ${qemu_process} \
| sed "s|.*-append \(.*$\)|\1|g" \
| sed "s| -.*$||")
break
fi
sleep 1
done
echo "${append}"
```
The above check will only work if the `encrypted-image-tests` guest is the only running qemu process with an `-append` parameter.
Now, recalculate the SEV guest firmware measurement and store the `simple-kbs` policy in the database:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
The pod should now show a successful launch:
```
kubectl describe pod ${pod_name}
```
If the service is hung up, delete the pod and try to launch again:
```
# Delete
kubectl delete -f encrypted-image-tests.yaml
# Verify pod cleaned up
kubectl describe pod ${pod_name}
# Relaunch
kubectl apply -f encrypted-image-tests.yaml
```
Testing the SEV encrypted container launch can be completed by returning to the section on how to [Verify SEV Memory Encryption](#verify-sev-memory-encryption).

`guides/snp.md`:
# SNP Guide
## Platform Setup
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will also build AMD-compatible guest kernel, OVMF, and QEMU components that are not needed here. Those extra components can be used with the script utility to test launching and attesting a base QEMU SNP guest, but for the CoCo use case they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
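Before proceeding, it can help to confirm the host actually exposes SNP. This is a hedged sketch; the CPU flag and module parameter names are assumptions based on recent SNP-enabled host kernels and may differ on other kernels:

```shell
# Check that the CPU advertises SNP and that kvm_amd has SNP enabled.
grep -m1 -o 'sev_snp' /proc/cpuinfo || echo "sev_snp CPU flag not found"
cat /sys/module/kvm_amd/parameters/sev_snp 2>/dev/null || echo "kvm_amd sev_snp parameter not available"
```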
## Getting Started
This guide covers platform-specific setup for SNP and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
## Container Launch With Memory Encryption
### Launch a Confidential Service
To launch a container with SNP memory encryption, the SNP runtime class (`kata-qemu-snp`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
Here is a sample service yaml specifying the SNP runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    app: "confidential-unencrypted"
  ports:
  - port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    matchLabels:
      app: "confidential-unencrypted"
  template:
    metadata:
      labels:
        app: "confidential-unencrypted"
      annotations:
        io.containerd.cri.runtime-handler: kata-qemu-snp
    spec:
      runtimeClassName: kata-qemu-snp
      containers:
      - name: "confidential-unencrypted"
        image: ghcr.io/kata-containers/test-images:unencrypted-nightly
        imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors in the Events section, then the container has been successfully created with SNP memory encryption.
### Validate SNP Memory Encryption
The container dmesg log can be parsed to indicate that SNP memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Download and save the SSH private key and set the permissions.
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SNP memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
```
If SNP is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SNP
```

To run the operator you must have an existing Kubernetes cluster that meets the following requirements:
- Ensure a minimum of 8GB RAM and 4 vCPU for the Kubernetes cluster node
- Only containerd runtime based Kubernetes clusters are supported with the current CoCo release
- The minimum Kubernetes version should be 1.24
- Ensure at least one Kubernetes node in the cluster has the label `node-role.kubernetes.io/worker=` or `node.kubernetes.io/worker=`. This will assign the worker role to a node in your cluster, making it responsible for running your applications and services.
- Ensure SELinux is disabled or not enforced (https://github.com/confidential-containers/operator/issues/115)
For more details on the operator, including the custom resources managed by the operator, refer to the operator [docs](https://github.com/confidential-containers/operator).
Deploy the operator by running the following command where `<RELEASE_VERSION>` needs to be substituted
with the desired [release tag](https://github.com/confidential-containers/operator/tags).
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
```
For example, to deploy the `v0.10.0` release run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```shell
kubectl get pods -n confidential-containers-system --watch
```
Creating a custom resource installs the required CC runtime pieces into the cluster node and creates
the `RuntimeClasses`
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
```
The currently available overlays are `default` and `s390x`.
For example, to deploy the `v0.10.0` release for `x86_64`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0
```
And to deploy the `v0.10.0` release for `s390x`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```shell
kubectl get pods -n confidential-containers-system --watch
```
Please see the [enclave-cc guide](./guides/enclave-cc.md) for more information.
`enclave-cc` is a form of Confidential Containers that uses process-based isolation.
`enclave-cc` can be installed with the following custom resources.
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/sim?ref=<RELEASE_VERSION>
```
or
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>
```
for the **simulated** SGX mode build or **hardware** SGX mode build, respectively.
Check the `RuntimeClasses` that got created.
```shell
kubectl get runtimeclass
```
Output:
```shell
NAME                 HANDLER              AGE
kata                 kata-qemu            8d
kata-clh             kata-clh             8d
kata-qemu            kata-qemu            8d
kata-qemu-coco-dev   kata-qemu-coco-dev   8d
kata-qemu-sev        kata-qemu-sev        8d
kata-qemu-snp        kata-qemu-snp        8d
kata-qemu-tdx        kata-qemu-tdx        8d
```
Details on each of the runtime classes:
- *kata* - Convenience runtime that uses the handler of the default runtime
- *kata-clh* - standard kata runtime using the cloud hypervisor
- *kata-qemu* - same as kata
- *kata-qemu-coco-dev* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-qemu-sev* - using QEMU, and support for AMD SEV HW
- *kata-qemu-snp* - using QEMU, and support for AMD SNP HW
- *kata-qemu-tdx* - using QEMU, and support for Intel TDX HW based on what's provided by [Ubuntu](https://github.com/canonical/tdx) and [CentOS 9 Stream](https://sigs.centos.org/virt/tdx/).
If you are using `enclave-cc` you should see the following runtime classes.
```shell
kubectl get runtimeclass
```
Output:
```shell
NAME         HANDLER      AGE
enclave-cc   enclave-cc   9m55s
```
The CoCo operator environment has been set up and deployed!
### Platform Setup
While the operator deploys all the required binaries and artifacts and sets up runtime classes that use them,
certain platforms may require additional configuration to enable confidential computing. For example, a specific
host kernel or firmware may be required. See the [guides](./guides/) for more information.
# Running a workload
## Using CoCo
Below is a brief summary and description of some of the CoCo use cases and features:
- **Container Launch with Only Memory Encryption (No Attestation)** - Launch a container with memory encryption
- **Container Launch with Encrypted Image** - Launch an encrypted container by proving the workload is running
in a TEE in order to retrieve the decryption key
- **Container Launch with Image Signature Verification** - Launch a container and verify the authenticity and
integrity of an image by proving the workload is running in a TEE
- **Sealed secret** - Implement wrapped kubernetes secrets that are confidential to the workload owner and are
automatically decrypted by proving the workload is running in a TEE
- **Ephemeral Storage** - Temporary storage that is used during the lifecycle of the container but is cleared out
when a pod is restarted or finishes its task. At the moment, only ephemeral storage of the container itself is
supported and it has to be explicitly configured.
- **Authenticated Registries** - Create secure container registries that require authentication to access and manage container
images, ensuring that only trusted images are deployed in the Confidential Container. The host must have access
to the registry credentials.
- **Secure Storage** - Mechanisms and technologies used to protect data at rest, ensuring that sensitive information
remains confidential and tamper-proof.
- **Peer Pods** - Enable the creation of VMs on any environment without requiring bare metal servers or nested
virtualization support. More information about this feature can be found [here](https://github.com/confidential-containers/cloud-api-adaptor/tree/main).
## Creating a sample CoCo workload
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata` runtime class, which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
In our example we will be using the bitnami/nginx image as described in the following yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata
```
Setting the runtimeClassName is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](./guides) for
more details.
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```
kubectl apply -f nginx.yaml
```
Output:
```
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```
kubectl get pods
```
Output:
```
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          3m50s
```
Now go back to the k8s node and ensure that you still don't have any bitnami/nginx images on it:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](./guides/sev.md).
There is also `eaa_kbc`/`verdictd` which is described [here](./guides/eaa_verdictd.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has two runtime classes, `kata-qemu-tdx` and `kata-clh-tdx`. One uses QEMU as VMM and TDVF as firmware. The other uses Cloud Hypervisor as VMM and TD-Shim as firmware.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](./guides/sev.md).
For `enclave-cc` follow the [enclave-cc guide](./guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for the AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` yaml file is provided.
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/kbs.git
cd kbs/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
# Start KBS cluster
docker compose up -d
```
If configuration of the KBS cluster is required, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.json`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification.
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/policy.rego`: Policy file for evidence verification of the AS. Refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
Use `skopeo` to encrypt an image on the same node of the KBS cluster (use busybox:latest for example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
"key-providers": {
"attestation-agent": {
"grpc": "127.0.0.1:50000"
}
}
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. In the background, the KBS cluster does the following:
- CoCo Keyprovider generates a random key and a key-id. Then encrypts the image using the key.
- CoCo Keyprovider registers the key with key-id into KBS.
Then push the image to registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with registry scheme type like `docker`, replace `[REGISTRY_URL]` with the desired registry URL like `docker.io/encrypt_test/busybox`.
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image. Follow the instructions here to install `cosign`:
[cosign installation](https://docs.sigstore.dev/cosign/system_config/installation/)
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
Then create an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"[REGISTRY_URL]": [
{
"type": "sigstoreSigned",
"keyPath": "kbs:///default/cosign-key/1"
}
]
}
}
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
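Before registering the policy, a quick check (an extra step, not from the original guide) that the file parses and points at the KBS cosign key:

```shell
# Print the keyPath of the first docker transport entry; it should match
# the kbs:// URI under which the cosign public key was registered.
jq -r '.transports.docker | to_entries[0].value[0].keyPath' security-policy.json
```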
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy encrypted image as a CoCo workload on CC HW
Here is a sample yaml for encrypted image deploying:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: encrypted-image-test-busybox
name: encrypted-image-test-busybox
annotations:
io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
containers:
- image: [REGISTRY_URL]:encrypted
name: busybox
dnsPolicy: ClusterFirst
runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous step, replace `[RUNTIME_CLASS]` with kata runtime class for CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-tdx`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, like `http://123.123.123.123:8080`.
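One way to make that edit is a `sed` one-liner. This is a sketch assuming the default `kernel_params = "..."` line format in the kata toml; the `qemu-tdx` suffix and the KBS address are example values to adjust for your setup:

```shell
# Append agent.aa_kbc_params to the existing kernel_params line
# (RUNTIME_CLASS_SUFFIX qemu-tdx and the KBS address are examples).
conf="/opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-tdx.toml"
kbs_uri="http://123.123.123.123:8080"
sudo sed -i "s|^kernel_params = \"\(.*\)\"|kernel_params = \"\1 agent.aa_kbc_params=cc_kbc::${kbs_uri}\"|" "${conf}"
```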
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
## Platforms
With some TEEs, the CoCo use cases and/or configurations are implemented differently. Those are described in each corresponding [guide](./guides) section. To get started using CoCo without TEE hardware, follow the CoCo-dev guide below:
- [CoCo-dev](./guides/coco-dev.md)
- [SEV(-ES)](./guides/sev.md)
- [SNP](./guides/snp.md)
- TDX: No additional steps required.
- [SGX](./guides/enclave-cc.md)
- [IBM Secure Execution](./guides/ibm-se.md)
- ...

`releases/v0.11.0.md`:
# Release Notes for v0.11.0
Release Date: November 25th, 2024
This release is based on [3.11.0](https://github.com/kata-containers/kata-containers/releases/tag/3.11.0) of Kata Containers
and [v0.11.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.11.0) of enclave-cc.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Sealed secrets can be exposed as volumes
* TDX guests support measured rootfs with dm-verity
* Policy generation improvements
* Test coverage improved for s390x
* Community reached 100% ("passing") on OpenSSF best practices badge
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but features based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.30.1 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (Kubeadm)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* SEV(-ES) does not support attestation.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
    * Nydus snapshotter sometimes fails to pull an image.
    * Host pulling with Nydus snapshotter is not yet enabled.
    * Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing.
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
    * Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
    * Some commands accessing confidential data, such as `kubectl exec`, may either fail to work or incorrectly expose information to the host.
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* Container metadata such as environment variables is not measured.
* The Kata Agent allows the host to call several dangerous endpoints.
    * The Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
    * Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
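For illustration, the Kata Agent policy is written in Rego and can deny individual agent endpoints by default. A minimal, hypothetical fragment along these lines might be (request names are assumptions; check the policy samples shipped with your Kata release):

```rego
package agent_policy

# Deny exec and stream-read requests from the host by default,
# so `kubectl exec` and log reads cannot reach the confidential workload.
default ExecProcessRequest := false
default ReadStreamRequest := false
```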
## CVE Fixes
None


@@ -1,87 +1,27 @@
# Confidential Containers Roadmap
When looking at the project's roadmap we distinguish between the short-term roadmap (2-4 months) vs.
the mid/long-term roadmap (4-12 months):
- The **short-term roadmap** is focused on achieving an end-to-end, easy to deploy confidential
containers solution using at least one HW encryption solution and integrated to k8s (with forked
versions if needed)
- The **mid/long-term solutions** focuses on maturing the components of the short-term solution
and adding a number of enhancements both to the solution and the project (such as CI,
interoperability with other projects etc.)
# Short-Term Roadmap
The short-term roadmap aims to achieve the following:
- MVP stack for running confidential containers
- Based on and compatible with Kata Containers 2
- Based on at least one confidential computing implementation (SEV, TDX, SE, etc)
- Integration with Kubernetes: kubectl apply -f confidential-pod.yaml
When looking at the project's roadmap we distinguish between the short-term roadmap (2-6 months) vs. the mid/long-term roadmap (6-18 months):
- The [short term roadmap](#short-term-roadmap) is focused on achieving an end-to-end, easy to deploy and stable confidential containers solution. We track this work on a number of github boards.
- The [mid and long term roadmap](#mid-and-long-term-roadmap) focuses on use case driven development.
The work is targeted to be completed by end of November 2021 and includes 3 milestones:
![September 2021](./images/RoadmapSept2021.jpg)
- **September 2021**
- Unencrypted image pulled inside the guest, kept in tmpfs
- Pod/Container runs from pulled image
- Agent API is restricted
- crictl only
# Short Term Roadmap
The short-term roadmap is based on our github boards and delivered through our ongoing releases
![October 2021](./images/RoadmapOct2021.jpg)
- **October 2021**
- Encrypted image pulled inside the guest, kept in tmpfs
- Image is decrypted with a pre-provisioned key (No attestation)
- [Confidential containers github board](https://github.com/orgs/confidential-containers/projects/6/views/22)
- [Trustee github board](https://github.com/orgs/confidential-containers/projects/10/views/1)
![November 2021](./images/RoadmapNov2021.jpg)
- **November 2021**
- Image is optionally stored on an encrypted, ephemeral block device
- Image is decrypted with a key obtained from a key brokering service (KBS)
- Integration with kubelet
# Mid and Long Term Roadmap
For additional details on each milestone see [Confidential Containers v0](https://docs.google.com/presentation/d/1SIqLogbauLf6lG53cIBPMOFadRT23aXuTGC8q-Ernfw/edit#slide=id.p).
In CoCo use case driven development we identify the main functional requirements the community requires by focusing on key use cases.
Tasks are tracked on a weekly basis through a dedicated spreadsheet.
For more information see [Confidential Containers V0 Plan](https://docs.google.com/spreadsheets/d/1M_MijAutym4hMg8KtIye1jIDAUMUWsFCri9nq4dqGvA/edit#gid=0&fvid=1397558749).
This helps the community deliver releases which address the real use cases customers require and focus on the right priorities. The use case driven development approach also includes developing the relevant CI/CD pipelines to ensure the end-to-end use cases the community delivers keep working over time.
We target the following use cases:
# Mid-Term Roadmap
- Confidential Federated Learning
- Multi-party Computing (data clean room, confidential spaces etc)
- Trusted Pipeline (Supply Chain)
- Confidential RAG LLMs
Continue our journey using the knowledge and support of Subject Matter Experts (SMEs) in other
projects to form stronger opinions on what is needed from components which can be integrated to
deliver the confidential containers objectives.
- Harden the code used for the demos
- Improve CI/CD pipeline
- Clarify the release process
- Establish processes and tools to support planning, prioritisation, and work in progress
- Simple process to get up and running regardless of underlying Trusted Execution Environment
technology
- Develop a small, simple, secure, lightweight and high performance OCI container image
management library [image-rs](https://github.com/confidential-containers/image-rs) for
confidential containers.
- Develop small, simple shim firmware ([td-shim](https://github.com/confidential-containers/td-shim))
in support of trusted execution environment for use with cloud native confidential containers.
- Document threat model and trust model, what are we protecting, how are we achieving it.
- Identify technical convergence points with other confidential computing projects both inside
and outside CNCF.
# Longer-Term Roadmap
Focused meetings will be set up to discuss architecture and the priority of longer-term objectives.
Each meeting will have an agreed focus with people sharing material/thoughts ahead of time.
Topics under consideration:
- CI/CD + repositories
- Community structure and expectations
- 2 on Mid-Term Architecture
- Attestation
- Images
- Runtimes
Proposed Topics to influence long-term direction/architecture:
- Baremetal / Peer Pod
- Composability of alternative technologies to deliver confidential containers
- Performance
- Identity / Service Mesh
- Reproducible builds/demos
- Edge Computing
- Reduce footprint of image pull
A dedicated working group leads this effort. For additional details we recommend reviewing the working group's notes: [Confidential containers use cases driven development](https://docs.google.com/document/d/1LnGNeyUyPM61Iv4kBKFbfgmBr3RmxHYZ7Ev88obN0_E/edit?tab=t.0#heading=h.b0rnn2bw76n)


@@ -69,7 +69,25 @@ This means our trust and threat modelling should
- Consider existing Cloud Native technologies and the role they can play for confidential containers.
- Consider additional technologies to fulfil a role in Cloud Native exploitation of TEEs.
### Out of Scope
## Illustration
The following diagram shows which components in a Confidential Containers setup
are part of the TEE (green boxes labeled TEE). The hardware and guest work in
tandem to establish a TEE for the pod, which provides the isolation and
integrity protection for data in use.
![Threat model](./images/coco-threat-model.png)
Not depicted: Process-based isolation from the enclave-cc runtime class. That isolation model further removes the guest operating system from the trust boundary. See the enclave-cc sub-project for more details:
https://github.com/confidential-containers/enclave-cc/
Untrusted components include:
1. The host operating system, including its hypervisor (KVM)
2. Other Cloud Provider host software beyond the host OS and hypervisor
3. Other virtual machines (and their processes) resident on the same host
4. Any other processes on the host machine (including the Kubernetes control plane).
## Out of Scope
The following items are considered out-of-scope for the trust/threat modelling within confidential
containers:
@@ -82,7 +100,7 @@ containers :
and will only highlight them where they become relevant to the trust model or threats we
consider.
### Summary
## Summary
In practice, those deploying workloads into TEE environments may have varying levels of trust
in the personas who have privileges regarding orchestration or hosting the workload. This trust