Docs: update quickstart

Reorganizing the quickstart guide and adding a new guide page for CoCo-dev instructions for testing CoCo without the use of memory encryption or attestation.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
Arvind Kumar 2024-09-04 10:18:31 -05:00 committed by Tobin Feldman-Fitzthum
parent 4a357bdd5a
commit 19fb57f3ed
4 changed files with 308 additions and 313 deletions

1
.lycheeignore Normal file

@ -0,0 +1 @@
https://sigs.centos.org/virt/tdx/

250
guides/coco-dev.md Normal file

@ -0,0 +1,250 @@
# Running a workload
## Creating a sample CoCo workload
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata-qemu-coco-dev` runtime class, which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
In this example, we will be using the bitnami/nginx image as described in the following yaml:
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata-qemu-coco-dev
```
Setting the `runtimeClassName` is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](../guides) for
more details.
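For instance, some runtime classes let you tune the guest VM with Kata annotations in the pod metadata. The snippet below is only an illustration (whether the `default_memory` annotation is enabled depends on your runtime class; check your platform guide for what is actually supported):
```
metadata:
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
    # Illustrative: request a 4096 MiB guest, if this annotation is enabled
    # for the runtime class you are using.
    io.katacontainers.config.hypervisor.default_memory: "4096"
```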
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```
kubectl apply -f nginx.yaml
```
Output:
```
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```
kubectl get pods
```
Output:
```
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you don't have any bitnami/nginx images on it:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted,
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](../guides/sev.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has one runtime class, `kata-qemu-tdx`.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](../guides/sev.md).
For SNP, use the `kata-qemu-snp` runtime class and follow the SNP guide.
For `enclave-cc` follow the [enclave-cc guide](../guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for AS.
- CoCo Keyprovider: Component to encrypt the images following the ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` YAML file is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/trustee.git
cd trustee/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
cd ..
# Start KBS cluster
docker compose up -d
```
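To confirm that the services came up, list them with compose:
```shell
# All services defined in the compose file should show a running/Up state.
docker compose ps
```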
If the KBS cluster needs to be configured, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.toml`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification. See [details](https://github.com/confidential-containers/trustee/blob/main/attestation-service/docs/grpc-as.md#quick-start).
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/default.rego`: Policy file for evidence verification by the AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information (a minimal illustration follows).
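This is only an illustration of the Rego format, not the shipped default policy, and the claim name is hypothetical:
```
package policy

default allow = false

# Hypothetical claim check for illustration; the claims available in `input`
# depend on the platform evidence and the Attestation Service version.
allow {
    input["sample_claim"] == "expected-value"
}
```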
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
If building with Ubuntu 22.04, make sure to follow the instructions to build skopeo from source; otherwise you will hit version-incompatibility errors between ocicrypt and skopeo, since Ubuntu 22.04 builds skopeo with an outdated ocicrypt version that does not support the keyprovider protocol we depend on. Make sure the installed skopeo version is at least 1.16.0.
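You can confirm the installed version with:
```shell
# Should report version 1.16.0 or newer.
skopeo --version
```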
Use `skopeo` to encrypt an image on the same node as the KBS cluster (using busybox:latest as an example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. In the background, the KBS cluster does the following:
- The CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- The CoCo Keyprovider registers the key and its key-id with the KBS.
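As an optional sanity check, you can inspect the local OCI image and look for the `+encrypted` suffix that ocicrypt appends to the media type of encrypted layers:
```shell
# Encrypted layers have a media type such as
# application/vnd.oci.image.layer.v1.tar+gzip+encrypted.
skopeo inspect --raw oci:busybox:encrypted | grep encrypted
```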
Then push the image to a registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with a registry scheme type like `docker`, and `[REGISTRY_URL]` with the desired registry URL, like `docker.io/encrypt_test/busybox`.
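For example, using the registry URL mentioned above:
```shell
# Push the encrypted image to docker.io/encrypt_test/busybox with the tag "encrypted".
skopeo copy oci:busybox:encrypted docker://docker.io/encrypt_test/busybox:encrypted
```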
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image.
To install cosign, download and unpack the appropriate package for your machine from the [release page](https://github.com/sigstore/cosign/releases).
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in the previous steps.
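Continuing the example from the previous steps, this would be:
```shell
# Sign the pushed encrypted image with the cosign private key generated earlier.
cosign sign --key cosign.key docker.io/encrypt_test/busybox:encrypted
```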
Then edit an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "[REGISTRY_URL]": [
        {
          "type": "sigstoreSigned",
          "keyPath": "kbs:///default/cosign-key/1"
        }
      ]
    }
  }
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy an Encrypted Image as a CoCo workload on CC HW
Here is a sample YAML file for deploying an encrypted image:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: encrypted-image-test-busybox
  name: encrypted-image-test-busybox
  annotations:
    io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
  containers:
  - image: [REGISTRY_URL]:encrypted
    name: busybox
  dnsPolicy: ClusterFirst
  runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in the previous steps, and replace `[RUNTIME_CLASS]` with the Kata runtime class for your CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-coco-dev`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, like `http://123.123.123.123:8080`.
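As a sketch of the resulting entry (assuming the `qemu-coco-dev` runtime class and the example KBS address; keep any kernel parameters already present in your configuration):
```toml
# /opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
[hypervisor.qemu]
kernel_params = "agent.aa_kbc_params=cc_kbc::http://123.123.123.123:8080"
```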
Deploy the encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
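As with the earlier nginx example, confirm that the pod reaches the Running state:
```shell
kubectl get pod encrypted-image-test-busybox
```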


@ -1,54 +0,0 @@
# EAA Verdictd Guide
**EAA/Verdictd support has been deprecated in Confidential Containers**
EAA is used to perform attestation at runtime and provide guest with confidential resources such as keys.
It is based on [rats-tls](https://github.com/inclavare-containers/rats-tls).
[Verdictd](https://github.com/inclavare-containers/verdictd) is the Key Broker Service and Attestation Service of EAA.
The EAA KBC is an optional module in the attestation-agent at compile time,
which can be used to communicate with Verdictd.
The communication is established on the encrypted channel provided by rats-tls.
EAA can now be used on Intel TDX and Intel SGX platforms.
## Create encrypted image
Before build encrypted image, you need to make sure Skopeo and Verdictd(EAA KBS) have been installed:
- [Skopeo](https://github.com/containers/skopeo): the command line utility to perform encryption operations.
- [Verdictd](https://github.com/inclavare-containers/verdictd): EAA Key Broker Service and Attestation Service.
1. Pull unencrypted image.
Here use `alpine:latest` for example:
```sh
${SKOPEO_HOME}/bin/skopeo copy --insecure-policy docker://docker.io/library/alpine:latest oci:busybox
```
2. Follow the [Verdictd README #Generate encrypted container image](https://github.com/inclavare-containers/verdictd#generate-encrypted-container-image) to encrypt the image.
3. Publish the encrypted image to your registry.
## Deploy encrypted image
1. Build rootfs with EAA component:
Specify `AA_KBC=eaa_kbc` parameters when using kata-containers `rootfs.sh` scripts to create rootfs.
2. Launch Verdictd
Verdictd performs remote attestation at runtime and provides the key needed to decrypt the image.
It is actually both Key Broker Service and Attestation Service of EAA.
So when deploy the encrypted image, Verdictd is needed to be launched:
```sh
verdictd --listen <$ip>:<$port> --mutual
```
> **Note** The communication between Verdictd and EAA KBC is based on rats-tls,
so you need to confirm that [rats-tls](https://github.com/inclavare-containers/rats-tls) has been correctly installed in your running environment.
3. Agent Configuration
Add configuration `aa_kbc_params= 'eaa_kbc::<$IP>:<$PORT>'` to agent config file, the IP and PORT should be consistent with verdictd.


@ -63,9 +63,9 @@ with the desired [release tag](https://github.com/confidential-containers/operat
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
```
For example, to deploy the `v0.8.0` release run:
For example, to deploy the `v0.10.0` release run:
```
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.8.0
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
@ -87,14 +87,14 @@ kubectl apply -k github.com/confidential-containers/operator/config/samples/ccru
The currently present overlays are: `default` and `s390x`
For example, to deploy the `v0.8.0` release for `x86_64`, run:
For example, to deploy the `v0.10.0` release for `x86_64`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.8.0
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0
```
And to deploy `v0.8.0` release for `s390x`, run:
And to deploy `v0.10.0` release for `s390x`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.8.0
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
@ -132,23 +132,29 @@ kubectl get runtimeclass
```
Output:
```
NAME HANDLER AGE
kata kata 9m55s
kata-clh kata-clh 9m55s
kata-clh-tdx kata-clh-tdx 9m55s
kata-qemu kata-qemu 9m55s
kata-qemu-tdx kata-qemu-tdx 9m55s
kata-qemu-sev kata-qemu-sev 9m55s
NAME HANDLER AGE
kata kata-qemu 8d
kata-clh kata-clh 8d
kata-qemu kata-qemu 8d
kata-qemu-coco-dev kata-qemu-coco-dev 8d
kata-qemu-sev kata-qemu-sev 8d
kata-qemu-snp kata-qemu-snp 8d
kata-qemu-tdx kata-qemu-tdx 8d
```
Details on each of the runtime classes:
- *kata* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh* - standard kata runtime using the cloud hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh-tdx* - using the Cloud Hypervisor, with TD-Shim, and support for Intel TDX CC HW
- *kata* - Convenience runtime that uses the handler of the default runtime
- *kata-clh* - standard kata runtime using the cloud hypervisor
- *kata-qemu* - same as kata
- *kata-qemu-tdx* - using QEMU, with TDVF, and support for Intel TDX CC HW, prepared for using Verdictd and EAA KBC.
- *kata-qemu-coco-dev* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for non-CC HW
- *kata-qemu-sev* - using QEMU, and support for AMD SEV HW
- *kata-qemu-snp* - using QEMU, and support for AMD SNP HW
- *kata-qemu-tdx* - using QEMU, with support for Intel TDX HW based on what's provided by [Ubuntu](https://github.com/canonical/tdx) and [CentOS 9 Stream](https://sigs.centos.org/virt/tdx/).
If you are using `enclave-cc` you should see the following runtime classes.
@ -161,253 +167,45 @@ NAME HANDLER AGE
enclave-cc enclave-cc 9m55s
```
The CoCo operator environment has been setup and deployed!
### Platform Setup
While the operator deploys all the required binaries and artifacts and sets up runtime classes that use them,
certain platforms may require additional configuration to enable confidential computing. For example, the host
kernel and firmware might need to be configured.
See the [guides](./guides) for more information.
certain platforms may require additional configuration to enable confidential computing. For example, a specific
host kernel or firmware may be required. See the [guides](./guides/) for more information.
# Running a workload
## Using CoCo
## Creating a sample CoCo workload
Below is a brief summary and description of some of the CoCo use cases and features:
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata` runtime class which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
- **Container Launch with Only Memory Encryption (No Attestation)** - Launch a container with memory encryption
- **Container Launch with Encrypted Image** - Launch an encrypted container by proving the workload is running
in a TEE in order to retrieve the decryption key
- **Container Launch with Image Signature Verification** - Launch a container and verify the authenticity and
integrity of an image by proving the workload is running in a TEE
- **Sealed secret** - Implement wrapped kubernetes secrets that are confidential to the workload owner and are
automatically decrypted by proving the workload is running in a TEE
- **Ephemeral Storage** - Temporary storage that is used during the lifecycle of the container but is cleared out
when a pod is restarted or finishes its task. At the moment, only ephemeral storage of the container itself is
supported and it has to be explicitly configured.
- **Authenticated Registries** - Create secure container registries that require authentication to access and manage container
images, ensuring that only trusted images are deployed in the Confidential Container. The host must have access
to the registry credentials.
- **Secure Storage** - Mechanisms and technologies used to protect data at rest, ensuring that sensitive information
remains confidential and tamper-proof.
- **Peer Pods** - Enable the creation of VMs on any environment without requiring bare metal servers or nested
virtualization support. More information about this feature can be found [here](https://github.com/confidential-containers/cloud-api-adaptor/tree/main).
## Platforms
In our example we will be using the bitnami/nginx image as described in the following yaml:
```
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata
```
With some TEEs, the CoCo use cases and/or configurations are implemented differently. Those are described in each corresponding
[guide](./guides) section. To get started using CoCo without TEE hardware, follow the CoCo-dev guide below:
Setting the runtimeClassName is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](./guides) for
more details.
With Confidential Containers, the workload container images are never downloaded on the host.
For verifying that the container image doesn't exist on the host you should log into the k8s node and ensure the following command returns an empty result:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`) .
Create the workload:
```
kubectl apply -f nginx.yaml
```
Output:
```
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```
kubectl get pods
```
Output:
```
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you still don't have any bitnami/nginx images on it:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](./guides/sev.md).
There is also `eaa_kbc`/`verdictd` which is described [here](./guides/eaa_verdictd.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has two runtime classes, `kata-qemu-tdx` and `kata-clh-tdx`. One uses QEMU as VMM and TDVF as firmware. The other uses Cloud Hypervisor as VMM and TD-Shim as firmware.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](./guides/sev.md).
For `enclave-cc` follow the [enclave-cc guide](./guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quick start the KBS cluster, a `docker compose` yaml is provided to launch.
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/kbs.git
cd kbs/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
# Start KBS cluster
docker compose up -d
```
If configuration of KBS cluster is required, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.json`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification.
When KBS cluster is running, you can modify the policy file used by AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/policy.rego`: Policy file for evidence verification of AS, refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
Use `skopeo` to encrypt an image on the same node of the KBS cluster (use busybox:latest for example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted, and things happens in the KBS cluster background include:
- CoCo Keyprovider generates a random key and a key-id. Then encrypts the image using the key.
- CoCo Keyprovider registers the key with key-id into KBS.
Then push the image to registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with registry scheme type like `docker`, replace `[REGISTRY_URL]` with the desired registry URL like `docker.io/encrypt_test/busybox`.
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image. Follow the instructions here to install `cosign`:
[cosign installation](https://docs.sigstore.dev/cosign/system_config/installation/)
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
Then edit an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "[REGISTRY_URL]": [
        {
          "type": "sigstoreSigned",
          "keyPath": "kbs:///default/cosign-key/1"
        }
      ]
    }
  }
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy encrypted image as a CoCo workload on CC HW
Here is a sample yaml for encrypted image deploying:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: encrypted-image-test-busybox
  name: encrypted-image-test-busybox
  annotations:
    io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
  containers:
  - image: [REGISTRY_URL]:encrypted
    name: busybox
  dnsPolicy: ClusterFirst
  runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous step, replace `[RUNTIME_CLASS]` with kata runtime class for CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-tdx`, `KBS_URI` is the address of Key Broker Service in KBS cluster like `http://123.123.123.123:8080`.
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
- [CoCo-dev](./guides/coco-dev.md)
- [SEV(-ES)](./guides/sev.md)
- SNP
- TDX: No additional steps required.
- [SGX](./guides/enclave-cc.md)
- [IBM Secure Execution](./guides/ibm-se.md)
- ...