docs: Updating SNP and SEV and quickstart guides

Updating the SEV and SNP guides to include instructions on launching CoCo with SEV and SNP memory encryption.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
This commit is contained in:
Arvind Kumar
2024-11-12 10:43:10 -06:00
committed by Tobin Feldman-Fitzthum
parent 5035fbae1a
commit 802e66cb5c
4 changed files with 197 additions and 337 deletions


@@ -7,7 +7,7 @@ First, we will use the `kata-qemu-coco-dev` runtime class which uses CoCo withou
Initially we will try this with an unencrypted container image.
In this example, we will be using the bitnami/nginx image as described in the following yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -30,7 +30,7 @@ more details.
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
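As a sketch, this emptiness check can be scripted so it fails loudly. `check_image_absent` is a hypothetical helper (not part of the guide's tooling) that reads the `crictl` listing from stdin, which also makes it easy to exercise without a cluster:

```shell
# Hypothetical helper: fail if any bitnami/nginx image appears in a
# `crictl ... image ls` listing supplied on stdin.
check_image_absent() {
  if grep -q "bitnami/nginx"; then
    echo "FAIL: bitnami/nginx image found on host"
    return 1
  fi
  echo "OK: image not present on host"
}
```

On the node you would pipe the real listing into it, e.g. `crictl -r unix:///run/containerd/containerd.sock image ls | check_image_absent`.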
@@ -38,26 +38,28 @@ You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```shell
kubectl apply -f nginx.yaml
```
Output:
```shell
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```shell
kubectl get pods
```
Output:
```shell
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          3m50s
```
Now go back to the k8s node and ensure that you don't have any bitnami/nginx images on it:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
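The status check above can also be scripted. As an illustrative sketch (the helper name is made up, not part of the guide), `pod_status` pulls the STATUS column for a named pod out of `kubectl get pods` output; reading from stdin keeps it testable without a cluster:

```shell
# Hypothetical helper: print the STATUS column for the named pod from
# `kubectl get pods` output supplied on stdin.
pod_status() {
  awk -v pod="$1" '$1 == pod {print $3}'
}
```

Against a live cluster you would run `kubectl get pods | pod_status nginx` and look for `Running`.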
@@ -84,7 +86,7 @@ TDX has one runtime class, `kata-qemu-tdx`.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](../guides/sev.md).
For SNP, use the `kata-qemu-snp` runtime class and follow the [SNP guide](../guides/snp.md).
For `enclave-cc` follow the [enclave-cc guide](../guides/enclave-cc.md).


@@ -1,201 +1,109 @@
# SEV-ES Guide
## Platform Setup
> [!WARNING]
>
> To launch SEV or SNP memory-encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD's custom changes and the required components and repositories will eventually be upstreamed.
>
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will also build AMD-compatible guest kernel, OVMF, and QEMU components that are not needed for CoCo. Those additional components can be used with the script utility to test launching and attesting a base QEMU SNP guest; for the CoCo use case, however, they are already packaged and delivered with Kata.

Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
## Getting Started
This guide covers platform-specific setup for SEV and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
- [Pre-Attestation Utilizing Signed and Encrypted Images](#pre-attestation-utilizing-signed-and-encrypted-images)
## Container Launch With Memory Encryption
### Launch a Confidential Service
To launch a container with SEV memory encryption, the SEV runtime class (`kata-qemu-sev`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with an [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
Here is a sample service yaml specifying the SEV runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    app: "confidential-unencrypted"
  ports:
  - port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    matchLabels:
      app: "confidential-unencrypted"
  template:
    metadata:
      labels:
        app: "confidential-unencrypted"
      annotations:
        io.containerd.cri.runtime-handler: kata-qemu-sev
    spec:
      runtimeClassName: kata-qemu-sev
      containers:
      - name: "confidential-unencrypted"
        image: ghcr.io/kata-containers/test-images:unencrypted-nightly
        imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors in the Events section, then the container has been successfully created with SEV memory encryption.
### Validate SEV Memory Encryption
The container dmesg log can be parsed to indicate that SEV memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Download and save the [SSH private key](https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted) and set the permissions:
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SEV memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
  -o "StrictHostKeyChecking no" \
  -t root@${pod_ip} \
  'dmesg | grep "Memory Encryption Features"'
```
If SEV is enabled and active, the output should return:
```shell
[    0.150045] Memory Encryption Features active: AMD SEV
```
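The same check can be expressed as a small parser. This is an illustrative sketch (the helper name is made up): it extracts the active feature list from a dmesg stream, so it can be exercised against canned output:

```shell
# Hypothetical helper: print the active memory-encryption features
# (e.g. "AMD SEV") from dmesg output supplied on stdin.
active_mem_encryption() {
  sed -n 's/.*Memory Encryption Features active: //p'
}
```

Inside the guest, `dmesg | active_mem_encryption` would then print the feature list when memory encryption is active, and nothing otherwise.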
@@ -203,7 +111,7 @@ The output should look something like this:
If SSH access to the container is desired, create a keypair:
```shell
ssh-keygen -t ed25519 -f encrypted-image-tests -P "" -C "" <<< y
```
@@ -211,7 +119,7 @@ The above command will save the keypair in a file named `encrypted-image-tests`.
Here is a sample Dockerfile to create a docker image:
```Dockerfile
FROM alpine:3.16
# Update and install openssh-server
@@ -236,13 +144,13 @@ Store this `Dockerfile` in the same directory as the `encrypted-image-tests` ssh
Build image:
```shell
docker build -t encrypted-image-tests .
```
Tag and upload this unencrypted docker image to a registry:
```shell
docker tag encrypted-image-tests:latest [REGISTRY_URL]:unencrypted
docker push [REGISTRY_URL]:unencrypted
```
@@ -255,7 +163,7 @@ Be sure to replace `[REGISTRY_URL]` with the desired registry URL.
The Attestation Agent hosts a grpc service to support encrypting the image. Clone the repository:
```shell
attestation_agent_tag="v0.1.0"
git clone https://github.com/confidential-containers/attestation-agent.git
(cd attestation-agent && git checkout -b "branch_${attestation_agent_tag}" "${attestation_agent_tag}")
@@ -263,14 +171,14 @@ git clone https://github.com/confidential-containers/attestation-agent.git
Run the offline_fs_kbs:
```shell
(cd attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs \
&& cargo run --release --features offline_fs_kbs -- --keyprovider_sock 127.0.0.1:50001 &)
```
Create the Attestation Agent keyprovider:
```shell
cat > attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf <<EOF
{
  "key-providers": {
@@ -282,13 +190,13 @@ EOF
Set a desired value for the encryption key; it must be a 32-byte, base64-encoded value:
```shell
enc_key="RcHGava52DPvj1uoIk/NVDYlwxi0A6yyIZ8ilhEX3X4="
```
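Rather than reusing the published sample key, a fresh key can be generated and sanity-checked. This is a sketch under the stated requirement (32 bytes, base64-encoded); `is_valid_enc_key` is a hypothetical helper, not part of the CoCo tooling:

```shell
# Generate a random 32-byte, base64-encoded key.
enc_key=$(openssl rand -base64 32)

# Hypothetical helper: check that a key decodes to exactly 32 bytes.
is_valid_enc_key() {
  [ "$(printf '%s' "$1" | base64 -d | wc -c)" -eq 32 ]
}
```

In a production setting the key would be generated and kept secret by the workload administrator, never published.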
Create a Key file:
```shell
cat > keys.json <<EOF
{
  "key_id1":"${enc_key}"
@@ -298,7 +206,7 @@ EOF
Run skopeo to encrypt the image created in the previous section:
```shell
sudo OCICRYPT_KEYPROVIDER_CONFIG=$(pwd)/attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf \
skopeo copy --insecure-policy \
docker:[REGISTRY_URL]:unencrypted \
@@ -316,27 +224,27 @@ response. A remote registry known to support encrypted images like GitHub Contai
At this point it is a good idea to verify that the image was really encrypted, as skopeo can silently leave it unencrypted. Use
`skopeo inspect` as shown below to check that the layers' MIME types are **application/vnd.oci.image.layer.v1.tar+gzip+encrypted**:
```shell
skopeo inspect docker-daemon:[REGISTRY_URL]:encrypted
```
Push the encrypted image to the registry:
```shell
docker push [REGISTRY_URL]:encrypted
```
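The MIME-type check can also be automated with `jq`. This sketch assumes the JSON shape of recent `skopeo inspect` output (a `LayersData` array whose entries carry a `MIMEType` field); treat the field names as an assumption and adjust for your skopeo version:

```shell
# Hypothetical check: succeed only if every layer MIME type in a
# `skopeo inspect` JSON document (read from stdin) ends in "+encrypted".
layers_encrypted() {
  jq -e '[.LayersData[].MIMEType] | length > 0 and all(endswith("+encrypted"))' >/dev/null
}
```

For example: `skopeo inspect docker-daemon:[REGISTRY_URL]:encrypted | layers_encrypted || echo "image is NOT encrypted"`.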
`mysql-client` is required to insert the key into the `simple-kbs` database. `jq` is required to parse JSON responses on the command line.
* Debian / Ubuntu:
```shell
sudo apt install mysql-client jq
```
* CentOS / Fedora / RHEL:
```shell
sudo dnf install [ mysql | mariadb | community-mysql ] jq
```
@@ -344,7 +252,7 @@ The `mysql-client` package name may differ depending on OS flavor and version.
The `simple-kbs` uses default settings and credentials for the MySQL database. These settings can be changed by the `simple-kbs` administrator and saved into a credential file. For the purposes of this quick start, set them in the environment for use with the MySQL client command line:
```shell
KBS_DB_USER="kbsuser"
KBS_DB_PW="kbspassword"
KBS_DB="simple_kbs"
@@ -353,7 +261,7 @@ KBS_DB_TYPE="mysql"
Retrieve the host address of the MySQL database container:
```shell
KBS_DB_HOST=$(docker network inspect simple-kbs_default \
  | jq -r '.[].Containers[] | select(.Name | test("simple-kbs[_-]db.*")).IPv4Address' \
  | sed "s|/.*$||g")
@@ -361,7 +269,7 @@ KBS_DB_HOST=$(docker network inspect simple-kbs_default \
Add the key to the `simple-kbs` database without any verification policy:
```shell
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', NULL);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', NULL);
@@ -376,159 +284,3 @@ Return to step [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-ve
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-guest-firmware-measurement).
## Creating a simple-kbs Policy to Verify the SEV Guest Firmware Measurement
The `simple-kbs` can be configured with a policy that requires the kata shim to provide a matching SEV guest firmware measurement to release the key for decrypting the image. At launch time, the kata shim will collect the SEV guest firmware measurement and forward it in a key request to the `simple-kbs`.
These steps will use the CoCo sample encrypted container image, but the image URL can be replaced with a user created image registry URL.
To create the policy, the value of the SEV guest firmware measurement must be calculated.
`pip` is required to install the `sev-snp-measure` utility.
* Debian / Ubuntu:
```
sudo apt install python3-pip
```
* CentOS / Fedora / RHEL:
```
sudo dnf install python3
```
[sev-snp-measure](https://github.com/IBM/sev-snp-measure) is a utility used to calculate the SEV guest firmware measurement with provided ovmf, initrd, kernel and kernel append input parameters. Install it using the following command:
```
sudo pip install sev-snp-measure
```
The path to the guest binaries required for measurement is specified in the kata configuration. Set them:
```
ovmf_path="/opt/confidential-containers/share/ovmf/OVMF.fd"
kernel_path="/opt/confidential-containers/share/kata-containers/vmlinuz-sev.container"
initrd_path="/opt/confidential-containers/share/kata-containers/kata-containers-initrd.img"
```
The kernel append line parameters are included in the SEV guest firmware measurement. A placeholder will be initially set, and the actual value will be retrieved later from the qemu command line:
```
append="PLACEHOLDER"
```
Use the `sev-snp-measure` utility to calculate the SEV guest firmware measurement using the binary variables previously set:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
```
If the container image is not already present, pull it:
```
encrypted_image_url="ghcr.io/fitzthum/encrypted-image-tests:unencrypted"
docker pull "${encrypted_image_url}"
```
Retrieve the encryption key from docker image label:
```
enc_key=$(docker inspect ${encrypted_image_url} \
| jq -r '.[0].Config.Labels.enc_key')
```
Add the key, keyset and policy with measurement to the `simple-kbs` database:
```
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
Using the same service yaml from the section on [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption), launch the service:
```
kubectl apply -f encrypted-image-tests.yaml
```
Check for pod errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```
The pod will error out on the key retrieval request to the `simple-kbs` because the policy verification failed due to a mismatch in the SEV guest firmware measurement. This is the error message that should display:
```
Policy validation failed: fw digest not valid
```
The `PLACEHOLDER` value that was set for the kernel append line when the SEV guest firmware measurement was calculated does not match what was measured by the kata shim. The kernel append line parameters can be retrieved from the qemu command line using the following scripting commands, as long as kubernetes is still trying to launch the pod:
```
duration=$((SECONDS+30))
set append
while [ $SECONDS -lt $duration ]; do
qemu_process=$(ps aux | grep qemu | grep append || true)
if [ -n "${qemu_process}" ]; then
append=$(echo ${qemu_process} \
| sed "s|.*-append \(.*$\)|\1|g" \
| sed "s| -.*$||")
break
fi
sleep 1
done
echo "${append}"
```
The above check will only work if the `encrypted-image-tests` guest launch is the only consuming qemu process running.
Now, recalculate the SEV guest firmware measurement and store the `simple-kbs` policy in the database:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
The pod should now show a successful launch:
```
kubectl describe pod ${pod_name}
```
If the service is hung up, delete the pod and try to launch again:
```
# Delete
kubectl delete -f encrypted-image-tests.yaml
# Verify pod cleaned up
kubectl describe pod ${pod_name}
# Relaunch
kubectl apply -f encrypted-image-tests.yaml
```
Testing the SEV encrypted container launch can be completed by returning to the section on how to [Verify SEV Memory Encryption](#verify-sev-memory-encryption).

guides/snp.md Normal file

@@ -0,0 +1,107 @@
# SNP Guide
## Platform Setup
> [!WARNING]
>
> To launch SEV or SNP memory-encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD's custom changes and the required components and repositories will eventually be upstreamed.
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will also build AMD-compatible guest kernel, OVMF, and QEMU components that are not needed for CoCo. Those additional components can be used with the script utility to test launching and attesting a base QEMU SNP guest; for the CoCo use case, however, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
## Getting Started
This guide covers platform-specific setup for SNP and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
## Container Launch With Memory Encryption
### Launch a Confidential Service
To launch a container with SNP memory encryption, the SNP runtime class (`kata-qemu-snp`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
Here is a sample service yaml specifying the SNP runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: "confidential-unencrypted"
template:
metadata:
labels:
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-snp
spec:
runtimeClassName: kata-qemu-snp
containers:
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
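To make the Events check scriptable, one can grep the `kubectl describe` output for Warning events. This is a hypothetical sketch (the helper is not part of the guide) that reads the output from stdin:

```shell
# Hypothetical helper: succeed (exit 0) if no Warning events appear in
# `kubectl describe pod` output supplied on stdin.
no_warning_events() {
  ! grep -Eq '^[[:space:]]*Warning[[:space:]]'
}
```

For example: `kubectl describe pod confidential-unencrypted | no_warning_events || echo "pod reported warnings"`.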
If there are no errors in the Events section, then the container has been successfully created with SNP memory encryption.
### Validate SNP Memory Encryption
The container dmesg log can be parsed to indicate that SNP memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
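The IP extraction above can be exercised offline by wrapping it in a function and feeding it canned `kubectl get pod -o wide` output (column 6 is the pod IP). The helper name is illustrative, not part of the guide:

```shell
# Hypothetical helper: print the IP (6th column) of the first pod whose
# line matches the given name, from `kubectl get pod -o wide` on stdin.
get_pod_ip() {
  grep "$1" | awk '{print $6;}'
}
```

Against a live cluster: `pod_ip=$(kubectl get pod -o wide | get_pod_ip confidential-unencrypted)`.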
Download and save the [SSH private key](https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted) and set the permissions:
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command runs `dmesg` in the container over SSH to check whether SNP memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
    -o "StrictHostKeyChecking no" \
    -t root@${pod_ip} \
    'dmesg | grep "Memory Encryption Features"'
```
If SNP is enabled and active, the command should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SNP
```
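The same check can be scripted by grepping for the expected marker; a sketch using a hypothetical captured dmesg line:

```shell
# Hypothetical dmesg line captured from the guest; the grep mirrors the
# check performed by the SSH command above:
dmesg_line='[    0.150045] Memory Encryption Features active: AMD SNP'
if echo "$dmesg_line" | grep -q "Memory Encryption Features active: AMD SNP"; then
  echo "SNP active"
fi
```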
View File
@@ -31,7 +31,7 @@ To run the operator you must have an existing Kubernetes cluster that meets the
 - Ensure a minimum of 8GB RAM and 4 vCPU for the Kubernetes cluster node
 - Only containerd runtime based Kubernetes clusters are supported with the current CoCo release
 - The minimum Kubernetes version should be 1.24
-- Ensure at least one Kubernetes node in the cluster is having the label `node.kubernetes.io/worker=`
+- Ensure at least one Kubernetes node in the cluster has the labels `node-role.kubernetes.io/worker=` or `node.kubernetes.io/worker=`. This will assign the worker role to a node in your cluster, making it responsible for running your applications and services
 - Ensure SELinux is disabled or not enforced (https://github.com/confidential-containers/operator/issues/115)

 For more details on the operator, including the custom resources managed by the operator, refer to the operator [docs](https://github.com/confidential-containers/operator).

@@ -59,18 +59,18 @@ on the worker nodes is **not** on an overlayfs mount but the path is a `hostPath
 Deploy the operator by running the following command where `<RELEASE_VERSION>` needs to be substituted
 with the desired [release tag](https://github.com/confidential-containers/operator/tags).
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
 ```

 For example, to deploy the `v0.10.0` release run:
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
 ```

 Wait until each pod has the STATUS of Running.
-```
+```shell
 kubectl get pods -n confidential-containers-system --watch
 ```

@@ -81,25 +81,25 @@ kubectl get pods -n confidential-containers-system --watch
 Creating a custom resource installs the required CC runtime pieces into the cluster node and creates
 the `RuntimeClasses`
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
 ```

 The current present overlays are: `default` and `s390x`

 For example, to deploy the `v0.10.0` release for `x86_64`, run:
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0
 ```

 And to deploy `v0.10.0` release for `s390x`, run:
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.10.0
 ```

 Wait until each pod has the STATUS of Running.
-```
+```shell
 kubectl get pods -n confidential-containers-system --watch
 ```

@@ -114,11 +114,11 @@ Please see the [enclave-cc guide](./guides/enclave-cc.md) for more information.
 `enclave-cc` is a form of Confidential Containers that uses process-based isolation.
 `enclave-cc` can be installed with the following custom resources.
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/sim?ref=<RELEASE_VERSION>
 ```
 or
-```
+```shell
 kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>
 ```
 for the **simulated** SGX mode build or **hardware** SGX mode build, respectively.

@@ -127,11 +127,11 @@ for the **simulated** SGX mode build or **hardware** SGX mode build, respectivel
 Check the `RuntimeClasses` that got created.
-```
+```shell
 kubectl get runtimeclass
 ```
 Output:
-```
+```shell
 NAME                 HANDLER              AGE
 kata                 kata-qemu            8d
 kata-clh             kata-clh             8d
@@ -140,7 +140,6 @@ kata-qemu-coco-dev   kata-qemu-coco-dev   8d
 kata-qemu-sev        kata-qemu-sev        8d
 kata-qemu-snp        kata-qemu-snp        8d
 kata-qemu-tdx        kata-qemu-tdx        8d
 ```

 Details on each of the runtime classes:

@@ -158,11 +157,11 @@ Details on each of the runtime classes:
 If you are using `enclave-cc` you should see the following runtime classes.
-```
+```shell
 kubectl get runtimeclass
 ```
 Output:
-```
+```shell
 NAME         HANDLER      AGE
 enclave-cc   enclave-cc   9m55s
 ```

@@ -204,7 +203,7 @@ With some TEEs, the CoCo use cases and/or configurations are implemented differe
 - [CoCo-dev](./guides/coco-dev.md)
 - [SEV(-ES)](./guides/sev.md)
-- SNP
+- [SNP](./guides/snp.md)
 - TDX: No additional steps required.
 - [SGX](./guides/enclave-cc.md)
 - [IBM Secure Execution](./guides/ibm-se.md)