cleanup: Fix some grammar and wording.

This cleans up the existing documentation, adds language specifiers to
code blocks, and fixes some minor spelling issues.

Signed-off-by: Larry Dewey <larry.dewey@amd.com>
Larry Dewey 2023-04-18 10:09:03 -05:00 committed by Tobin Feldman-Fitzthum
parent 33d1a067d8
commit c29278b0c7
6 changed files with 37 additions and 38 deletions


@ -8,7 +8,7 @@ The EAA KBC is an optional module in the attestation-agent at compile time,
which can be used to communicate with Verdictd.
The communication is established on the encrypted channel provided by rats-tls.
EAA can now be used on intel TDX and intel SGX platforms.
EAA can now be used on Intel TDX and Intel SGX platforms.
## Create encrypted image


@ -13,7 +13,7 @@ must be made **before** deploying it.
Enclave CC supports Verdictd, and in order to use it, users will have to
properly configure a decrypt_config.conf to set the `KBC` (`sample_kbc`
or `eaa_kbc`), `IP`, `PORT`, and `SECURITY_VALIDATE` (`false` or `true`):
```
```json
{
"key_provider": "provider:attestation-agent:KBC::IP:PORT",
"security_validate": SECURITY_VALIDATE
@ -32,7 +32,7 @@ The deployment below assumes the hardware SGX mode build is installed by the ope
use the simulated SGX mode build.
The example uses a trivial hello world C application:
```
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -61,7 +61,7 @@ resource request must be added to the pod spec.
Again, create a pod YAML file as previously described (this time we named it `enclave-cc-pod.yaml`).
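For reference, a minimal sketch of what such a pod spec might look like (the runtime class name, image path, and EPC size below are illustrative assumptions, not values from this guide):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: enclave-cc-pod
spec:
  runtimeClassName: enclave-cc          # assumed enclave-cc runtime class name
  containers:
  - name: helloworld
    image: docker.io/example/helloworld_enc:latest  # placeholder encrypted image
    imagePullPolicy: Always
    resources:
      limits:
        sgx.intel.com/epc: "600Mi"      # assumed SGX EPC resource request
```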
Create the workload:
```
```sh
kubectl apply -f enclave-cc-pod.yaml
```
Output:
@ -70,7 +70,7 @@ pod/enclave-cc-pod created
```
Ensure the pod was created successfully (in running state):
```
```sh
kubectl get pods
```
Output:
@ -80,7 +80,7 @@ enclave-cc-pod 1/1 Running 0 22s
```
Check that the pod is running as expected:
```
```sh
kubectl logs enclave-cc-pod | head -5
```
Output:
@ -93,5 +93,6 @@ Hello world!
```
We can also verify the host does not have the image for others to use:
```
```sh
crictl -r unix:///run/containerd/containerd.sock image ls | grep helloworld_enc
```


@ -7,7 +7,7 @@ Since memory is an expensive resource, CoCo implemented [trusted ephemeral stora
This solution is verified with Kubernetes CSI driver [open-local](https://github.com/alibaba/open-local). Please follow this [user guide](https://github.com/alibaba/open-local/blob/main/docs/user-guide/user-guide.md) to install open-local.
We can use the following example `trusted_store_cc.yaml` to try it out:
```
```yaml
apiVersion: v1
kind: Pod
metadata:
@ -48,12 +48,12 @@ spec:
Before deploying the workload, we can follow this [documentation](https://github.com/kata-containers/kata-containers/blob/CCv0/docs/how-to/how-to-build-and-test-ccv0.md) and use [ccv0.sh](https://github.com/kata-containers/kata-containers/blob/CCv0/docs/how-to/ccv0.sh) to enable the CoCo console debug (optional, to check whether things are working as expected).
Create the workload:
```
```sh
kubectl apply -f trusted_store_cc.yaml
```
Ensure the pod was created successfully (in running state):
```
```sh
kubectl get pods
```
@ -64,12 +64,12 @@ trusted-lvm-block 2/2 Running 0 31s
```
After we enable the debug option, we can log in to the VM with the `ccv0.sh` script:
```
```sh
./ccv0.sh -d open_kata_shell
```
Check that the container image is saved in encrypted storage with the following commands:
```
```sh
root@localhost:/# lsblk --fs
NAME FSTYPE LABEL UUID FSAVAIL FSUSE% MOUNTPOINT
sda
@ -93,5 +93,3 @@ root@localhost:/# mount|grep image
root@localhost:/# ls /run/image/
layers lost+found overlay
```
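As an additional spot check (a sketch, not from the original guide; it assumes the store is backed by dm-crypt), list the device-mapper tables with a `crypt` target:
```sh
# The device backing /run/image should appear here if it is encrypted.
dmsetup table --target crypt
```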


@ -2,7 +2,7 @@
Without Confidential Computing hardware, there is no way to securely provision
the keys for an encrypted image. Nonetheless, in this demo we describe how to
test encrypted images suppot with the nontee `kata`/`kata-qemu` runtimeclass.
test encrypted image support with the non-TEE `kata`/`kata-qemu` runtimeclass.
## Creating a CoCo workload using a pre-existing encrypted image
@ -19,21 +19,21 @@ We have prepared a sample CoCo operator custom resource that is based on the sta
### Swap out the standard custom resource for our sample
Support for multiple custom resources is not available in the current release. Consequently, if a custom resource already exists, you'll need to remove it before deploying a new one. We can remove the standard custom resource with:
```
```sh
kubectl delete -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
```
and in its place install the modified version with the sample container's decryption key:
```
```sh
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/ssh-demo?ref=<RELEASE_VERSION>
```
Wait until each pod has the STATUS of Running.
```
```sh
kubectl get pods -n confidential-containers-system --watch
```
### Test creating a workload from the sample encrypted image
Create a new Kubernetes deployment that uses the `docker.io/katadocker/ccv0-ssh` container image with:
```
```sh
cat << EOF > ccv0-ssh-demo.yaml
kind: Service
apiVersion: v1
@ -67,24 +67,24 @@ EOF
```
Apply this with:
```
```sh
kubectl apply -f ccv0-ssh-demo.yaml
```
and waiting for the pod to start. This process should show that we are able to pull the encrypted image and using the decryption key configured in the CoCo sample guest image decrypt the container image and create a workload using it.
and wait for the pod to start. This process should show that we are able to pull the encrypted image, and using the decryption key configured in the CoCo sample guest image, decrypt the container image and create a workload using it.
The demo image has an SSH host key embedded in it, which is protected by it's encryption, but we can download the sample private key and use this to ssh into the container and validate the host key to ensure that it hasn't been tampered with.
The demo image has an SSH host key embedded in it, which is protected by its encryption, but we can download the sample private key and use this to ssh into the container to validate it hasn't been tampered with.
Download the SSH key with:
```
```sh
curl -Lo ccv0-ssh https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/ccv0-ssh
```
Ensure that the permissions are set correctly with:
```
```sh
chmod 600 ccv0-ssh
```
We can then use the key to ssh into the container:
```
```sh
$ ssh -i ccv0-ssh root@$(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}")
```
You will be prompted about whether the host key fingerprint is correct. This fingerprint should match the one specified in the container image: `wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.`
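One way to verify this independently (a sketch, assuming `ssh-keyscan` and `ssh-keygen` are available on your machine) is to fetch the host key and print its fingerprint for comparison:
```sh
# Fetch the container's SSH host key and show its SHA256 fingerprint.
ssh-keyscan $(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}") > ccv0-ssh.hostkey
ssh-keygen -lf ccv0-ssh.hostkey
```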


@ -58,20 +58,20 @@ sudo ./sevctl/target/debug/sevctl export --full /opt/sev/cert_chain.cert
By default, the `kata-qemu-sev` runtime class uses pre-attestation with the
`online-sev-kbc` and [simple-kbs](https://github.com/confidential-containers/simple-kbs) to attest the guest and provision secrets.
`simple-kbs` is a basic prototype key broker that can validate a guest measurement according to a policy and conditionally release secrets.
`simple-kbs` is a basic prototype key broker which can validate a guest measurement according to a specified policy and conditionally release secrets.
To use encrypted images, signed images, or authenticated registries with SEV, you should set up `simple-kbs`.
If you simply want to run an unencrypted container image, you can disable pre-attestation by adding the following annotation
`io.katacontainers.config.pre_attestation.enabled: "false"` to your pod.
If you are using pre-attestation, you will need to add an annotation to your pod that contains the URI of `simple-kbs`.
If you are using pre-attestation, you will need to add an annotation to your pod configuration which contains the URI of a `simple-kbs` instance.
This annotation should be of the form `io.katacontainers.config.pre_attestation.uri: "<KBS IP>:44444"`.
Port 44444 is the default port per the directions below, but it can be configured.
Port 44444 is the default port per the directions below, but it may be configured to use another port.
The KBS IP must be accessible from inside the guest.
Usually it should be the public IP of the node where `simple-kbs` runs.
The SEV policy can also be set by adding `io.katacontainers.config.sev.policy: "<SEV POLICY>"` to your pod configuration.
Setting the second bit of the policy enables SEV-ES.
For more information see chapter 3 of the AMD Secure Encrypted Virtualization API.
For more information see chapter 3 of the [Secure Encrypted Virtualization API](https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf#page=31).
The SEV policy is not the same as the policies that drive `simple-kbs`.
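Putting these pieces together, a pod spec using pre-attestation might carry annotations like the following sketch (the KBS IP and the policy value are illustrative assumptions):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: encrypted-image-tests
  annotations:
    # URI of a reachable simple-kbs instance; 10.0.0.5 is an example IP.
    io.katacontainers.config.pre_attestation.uri: "10.0.0.5:44444"
    # Example policy value only; see the AMD SEV API for the bit meanings.
    io.katacontainers.config.sev.policy: "7"
spec:
  runtimeClassName: kata-qemu-sev
  containers:
  - name: encrypted-image-tests
    image: ghcr.io/fitzthum/encrypted-image-tests:encrypted
    imagePullPolicy: Always
```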
The CoCo project has created a sample encrypted container image ([encrypted-image-tests](ghcr.io/fitzthum/encrypted-image-tests:encrypted)). This image is encrypted using a key that comes already provisioned inside the `simple-kbs` for ease of testing. No `simple-kbs` policy is required to get things running.


@ -26,7 +26,7 @@ generic message such as the following:
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to create containerd task: failed to create shim: Failed to Check if grpc server is working: rpc error: code = DeadlineExceeded desc = timed out connecting to vsock 637456061:1024: unknown
```
Unfortunately this is a generic message. You'll need to go deeper to figure out
Unfortunately, because this is a generic message, you'll need to go deeper to figure out
what is going on.
## CoCo Debugging
@ -37,7 +37,7 @@ You can see if there is a hypervisor process running with something like this.
ps -ef | grep qemu
```
If you are using a different hypervisor, adjust command accordingly.
If you are using a different hypervisor, adjust your command accordingly.
If there are no hypervisor processes running on the worker node, the VM has
either failed to start or was shut down. If there is a hypervisor process,
the problem is probably inside the guest.
@ -47,7 +47,7 @@ To do this, first look at the containerd config file located at
`/etc/containerd/config.toml`. At the bottom of the file there should
be a section for each runtime class. For example:
```
```toml
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.kata-qemu-sev]
cri_handler = "cc"
runtime_type = "io.containerd.kata-qemu-sev.v2"
@ -75,7 +75,7 @@ sudo journalctl -xeu containerd
```
Kata writes many messages to this log. It's good to know what you're looking for. There are many
generic messages that are not significant, often arising from a VM not shutting down cleanly
generic messages that are insignificant, often arising from a VM not shutting down cleanly
after an unrelated issue.
### VM Doesn't Start
@ -122,9 +122,9 @@ the Kata agent.
#### failed to create shim task: failed to mount "/run/kata-containers/shared/containers/CONTAINER_NAME/rootfs"
If your CoCo Pod gets an error like showed below then it is likely the image pull policy is set to **IfNotPresent** and the image has been found in the kubelet cache. It fails because the container runtime will not delegate to the Kata agent to pull the image inside the VM and the agent in turn will try to mount the bundle rootfs that only exist in the host filesystem.
If your CoCo Pod gets an error like the one shown below, then it is likely the image pull policy is set to **IfNotPresent**, and the image has been found in the kubelet cache. It fails because the container runtime will not delegate to the Kata agent to pull the image inside the VM, and the agent in turn will try to mount the bundle rootfs that only exists in the host filesystem.
Therefore, you must ensure that the image pull policy is set to **Always** for any CoCo Pod. This ways the images are always handled entirely by the agent inside the VM. Worth mentioning we recognize that this behavior is suboptimal and so the community has worked on solutions to avoid constant images downloads for each and every workload.
Therefore, you must ensure that the image pull policy is set to **Always** for any CoCo pod. This way the images are always handled entirely by the agent inside the VM. It is worth mentioning we recognize that this behavior is sub-optimal, so the community provides solutions to avoid constant image downloads for each workload.
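As a minimal sketch (the container name and image are placeholders), the relevant fragment of a pod spec is:
```yaml
spec:
  containers:
  - name: my-app                      # placeholder name
    image: registry.example/app:v1    # placeholder image
    imagePullPolicy: Always           # the agent inside the VM always pulls the image
```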
```
Events:
@ -139,7 +139,7 @@ Events:
#### Debug Console
One very useful deugging tool is the Kata guest debug console. You can
One very useful debugging tool is the Kata guest debug console. You can
enable this by editing the Kata agent configuration file and adding the following lines:
```toml
debug_console = true
@ -148,7 +148,7 @@ debug_console_vport = 1026
Enabling the debug console via the Kata Configuration file will overwrite
any settings in the agent configuration file in the guest initrd.
Enabling the debug console will change the launch measurement.
Enabling the debug console will also change the launch measurement.
Once you've started a pod with the new configuration, get the ID of the pod
you want to access. Do this via `ps -ef | grep qemu` or equivalent.
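One common way to attach to the console (a sketch; it assumes the `kata-runtime` binary is installed and `kata-monitor` is running on the node) is:
```sh
# Replace <sandbox-id> with the pod sandbox id from the qemu command line.
kata-runtime exec <sandbox-id>
```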
@ -167,7 +167,7 @@ investigate missing dependencies or incorrect configurations.
#### Guest Firmware Logs
If the VM is running but there is no guest output in the log,
the guest might have stalled in the firmware. Firmware output will
the guest may have stalled in the firmware. Firmware output will
depend on your firmware and hypervisor. If you are using QEMU and OVMF,
you can see the OVMF output by adding `-global isa-debugcon.iobase=0x402`
and `-debugcon file:/tmp/ovmf.log` to the QEMU command line using the