Simplify the operator install instructions

Signed-off-by: Pradipta Banerjee <pradipta.banerjee@gmail.com>
This commit is contained in:
Pradipta Banerjee
2022-09-28 16:02:14 +05:30
committed by Samuel Ortiz
parent 361991be5b
commit fd75db206a


@@ -39,140 +39,59 @@ Confidential Containers is still maturing. See [release notes](./releases) for c
You can enable Confidential Containers in an existing Kubernetes cluster using the Confidential Containers Operator.
:information_source: If you need to quickly deploy a single-node test cluster, you can
use the [run-local.sh script](https://github.com/confidential-containers/operator/blob/main/tests/e2e/run-local.sh)
from the operator test suite, which will set up a single-node cluster on your
machine for testing purposes.
This script requires `ansible-playbook`, which you can install on CentOS/RHEL using
`dnf install ansible-core`, and the Ansible `docker_container` module, which you can
get using `ansible-galaxy collection install community.docker`.
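On CentOS/RHEL, the script's prerequisites mentioned above can be installed in one go (a sketch; package and collection names are as given above):

```shell
# Install ansible-playbook (shipped with the ansible-core package)
sudo dnf install -y ansible-core

# Install the community.docker collection, which provides the
# docker_container module used by the playbook
ansible-galaxy collection install community.docker
```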
:information_source: You can also use a Kind or Minikube cluster with containerd runtime
to try out the CoCo stack for development purposes.
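For example, assuming the `kind` CLI is installed, a throwaway single-node cluster (Kind nodes run containerd by default) can be created with:

```shell
# Create a single-node test cluster; the name "coco-test" is illustrative
kind create cluster --name coco-test
```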
## Prerequisites
- Ensure a minimum of 8GB RAM and 2 vCPU for the Kubernetes cluster node
- Only containerd-based Kubernetes clusters are supported with the current CoCo release
- The minimum Kubernetes version should be 1.24
- Ensure at least one Kubernetes node in the cluster has the label `node-role.kubernetes.io/worker=`
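If no node carries the worker label yet, it can be added manually (the node name below is a placeholder for one of your cluster nodes):

```shell
# Label a node as a worker so the operator's daemonsets get scheduled on it
kubectl label node <node-name> node-role.kubernetes.io/worker=
```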
For more details on the operator, including the custom resources managed by the operator, refer to the operator [docs](https://github.com/confidential-containers/operator)

## Operator Installation

### Deploy the operator
```
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/main/deploy/deploy.yaml
```
Wait until each pod has the STATUS of Running.
```
kubectl get pods -n confidential-containers-system --watch
```

### Create the custom resource
Creating a custom resource installs the required CC runtime pieces into the cluster node and creates
the `RuntimeClasses`.
```
kubectl apply -f https://raw.githubusercontent.com/confidential-containers/operator/main/config/samples/ccruntime.yaml
```
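The sample custom resource is small; a minimal sketch of what it contains is shown below. The field values are illustrative, drawn from the `CcRuntime` v1beta1 schema (`kubectl explain ccruntimes.confidentialcontainers.org`); the sample manifest in the operator repository is authoritative:

```
apiVersion: confidentialcontainers.org/v1beta1
kind: CcRuntime
metadata:
  name: ccruntime-sample
spec:
  # Selects which CC runtime the operator installs;
  # enclave-cc is another option
  runtimeName: kata
```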
Wait until each pod has the STATUS of Running.
```
kubectl get pods -n confidential-containers-system --watch
```
Check the `RuntimeClasses` that got created.
```
kubectl get runtimeclass
```
@@ -194,9 +113,10 @@ Details on each of the runtime classes:
- *kata-clh-tdx* - using the Cloud Hypervisor, with TD-Shim, and support for Intel TDX CC HW
- *kata-qemu* - same as kata
- *kata-qemu-tdx* - using QEMU, with TDVF, and support for Intel TDX CC HW
- *kata-qemu-sev* - using QEMU, and support for AMD SEV HW
# Running a workload

## Creating a sample CoCo workload

Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
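For instance, a pod opts into CoCo by setting `runtimeClassName` in its spec to one of the classes listed by `kubectl get runtimeclass`. A minimal sketch (the pod name and image are illustrative):

```
apiVersion: v1
kind: Pod
metadata:
  name: coco-demo
spec:
  # Any of the operator-created classes works here, e.g. kata or kata-qemu
  runtimeClassName: kata
  containers:
  - name: app
    image: nginx
```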