32 Commits

Author SHA1 Message Date
Tobin Feldman-Fitzthum
d4668f800c docs: add release notes for v0.11.0
Not a ton of new features since we didn't bump guest-components or
trustee in this release, but we do pick up some really nice changes from
Kata.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-11-25 14:23:18 -05:00
Ariel Adam
5ae29d6da1 Merge pull request #264 from magowan/UpdateRoadmap
Update roadmap.md
2024-11-20 13:50:07 +02:00
James Magowan
92e87a443b Update roadmap.md
Bringing roadmap.md into line with the current roadmap and processes.

Signed-off-by: James Magowan <magowan@uk.ibm.com>
2024-11-19 14:57:35 +00:00
Chris Porter
89933dd404 Add coco threat model diagram
Insert the diagram into the existing trust-model doc.
Add some supporting text around it.
Also add the diagram to the architecture diagrams slide deck.

Signed-off-by: Chris Porter <cporterbox@gmail.com>
2024-11-18 10:32:53 -06:00
Chris Porter
6cf0c51e58 Increase header for out of scope and summary
These two headings do not seem to belong under Required Documentation,
so move them out.

Signed-off-by: Chris Porter <porter@ibm.com>
2024-11-18 10:32:53 -06:00
Arvind Kumar
802e66cb5c docs: Updating SNP and SEV and quickstart guides
Updating the SEV and SNP guides to include instructions on launching CoCo with SEV and SNP memory encryption.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-15 14:47:07 -05:00
stevenhorsman
5035fbae1a doc: Add IBM Z to OSC list
Add statement agreed by IBM product manager

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2024-11-14 09:16:45 +00:00
Ariel Adam
9748f9dd5e Update ADOPTERS.md
Updating the adopters list with RH

Signed-off-by: Ariel Adam <aadam@redhat.com>
2024-11-13 13:37:34 -05:00
Pradipta Banerjee
31c7ab6a9d Merge pull request #259 from nyrahul/main
adding kubearmor/5gsec as adopter
2024-11-11 14:56:15 +05:30
Rahul Jadhav
930165f19e adding kubearmor/5gsec as adopter
Signed-off-by: Rahul Jadhav <nyrahul@gmail.com>
2024-11-11 13:55:59 +05:30
Arvind Kumar
19fb57f3ed Docs: update quickstart
Reorganizing the quickstart guide and adding a new guide page with CoCo-dev instructions for testing CoCo without memory encryption or attestation.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-07 09:27:49 -05:00
Mikko Ylinen
4a357bdd5a MAINTAINERS: update to match with governance.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Mikko Ylinen
1af7e78194 MAINTAINERS: update Intel rep
Same as in commit a57f058b.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Xynnn007
bb7ef72658 ADOPTERS: add alibaba cloud case
Signed-off-by: Xynnn007 <xynnn@linux.alibaba.com>
2024-10-24 12:39:29 -04:00
Mikko Ylinen
4679bfb055 README: add a link to our project board
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Mikko Ylinen
9be365e507 drop orphan images that are leftovers from CONTRIBUTING.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Tobin Feldman-Fitzthum
54a9abf965 governance: add process for removing maintainers
Add a formal process for cleaning up our maintainer teams.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
Tobin Feldman-Fitzthum
f515a232cb governance: mention GitHub teams
No changes to policies.

Update the wording to clarify that we manage maintainers
with GitHub teams rather than by putting everyone in the
codeowners file.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
Chris Porter
89fcc4ddcd Release: v0.10.0 release notes
Signed-off-by: Chris Porter <cporterbox@gmail.com>
2024-09-27 13:48:45 -04:00
Steve Horsman
06c81daa12 Merge pull request #198 from lysliu/doc-fix
Doc update: correct worker node label
2024-09-20 09:12:32 +01:00
Hyounggyu Choi
9ee377f1ab docs: Add guide for IBM Secure Execution
This commit migrates the documentation for IBM Secure Execution
from the operator to the confidential-containers repo.
It will be referenced by the QuickStart.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2024-09-16 12:33:44 +02:00
Hyounggyu Choi
1a2dec79a7 docs: Fix broken link to cosign installation
This commit updates a broken link to the cosign installation.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2024-09-16 12:33:44 +02:00
Mikko Ylinen
4344346d23 gh: drop project creation and cncf onboarding issue templates
CNCF onboarding is obsolete. Project creation has not been used
so drop that too to make the list of issue creation options a bit
shorter.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-09-03 09:10:09 -04:00
Mikko Ylinen
71da676226 gh: add issue template configuration
Add a suggestion for the newcomers and community to prioritize
confidential-containers Slack channel(s) for discussions and Q&A.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-09-03 09:10:09 -04:00
Mikko Ylinen
7707096004 docs: fix broken links
The links checker reported that the Cloud Native whitepaper
links are broken.

Update to their new URLs with permalinks.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-08-26 07:30:08 -05:00
Mikko Ylinen
ee6300b5b5 guides: update enclave-cc notes for SGX hardware mode
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-08-26 07:30:08 -05:00
Chase
7a7808d489 Update ADOPTERS.md: Add NanhuLab to the adopters.
Signed-off-by: Chase <zhichao.yan@outlook.com>
2024-08-15 09:06:58 -05:00
Ariel Adam
edbc70b053 Merge pull request #226 from ariel-adam/main
Create ADOPTERS.md
2024-08-13 14:16:01 +03:00
Ariel Adam
d476c6a017 Create ADOPTERS.md
Adding the list of adopters for CoCo

Signed-off-by: Ariel Adam <aadam@redhat.com>

Update ADOPTERS.md

Update ADOPTERS.md
2024-08-13 09:52:24 +03:00
Tobin Feldman-Fitzthum
396160da67 docs: add release notes for v0.9.0
Add new features, limitations, and expand the hw support section.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-07-26 10:01:25 -04:00
Chris Porter
d07b43cf24 Release: checklist improvements during v0.9.0-alpha1 release
Signed-off-by: Chris Porter <porter@ibm.com>
2024-07-16 09:04:27 -04:00
Yan Song Liu
620cb347a0 fix #196
label should be node.kubernetes.io/worker

Signed-off-by: Yan Song Liu <lysliu@cn.ibm.com>
2024-03-04 13:42:32 +08:00
29 changed files with 1047 additions and 791 deletions


@@ -1,15 +0,0 @@
---
name: CNCF onboarding
about: CNCF onboarding issue tracker
title: "[CNCF]"
labels: cncf-onboarding
assignees: ''
---
### Parts of the [CNCF onboarding issue](https://github.com/cncf/toc/issues/799) tracked by this issue
*This is the list of all bullet points from the [CNCF onboarding issue](https://github.com/cncf/toc/issues/799) that this issue will track:*
* [ ] *Bullet point foo*
* [ ] *Bullet point bar*

7 .github/ISSUE_TEMPLATE/config.yml vendored Normal file

@@ -0,0 +1,7 @@
contact_links:
  - name: CNCF Slack for Community discussions and Q&A
    url: https://slack.cncf.io/
    about: |
      Join `#confidential-containers` channel by first getting an invitation for the
      CNCF Slack workspace. Our channel is the best place for getting questions
      answered and to chat with the community and project maintainers.


@@ -1,20 +0,0 @@
---
name: New Project Creation
about: Request for creating a new Confidential Containers project and associated repos
title: "[New Project]"
labels: repo-creation
assignees: ''
---
## Project Description
*Describe the project: Goals, high level architecture*
## Alignment with Confidential Containers
*Why this should become a Confidential Containers project?*
## GitHub repositories needed
*Name of the repos to be created to support this new project*


@@ -51,6 +51,8 @@ Releases of most subprojects are now decoupled from releases of the CoCo project
 ## The Steps
+Note: It may be useful when doing these steps to refer to a previous example. The v0.9.0-alpha1 release applied [these changes](https://github.com/confidential-containers/operator/pull/388/files). After following steps 1-5 below, you should end up with a similar set of changes.
 ### Determine release builds
 Identify/create the bundles that we will release for Kata and enclave-cc.
@@ -70,29 +72,34 @@ Identify/create the bundles that we will release for Kata and enclave-cc.
 If you absolutely cannot use a Kata release,
 you can consider releasing one of these bundles.
+- [ ] 3. :eyes: **Create a peer pods release**
+Create a peer pods release based on the Kata release, by following the [documented flow](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md).
 ### Test Release with Operator
-- [ ] 3. :eyes: **Check operator pre-installation and open PR if needed**
+- [ ] 4. :eyes: **Check operator pre-installation and open PR if needed**
 The operator uses a pre-install container to setup the node.
 Check that the container matches the dependencies used in Kata
 and that the operator pulls the most recent version of the container.
 * Check that the version of the `nydus-snapshotter` used by Kata matches the one used by the operator
-* Compare `nydus-snapshotter` version in Kata [versions.yaml](https://github.com/kata-containers/kata-containers/blob/main/versions.yaml#L325) with the [Makefile](https://github.com/confidential-containers/operator/blob/main/install/pre-install-payload/Makefile#L4) for the operator pre-install container.
+* Compare the `nydus-snapshotter` version in Kata [versions.yaml](https://github.com/kata-containers/kata-containers/blob/main/versions.yaml) (search for `nydus-snapshotter` and check its `version` field) with the [Makefile](https://github.com/confidential-containers/operator/blob/main/install/pre-install-payload/Makefile) (check the `NYDUS_SNAPSHOTTER_VERSION` value) for the operator pre-install container.
 * **If they do not match, stop and open a PR now. In the PR, update the operator's Makefile to match the version used in kata. After the PR is merged, continue.**
-- [ ] 4. :wrench: **Open a PR to the operator to update the release artifacts**
+- [ ] 5. :wrench: **Open a PR to the operator to update the release artifacts**
-Update the operator to use the payloads identified in steps 1, 2, and 3.
+Update the operator to use the payloads identified in steps 1, 2, 3, and 4.
 Make sure that the operator pulls the most recent version of the pre-install container
-* Find the last commit in the [pre-install directory](https://github.com/confidential-containers/operator/tree/main/install/pre-install-payload)
-* As a sanity check, the sha hash of the last commit in that pre-install directory will correspond to a pre-install image in quay, i.e. a reqs-payload image [here](quay.io/confidential-containers/reqs-payload).
-* Make sure that the commit matches the preInstall / postUninstall image specified for [enclave-cc CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml) and [ccruntime CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml)
-* If these do not match (for instance if you changed the snapshotter in step 3), update the operator so that they do match.
-There are a number of places where the payloads are referenced. Make sure to update all of the following to the tag matching the latest commit hash from steps 1 and 2:
+* Find the last commit in the [pre-install directory](https://github.com/confidential-containers/operator/tree/main/install/pre-install-payload)
+* As a sanity check, the sha hash of the last commit in that pre-install directory will correspond to a pre-install image in quay, i.e. a reqs-payload image [here](https://quay.io/confidential-containers/reqs-payload).
+* Make sure that the commit matches the preInstall / postUninstall image specified for [enclave-cc CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml) and [ccruntime CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml)
+* If these do not match (for instance if you changed the snapshotter in step 4), update the operator so that they do match.
+There are a number of places where the payloads are referenced. Make sure to update all of the following to the tag matching the latest commit hash from steps 1, 2, and 3:
 * Enclave CC:
 * [sim](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/sim/kustomization.yaml)
 * [hw](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/hw/kustomization.yaml)
@@ -103,17 +110,17 @@ Identify/create the bundles that we will release for Kata and enclave-cc.
 * [peer-pods](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/peer-pods/kustomization.yaml)
 Note that we need the quay.io/confidential-containers/runtime-payload-ci registry and kata-containers-latest tag
-**Also, update the [operator version](https://github.com/confidential-containers/operator/blob/main/config/release/kustomization.yaml#L7)**
+**Also, update the [operator version](https://github.com/confidential-containers/operator/blob/main/config/release/kustomization.yaml) (update the `newTag` value)**
 ### Final Touches
-- [ ] 5. :trophy: **Cut an operator release using the GitHub release tool**
+- [ ] 6. :trophy: **Cut an operator release using the GitHub release tool**
-- [ ] 6. :green_book: **Make sure to update the [release notes](https://github.com/confidential-containers/confidential-containers/tree/main/releases) and tag/release the confidential-containers repo using the GitHub release tool.**
+- [ ] 7. :green_book: **Make sure to update the [release notes](https://github.com/confidential-containers/confidential-containers/tree/main/releases) and tag/release the confidential-containers repo using the GitHub release tool.**
-- [ ] 7. :hammer: **Poke Wainer Moschetta (@wainersm) to update the release to the OperatorHub. Find the documented flow [here](https://github.com/confidential-containers/operator/blob/main/docs/OPERATOR_HUB.md).**
+- [ ] 8. :hammer: **Poke Wainer Moschetta (@wainersm) to update the release to the OperatorHub. Find the documented flow [here](https://github.com/confidential-containers/operator/blob/main/docs/OPERATOR_HUB.md).**
 ### Post-release
-- [ ] 8. :wrench: **Open a PR to the operator to go back to latest payloads after release**
-After the release, the operator's payloads need to go back to what they were (e.g. using "latest" instead of a specific commit sha). As an example, step 4 for the v0.9.0-alpha0 release applied [these changes](https://github.com/confidential-containers/operator/pull/368/files), and for this step, you should use `git revert` to undo such changes you made during the release.
+- [ ] 9. :wrench: **Open a PR to the operator to go back to latest payloads after release**
+After the release, the operator's payloads need to go back to what they were (e.g. using "latest" instead of a specific commit sha). As an example, the v0.9.0-alpha1 release applied [these changes](https://github.com/confidential-containers/operator/pull/389/files). You should use `git revert -s` for this.

1 .lycheeignore Normal file

@@ -0,0 +1 @@
https://sigs.centos.org/virt/tdx/

38 ADOPTERS.md Normal file

@@ -0,0 +1,38 @@
# Confidential Containers Adopters
This page lists organizations/companies and projects/products that use Confidential Containers, at different usage levels (development, beta, production, GA, etc.).
**NOTE:** To add your organization/company and project/product to this table (kept in alphabetical order), fork the repository and open a PR with the required change.
See the list of adopter types at the bottom of this page.
## Adopters
| Organization/Company | Project/Product | Usage level | Adopter type | Details |
|-------------------------------------------------------------------|---------------------------------------------------------------|--------------------------|----------------------------------|---------------------------------------------------------------------------|
|[Alibaba Cloud (Aliyun)](https://www.alibabacloud.com/)| [Elastic Algorithm Service](https://www.alibabacloud.com/help/en/pai/user-guide/eas-model-serving/?spm=a2c63.p38356.0.0.2b2b6679Pjozxy) and [Elastic GPU Service](https://www.alibabacloud.com/help/en/egs/) | Beta | Service Provider | Both services use sub-projects of Confidential Containers to protect the user data and AI model from being exposed to the CSP (for details: mading.ma@alibaba-inc.com) |
| [Edgeless Systems](https://www.edgeless.systems/) | [Contrast](https://github.com/edgelesssys/contrast) | Beta | Service Provider / Consultancy | Contrast runs confidential container deployments on Kubernetes at scale. |
| [IBM](https://www.ibm.com/z) | [IBM LinuxONE](https://www.ibm.com/linuxone) | Beta | Service Provider | Confidential Containers with Red Hat OpenShift Container Platform and IBM® Secure Execution for Linux (see [details](https://www.ibm.com/blog/confidential-containers-with-red-hat-openshift-container-platform-and-ibm-secure-execution-for-linux/)) |
|NanhuLab|Trusted Big Data Sharing System |Beta |Service Provider |The system uses confidential containers to ensure that data users can utilize the data without being able to view the raw data. (No official website yet. For details: yzc@nanhulab.ac.cn) |
| [KubeArmor](https://www.kubearmor.io/) | Runtime Security | Beta | Another project | An open source project that leverages CoCo as part of their solution, integrates with for compatibility and interoperability, or is used in the supply chain of another project [(5GSEC)](https://github.com/5GSEC/nimbus/blob/main/examples/clusterscoped/coco-workload-si-sib.yaml). |
| [Red Hat](https://www.redhat.com/en) | [OpenShift confidential containers](https://www.redhat.com/en/blog/learn-about-confidential-containers) | Beta | Service Provider | Confidential Containers are available from [OpenShift sandboxed containers release version 1.7.0](https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.7/) as a tech preview on Azure cloud for both Intel TDX and AMD SEV-SNP. The tech preview also includes support for confidential containers on IBM Z and LinuxONE using Secure Execution for Linux (IBM SEL).|
|TBD| | | | |
|TBD| | | | |
## Adopter types
See CNCF [definition of an adopter](https://github.com/cncf/toc/blob/main/FAQ.md#what-is-the-definition-of-an-adopter) <br>
Any single company can fall under several categories at once, in which case it should enumerate all that apply but only be listed once:
- **End-User** (CNCF member) - Companies and organizations who are End-User members of the CNCF
- **Another project** - an open source project that leverages a CNCF project as part of their solution, integrates with for compatibility and interoperability,
or is used in the supply chain of another project
- **end users** (non CNCF member) - companies and organizations that are not CNCF End-User members but use the project and cloud native technologies internally, or build upon
a cloud native open source project without selling it externally as a service offering (those are Service Providers). This group is identified in written
form by the convention "end user", uncapitalized and unhyphenated
- **Service Provider** - an organization that repackages an open source project as a core component of a service offering and sells cloud native services externally.
A Service Provider's customers are considered transitive adopters and should be excluded from identification within the ADOPTERS.md file.
Examples of Service Providers (and not end users) include cloud providers (e.g., Alibaba Cloud, AWS, Google Cloud, Microsoft Azure), some infrastructure software vendors,
and telecom operators (e.g., AT&T, China Mobile)
- **Consultancy** - an entity whose purpose is to assist other organizations in developing a solution leveraging cloud native technology. Consultants may be embedded in the end user team
and be responsible for the execution of the service. Service Providers may also provide consultancy services, and may package cloud native technologies for reuse
as part of their offerings. These function as proxies for an end user


@@ -1,15 +1,18 @@
# CoCo Steering Committee / Maintainers
#
-# Github ID, Name, Email Address
-ariel-adam, Ariel Adam, aadam@redhat.com
-bpradipt, Pradipta Banerjee, prbanerj@redhat.com
-dcmiddle, Dan Middleton, dan.middleton@intel.com
-fitzthum, Tobin Feldman-Fitzthum, tobin@ibm.com
-jiazhang0, Zhang Jia, zhang.jia@linux.alibaba.com
-Jiang Liu, jiangliu, gerry@linux.alibaba.com
-larrydewey, Larry Dewey, Larry.Dewey@amd.com
-magowan, James Magowan, magowan@uk.ibm.com
-peterzcst, Peter Zhu, peter.j.zhu@intel.com
-sameo, Samuel Ortiz, samuel.e.ortiz@protonmail.com
+# Github ID, Name, Affiliation
+ariel-adam, Ariel Adam, Redhat
+bpradipt, Pradipta Banerjee, Redhat
+peterzcst, Peter Zhu, Intel
+mythi, Mikko Ylinen, Intel
+magowan, James Magowan, IBM
+fitzthum, Tobin Feldman-Fitzthum, IBM
+jiazhang0, Zhang Jia, Alibaba
+jiangliu, Jiang Liu, Alibaba
+larrydewey, Larry Dewey, AMD
+ryansavino, Ryan Savino, AMD
+sameo, Samuel Ortiz, Rivos
+zvonkok, Zvonko Kaiser, NVIDIA
+vbatts, Vincent Batts, Microsoft
+danmihai1, Dan Mihai, Microsoft


@@ -24,7 +24,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
 ### Get started quickly...
 - [Kubernetes Operator for Confidential
-Computing](https://github.com/confidential-containers/confidential-containers-operator) : An
+Computing](https://github.com/confidential-containers/operator) : An
 operator to deploy confidential containers runtime (and required configs) on a Kubernetes cluster
@@ -35,6 +35,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
 - [Project Overview](./overview.md)
 - [Project Architecture](./architecture.md)
 - [Our Roadmap](./roadmap.md)
+- [Our Release Content Planning](https://github.com/orgs/confidential-containers/projects/6)
 - [Alignment with other Projects](alignment.md)

Binary file not shown

Binary file not shown (Before: 34 KiB)

Binary file not shown (Before: 29 KiB)


@@ -40,15 +40,25 @@ Project maintainers are first and foremost active *Contributors* to the project
* Creating and assigning project issues.
* Enforcing the [Code of Conduct](https://github.com/confidential-containers/community/blob/main/CODE_OF_CONDUCT.md).
-The list of maintainers for a project is defined by the project `CODEOWNERS` file placed at the top-level of each project's repository.
+Project maintainers are managed via GitHub teams. The maintainer team for a project is referenced in the `CODEOWNERS` file
+at the top level of each project repository.
 ### Becoming a project maintainer
 Existing maintainers may decide to elevate a *Contributor* to the *Maintainer* role based on the contributor established trust and contributions relevance.
 This decision process is not formally defined and is based on lazy consensus from the existing maintainers.
-Any contributor may request for becoming a project maintainer by opening a pull request (PR) against the `CODEOWNERS` file, and adding *all* current maintainers as reviewer of the PR.
 Maintainers may also pro-actively promote contributors based on their contributions and leadership track record.
+A contributor can propose themself or someone else as a maintainer by opening an issue in the repository for the project in question.
+### Removing project maintainers
+Inactive maintainers can be removed by the Steering Committee.
+Maintainers are considered inactive if they have made no GitHub contributions relating to the project they maintain
+for more than six months.
+Before removing a maintainer, the Steering Committee should notify the maintainer of their status.
+Not all inactive maintainers must be removed.
+This process should mainly be used to remove maintainers that have permanently moved on from the project.
## Steering Committee Member

252 guides/coco-dev.md Normal file

@@ -0,0 +1,252 @@
# Running a workload
## Creating a sample CoCo workload
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata-qemu-coco-dev` runtime class, which runs CoCo without confidential hardware.
Initially we will try this with an unencrypted container image.
In this example, we will be using the bitnami/nginx image as described in the following yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx
  name: nginx
  annotations:
    io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
  containers:
  - image: bitnami/nginx:1.22.0
    name: nginx
  dnsPolicy: ClusterFirst
  runtimeClassName: kata-qemu-coco-dev
```
Setting the `runtimeClassName` is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](../guides) for
more details.
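Before applying the pod, it can be worth confirming that the operator created the runtime class you plan to use (a quick sanity check; the exact names depend on your platform and operator configuration):

```shell
# List the runtime classes installed by the CoCo operator; expect entries such
# as kata-qemu-coco-dev, kata-qemu-tdx, or kata-qemu-snp depending on platform.
kubectl get runtimeclass
```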
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```shell
kubectl apply -f nginx.yaml
```
Output:
```shell
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```shell
kubectl get pods
```
Output:
```shell
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you don't have any bitnami/nginx images on it:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted,
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](../guides/sev.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has one runtime class, `kata-qemu-tdx`.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](../guides/sev.md).
For SNP, use the `kata-qemu-snp` runtime class and follow the [SNP guide](../guides/snp.md).
For `enclave-cc` follow the [enclave-cc guide](../guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` YAML file is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/trustee.git
cd trustee/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
cd ..
# Start KBS cluster
docker compose up -d
```
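Once `docker compose` returns, you can check that the services came up (a quick sanity check; the exact service names may vary across trustee versions):

```shell
# Expect the KBS, the Attestation Service, the RVPS, and the CoCo Keyprovider
# containers to all be in the "Up" state.
docker compose ps
```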
If the KBS cluster requires additional configuration, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.toml`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification. See [details](https://github.com/confidential-containers/trustee/blob/main/attestation-service/docs/grpc-as.md#quick-start).
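The restart mentioned above can be done from the same directory that `docker compose up -d` was run from earlier (a sketch; assumes you are still in the `trustee` checkout):

```shell
# Stop the KBS cluster and start it again so the edited config files are
# picked up by the services on startup.
docker compose down
docker compose up -d
```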
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/default.rego`: Policy file for evidence verification of the AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
If building with Ubuntu 22.04, make sure to follow the instructions to build skopeo from source, otherwise
there will be errors regarding version incompatibility between ocicrypt and skopeo. Make sure the downloaded
skopeo version is at least 1.16.0. Ubuntu 22.04 builds skopeo with an outdated ocicrypt version, which does
not support the keyprovider protocol we depend on.
Use `skopeo` to encrypt an image on the same node of the KBS cluster (use busybox:latest for example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted; in the background, the KBS cluster does the following:
- The CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- The CoCo Keyprovider registers the key with the key-id into the KBS.
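You can sanity-check the result before pushing: encrypted layers in the OCI image carry an `+encrypted` media-type suffix (a sketch; the exact output fields vary across skopeo versions):

```shell
# Inspect the encrypted image; encrypted layers show a MIME type like
# application/vnd.oci.image.layer.v1.tar+gzip+encrypted
skopeo inspect oci:busybox:encrypted | grep -i encrypted
```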
Then push the image to a registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with registry scheme type like `docker`, replace `[REGISTRY_URL]` with the desired registry URL like `docker.io/encrypt_test/busybox`.
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image.
To install cosign, download and unpack the package for your machine from the [release page](https://github.com/sigstore/cosign/releases).
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
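After signing, the signature can be checked locally before moving on, using the public half of the key pair generated above:

```shell
# Verify the signature with the cosign public key; replace [REGISTRY_URL]
# as in the previous steps.
cosign verify --key cosign.pub [REGISTRY_URL]:encrypted
```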
Then edit an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
  "default": [{"type": "reject"}],
  "transports": {
    "docker": {
      "[REGISTRY_URL]": [
        {
          "type": "sigstoreSigned",
          "keyPath": "kbs:///default/cosign-key/1"
        }
      ]
    }
  }
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
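The `keyPath` in the policy uses the `kbs:///` URI scheme; with the file-backed KBS storage used in this guide, a resource URI of the form `kbs:///<repository>/<type>/<tag>` maps directly onto a path under `$KBS_DIR_PATH/data/kbs-storage`. A minimal sketch of that mapping (the helper name is ours, not part of KBS):

```shell
# Sketch: map a kbs:// resource URI (repository/type/tag) to its location
# in the file-backed KBS storage layout used in this guide.
kbs_uri_to_path() {
  local uri="$1"
  # strip the kbs:/// scheme prefix and join with the storage root
  echo "${KBS_DIR_PATH}/data/kbs-storage/${uri#kbs:///}"
}

# e.g. kbs_uri_to_path kbs:///default/cosign-key/1
```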
### Deploy an Encrypted Image as a CoCo workload on CC HW
Here is a sample yaml for encrypted image deploying:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: encrypted-image-test-busybox
  name: encrypted-image-test-busybox
  annotations:
    io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
  containers:
  - image: [REGISTRY_URL]:encrypted
    name: busybox
  dnsPolicy: ClusterFirst
  runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in the previous steps, and `[RUNTIME_CLASS]` with the kata runtime class for the CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-coco-dev`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, such as `http://123.123.123.123:8080`.
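That edit can be scripted; the sketch below is one way to do it. The `add_kbc_param` helper name and the `sed` pattern matching a `kernel_params = "..."` line are our assumptions, not part of the CoCo tooling, and the file under `/opt/confidential-containers` must be edited with root privileges:

```shell
# Sketch: append agent.aa_kbc_params to the kernel_params line of a kata
# configuration file. Usage: add_kbc_param <config.toml> <kbs-uri>
add_kbc_param() {
  local config="$1" kbs_uri="$2"
  # assumes the file contains a line of the form: kernel_params = "..."
  sed -i "s|^kernel_params = \"\(.*\)\"|kernel_params = \"\1 agent.aa_kbc_params=cc_kbc::${kbs_uri}\"|" "$config"
}

# e.g. (as root):
# add_kbc_param /opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-coco-dev.toml http://123.123.123.123:8080
```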
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```

# EAA Verdictd Guide
**EAA/Verdictd support has been deprecated in Confidential Containers**
EAA is used to perform attestation at runtime and provide the guest with confidential resources such as keys.
It is based on [rats-tls](https://github.com/inclavare-containers/rats-tls).
[Verdictd](https://github.com/inclavare-containers/verdictd) is the Key Broker Service and Attestation Service of EAA.
The EAA KBC is an optional module in the attestation-agent at compile time,
which can be used to communicate with Verdictd.
The communication is established on the encrypted channel provided by rats-tls.
EAA can now be used on Intel TDX and Intel SGX platforms.
## Create encrypted image
Before building an encrypted image, make sure Skopeo and Verdictd (EAA KBS) are installed:
- [Skopeo](https://github.com/containers/skopeo): the command line utility to perform encryption operations.
- [Verdictd](https://github.com/inclavare-containers/verdictd): EAA Key Broker Service and Attestation Service.
1. Pull unencrypted image.
This example uses `alpine:latest`:
```sh
${SKOPEO_HOME}/bin/skopeo copy --insecure-policy docker://docker.io/library/alpine:latest oci:alpine
```
2. Follow the [Verdictd README #Generate encrypted container image](https://github.com/inclavare-containers/verdictd#generate-encrypted-container-image) to encrypt the image.
3. Publish the encrypted image to your registry.
## Deploy encrypted image
1. Build rootfs with EAA component:
Specify `AA_KBC=eaa_kbc` when using the kata-containers `rootfs.sh` script to create the rootfs.
2. Launch Verdictd
Verdictd performs remote attestation at runtime and provides the key needed to decrypt the image.
It is both the Key Broker Service and the Attestation Service of EAA.
So when deploying the encrypted image, Verdictd needs to be launched:
```sh
verdictd --listen <$ip>:<$port> --mutual
```
> **Note** The communication between Verdictd and the EAA KBC is based on rats-tls,
> so you need to confirm that [rats-tls](https://github.com/inclavare-containers/rats-tls) is correctly installed in your running environment.
3. Agent Configuration
Add the configuration `aa_kbc_params = 'eaa_kbc::<$IP>:<$PORT>'` to the agent config file. The IP and port should match the Verdictd instance.

This guide assumes that you already have a Kubernetes cluster
and have deployed the operator as described in the **Installation**
section of the [quickstart guide](../quickstart.md).
## Configuring Kubernetes cluster when using SGX hardware mode build
The following additional setup steps are needed when using the hardware SGX mode:
1. The cluster needs to have [Intel Software Guard Extensions (SGX) device plugin for Kubernetes](
https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/sgx_plugin/README.html#prerequisites) running.
1. The cluster needs to have [Intel DCAP aesmd](
https://github.com/intel/SGXDataCenterAttestationPrimitives) running on every SGX node and the nodes must be registered.
**Note** kind/minikube based clusters are not recommended when using hardware mode SGX.
## Configuring enclave-cc custom resource to use a different KBC
**Note** Before configuring KBC, please refer to the
```yaml
spec:
  containers:
  - image: ghcr.io/confidential-containers/test-container-enclave-cc:encrypted
    name: hello-world
    workingDir: "/run/rune/occlum_instance/"
    resources:
      limits:
        sgx.intel.com/epc: 600Mi
    env:
    - name: OCCLUM_RELEASE_ENCLAVE
      value: "1"
    command:
    - /run/rune/occlum_instance/build/bin/occlum-run
    - /bin/hello_world
  runtimeClassName: enclave-cc
```
```
Hello world!
```
**NOTE** When running in the hardware SGX mode, the logging is disabled
by default.
We can also verify that the image is not present on the host:
```sh
crictl -r unix:///run/containerd/containerd.sock image ls | grep helloworld_enc
```
# IBM Secure Execution Guide
This document explains how to install and run a confidential container on an IBM Secure
Execution-enabled Z machine (s390x). A secure image is an encrypted Linux image comprising a kernel image,
an initial RAM file system (initrd) image, and a file specifying kernel parameters (parmfile).
It is an essential component for running a confidential container. The public key used for
encryption is associated with a private key managed by a trusted firmware called
[ultravisor](https://www.ibm.com/docs/en/linux-on-systems?topic=execution-components).
This means that a secure image is machine-specific, resulting in its absence from a released
payload image in `ccruntime`. To use it, you need to build a secure image with your own public
key and create a payload image bundled with it. The following sections elaborate on how to
accomplish this step-by-step.
## Prerequisites
Please review the identically titled [section](https://github.com/confidential-containers/confidential-containers/blob/main/quickstart.md#prerequisites) in the quickstart guide. In addition:
- `kustomize`: Kubernetes native configuration management tool which can be installed simply by:
```
$ mkdir -p $GOPATH/src/github.com/confidential-containers
$ cd $GOPATH/src/github.com/confidential-containers
$ git clone https://github.com/confidential-containers/operator.git && cd operator
$ make kustomize
$ export PATH=$PATH:$(pwd)/bin
```
Or simply follow the official [documentation](https://kubectl.docs.kubernetes.io/installation/kustomize/) based on your environment.
## Build a Payload Image via kata-deploy
If you have a local container registry running at `localhost:5000`, refer to the
[document](https://github.com/kata-containers/kata-containers/blob/main/docs/how-to/how-to-run-kata-containers-with-SE-VMs.md#using-kata-deploy-with-confidential-containers-operator)
on Kata Containers for details on building a payload image.
## Install Operator
Install the operator with:
```
$ cd $GOPATH/src/github.com/confidential-containers/operator
$ export IMG=localhost:5000/cc-operator
$ make docker-build && make docker-push
$ make install && make deploy
namespace/confidential-containers-system created
customresourcedefinition.apiextensions.k8s.io/ccruntimes.confidentialcontainers.org created
serviceaccount/cc-operator-controller-manager created
role.rbac.authorization.k8s.io/cc-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/cc-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/cc-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-proxy-rolebinding created
configmap/cc-operator-manager-config created
service/cc-operator-controller-manager-metrics-service created
deployment.apps/cc-operator-controller-manager created
$ kubectl get pods -n confidential-containers-system --watch
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-6d6b78b7f5-m9chj 2/2 Running 0 61s
$ # Press ctrl C to stop watching
```
## Install Custom Resource (CR)
You can install `ccruntime` with an existing overlay directory named
[`s390x`](https://github.com/confidential-containers/operator/tree/main/config/samples/ccruntime/s390x), by replacing the image name and tag
for a payload image with the ones you pushed to the local registry
(e.g. `localhost:5000/build-kata-deploy:latest`):
```
$ cd $GOPATH/src/github.com/confidential-containers/operator/config/samples/ccruntime/s390x
$ kustomize edit set image quay.io/kata-containers/kata-deploy=localhost:5000/build-kata-deploy:latest
$ kubectl create -k .
ccruntime.confidentialcontainers.org/ccruntime-sample-s390x created
$ kubectl get pods -n confidential-containers-system --watch
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-6d6b78b7f5-m9chj 2/2 Running 0 11m
cc-operator-daemon-install-srmc4 1/1 Running 0 2m13s
cc-operator-pre-install-daemon-t9r2h 1/1 Running 0 3m6s
$ # To verify if a payload image is pulled from the updated location
$ kubectl get pods -oyaml -n confidential-containers-system -l 'name=cc-operator-daemon-install' | grep image:
image: localhost:5000/build-kata-deploy:latest
image: localhost:5000/build-kata-deploy:latest
```
Wait until the set of runtime classes is deployed:
```
$ kubectl get runtimeclass
NAME HANDLER AGE
kata kata-qemu 60s
kata-qemu kata-qemu 61s
kata-qemu-se kata-qemu-se 61s
```
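Waiting for the runtime class can be scripted with a small polling loop. The `wait_for` helper below is our sketch, not an operator tool; the `kubectl` usage in the comment assumes the `kata-qemu-se` class from the listing above:

```shell
# Sketch: poll until a command succeeds, with a rough timeout in seconds.
wait_for() {
  local timeout="$1"; shift
  local waited=0
  until "$@"; do
    waited=$((waited + 1))
    [ "$waited" -ge "$timeout" ] && return 1
    sleep 1
  done
}

# Illustrative usage: block until the SE runtime class is registered.
# wait_for 300 sh -c 'kubectl get runtimeclass kata-qemu-se >/dev/null 2>&1'
```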
## Verify the Installation
To verify the installation, create a pod using the `kata-qemu-se` runtime class:
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata
spec:
  runtimeClassName: kata-qemu-se
  containers:
  - name: nginx
    image: nginx
EOF
pod/nginx-kata created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-kata 1/1 Running 0 15s
```
## Uninstall Resources
You can uninstall confidential containers by removing the resources in reverse order:
```
$ cd $GOPATH/src/github.com/confidential-containers/operator
$ kubectl delete -k config/samples/ccruntime/s390x
$ make undeploy
```

# SEV-ES Guide
Confidential Containers supports SEV(-ES) with pre-attestation using
[simple-kbs](https://github.com/confidential-containers/simple-kbs).
This guide covers platform-specific setup for SEV(-ES) and walks through
complete flows for attestation and encrypted images.
## Creating a CoCo workload using a pre-existing encrypted image on SEV
### Platform Setup
To enable SEV on the host platform, first ensure that it is supported. Then follow these instructions to enable SEV:
[AMD SEV - Prepare Host OS](https://github.com/AMDESE/AMDSEV#prepare-host-os)
### Install sevctl and Export SEV Certificate Chain
[sevctl](https://github.com/virtee/sevctl) is the SEV command line utility and is needed to export the SEV certificate chain.
Follow these steps to install `sevctl`:
* Debian / Ubuntu:
```
# Rust must be installed to build
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
sudo apt install -y musl-dev musl-tools
# Additional packages are required to build
sudo apt install -y pkg-config libssl-dev asciidoctor
# Clone the repository
git clone https://github.com/virtee/sevctl.git
# Build
(cd sevctl && cargo build)
```
* CentOS / Fedora / RHEL:
```
sudo dnf install sevctl
```
> **Note** Due to [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=2037963) on sevctl for RHEL and Fedora you might need to build the tool from sources to pick the fix up.
If using the SEV kata configuration template file, the SEV certificate chain must be placed in `/opt/sev`. Export the SEV certificate chain using the following commands:
```
sudo mkdir -p /opt/sev
sudo ./sevctl/target/debug/sevctl export --full /opt/sev/cert_chain.cert
```
## Platform Setup
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will unnecessarily build AMD compatible guest kernel, OVMF, and QEMU components. The additional components can be used with the script utility to test launch and attest a base QEMU SNP guest. However, for the CoCo use case, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
## Getting Started
This guide covers platform-specific setup for SEV and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
- [Pre-Attestation Utilizing Signed and Encrypted Images](#pre-attestation-utilizing-signed-and-encrypted-images)
### Setup and Run the simple-kbs
By default, the `kata-qemu-sev` runtime class uses pre-attestation with the
`online-sev-kbc` and [simple-kbs](https://github.com/confidential-containers/simple-kbs) to attest the guest and provision secrets.
`simple-kbs` is a basic prototype key broker which can validate a guest measurement according to a specified policy and conditionally release secrets.
To use encrypted images, signed images, or authenticated registries with SEV, you should set up `simple-kbs`.
If you simply want to run an unencrypted container image, you can disable pre-attestation by adding the annotation
`io.katacontainers.config.pre_attestation.enabled: "false"` to your pod.
If you are using pre-attestation, you will need to add an annotation to your pod configuration which contains the URI of a `simple-kbs` instance.
This annotation should be of the form `io.katacontainers.config.pre_attestation.uri: "<KBS IP>:44444"`.
Port 44444 is the default port per the directions below, but it may be configured to use another port.
The KBS IP must be accessible from inside the guest.
Usually it should be the public IP of the node where `simple-kbs` runs.
The SEV policy can also be set by adding `io.katacontainers.config.sev.policy: "<SEV POLICY>"` to your pod configuration. The default policies for SEV and SEV-ES are, respectively, "3" and "7", where the following bits are enabled:
| Bit | Name | Description |
| --- | --- | --- |
| 0 | NODBG | Debugging of the guest is disallowed |
| 1 | NOKS | Sharing keys with other guests is disallowed |
| 2 | ES | SEV-ES is required |
For more information about the SEV policy, see chapter 3 of the [Secure Encrypted Virtualization API](https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf) (PDF).
> Note: the SEV policy is not the same as the policies that drive `simple-kbs`.
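The numeric policy value is simply the bitwise OR of the enabled bits from the table above. A quick illustration (our sketch, mirroring the stated defaults):

```shell
# Sketch: compose SEV policy values from the bit flags in the table above.
NODBG=$((1 << 0))   # bit 0: debugging disallowed
NOKS=$((1 << 1))    # bit 1: key sharing disallowed
ES=$((1 << 2))      # bit 2: SEV-ES required
sev_default=$((NODBG | NOKS))          # SEV default policy
sev_es_default=$((NODBG | NOKS | ES))  # SEV-ES default policy
echo "$sev_default $sev_es_default"    # prints: 3 7
```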
`docker compose` is required to run the `simple-kbs` and its database in docker containers. Installation instructions are available on [Docker's website](https://docs.docker.com/compose/install/linux/).
Clone the repository at the specified tag:
```
simple_kbs_tag="0.1.1"
git clone https://github.com/confidential-containers/simple-kbs.git
(cd simple-kbs && git checkout -b "branch_${simple_kbs_tag}" "${simple_kbs_tag}")
```
Run the service with `docker compose`:
```
(cd simple-kbs && sudo docker compose up -d)
```
## Container Launch With Memory Encryption
The CoCo project has created a sample encrypted container image ([ghcr.io/confidential-containers/test-container:encrypted](https://github.com/orgs/confidential-containers/packages/container/test-container/82546314?tag=encrypted)). This image is encrypted using a key that comes already provisioned inside the `simple-kbs` for ease of testing. No `simple-kbs` policy is required to get things running.
The image encryption key and the key for SSH access have been attached to the CoCo sample encrypted container image as docker labels. This image is meant for TEST purposes only, as these keys are published publicly. In a production use case, these keys would be generated by the workload administrator and kept secret. For further details, see the section on how to [Create an Encrypted Image](#create-an-encrypted-image).
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Guest Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-guest-firmware-measurement).
### Launch a Confidential Service
To launch a container with SEV memory encryption, the SEV runtime class (`kata-qemu-sev`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with an [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
### Launch the Pod and Verify SEV Encryption
Here is a sample service yaml specifying the SEV runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    app: "confidential-unencrypted"
  ports:
  - port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: "confidential-unencrypted"
spec:
  selector:
    matchLabels:
      app: "confidential-unencrypted"
  template:
    metadata:
      labels:
        app: "confidential-unencrypted"
      annotations:
        io.containerd.cri.runtime-handler: kata-qemu-sev
    spec:
      runtimeClassName: kata-qemu-sev
      containers:
      - name: "confidential-unencrypted"
        image: ghcr.io/kata-containers/test-images:unencrypted-nightly
        imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors in the Events section, then the container has been successfully created with SEV memory encryption.
### Validate SEV Memory Encryption
The container `dmesg` log can be parsed to indicate that SEV memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Download and save the [SSH private key](https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted) and set its permissions:
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SEV memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
```
If SEV is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SEV
```
If SSH access to the container is desired, create a keypair:
```shell
ssh-keygen -t ed25519 -f encrypted-image-tests -P "" -C "" <<< y
```
The above command will save the keypair in a file named `encrypted-image-tests`.
Here is a sample Dockerfile to create a docker image:
```Dockerfile
FROM alpine:3.16
# Update and install openssh-server
Store this `Dockerfile` in the same directory as the `encrypted-image-tests` ssh key files.
Build image:
```shell
docker build -t encrypted-image-tests .
```
Tag and upload this unencrypted docker image to a registry:
```shell
docker tag encrypted-image-tests:latest [REGISTRY_URL]:unencrypted
docker push [REGISTRY_URL]:unencrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL.
The Attestation Agent hosts a grpc service to support encrypting the image. Clone the repository:
```shell
attestation_agent_tag="v0.1.0"
git clone https://github.com/confidential-containers/attestation-agent.git
(cd attestation-agent && git checkout -b "branch_${attestation_agent_tag}" "${attestation_agent_tag}")
```
Run the offline_fs_kbs:
```shell
(cd attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs \
&& cargo run --release --features offline_fs_kbs -- --keyprovider_sock 127.0.0.1:50001 &)
```
Create the Attestation Agent keyprovider:
```shell
cat > attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf <<EOF
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50001"
    }
  }
}
EOF
```
Set a desired value for the encryption key; it should be a 32-byte, base64-encoded value:
```shell
enc_key="RcHGava52DPvj1uoIk/NVDYlwxi0A6yyIZ8ilhEX3X4="
```
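Instead of reusing the fixed sample value, a fresh key can be generated locally. This is our sketch and assumes `openssl` is available; any source of 32 random bytes works:

```shell
# Sketch: generate a random 32-byte key and base64-encode it for use as enc_key.
enc_key="$(openssl rand -base64 32)"
# 32 raw bytes always encode to 44 base64 characters
echo "${#enc_key}"
```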
Create a Key file:
```shell
cat > keys.json <<EOF
{
  "key_id1": "${enc_key}"
}
EOF
```
Run skopeo to encrypt the image created in the previous section:
```shell
sudo OCICRYPT_KEYPROVIDER_CONFIG=$(pwd)/attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf \
skopeo copy --insecure-policy \
  docker:[REGISTRY_URL]:unencrypted \
  docker:[REGISTRY_URL]:encrypted \
  --encryption-key provider:attestation-agent
```
A remote registry known to support encrypted images, like GitHub Container Registry, should be used.
At this point it is a good idea to verify that the image was really encrypted, as skopeo can silently leave it unencrypted. Use
`skopeo inspect` as shown below to check that the layer MIME types are **application/vnd.oci.image.layer.v1.tar+gzip+encrypted**:
```shell
skopeo inspect docker-daemon:[REGISTRY_URL]:encrypted
```
Push the encrypted image to the registry:
```shell
docker push [REGISTRY_URL]:encrypted
```
`mysql-client` is required to insert the key into the `simple-kbs` database. `jq` is required to parse JSON responses on the command line.
* Debian / Ubuntu:
```shell
sudo apt install mysql-client jq
```
* CentOS / Fedora / RHEL:
```shell
sudo dnf install [ mysql | mariadb | community-mysql ] jq
```
The `mysql-client` package name may differ depending on OS flavor and version.
The `simple-kbs` uses default settings and credentials for the MySQL database. These settings can be changed by the `simple-kbs` administrator and saved into a credential file. For the purposes of this quick start, set them in the environment for use with the MySQL client command line:
```shell
KBS_DB_USER="kbsuser"
KBS_DB_PW="kbspassword"
KBS_DB="simple_kbs"
KBS_DB_TYPE="mysql"
```
Retrieve the host address of the MySQL database container:
```shell
KBS_DB_HOST=$(docker network inspect simple-kbs_default \
| jq -r '.[].Containers[] | select(.Name | test("simple-kbs[_-]db.*")).IPv4Address' \
| sed "s|/.*$||g")
```
Add the key to the `simple-kbs` database without any verification policy:
```shell
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', NULL);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', NULL);
EOF
```
Return to step [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption).
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-guest-firmware-measurement).
## Creating a simple-kbs Policy to Verify the SEV Guest Firmware Measurement
The `simple-kbs` can be configured with a policy that requires the kata shim to provide a matching SEV guest firmware measurement to release the key for decrypting the image. At launch time, the kata shim will collect the SEV guest firmware measurement and forward it in a key request to the `simple-kbs`.
These steps will use the CoCo sample encrypted container image, but the image URL can be replaced with a user created image registry URL.
To create the policy, the value of the SEV guest firmware measurement must be calculated.
`pip` is required to install the `sev-snp-measure` utility.
* Debian / Ubuntu:
```
sudo apt install python3-pip
```
* CentOS / Fedora / RHEL:
```
sudo dnf install python3
```
[sev-snp-measure](https://github.com/IBM/sev-snp-measure) is a utility used to calculate the SEV guest firmware measurement with provided ovmf, initrd, kernel and kernel append input parameters. Install it using the following command:
```
sudo pip install sev-snp-measure
```
The paths to the guest binaries required for measurement are specified in the kata configuration. Set them:
```
ovmf_path="/opt/confidential-containers/share/ovmf/OVMF.fd"
kernel_path="/opt/confidential-containers/share/kata-containers/vmlinuz-sev.container"
initrd_path="/opt/confidential-containers/share/kata-containers/kata-containers-initrd.img"
```
The kernel append line parameters are included in the SEV guest firmware measurement. A placeholder will be initially set, and the actual value will be retrieved later from the qemu command line:
```
append="PLACEHOLDER"
```
Use the `sev-snp-measure` utility to calculate the SEV guest firmware measurement using the binary variables previously set:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
```
If the container image is not already present, pull it:
```
encrypted_image_url="ghcr.io/fitzthum/encrypted-image-tests:unencrypted"
docker pull "${encrypted_image_url}"
```
Retrieve the encryption key from docker image label:
```
enc_key=$(docker inspect ${encrypted_image_url} \
| jq -r '.[0].Config.Labels.enc_key')
```
Add the key, keyset and policy with measurement to the `simple-kbs` database:
```
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
Using the same service yaml from the section on [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption), launch the service:
```
kubectl apply -f encrypted-image-tests.yaml
```
Check for pod errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```
The pod will error out on the key retrieval request to the `simple-kbs` because the policy verification failed due to a mismatch in the SEV guest firmware measurement. This is the error message that should display:
```
Policy validation failed: fw digest not valid
```
The `PLACEHOLDER` value that was set for the kernel append line when the SEV guest firmware measurement was calculated does not match what was measured by the kata shim. The kernel append line parameters can be retrieved from the qemu command line using the following scripting commands, as long as kubernetes is still trying to launch the pod:
```
duration=$((SECONDS+30))
append=""
while [ $SECONDS -lt $duration ]; do
qemu_process=$(ps aux | grep qemu | grep append || true)
if [ -n "${qemu_process}" ]; then
append=$(echo ${qemu_process} \
| sed "s|.*-append \(.*$\)|\1|g" \
| sed "s| -.*$||")
break
fi
sleep 1
done
echo "${append}"
```
The above check will only work if the `encrypted-image-tests` guest launch is the only qemu process running.
Now, recalculate the SEV guest firmware measurement and store the `simple-kbs` policy in the database:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
The pod should now show a successful launch:
```
kubectl describe pod ${pod_name}
```
If the service is hung up, delete the pod and try to launch again:
```
# Delete
kubectl delete -f encrypted-image-tests.yaml
# Verify pod cleaned up
kubectl describe pod ${pod_name}
# Relaunch
kubectl apply -f encrypted-image-tests.yaml
```
Testing the SEV encrypted container launch can be completed by returning to the section on how to [Validate SEV Memory Encryption](#validate-sev-memory-encryption).

# SNP Guide
## Platform Setup
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will unnecessarily build AMD compatible guest kernel, OVMF, and QEMU components. The additional components can be used with the script utility to test launch and attest a base QEMU SNP guest. However, for the CoCo use case, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
## Getting Started
This guide covers platform-specific setup for SNP and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
## Container Launch With Memory Encryption
### Launch a Confidential Service
To launch a container with SNP memory encryption, the SNP runtime class (`kata-qemu-snp`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
Here is a sample service yaml specifying the SNP runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: "confidential-unencrypted"
template:
metadata:
labels:
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-snp
spec:
runtimeClassName: kata-qemu-snp
containers:
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors in the Events section, then the container has been successfully created with SNP memory encryption.
### Validate SNP Memory Encryption
The container dmesg log can be parsed to indicate that SNP memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
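Alternatively, since the sample deployment labels its pods with `app: confidential-unencrypted`, the pod IP can be looked up by label selector instead of grepping. A sketch, assuming a single running pod for the deployment:

```shell
# Select the pod by its app label and read the IP from the pod status
pod_ip=$(kubectl get pods -l app=confidential-unencrypted \
  -o jsonpath='{.items[0].status.podIP}')
echo "${pod_ip}"
```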
Download the SSH private key and restrict its permissions:
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SNP memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
```
If SNP is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SNP
```



@@ -31,7 +31,7 @@ To run the operator you must have an existing Kubernetes cluster that meets the
- Ensure a minimum of 8GB RAM and 4 vCPU for the Kubernetes cluster node
- Only containerd runtime based Kubernetes clusters are supported with the current CoCo release
- The minimum Kubernetes version should be 1.24
- Ensure at least one Kubernetes node in the cluster is having the label `node-role.kubernetes.io/worker=`
- Ensure at least one Kubernetes node in the cluster has the label `node-role.kubernetes.io/worker=` or `node.kubernetes.io/worker=`. This assigns the worker role to a node in your cluster, making it responsible for running your applications and services
- Ensure SELinux is disabled or not enforced (https://github.com/confidential-containers/operator/issues/115)
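If no node carries the worker label yet, it can be added manually. A sketch, assuming a node named `node1` (substitute a real node name from `kubectl get nodes`):

```shell
# List the nodes in the cluster to find the target node name
kubectl get nodes

# Add the worker label to the chosen node (node1 is a placeholder)
kubectl label node node1 node-role.kubernetes.io/worker=
```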
For more details on the operator, including the custom resources managed by the operator, refer to the operator [docs](https://github.com/confidential-containers/operator).
@@ -59,18 +59,18 @@ on the worker nodes is **not** on an overlayfs mount but the path is a `hostPath
Deploy the operator by running the following command where `<RELEASE_VERSION>` needs to be substituted
with the desired [release tag](https://github.com/confidential-containers/operator/tags).
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
```
For example, to deploy the `v0.8.0` release run:
```
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.8.0
For example, to deploy the `v0.10.0` release run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```
```shell
kubectl get pods -n confidential-containers-system --watch
```
@@ -81,25 +81,25 @@ kubectl get pods -n confidential-containers-system --watch
Creating a custom resource installs the required CC runtime pieces into the cluster node and creates
the `RuntimeClasses`
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
```
The current present overlays are: `default` and `s390x`
For example, to deploy the `v0.8.0` release for `x86_64`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.8.0
For example, to deploy the `v0.10.0` release for `x86_64`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0
```
And to deploy `v0.8.0` release for `s390x`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.8.0
And to deploy `v0.10.0` release for `s390x`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```
```shell
kubectl get pods -n confidential-containers-system --watch
```
@@ -114,11 +114,11 @@ Please see the [enclave-cc guide](./guides/enclave-cc.md) for more information.
`enclave-cc` is a form of Confidential Containers that uses process-based isolation.
`enclave-cc` can be installed with the following custom resources.
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/sim?ref=<RELEASE_VERSION>
```
or
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>
```
for the **simulated** SGX mode build or **hardware** SGX mode build, respectively.
@@ -127,287 +127,84 @@ for the **simulated** SGX mode build or **hardware** SGX mode build, respectivel
Check the `RuntimeClasses` that got created.
```
```shell
kubectl get runtimeclass
```
Output:
```
NAME HANDLER AGE
kata kata 9m55s
kata-clh kata-clh 9m55s
kata-clh-tdx kata-clh-tdx 9m55s
kata-qemu kata-qemu 9m55s
kata-qemu-tdx kata-qemu-tdx 9m55s
kata-qemu-sev kata-qemu-sev 9m55s
```shell
NAME HANDLER AGE
kata kata-qemu 8d
kata-clh kata-clh 8d
kata-qemu kata-qemu 8d
kata-qemu-coco-dev kata-qemu-coco-dev 8d
kata-qemu-sev kata-qemu-sev 8d
kata-qemu-snp kata-qemu-snp 8d
kata-qemu-tdx kata-qemu-tdx 8d
```
Details on each of the runtime classes:
- *kata* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh* - standard kata runtime using the cloud hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh-tdx* - using the Cloud Hypervisor, with TD-Shim, and support for Intel TDX CC HW
- *kata* - Convenience runtime that uses the handler of the default runtime
- *kata-clh* - standard kata runtime using the cloud hypervisor
- *kata-qemu* - same as kata
- *kata-qemu-tdx* - using QEMU, with TDVF, and support for Intel TDX CC HW, prepared for using Verdictd and EAA KBC.
- *kata-qemu-coco-dev* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-qemu-sev* - using QEMU, and support for AMD SEV HW
- *kata-qemu-snp* - using QEMU, and support for AMD SNP HW
- *kata-qemu-tdx* - using QEMU, with support for Intel TDX HW, based on what's provided by [Ubuntu](https://github.com/canonical/tdx) and [CentOS 9 Stream](https://sigs.centos.org/virt/tdx/).
If you are using `enclave-cc` you should see the following runtime classes.
```
```shell
kubectl get runtimeclass
```
Output:
```
```shell
NAME HANDLER AGE
enclave-cc enclave-cc 9m55s
```
The CoCo operator environment has been setup and deployed!
### Platform Setup
While the operator deploys all the required binaries and artifacts and sets up runtime classes that use them,
certain platforms may require additional configuration to enable confidential computing. For example, the host
kernel and firmware might need to be configured.
See the [guides](./guides) for more information.
certain platforms may require additional configuration to enable confidential computing. For example, a specific
host kernel or firmware may be required. See the [guides](./guides/) for more information.
# Running a workload
## Using CoCo
## Creating a sample CoCo workload
Below is a brief summary and description of some of the CoCo use cases and features:
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata` runtime class which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
- **Container Launch with Only Memory Encryption (No Attestation)** - Launch a container with memory encryption
- **Container Launch with Encrypted Image** - Launch an encrypted container by proving the workload is running
in a TEE in order to retrieve the decryption key
- **Container Launch with Image Signature Verification** - Launch a container and verify the authenticity and
integrity of an image by proving the workload is running in a TEE
- **Sealed secret** - Implement wrapped kubernetes secrets that are confidential to the workload owner and are
automatically decrypted by proving the workload is running in a TEE
- **Ephemeral Storage** - Temporary storage that is used during the lifecycle of the container but is cleared out
when a pod is restarted or finishes its task. At the moment, only ephemeral storage of the container itself is
supported, and it has to be explicitly configured.
- **Authenticated Registries** - Use secure container registries that require authentication to access and manage container
images, ensuring that only trusted images are deployed in the Confidential Container. The host must have access
to the registry credentials.
- **Secure Storage** - Mechanisms and technologies used to protect data at rest, ensuring that sensitive information
remains confidential and tamper-proof.
- **Peer Pods** - Enable the creation of VMs on any environment without requiring bare metal servers or nested
virtualization support. More information about this feature can be found [here](https://github.com/confidential-containers/cloud-api-adaptor/tree/main).
## Platforms
In our example we will be using the bitnami/nginx image as described in the following yaml:
```
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
annotations:
io.containerd.cri.runtime-handler: kata
spec:
containers:
- image: bitnami/nginx:1.22.0
name: nginx
dnsPolicy: ClusterFirst
runtimeClassName: kata
```
With some TEEs, the CoCo use cases and/or configurations are implemented differently. Those are described in each corresponding
[guide](./guides) section. To get started using CoCo without TEE hardware, follow the CoCo-dev guide below:
Setting the runtimeClassName is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](./guides) for
more details.
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```
kubectl apply -f nginx.yaml
```
Output:
```
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```
kubectl get pods
```
Output:
```
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you still don't have any bitnami/nginx images on it:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](./guides/sev.md).
There is also `eaa_kbc`/`verdictd` which is described [here](./guides/eaa_verdictd.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has two runtime classes, `kata-qemu-tdx` and `kata-clh-tdx`. One uses QEMU as VMM and TDVF as firmware. The other uses Cloud Hypervisor as VMM and TD-Shim as firmware.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](./guides/sev.md).
For `enclave-cc` follow the [enclave-cc guide](./guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for the AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` yaml file is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/kbs.git
cd kbs/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
# Start KBS cluster
docker compose up -d
```
If further configuration of the KBS cluster is required, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.json`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification.
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/policy.rego`: Policy file for evidence verification by the AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
Use `skopeo` to encrypt an image on the same node as the KBS cluster (using `busybox:latest` as an example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
"key-providers": {
"attestation-agent": {
"grpc": "127.0.0.1:50000"
}
}
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. In the background, the KBS cluster does the following:
- CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- CoCo Keyprovider registers the key with its key-id in the KBS.
Then push the image to registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with registry scheme type like `docker`, replace `[REGISTRY_URL]` with the desired registry URL like `docker.io/encrypt_test/busybox`.
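For example, using the `docker` scheme with the illustrative `docker.io/encrypt_test/busybox` repository mentioned above (substitute a registry and repository you control):

```shell
# Push the encrypted image from the local OCI layout to the registry
skopeo copy oci:busybox:encrypted docker://docker.io/encrypt_test/busybox:encrypted
```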
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image. Follow the instructions here to install `cosign`:
[cosign installation](https://docs.sigstore.dev/cosign/installation/)
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
Then edit an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"[REGISTRY_URL]": [
{
"type": "sigstoreSigned",
"keyPath": "kbs:///default/cosign-key/1"
}
]
}
}
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy encrypted image as a CoCo workload on CC HW
Here is a sample yaml for encrypted image deploying:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: encrypted-image-test-busybox
name: encrypted-image-test-busybox
annotations:
io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
containers:
- image: [REGISTRY_URL]:encrypted
name: busybox
dnsPolicy: ClusterFirst
runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the registry URL of the encrypted image generated in the previous steps, and replace `[RUNTIME_CLASS]` with the kata runtime class for the CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-tdx`, `KBS_URI` is the address of Key Broker Service in KBS cluster like `http://123.123.123.123:8080`.
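As a sketch, for the `kata-qemu-tdx` runtime class and the example KBS address above, the resulting entry in `configuration-qemu-tdx.toml` would look like the following. The default value of `kernel_params` varies by release, so append the `agent.aa_kbc_params` option to the existing value rather than replacing it:

```toml
# /opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-tdx.toml
kernel_params = " agent.aa_kbc_params=cc_kbc::http://123.123.123.123:8080"
```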
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
- [CoCo-dev](./guides/coco-dev.md)
- [SEV(-ES)](./guides/sev.md)
- [SNP](./guides/snp.md)
- TDX: No additional steps required.
- [SGX](./guides/enclave-cc.md)
- [IBM Secure Execution](./guides/ibm-se.md)
- ...

releases/v0.10.0.md Normal file

@@ -0,0 +1,92 @@
# Release Notes for v0.10.0
Release Date: September 27th, 2024
This release is based on [3.9.0](https://github.com/kata-containers/kata-containers/releases/tag/3.9.0) of Kata Containers
and [v0.10.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.10.0) of enclave-cc.
This is the first release of Confidential Containers that has feature parity with CCv0.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Support to pull and verify `cosign`-signed images
* Trusted image storage on guest
* Support Intel Tiber Trust Services as the verifier with Trustee for both Kata bare metal and peer-pods deployments
* Init-data support for peer pods
* Image-rs support for whiteouts and for layers with hard-link filenames over 100 characters
* enclave-cc updated to Ubuntu 22.04 based runtime instance
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.30.1 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (k3s)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* SEV(-ES) does not support attestation.
* Sealed secrets only supports secrets in environment variables.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which remained at 75% at the time of this release.
* Community has adopted a security reporting protocol. The status of this is:
* The operator now uses CodeQL for static scans, and it will be added for all other Go-based repositories in the next release.
* Dependencies are now better handled with automatic updates using dependabot.
* Static scan for Rust-based repos will be "N/A".
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None

releases/v0.11.0.md Normal file

@@ -0,0 +1,84 @@
# Release Notes for v0.11.0
Release Date: November 25th, 2024
This release is based on [3.11.0](https://github.com/kata-containers/kata-containers/releases/tag/3.11.0) of Kata Containers
and [v0.11.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.11.0) of enclave-cc.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Sealed secrets can be exposed as volumes
* TDX guests support measured rootfs with dm-verity
* Policy generation improvements
* Test coverage improved for s390x
* Community reached 100% ("passing") on OpenSSF best practices badge
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.30.1 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (Kubeadm)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* SEV(-ES) does not support attestation.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None

releases/v0.9.0.md Normal file

@@ -0,0 +1,94 @@
# Release Notes for v0.9.0
Release Date: July 26th, 2024
This release is based on [3.7.0](https://github.com/kata-containers/kata-containers/releases/tag/3.7.0) of Kata Containers
and [v0.9.1](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.9.1) of enclave-cc.
This is the first non-alpha release of Confidential Containers to be based on the main branch of Kata Containers.
This release does not have complete parity with releases based on CCv0, but it supports most features.
See the limitations section for more details.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Attestation is supported on SEV-SNP and IBM SE
* Encrypted container images are supported
* Authenticated registries are supported
* Pods with init containers can be run with Nydus
* Sealed secrets (as environment variables) are supported
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.24.0 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (k3s)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* Signed images are not yet supported.
* Secure storage is not yet supported.
* SEV(-ES) does not support attestation.
* Sealed secrets only supports secrets in environment variables.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which remained at 75% at the time of this release.
* Community has adopted a security reporting protocol, but application and documentation of static and dynamic analysis still needed.
* Container metadata, such as environment variables, is not measured.
* The Kata Agent allows the host to call several dangerous endpoints.
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
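The last two limitations above relate to the Kata agent policy mechanism. A minimal sketch of a Rego policy that blocks some host-initiated endpoints follows; the request names mirror the Kata agent ttrpc API, but treat the exact names and defaults as assumptions rather than a tested policy:

```rego
# Hypothetical agent policy sketch: deny host-initiated exec and
# stream reads while leaving other requests to the fallback default.
package agent_policy

# Block `kubectl exec` into the pod.
default ExecProcessRequest := false

# Block the host reading container stdout/stderr streams.
default ReadStreamRequest := false

# Requests with no explicit rule fall back to this default.
default AllowRequestsFailingPolicy := true
```

Note that, as stated above, such a policy is enforced by the agent but is not yet tied to the hardware evidence.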
## CVE Fixes
None


@@ -1,87 +1,27 @@
# Confidential Containers Roadmap
-When looking at the project's roadmap we distinguish between the short-term roadmap (2-4 months) vs.
-the mid/long-term roadmap (4-12 months):
-- The **short-term roadmap** is focused on achieving an end-to-end, easy to deploy confidential
-containers solution using at least one HW encryption solution and integrated to k8s (with forked
-versions if needed)
-- The **mid/long-term solutions** focuses on maturing the components of the short-term solution
-and adding a number of enhancements both to the solution and the project (such as CI,
-interoperability with other projects etc.)
-# Short-Term Roadmap
-The short-term roadmap aims to achieve the following:
-- MVP stack for running confidential containers
-- Based on and compatible with Kata Containers 2
-- Based on at least one confidential computing implementation (SEV, TDX, SE, etc)
-- Integration with Kubernetes: kubectl apply -f confidential-pod.yaml
+When looking at the project's roadmap we distinguish between the short-term roadmap (2-6 months) vs. the mid/long-term roadmap (6-18 months):
+- The [short term roadmap](#short-term-roadmap) is focused on achieving an end-to-end, easy to deploy and stable confidential containers solution. We track this work on a number of github boards.
+- The [mid and long term roadmap](#mid-and-long-term-roadmap) focuses on use case driven development.
-The work is targeted to be completed by end of November 2021 and includes 3 milestones:
-![September 2021](./images/RoadmapSept2021.jpg)
-- **September 2021**
-  - Unencrypted image pulled inside the guest, kept in tmpfs
-  - Pod/Container runs from pulled image
-  - Agent API is restricted
-  - crictl only
-![October 2021](./images/RoadmapOct2021.jpg)
-- **October 2021**
-  - Encrypted image pulled inside the guest, kept in tmpfs
-  - Image is decrypted with a pre-provisioned key (No attestation)
-![November 2021](./images/RoadmapNov2021.jpg)
-- **November 2021**
-  - Image is optionally stored on an encrypted, ephemeral block device
-  - Image is decrypted with a key obtained from a key brokering service (KBS)
-  - Integration with kubelet
+# Short Term Roadmap
+The short-term roadmap is based on our github boards and delivered through our on-going releases
-For additional details on each milestone see [Confidential Containers v0](https://docs.google.com/presentation/d/1SIqLogbauLf6lG53cIBPMOFadRT23aXuTGC8q-Ernfw/edit#slide=id.p).
+- [Confidential containers github board](https://github.com/orgs/confidential-containers/projects/6/views/22)
+- [Trustee github board](https://github.com/orgs/confidential-containers/projects/10/views/1)
-Tasks are tracked on a weekly basis through a dedicated spreadsheet.
-For more information see [Confidential Containers V0 Plan](https://docs.google.com/spreadsheets/d/1M_MijAutym4hMg8KtIye1jIDAUMUWsFCri9nq4dqGvA/edit#gid=0&fvid=1397558749).
+# Mid and Long Term Roadmap
+In CoCo use case driven development we identify the main functional requirements the community requires by focusing on key use cases.
-# Mid-Term Roadmap
+This helps the community deliver releases which address real use cases customers require and focusing on the right priorities. The use case driven development approach also includes developing the relevant CI/CDs to ensure end-to-end use cases the community delivers work over time.
-Continue our journey using knowledge and support of Subject Matter Experts (SME's) in other
-projects to form stronger opinions on what is needed from components which can be integrated to
-deliver the confidential containers objectives.
+We target the following use cases:
-- Harden the code used for the demos
-- Improve CI/CD pipeline
-- Clarify the release process
-- Establish processes and tools to support planning, prioritisation, and work in progress
-- Simple process to get up and running regardless of underlying Trusted Execution Environment
-technology
-- Develop a small, simple, secure, lightweight and high performance OCI container image
-management library [image-rs](https://github.com/confidential-containers/image-rs) for
-confidential containers.
-- Develop small, simple shim firmware ([td-shim](https://github.com/confidential-containers/td-shim))
-in support of trusted execution environment for use with cloud native confidential containers.
-- Document threat model and trust model, what are we protecting, how are we achieving it.
-- Identify technical convergence points with other confidential computing projects both inside
-and outside CNCF.
+- Confidential Federated Learning
+- Multi-party Computing (data clean room, confidential spaces etc)
+- Trusted Pipeline (Supply Chain)
+- Confidential RAG LLMs
-# Longer-Term Roadmap
-Focused meetings will be set up to discuss architecture and the priority of longer-term objectives
-in the process of being set up.
-Each meeting will have an agreed focus with people sharing material/thoughts ahead of time.
-Topics under consideration:
-- CI/CD + repositories
-- Community structure and expectations
-- 2 on Mid-Term Architecture
-- Attestation
-- Images
-- Runtimes
-Proposed Topics to influence long-term direction/architecture:
-- Baremetal / Peer Pod
-- Composability of alternative technologies to deliver confidential containers
-- Performance
-- Identity / Service Mesh
-- Reproducible builds/demos
-- Edge Computing
-- Reduce footprint of image pull
+A dedicated working group leads this effort. For additional details we recommend reviewing the working group's notes: [Confidential containers use cases driven development](https://docs.google.com/document/d/1LnGNeyUyPM61Iv4kBKFbfgmBr3RmxHYZ7Ev88obN0_E/edit?tab=t.0#heading=h.b0rnn2bw76n)


@@ -37,7 +37,7 @@ Further documentation will highlight specific [threat vectors](./threats_overvie
considering risk,
impact, mitigation etc as the project progresses. The Security Assurance section, Page 31, of
Cloud Native Computing Foundation (CNCF)
-[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
+[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
will guide this more detailed threat vector effort.
### Related Prior Effort
@@ -51,7 +51,7 @@ For example:
"[A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)"
section 5 of which defines the threat model for confidential computing.
- CNCF Security Technical Advisory Group published
-"[Cloud Native Security Whitepaper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)"
+"[Cloud Native Security Whitepaper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)"
- Kubernetes provides documentation :
"[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)"
- Open Web Application Security Project -
@@ -69,7 +69,25 @@ This means our trust and threat modelling should
- Consider existing Cloud Native technologies and the role they can play for confidential containers.
- Consider additional technologies to fulfil a role in Cloud Native exploitation of TEEs.
-### Out of Scope
+## Illustration
+The following diagram shows which components in a Confidential Containers setup
+are part of the TEE (green boxes labeled TEE). The hardware and guest work in
+tandem to establish a TEE for the pod, which provides the isolation and
+integrity protection for data in use.
+![Threat model](./images/coco-threat-model.png)
+Not depicted: Process-based isolation from the enclave-cc runtime class. That isolation model further removes the guest operating system from the trust boundary. See the enclave-cc sub-project for more details:
+https://github.com/confidential-containers/enclave-cc/
+Untrusted components include:
+1. The host operating system, including its hypervisor, KVM
+2. Other Cloud Provider host software beyond the host OS and hypervisor
+3. Other virtual machines (and their processes) resident on the same host
+4. Any other processes on the host machine (including the kubernetes control plane).
+## Out of Scope
The following items are considered out-of-scope for the trust/threat modelling within confidential
containers :
@@ -82,7 +100,7 @@ containers :
and will only highlight them where they become relevant to the trust model or threats we
consider.
-### Summary
+## Summary
In practice, those deploying workloads into TEE environments may have varying levels of trust
in the personas who have privileges regarding orchestration or hosting the workload. This trust


@@ -5,7 +5,7 @@ Otherwise referred to as actors or agents, these are individuals or groups capab
carrying out a particular threat.
In identifying personas we consider :
- The Runtime Environment, Figure 5, Page 19 of CNCF
-[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf).
+[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf).
This highlights three layers, Cloud/Environment, Workload Orchestration, Application.
- The Kubernetes
[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)
@@ -19,7 +19,7 @@ In identifying personas we consider :
In considering personas we recognise that a trust boundary exists between each persona and we
explore how the least privilege principle (as described on Page 40 of
-[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
+[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
) should apply to any actions which cross these boundaries.
Confidential containers can provide enhancements to ensure that the expected code/containers
@@ -136,7 +136,7 @@ the images they need but also support the verification method they require. A k
relationship is the Workload Provider applying Supply Chain
Security practices (as
described on Page 42 of
-[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
+[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
) when considering Container
Image Providers. So the Container Image Provider must support the Workload Providers
ability to provide assurance to the Data Owner regarding integrity of the code.