52 Commits

Author SHA1 Message Date
Tobin Feldman-Fitzthum
d4668f800c docs: add release notes for v0.11.0
Not a ton of new features since we didn't bump guest-components or
trustee in this release, but we do pick up some really nice changes from
Kata.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-11-25 14:23:18 -05:00
Ariel Adam
5ae29d6da1 Merge pull request #264 from magowan/UpdateRoadmap
Update roadmap.md
2024-11-20 13:50:07 +02:00
James Magowan
92e87a443b Update roadmap.md
Bringing roadmap.md into line with the current roadmap and processes.

Signed-off-by: James Magowan <magowan@uk.ibm.com>
2024-11-19 14:57:35 +00:00
Chris Porter
89933dd404 Add coco threat model diagram
Insert the diagram into the existing trust-model doc.
Add some supporting text around it.
Also add the diagram to the architecture diagrams slide deck.

Signed-off-by: Chris Porter <cporterbox@gmail.com>
2024-11-18 10:32:53 -06:00
Chris Porter
6cf0c51e58 Increase header for out of scope and summary
These two headings do not seem to belong under Required Documentation,
so move them out

Signed-off-by: Chris Porter <porter@ibm.com>
2024-11-18 10:32:53 -06:00
Arvind Kumar
802e66cb5c docs: Updating SNP and SEV and quickstart guides
Updating the SEV and SNP guides to include instructions on launching CoCo with SEV and SNP memory encryption.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-15 14:47:07 -05:00
stevenhorsman
5035fbae1a doc: Add IBM Z to OSC list
Add statement agreed by IBM product manager

Signed-off-by: stevenhorsman <steven@uk.ibm.com>
2024-11-14 09:16:45 +00:00
Ariel Adam
9748f9dd5e Update ADOPTERS.md
Updating the adopters list with RH

Signed-off-by: Ariel Adam <aadam@redhat.com>
2024-11-13 13:37:34 -05:00
Pradipta Banerjee
31c7ab6a9d Merge pull request #259 from nyrahul/main
adding kubearmor/5gsec as adopter
2024-11-11 14:56:15 +05:30
Rahul Jadhav
930165f19e adding kubearmor/5gsec as adopter
Signed-off-by: Rahul Jadhav <nyrahul@gmail.com>
2024-11-11 13:55:59 +05:30
Arvind Kumar
19fb57f3ed Docs: update quickstart
Reorganizing the quickstart guide and adding a new guide page for CoCo-dev instructions for testing CoCo without the use of memory encryption or attestation.

Signed-off-by: Arvind Kumar <arvinkum@amd.com>
2024-11-07 09:27:49 -05:00
Mikko Ylinen
4a357bdd5a MAINTAINERS: update to match with governance.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Mikko Ylinen
1af7e78194 MAINTAINERS: update Intel rep
Same as in commit a57f058b.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-30 10:33:19 -04:00
Xynnn007
bb7ef72658 ADOPTERS: add alibaba cloud case
Signed-off-by: Xynnn007 <xynnn@linux.alibaba.com>
2024-10-24 12:39:29 -04:00
Mikko Ylinen
4679bfb055 README: add a link to our project board
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Mikko Ylinen
9be365e507 drop orphan images that are leftovers from CONTRIBUTING.md
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-10-22 13:24:50 -04:00
Tobin Feldman-Fitzthum
54a9abf965 governance: add process for removing maintainers
Add a formal process for cleaning up our maintainer teams.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
Tobin Feldman-Fitzthum
f515a232cb governance: mention GitHub teams
No changes to policies.

Update the wording to clarify that we manage maintainers
with GitHub teams rather than by putting everyone in the
codeowners file.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-10-22 11:43:08 -04:00
Chris Porter
89fcc4ddcd Release: v0.10.0 release notes
Signed-off-by: Chris Porter <cporterbox@gmail.com>
2024-09-27 13:48:45 -04:00
Steve Horsman
06c81daa12 Merge pull request #198 from lysliu/doc-fix
Doc update: correct worker node label
2024-09-20 09:12:32 +01:00
Hyounggyu Choi
9ee377f1ab docs: Add guide for IBM Secure Execution
This commit migrates the documentation for IBM Secure Execution
from the operator to the confidential-containers repo.
It will be referred to by the QuickStart.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2024-09-16 12:33:44 +02:00
Hyounggyu Choi
1a2dec79a7 docs: Fix broken link to cosign installation
This commit updates a broken link to the cosign installation.

Signed-off-by: Hyounggyu Choi <Hyounggyu.Choi@ibm.com>
2024-09-16 12:33:44 +02:00
Mikko Ylinen
4344346d23 gh: drop project creation and cncf onboarding issue templates
CNCF onboarding is obsolete. Project creation has not been used
so drop that too to make the list of issue creation options a bit
shorter.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-09-03 09:10:09 -04:00
Mikko Ylinen
71da676226 gh: add issue template configuration
Add a suggestion for the newcomers and community to prioritize
confidential-containers Slack channel(s) for discussions and Q&A.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-09-03 09:10:09 -04:00
Mikko Ylinen
7707096004 docs: fix broken links
The links checker reported that the Cloud Native whitepaper
links are broken.

Update to their new URLs with permalinks.

Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-08-26 07:30:08 -05:00
Mikko Ylinen
ee6300b5b5 guides: update enclave-cc notes for SGX hardware mode
Signed-off-by: Mikko Ylinen <mikko.ylinen@intel.com>
2024-08-26 07:30:08 -05:00
Chase
7a7808d489 Update ADOPTERS.md: Add NanhuLab to the adopters.
Signed-off-by: Chase <zhichao.yan@outlook.com>
2024-08-15 09:06:58 -05:00
Ariel Adam
edbc70b053 Merge pull request #226 from ariel-adam/main
Create ADOPTERS.md
2024-08-13 14:16:01 +03:00
Ariel Adam
d476c6a017 Create ADOPTERS.md
Adding the list of adopters for CoCo

Signed-off-by: Ariel Adam <aadam@redhat.com>

Update ADOPTERS.md

Update ADOPTERS.md
2024-08-13 09:52:24 +03:00
Tobin Feldman-Fitzthum
396160da67 docs: add release notes for v0.9.0
Add new features, limitations, and expand the hw support section.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-07-26 10:01:25 -04:00
Chris Porter
d07b43cf24 Release: checklist improvements during v0.9.0-alpha1 release
Signed-off-by: Chris Porter <porter@ibm.com>
2024-07-16 09:04:27 -04:00
Wainer Moschetta
165dba4572 Merge pull request #217 from fitzthum/rn090a1
release: add release notes for v0.9.0-alpha1
2024-06-21 17:41:52 -03:00
Tobin Feldman-Fitzthum
aaefc563e9 release: add release notes for v0.9.0-alpha1
Document the progress we have made in this release
and explain that this is an alpha release.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-06-21 14:36:06 -04:00
Wainer Moschetta
7861710aad Merge pull request #210 from portersrc/fixes-for-v0.9.0-release
Release checklist improvements during v0.9.0-alpha0 release
2024-06-12 16:00:27 -03:00
Fabiano Fidêncio
55b7108a8e Merge pull request #214 from fidencio/topic/update-Intel-membership
governance: Update Intel's representation
2024-05-31 08:34:37 +02:00
Fabiano Fidêncio
a57f058ba3 governance: Update Intel's representation
As I consider the merge to main really close to being finished at this
point, and the most important things to come, at least for Intel, are
related to ITA support on Trustee and, of course, Confidential
Containers incubation, I'd like to nominate Mikko Ylinen to take my seat
during this time.

I do believe that Peter Zhu and Mikko Ylinen are the key pieces to be
representing Intel as part of the short-term future. :-)

Meanwhile, I'll still be around and contributing, but from the back
seat, allowing Mikko and Peter to focus on the current goals.

With that said, please, join me to welcome Mikko to the Confidential
Containers SC!

Signed-off-by: Fabiano Fidêncio <fabiano.fidencio@intel.com>
2024-05-29 14:15:37 +02:00
Chris Porter
ea0eb314f3 Release: checklist improvements
During the v0.9.0-alpha0 release, we found a few places to improve
the checklist for next time: a line number fix, a missing PR
step in some cases, misnumbering, and a post-release step

Signed-off-by: Chris Porter <porter@ibm.com>
2024-05-24 16:35:15 -04:00
Wainer Moschetta
b00e015a5d Merge pull request #211 from fitzthum/remove-nontee
docs: remove outdated guide
2024-05-15 15:34:22 -03:00
Tobin Feldman-Fitzthum
08c031e9fb docs: remove outdated guide
The non-tee guide predates the sample attester, which
allows us to use the attestation flow without hardware
support.

Before that we had a workaround in the operator
that would provision a guest image with certain
keys already baked into it.

This is known as the ssh-demo in the operator,
but it shouldn't be confused with the ssh-demo
that we have in this repo, which is just a container
that ships with an ssh daemon inside of it.

The ssh-demo in this repo doesn't necessarily require
attestation and is unrelated.

We are removing the ssh-demo operator CRD so the nontee
guide should go as well.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-05-02 17:06:51 -04:00
Tobin Feldman-Fitzthum
8de20e19e0 docs: add release notes for v0.9.0-alpha0
This is an alpha release, so let's be clear about exactly
what the limitations are.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-05-02 10:20:14 -04:00
Tobin Feldman-Fitzthum
243224fc4a release: update release checklist for v0.9.0
For release v0.9.0 we will be using Kata main (among other changes).
Update/overhaul the release checklist to account for these differences.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2024-04-24 10:01:33 -04:00
Yan Song Liu
620cb347a0 fix #196
label should be node.kubernetes.io/worker

Signed-off-by: Yan Song Liu <lysliu@cn.ibm.com>
2024-03-04 13:42:32 +08:00
Fabiano Fidêncio
fe829c58f2 Merge pull request #174 from larrydewey/main
Updating AMD Representation
2024-02-02 15:16:27 +01:00
Wainer dos Santos Moschetta
6341e73c27 release-check-list: add pointer to operatorhub doc
In the last release I created a document in CoCo's operator repo explaining how
the bundle can be updated on the Operator Hub. Updated this release
check-list to link to that document.

Signed-off-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
2024-01-23 15:39:31 -06:00
Dan Middleton
110f616894 Add OpenSSF Best Practices Badge
Signed-off-by: Dan Middleton <dan.middleton@intel.com>
2024-01-23 08:54:20 -06:00
Gabriela Cervantes
36ef4d0e3d quickstart: Update docker compose command
This PR updates the docker compose command to avoid failures while
running `docker-compose` which is not a valid command.

Signed-off-by: Gabriela Cervantes <gabriela.cervantes.tellez@intel.com>
2024-01-22 17:28:00 -06:00
ChengyuZhu6
3861810143 quickstart: Correct the path when deploying KBS
Correct the path when deploying KBS.

Signed-off-by: ChengyuZhu6 <chengyu.zhu@intel.com>
2024-01-22 17:27:46 -06:00
Fabiano Fidêncio
e573995129 Merge pull request #179 from angarg05/update-tsc-msft-membership
Update membership from Ananya to Dan
2023-12-20 18:17:09 -03:00
Ananya Garg
1f8b197915 Update membership from Ananya to Dan
Signed-off-by: Ananya Garg <105936475+angarg05@users.noreply.github.com>
2023-12-12 09:10:06 -08:00
Larry Dewey
28c94a52a5 Update governance.md
Adding second AMD Rep

Signed-off-by: Larry Dewey <larry.dewey@amd.com>
2023-12-01 09:39:02 -06:00
Wainer Moschetta
51915ac2d5 Merge pull request #170 from fitzthum/update-checklist-template-080
Update release checklist issue template
2023-11-22 15:22:03 -03:00
Tobin Feldman-Fitzthum
d82359bcb0 templates: update release checklist
Fix up some numbering and naming. Also, remove notes about
using a branch as this is not required for doing the release.

Signed-off-by: Tobin Feldman-Fitzthum <tobin@ibm.com>
2023-11-07 15:21:12 -05:00
32 changed files with 1275 additions and 963 deletions


@@ -1,15 +0,0 @@
---
name: CNCF onboarding
about: CNCF onboarding issue tracker
title: "[CNCF]"
labels: cncf-onboarding
assignees: ''
---
### Parts of the [CNCF onboarding issue](https://github.com/cncf/toc/issues/799) tracked by this issue
*This is the list of all bullet points from the [CNCF onboarding issue](https://github.com/cncf/toc/issues/799) that this issue will track:*
* [ ] *Bullet point foo*
* [ ] *Bullet point bar*

.github/ISSUE_TEMPLATE/config.yml (new vendored file, 7 lines)

@@ -0,0 +1,7 @@
contact_links:
- name: CNCF Slack for Community discussions and Q&A
url: https://slack.cncf.io/
about: |
Join `#confidential-containers` channel by first getting an invitation for the
CNCF Slack workspace. Our channel is the best place for getting questions
answered and to chat with the community and project maintainers.


@@ -1,20 +0,0 @@
---
name: New Project Creation
about: Request for creating a new Confidential Containers project and associated repos
title: "[New Project]"
labels: repo-creation
assignees: ''
---
## Project Description
*Describe the project: Goals, high level architecture*
## Alignment with Confidential Containers
*Why this should become a Confidential Containers project?*
## GitHub repositories needed
*Name of the repos to be created to support this new project*


@@ -8,111 +8,119 @@ assignees: ''
# v<TARGET_RELEASE>
## Code freeze
## Overview
- [ ] 1. Update Enclave CC to use the latest commit from image-rs
The release process mainly follows from this dependency graph.
* https://github.com/confidential-containers/enclave-cc/blob/main/src/enclave-agent/Cargo.toml
* Change the revision
* Run `cargo update -p image-rs`
Note that you can point to your own fork here, so you don't actually make changes in the other projects
before making sure this step works as expected.
```mermaid
flowchart LR
Trustee --> Versions.yaml
Guest-Components --> Versions.yaml
Kata --> kustomization.yaml
Guest-Components .-> Client-tool
Guest-Components --> enclave-agent
enclave-cc --> kustomization.yaml
Guest-Components --> versions.yaml
Trustee --> versions.yaml
Kata --> versions.yaml
- [ ] 2. Update Kata Containers to use the latest commit from image-rs, attestation-agent and td-shim
subgraph Kata
Versions.yaml
end
subgraph Guest-Components
end
subgraph Trustee
Client-tool
end
subgraph enclave-cc
enclave-agent
end
subgraph Operator
kustomization.yaml
reqs-deploy
end
subgraph cloud-api-adaptor
versions.yaml
end
```
* image-rs
* https://github.com/kata-containers/kata-containers/blob/CCv0/src/agent/Cargo.toml
* Change the revision
* Run `cargo update -p image-rs`
Note that you can point to your own fork here, so you don't actually make changes in the other projects
before making sure this step works as expected.
* attestation-agent and td-shim
* https://github.com/kata-containers/kata-containers/blob/CCv0/versions.yaml
* Change the version
Starting with v0.9.0 the release process no longer involves centralized dependency management.
In other words, when doing a CoCo release, we don't push the most recent versions of the subprojects
into Kata and enclave-cc. Instead, dependencies should be updated during the normal process of development.
Releases of most subprojects are now decoupled from releases of the CoCo project.
- [ ] 3. Wait for kata-runtime-payload-ci to be successfully built
* After the previous PR is merged, wait until the kata-runtime-payload-ci workflow (https://github.com/kata-containers/kata-containers/actions/workflows/cc-payload-after-push.yaml) has completed, so that the latest kata-runtime-payload-ci contains the changes
## The Steps
- [ ] 4. Check if there are new changes in the pre install payload script
Note: It may be useful when doing these steps to refer to a previous example. The v0.9.0-alpha1 release applied [these changes](https://github.com/confidential-containers/operator/pull/388/files). After following steps 1-5 below, you should end up with a similar set of changes.
* https://github.com/confidential-containers/operator/tree/main/install/pre-install-payload
* The last commit there must match what's in the following files as preInstall / postUninstall image
* Enclave CC: https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml
* Kata Containers:
Note that for Kata Containers, we're looking for the newTag, below the quay.io/confidential-containers/container-engine-for-cc-payload image
* default: https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml
### Determine release builds
- [ ] 5. Ensure the Operator is using the latest CI builds and that the Operator tests are passing
Identify/create the bundles that we will release for Kata and enclave-cc.
- [ ] 1. :eyes: **Create enclave-cc release**
Enclave-cc does not have regular releases apart from CoCo, so we need to make one.
Make sure that the CI [is green](https://github.com/confidential-containers/operator/actions/workflows/enclave-cc-cicd.yaml) and then use the Github release tool to create a tag and release.
This should create a bundle [here](https://quay.io/repository/confidential-containers/runtime-payload?tab=tags).
- [ ] 2. :eyes: **Find Kata release version**
The release will be based on an existing Kata containers bundle.
You should use a release of Kata containers.
Release bundles can be found [here](https://quay.io/repository/kata-containers/kata-deploy?tab=tags).
There is also a bundle built for [each commit](https://quay.io/repository/kata-containers/kata-deploy-ci?tab=tags).
If you absolutely cannot use a Kata release,
you can consider releasing one of these bundles.
- [ ] 3. :eyes: **Create a peer pods release**
Create a peer pods release based on the Kata release, by following the [documented flow](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md).
### Test Release with Operator
- [ ] 4. :eyes: **Check operator pre-installation and open PR if needed**
The operator uses a pre-install container to setup the node.
Check that the container matches the dependencies used in Kata
and that the operator pulls the most recent version of the container.
* Check that the version of the `nydus-snapshotter` used by Kata matches the one used by the operator
* Compare the `nydus-snapshotter` version in Kata [versions.yaml](https://github.com/kata-containers/kata-containers/blob/main/versions.yaml) (search for `nydus-snapshotter` and check its `version` field) with the [Makefile](https://github.com/confidential-containers/operator/blob/main/install/pre-install-payload/Makefile) (check the `NYDUS_SNAPSHOTTER_VERSION` value) for the operator pre-install container.
* **If they do not match, stop and open a PR now. In the PR, update the operator's Makefile to match the version used in kata. After the PR is merged, continue.**
- [ ] 5. :wrench: **Open a PR to the operator to update the release artifacts**
Update the operator to use the payloads identified in steps 1, 2, 3, and 4.
Make sure that the operator pulls the most recent version of the pre-install container
* Find the last commit in the [pre-install directory](https://github.com/confidential-containers/operator/tree/main/install/pre-install-payload)
* As a sanity check, the sha hash of the last commit in that pre-install directory will correspond to a pre-install image in quay, i.e. a reqs-payload image [here](https://quay.io/confidential-containers/reqs-payload).
* Make sure that the commit matches the preInstall / postUninstall image specified for [enclave-cc CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml) and [ccruntime CRD](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml)
* If these do not match (for instance if you changed the snapshotter in step 4), update the operator so that they do match.
There are a number of places where the payloads are referenced. Make sure to update all of the following to the tag matching the latest commit hash from steps 1, 2, and 3:
* Enclave CC:
* SIM: https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/sim/kustomization.yaml
* HW: https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml
* Note that we need the quay.io/confidential-containers/runtime-payload-ci registry and enclave-cc-{SIM,HW}-latest tags
* [sim](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/sim/kustomization.yaml)
* [hw](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/hw/kustomization.yaml)
* [base](https://github.com/confidential-containers/operator/blob/main/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml)
* Kata Containers:
* default: https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml
* peer-pods: https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/peer-pods/kustomization.yaml
* [default](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/default/kustomization.yaml)
* [s390x](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/s390x/kustomization.yaml)
* [peer-pods](https://github.com/confidential-containers/operator/blob/main/config/samples/ccruntime/peer-pods/kustomization.yaml)
Note that we need the quay.io/confidential-containers/runtime-payload-ci registry and kata-containers-latest tag
- [ ] 6. Update peer-pods with latest commits of kata-containers and attestation-agent and test it, following the [release candidate testing process](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md#release-candidate-testing)
- [ ] 7. Cut an attestation-service v<TARGET_RELEASE> and make images for AS and RVPS, if changes happened in the project.
**Also, update the [operator version](https://github.com/confidential-containers/operator/blob/main/config/release/kustomization.yaml) (update the `newTag` value)**
* https://github.com/confidential-containers/attestation-service
* Cut a release (AS/RVPS images will be automatically built triggered by release)
### Final Touches
- [ ] 8. Cut a guest-components v<TARGET_RELEASE> release
- [ ] 6. :trophy: **Cut an operator release using the GitHub release tool**
- [ ] 9. Cut a td-shim v<TARGET_RELEASE> release, if changes happened in the project
- [ ] 7. :green_book: **Make sure to update the [release notes](https://github.com/confidential-containers/confidential-containers/tree/main/releases) and tag/release the confidential-containers repo using the GitHub release tool.**
- [ ] 10. Update kbs to use the tagged attestation-service and guest-components, cut a release and make image
- [ ] 8. :hammer: **Poke Wainer Moschetta (@wainersm) to update the release to the OperatorHub. Find the documented flow [here](https://github.com/confidential-containers/operator/blob/main/docs/OPERATOR_HUB.md).**
* https://github.com/confidential-containers/kbs/blob/main/src/api/Cargo.toml
* Change the revision for the `as-types` and `attestation-service` crates (both use `v<TARGET_RELEASE>`) and update the lock file
* https://github.com/confidential-containers/kbs/blob/main/tools/client/Cargo.toml
* Change the revision for the `as-types` and `kbs_protocol` crates (both use `v<TARGET_RELEASE>`)
* Cut a release
* kbs image will be automatically built triggered by release, so ensure that the [release workflow](https://github.com/confidential-containers/kbs/actions/workflows/release.yaml) ran successfully
### Post-release
- [ ] 11. Update Enclave CC to use the released version of image-rs
* redo step 3, but now using v<TARGET_RELEASE>
- [ ] 12. Update Kata Containers to the latest released version of:
* image-rs (redo step 4, but now using the v<TARGET_RELEASE>)
* attestation-agent (redo step 5, but now using the v<TARGET_RELEASE>)
* td-shim (redo step 6, but now using the v<TARGET_RELEASE>)
- [ ] 13. Update the operator to use the images generated from the latest commit of both Kata Containers and Enclave CC
* redo step 8, but now targeting the latest payload image generated for Kata Containers and Enclave CC
- [ ] 14. Make sure all the operator tests are passing
- [ ] 15. Cut an Enclave CC release
- [ ] 16. Add a new Kata Containers tag
- [ ] 17. Wait for release kata-runtime-payload to be successfully built
* After the Kata tag is created, wait for (https://github.com/kata-containers/kata-containers/actions/workflows/cc-payload.yaml) to complete successfully, so that the kata-runtime-payload for the release is created
- [ ] 18. Update peer pods to use the release versions and then cut a release following the [documented flow](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md#cutting-releases)
## Release
- [ ] 19. Update the operator to use the release tags coming from Enclave CC and Kata Containers
* redo step 8, but now targeting the latest release of the payload image generated for Kata Containers and Enclave CC
- [ ] 20. Update the Operator version
* https://github.com/confidential-containers/operator/blob/main/config/release/kustomization.yaml#L7
- [ ] 21. Cut an operator release
- [ ] 22. Make sure to update the release notes and tag the confidential-containers repository
* https://github.com/confidential-containers/documentation/tree/main/releases/v<TARGET_RELEASE>.md
- [ ] 23. Poke Wainer Moschetta (@wainersm) to update the release to the OperatorHub
- [ ] 9. :wrench: **Open a PR to the operator to go back to latest payloads after release**
After the release, the operator's payloads need to go back to what they were (e.g. using "latest" instead of a specific commit sha). As an example, the v0.9.0-alpha1 release applied [these changes](https://github.com/confidential-containers/operator/pull/389/files). You should use `git revert -s` for this.

.lycheeignore (new file, 1 line)

@@ -0,0 +1 @@
https://sigs.centos.org/virt/tdx/

ADOPTERS.md (new file, 38 lines)

@@ -0,0 +1,38 @@
# Confidential Containers Adopters
This page lists organizations/companies and projects/products that use Confidential Containers, at different usage levels (development, beta, production, GA, etc.).
**NOTE:** To add your organization/company and project/product to this table (alphabetical order), fork the repository and open a PR with the required change.
See the list of adopter types at the bottom of this page.
## Adopters
| Organization/Company | Project/Product | Usage level | Adopter type | Details |
|-------------------------------------------------------------------|---------------------------------------------------------------|--------------------------|----------------------------------|---------------------------------------------------------------------------|
|[Alibaba Cloud (Aliyun)](https://www.alibabacloud.com/)| [Elastic Algorithm Service](https://www.alibabacloud.com/help/en/pai/user-guide/eas-model-serving/?spm=a2c63.p38356.0.0.2b2b6679Pjozxy) and [Elastic GPU Service](https://www.alibabacloud.com/help/en/egs/) | Beta | Service Provider | Both services use sub-projects of confidential containers to protect the user data and AI model from being exposed to CSP (For details mading.ma@alibaba-inc.com) |
| [Edgeless Systems](https://www.edgeless.systems/) | [Contrast](https://github.com/edgelesssys/contrast) | Beta | Service Provider / Consultancy | Contrast runs confidential container deployments on Kubernetes at scale. |
| [IBM](https://www.ibm.com/z) | [IBM LinuxONE](https://www.ibm.com/linuxone) | Beta | Service Provider | Confidential Containers with Red Hat OpenShift Container Platform and IBM® Secure Execution for Linux (see [details](https://www.ibm.com/blog/confidential-containers-with-red-hat-openshift-container-platform-and-ibm-secure-execution-for-linux/)) |
|NanhuLab|Trusted Big Data Sharing System |Beta |Service Provider |The system uses confidential containers to ensure that data users can utilize the data without being able to view the raw data. (No official website yet. For details: yzc@nanhulab.ac.cn) |
| [KubeArmor](https://www.kubearmor.io/) | Runtime Security | Beta | Another project | An open source project that leverages CoCo as part of their solution, integrates with it for compatibility and interoperability, or is used in the supply chain of another project [(5GSEC)](https://github.com/5GSEC/nimbus/blob/main/examples/clusterscoped/coco-workload-si-sib.yaml). |
| [Red Hat](https://www.redhat.com/en) | [OpenShift confidential containers](https://www.redhat.com/en/blog/learn-about-confidential-containers) | Beta | Service Provider | Confidential Containers are available from [OpenShift sandboxed containers release version 1.7.0](https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.7/) as a tech preview on Azure cloud for both Intel TDX and AMD SEV-SNP. The tech preview also includes support for confidential containers on IBM Z and LinuxONE using Secure Execution for Linux (IBM SEL).|
|TBD| | | | |
|TBD| | | | |
## Adopter types
See CNCF [definition of an adopter](https://github.com/cncf/toc/blob/main/FAQ.md#what-is-the-definition-of-an-adopter) <br>
Any single company can fall under several categories at once, in which case it should enumerate all that apply and only be listed once.
- **End-User** (CNCF member) - Companies and organizations who are End-User members of the CNCF
- **Another project** - an open source project that leverages a CNCF project as part of their solution, integrates with it for compatibility and interoperability,
or is used in the supply chain of another project
- **end users** (non CNCF member)- companies and organizations that are not CNCF End-User members that use the project and cloud native technologies internally, or build upon
a cloud native open source project but do not sell the cloud native project externally as a service offering (those are Service Providers). This group is identified in the written
form by the convention end user, uncapitalized and unhyphenated
- **Service Provider** - a Service Provider is an organization that repackages an open source project as a core component of a service offering and sells cloud native services externally.
A Service Provider's customers are considered transitive adopters and should be excluded from identification within the ADOPTERS.md file.
Examples of Service Providers (and not end users) include cloud providers (e.g., Alibaba Cloud, AWS, Google Cloud, Microsoft Azure), some infrastructure software vendors,
and telecom operators (e.g., AT&T, China Mobile)
- **Consultancy** - an entity whose purpose is to assist other organizations in developing a solution leveraging cloud native technology. They may be embedded in the end user team and
be responsible for the execution of the service. Service Providers may also provide consultancy services; they may also package cloud native technologies for reuse
as part of their offerings. These function as proxies for an end user


@@ -1,15 +1,18 @@
# CoCo Steering Committee / Maintainers
#
# Github ID, Name, Email Address
ariel-adam, Ariel Adam, aadam@redhat.com
bpradipt, Pradipta Banerjee, prbanerj@redhat.com
dcmiddle, Dan Middleton, dan.middleton@intel.com
fitzthum, Tobin Feldman-Fitzthum, tobin@ibm.com
jiazhang0, Zhang Jia, zhang.jia@linux.alibaba.com
Jiang Liu, jiangliu, gerry@linux.alibaba.com
larrydewey, Larry Dewey, Larry.Dewey@amd.com
magowan, James Magowan, magowan@uk.ibm.com
peterzcst, Peter Zhu, peter.j.zhu@intel.com
sameo, Samuel Ortiz, samuel.e.ortiz@protonmail.com
# Github ID, Name, Affiliation
ariel-adam, Ariel Adam, Redhat
bpradipt, Pradipta Banerjee, Redhat
peterzcst, Peter Zhu, Intel
mythi, Mikko Ylinen, Intel
magowan, James Magowan, IBM
fitzthum, Tobin Feldman-Fitzthum, IBM
jiazhang0, Zhang Jia, Alibaba
jiangliu, Jiang Liu, Alibaba
larrydewey, Larry Dewey, AMD
ryansavino, Ryan Savino, AMD
sameo, Samuel Ortiz, Rivos
zvonkok, Zvonko Kaiser, NVIDIA
vbatts, Vincent Batts, Microsoft
danmihai1, Dan Mihai, Microsoft


@@ -2,6 +2,8 @@
# Confidential Containers
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5719/badge)](https://bestpractices.coreinfrastructure.org/projects/5719)
## Welcome to confidential-containers
Confidential Containers is an open source community working to leverage
@@ -22,7 +24,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
### Get started quickly...
- [Kubernetes Operator for Confidential
Computing](https://github.com/confidential-containers/confidential-containers-operator) : An
Computing](https://github.com/confidential-containers/operator) : An
operator to deploy confidential containers runtime (and required configs) on a Kubernetes cluster
@@ -33,6 +35,7 @@ delivering Confidential Computing for guest applications or data inside the TEE
- [Project Overview](./overview.md)
- [Project Architecture](./architecture.md)
- [Our Roadmap](./roadmap.md)
- [Our Release Content Planning](https://github.com/orgs/confidential-containers/projects/6)
- [Alignment with other Projects](alignment.md)
@@ -42,3 +45,4 @@ delivering Confidential Computing for guest applications or data inside the TEE
## License
[![FOSSA Status](https://app.fossa.com/api/projects/git%2Bgithub.com%2Fconfidential-containers%2Fcommunity.svg?type=large)](https://app.fossa.com/projects/git%2Bgithub.com%2Fconfidential-containers%2Fcommunity?ref=badge_large)

Binary files changed (not shown): two deleted images (34 KiB and 29 KiB) and one other binary file.


@@ -40,15 +40,25 @@ Project maintainers are first and foremost active *Contributors* to the project
* Creating and assigning project issues.
* Enforcing the [Code of Conduct](https://github.com/confidential-containers/community/blob/main/CODE_OF_CONDUCT.md).
The list of maintainers for a project is defined by the project `CODEOWNERS` file placed at the top-level of each project's repository.
Project maintainers are managed via GitHub teams. The maintainer team for a project is referenced in the `CODEOWNERS` file
at the top level of each project repository.
### Becoming a project maintainer
Existing maintainers may decide to elevate a *Contributor* to the *Maintainer* role based on the contributor's established trust and the relevance of their contributions.
This decision process is not formally defined and is based on lazy consensus from the existing maintainers.
Any contributor may request to become a project maintainer by opening a pull request (PR) against the `CODEOWNERS` file and adding *all* current maintainers as reviewers of the PR.
Maintainers may also pro-actively promote contributors based on their contributions and leadership track record.
A contributor can propose themself or someone else as a maintainer by opening an issue in the repository for the project in question.
### Removing project maintainers
Inactive maintainers can be removed by the Steering Committee.
Maintainers are considered inactive if they have made no GitHub contributions relating to the project they maintain
for more than six months.
Before removing a maintainer, the Steering Committee should notify the maintainer of their status.
Not all inactive maintainers must be removed.
This process should mainly be used to remove maintainers that have permanently moved on from the project.
## Steering Committee Member
@@ -75,14 +85,14 @@ Further, as leaders in the community, the SC members will make themselves famili
The current members of the SC are:
* Larry Dewey (@larrydewey) - AMD
* Larry Dewey (@larrydewey) and Ryan Savino (@ryansavino) - AMD
* Jiang Liu (@jiangliu) and Jia Zhang (@jiazhang0) - Alibaba
* James Magowan (@magowan) and Tobin Feldman-Fitzthum (@fitzthum) - IBM
* Peter Zhu (@peterzcst) and Fabiano Fidêncio (@fidencio) - Intel
* Peter Zhu (@peterzcst) and Mikko Ylinen (@mythi) - Intel
* Pradipta Banerjee (@bpradipt) and Ariel Adam (@ariel-adam) - Red Hat
* Samuel Ortiz (@sameo) - Rivos
* Zvonko Kaiser (@zvonkok) - NVIDIA
* Vincent Batts (@vbatts) and Ananya Garg (@angarg05) - Microsoft
* Vincent Batts (@vbatts) and Dan Mihai (@danmihai1) - Microsoft
### Emeritus Members

guides/coco-dev.md (new file, 252 lines)

@@ -0,0 +1,252 @@
# Running a workload
## Creating a sample CoCo workload
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata-qemu-coco-dev` runtime class which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
In this example, we will be using the bitnami/nginx image as described in the following yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
annotations:
io.containerd.cri.runtime-handler: kata-qemu-coco-dev
spec:
containers:
- image: bitnami/nginx:1.22.0
name: nginx
dnsPolicy: ClusterFirst
runtimeClassName: kata-qemu-coco-dev
```
Setting the `runtimeClassName` is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](../guides) for
more details.
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```shell
kubectl apply -f nginx.yaml
```
Output:
```shell
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```shell
kubectl get pods
```
Output:
```shell
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you don't have any bitnami/nginx images on it:
```shell
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted,
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](../guides/sev.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has one runtime class, `kata-qemu-tdx`.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](../guides/sev.md).
For SNP, use the `kata-qemu-snp` runtime class and follow the [SNP guide](../guides/snp.md).
For `enclave-cc` follow the [enclave-cc guide](../guides/enclave-cc.md).
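You can confirm which runtime classes the operator has installed on your cluster before picking one (output varies by platform):
```shell
kubectl get runtimeclass
```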
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker compose` YAML file is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/trustee.git
cd trustee/kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
cd ..
# Start KBS cluster
docker compose up -d
```
If the KBS cluster requires further configuration, edit the following config files and restart the KBS cluster with `docker compose`:
- `$KBS_DIR_PATH/config/kbs-config.toml`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification. See [details](https://github.com/confidential-containers/trustee/blob/main/attestation-service/docs/grpc-as.md#quick-start).
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/default.rego`: Policy file for evidence verification of AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
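If you change the config files listed above, one minimal way to apply them, assuming the cluster was launched from the trustee repository as shown earlier, is to recreate the services:
```shell
# Return to the trustee repository root, where the compose file lives
cd "$KBS_DIR_PATH"/..
# Recreate the KBS cluster so the edited config files are picked up
docker compose down && docker compose up -d
```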
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
If building on Ubuntu 22.04, make sure to follow the instructions to build skopeo from source; otherwise
there will be errors regarding version incompatibility between ocicrypt and skopeo. Make sure the installed
skopeo version is at least 1.16.0. Ubuntu 22.04 builds skopeo with an outdated ocicrypt version, which does
not support the keyprovider protocol we depend on.
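A quick way to confirm that the installed skopeo meets this requirement:
```shell
skopeo --version   # should report 1.16.0 or newer
```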
Use `skopeo` to encrypt an image on the same node as the KBS cluster (using busybox:latest as an example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
"key-providers": {
"attestation-agent": {
"grpc": "127.0.0.1:50000"
}
}
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. In the background, the KBS cluster does the following:
- CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- CoCo Keyprovider registers the key with the key-id in the KBS.
Then push the image to a registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with a registry scheme such as `docker`, and `[REGISTRY_URL]` with the desired registry URL, e.g. `docker.io/encrypt_test/busybox`.
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image.
To install cosign, download and unpack the package matching your machine from the [release page](https://github.com/sigstore/cosign/releases).
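For example, on an x86_64 Linux host you could install a released binary like this (a sketch; pick the asset matching your platform from the release page):
```shell
# Download the latest released cosign binary and place it on the PATH
curl -LO https://github.com/sigstore/cosign/releases/latest/download/cosign-linux-amd64
sudo install -m 0755 cosign-linux-amd64 /usr/local/bin/cosign
cosign version
```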
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
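Optionally, you can sanity-check the signature with the public half of the key pair generated above:
```shell
cosign verify --key cosign.pub [REGISTRY_URL]:encrypted
```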
Then create an image pull validation policy file.
Here is a sample policy file, `security-policy.json`:
```json
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"[REGISTRY_URL]": [
{
"type": "sigstoreSigned",
"keyPath": "kbs:///default/cosign-key/1"
}
]
}
}
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy an Encrypted Image as a CoCo workload on CC HW
Here is a sample YAML for deploying the encrypted image:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: encrypted-image-test-busybox
name: encrypted-image-test-busybox
annotations:
io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
containers:
- image: [REGISTRY_URL]:encrypted
name: busybox
dnsPolicy: ClusterFirst
runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in the previous step, and replace `[RUNTIME_CLASS]` with the Kata runtime class for the CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-coco-dev`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, e.g. `http://123.123.123.123:8080`.
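A minimal sketch of that edit, assuming the `kata-qemu-coco-dev` runtime class and a placeholder KBS address (adjust both to your environment):
```shell
CONFIG=/opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-coco-dev.toml
# Prepend agent.aa_kbc_params to the kernel_params line of the runtime config
sudo sed -i 's|^kernel_params = "|kernel_params = "agent.aa_kbc_params=cc_kbc::http://123.123.123.123:8080 |' "$CONFIG"
# Confirm the change
grep '^kernel_params' "$CONFIG"
```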
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
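Once the pod is running, you can repeat the earlier host-side check to confirm that the encrypted image was pulled only inside the guest:
```shell
kubectl get pod encrypted-image-test-busybox
# On the worker node, the encrypted image should not appear in the host image store
crictl -r unix:///run/containerd/containerd.sock image ls | grep busybox
```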


@@ -1,54 +0,0 @@
# EAA Verdictd Guide
**EAA/Verdictd support has been deprecated in Confidential Containers**
EAA is used to perform attestation at runtime and provide the guest with confidential resources such as keys.
It is based on [rats-tls](https://github.com/inclavare-containers/rats-tls).
[Verdictd](https://github.com/inclavare-containers/verdictd) is the Key Broker Service and Attestation Service of EAA.
The EAA KBC is an optional module in the attestation-agent at compile time,
which can be used to communicate with Verdictd.
The communication is established on the encrypted channel provided by rats-tls.
EAA can now be used on Intel TDX and Intel SGX platforms.
## Create encrypted image
Before building an encrypted image, make sure Skopeo and Verdictd (EAA KBS) are installed:
- [Skopeo](https://github.com/containers/skopeo): the command line utility to perform encryption operations.
- [Verdictd](https://github.com/inclavare-containers/verdictd): EAA Key Broker Service and Attestation Service.
1. Pull unencrypted image.
Here use `alpine:latest` for example:
```sh
${SKOPEO_HOME}/bin/skopeo copy --insecure-policy docker://docker.io/library/alpine:latest oci:busybox
```
2. Follow the [Verdictd README #Generate encrypted container image](https://github.com/inclavare-containers/verdictd#generate-encrypted-container-image) to encrypt the image.
3. Publish the encrypted image to your registry.
## Deploy encrypted image
1. Build rootfs with EAA component:
Specify `AA_KBC=eaa_kbc` parameters when using kata-containers `rootfs.sh` scripts to create rootfs.
2. Launch Verdictd
Verdictd performs remote attestation at runtime and provides the key needed to decrypt the image.
It is actually both Key Broker Service and Attestation Service of EAA.
So when deploying the encrypted image, Verdictd needs to be launched:
```sh
verdictd --listen <$ip>:<$port> --mutual
```
> **Note** The communication between Verdictd and EAA KBC is based on rats-tls,
so you need to confirm that [rats-tls](https://github.com/inclavare-containers/rats-tls) has been correctly installed in your running environment.
3. Agent Configuration
Add the configuration `aa_kbc_params= 'eaa_kbc::<$IP>:<$PORT>'` to the agent config file; the IP and PORT should be consistent with Verdictd.


@@ -5,6 +5,17 @@ This guide assumes that you already have a Kubernetes cluster
and have deployed the operator as described in the **Installation**
section of the [quickstart guide](../quickstart.md).
## Configuring Kubernetes cluster when using SGX hardware mode build
Additional setup steps are needed when using the hardware SGX mode:
1. The cluster needs to have [Intel Software Guard Extensions (SGX) device plugin for Kubernetes](
https://intel.github.io/intel-device-plugins-for-kubernetes/cmd/sgx_plugin/README.html#prerequisites) running.
1. The cluster needs to have [Intel DCAP aesmd](
https://github.com/intel/SGXDataCenterAttestationPrimitives) running on every SGX node and the nodes must be registered.
**Note** kind/minikube based clusters are not recommended when using hardware mode SGX.
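Before deploying enclave-cc in hardware mode, it can help to confirm that the nodes actually advertise SGX EPC capacity (a quick sketch, assuming the device plugin is already running):
```sh
kubectl describe nodes | grep -i 'sgx.intel.com/epc'
```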
## Configuring enclave-cc custom resource to use a different KBC
**Note** Before configuring KBC, please refer to the
@@ -66,7 +77,7 @@ spec:
containers:
- image: ghcr.io/confidential-containers/test-container-enclave-cc:encrypted
name: hello-world
workingDir: "/run/rune/boot_instance/"
workingDir: "/run/rune/occlum_instance/"
resources:
limits:
sgx.intel.com/epc: 600Mi
@@ -74,7 +85,7 @@ spec:
- name: OCCLUM_RELEASE_ENCLAVE
value: "1"
command:
- /run/rune/boot_instance/build/bin/occlum-run
- /run/rune/occlum_instance/build/bin/occlum-run
- /bin/hello_world
runtimeClassName: enclave-cc
@@ -117,6 +128,9 @@ Hello world!
```
**NOTE** When running in the hardware SGX mode, the logging is disabled
by default.
We can also verify the host does not have the image for others to use:
```sh
crictl -r unix:///run/containerd/containerd.sock image ls | grep helloworld_enc
```

guides/ibm-se.md (new file, 128 lines)

@@ -0,0 +1,128 @@
# IBM Secure Execution Guide
This document explains how to install and run a confidential container on an IBM Secure
Execution-enabled Z machine (s390x). A secure image is an encrypted Linux image comprising a kernel image,
an initial RAM file system (initrd) image, and a file specifying kernel parameters (parmfile).
It is an essential component for running a confidential container. The public key used for
encryption is associated with a private key managed by a trusted firmware called
[ultravisor](https://www.ibm.com/docs/en/linux-on-systems?topic=execution-components).
This means that a secure image is machine-specific, resulting in its absence from a released
payload image in `ccruntime`. To use it, you need to build a secure image with your own public
key and create a payload image bundled with it. The following sections elaborate on how to
accomplish this step-by-step.
## Prerequisites
Review the identically titled [section](https://github.com/confidential-containers/confidential-containers/blob/main/quickstart.md#prerequisites) in the `QuickStart`.
- `kustomize`: Kubernetes native configuration management tool which can be installed simply by:
```
$ mkdir -p $GOPATH/src/github.com/confidential-containers
$ cd $GOPATH/src/github.com/confidential-containers
$ git clone https://github.com/confidential-containers/operator.git && cd operator
$ make kustomize
$ export PATH=$PATH:$(pwd)/bin
```
Or simply follow the official [documentation](https://kubectl.docs.kubernetes.io/installation/kustomize/) based on your environment.
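Either way, you can confirm the tool is available on your `PATH` before proceeding:
```
$ kustomize version
```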
## Build a Payload Image via kata-deploy
If you have a local container registry running at `localhost:5000`, refer to the
[document](https://github.com/kata-containers/kata-containers/blob/main/docs/how-to/how-to-run-kata-containers-with-SE-VMs.md#using-kata-deploy-with-confidential-containers-operator)
on Kata Containers for details on building a payload image.
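If you do not already have a local registry at `localhost:5000`, a minimal sketch using the standard `registry:2` image (an assumption; any OCI registry will do):
```
$ docker run -d -p 5000:5000 --restart=always --name registry registry:2
```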
## Install Operator
Let us install an operator with:
```
$ cd $GOPATH/src/github.com/confidential-containers/operator
$ export IMG=localhost:5000/cc-operator
$ make docker-build && make docker-push
$ make install && make deploy
namespace/confidential-containers-system created
customresourcedefinition.apiextensions.k8s.io/ccruntimes.confidentialcontainers.org created
serviceaccount/cc-operator-controller-manager created
role.rbac.authorization.k8s.io/cc-operator-leader-election-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-manager-role created
clusterrole.rbac.authorization.k8s.io/cc-operator-metrics-reader created
clusterrole.rbac.authorization.k8s.io/cc-operator-proxy-role created
rolebinding.rbac.authorization.k8s.io/cc-operator-leader-election-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-manager-rolebinding created
clusterrolebinding.rbac.authorization.k8s.io/cc-operator-proxy-rolebinding created
configmap/cc-operator-manager-config created
service/cc-operator-controller-manager-metrics-service created
deployment.apps/cc-operator-controller-manager created
$ kubectl get pods -n confidential-containers-system --watch
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-6d6b78b7f5-m9chj 2/2 Running 0 61s
$ # Press ctrl C to stop watching
```
## Install Custom Resource (CR)
You can install `ccruntime` with an existing overlay directory named
[`s390x`](https://github.com/confidential-containers/operator/tree/main/config/samples/ccruntime/s390x), by replacing the image name and tag
for a payload image with the ones you pushed to the local registry
(e.g. `localhost:5000/build-kata-deploy:latest`):
```
$ cd $GOPATH/src/github.com/confidential-containers/operator/config/samples/ccruntime/s390x
$ kustomize edit set image quay.io/kata-containers/kata-deploy=localhost:5000/build-kata-deploy:latest
$ kubectl create -k .
ccruntime.confidentialcontainers.org/ccruntime-sample-s390x created
$ kubectl get pods -n confidential-containers-system --watch
NAME READY STATUS RESTARTS AGE
cc-operator-controller-manager-6d6b78b7f5-m9chj 2/2 Running 0 11m
cc-operator-daemon-install-srmc4 1/1 Running 0 2m13s
cc-operator-pre-install-daemon-t9r2h 1/1 Running 0 3m6s
$ # To verify if a payload image is pulled from the updated location
$ kubectl get pods -oyaml -n confidential-containers-system -l 'name=cc-operator-daemon-install' | grep image:
image: localhost:5000/build-kata-deploy:test
image: localhost:5000/build-kata-deploy:test
```
Wait until a set of runtime classes is deployed, for example:
```
$ kubectl get runtimeclass
NAME HANDLER AGE
kata kata-qemu 60s
kata-qemu kata-qemu 61s
kata-qemu-se kata-qemu-se 61s
```
## Verify the Installation
To verify the installation, use the following runtime class: `kata-qemu-se`:
```
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
name: nginx-kata
spec:
runtimeClassName: kata-qemu-se
containers:
- name: nginx
image: nginx
EOF
pod/nginx-kata created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-kata 1/1 Running 0 15s
```
## Uninstall Resources
You can uninstall confidential containers by removing the resources in reverse order:
```
$ cd $GOPATH/src/github.com/confidential-containers/operator
$ kubectl delete -k config/samples/ccruntime/s390x
$ make undeploy
```


@@ -1,92 +0,0 @@
# Encrypted Container Images without Hardware Support
Without Confidential Computing hardware, there is no way to securely provision
the keys for an encrypted image. Nonetheless, in this demo we describe how to
test encrypted images support with the non-tee `kata`/`kata-qemu` runtimeclass.
## Creating a CoCo workload using a pre-existing encrypted image
We will now proceed to download and run a sample encrypted container image using the CoCo building blocks.
A demo container image is provided at [docker.io/katadocker/ccv0-ssh](https://hub.docker.com/r/katadocker/ccv0-ssh).
It is encrypted with [Attestation Agent](https://github.com/confidential-containers/attestation-agent)'s [offline file system key broker](https://github.com/confidential-containers/attestation-agent/tree/64c12fbecfe90ba974d5fe4896bf997308df298d/src/kbc_modules/offline_fs_kbc) and [`aa-offline_fs_kbc-keys.json`](https://github.com/confidential-containers/documentation/blob/main/demos/ssh-demo/aa-offline_fs_kbc-keys.json) as its key file.
We have prepared a sample CoCo operator custom resource that is based on the standard `ccruntime.yaml`, but in addition has the decryption keys and configuration required to decrypt this sample container image.
> **Note** All pods started with this sample resource will be able to decrypt the sample container and all keys shown are for demo purposes only and should not be used in production.
To test out creating a workload from the sample encrypted container image, we can take the following steps:
### Swap out the standard custom resource for our sample
Support for multiple custom resources is not available in the current release. Consequently, if a custom resource already exists, then you'll need to remove it first before deploying a new one. We can remove the standard custom resource with:
```sh
kubectl delete -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
```
and in its place install the modified version with the sample container's decryption key:
```sh
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/ssh-demo?ref=<RELEASE_VERSION>
```
Wait until each pod has the STATUS of Running.
```sh
kubectl get pods -n confidential-containers-system --watch
```
### Test creating a workload from the sample encrypted image
Create a new Kubernetes deployment that uses the `docker.io/katadocker/ccv0-ssh` container image with:
```sh
cat << EOF > ccv0-ssh-demo.yaml
kind: Service
apiVersion: v1
metadata:
name: ccv0-ssh
spec:
selector:
app: ccv0-ssh
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: ccv0-ssh
spec:
selector:
matchLabels:
app: ccv0-ssh
template:
metadata:
labels:
app: ccv0-ssh
annotations:
io.containerd.cri.runtime-handler: kata
spec:
runtimeClassName: kata
containers:
- name: ccv0-ssh
image: docker.io/katadocker/ccv0-ssh
imagePullPolicy: Always
EOF
```
Apply this with:
```sh
kubectl apply -f ccv0-ssh-demo.yaml
```
and wait for the pod to start. This process should show that we are able to pull the encrypted image, decrypt it using the key configured in the CoCo sample guest image, and create a workload from it.
The demo image has an SSH host key embedded in it, which is protected by its encryption, but we can download the sample private key and use it to ssh into the container to validate that it hasn't been tampered with.
Download the SSH key with:
```sh
curl -Lo ccv0-ssh https://raw.githubusercontent.com/confidential-containers/documentation/main/demos/ssh-demo/ccv0-ssh
```
Ensure that the permissions are set correctly with:
```sh
chmod 600 ccv0-ssh
```
We can then use the key to ssh into the container:
```sh
$ ssh -i ccv0-ssh root@$(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}")
```
You will be prompted about whether the host key fingerprint is correct. This fingerprint should match the one specified in the container image: `wK7uOpqpYQczcgV00fGCh+X97sJL3f6G1Ku4rvlwtR0.`
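If you want to check the fingerprint before connecting, one option is to query the service directly; this is a sketch that assumes `ssh-keyscan` and `ssh-keygen` are available on your machine:
```sh
# Print the SHA256 fingerprint of the host key advertised by the service
# and compare it with the value documented above.
ssh-keyscan -t ed25519 $(kubectl get service ccv0-ssh -o jsonpath="{.spec.clusterIP}") 2>/dev/null \
  | ssh-keygen -lf -
```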


@@ -1,201 +1,109 @@
# SEV-ES Guide
Confidential Containers supports SEV(-ES) with pre-attestation using
[simple-kbs](https://github.com/confidential-containers/simple-kbs).
This guide covers platform-specific setup for SEV(-ES) and walks through
complete flows for attestation and encrypted images.
## Creating a CoCo workload using a pre-existing encrypted image on SEV
### Platform Setup
To enable SEV on the host platform, first ensure that it is supported. Then follow these instructions to enable SEV:
[AMD SEV - Prepare Host OS](https://github.com/AMDESE/AMDSEV#prepare-host-os)
### Install sevctl and Export SEV Certificate Chain
[sevctl](https://github.com/virtee/sevctl) is the SEV command line utility and is needed to export the SEV certificate chain.
Follow these steps to install `sevctl`:
* Debian / Ubuntu:
```
# Rust must be installed to build
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
source $HOME/.cargo/env
sudo apt install -y musl-dev musl-tools
# Additional packages are required to build
sudo apt install -y pkg-config libssl-dev asciidoctor
# Clone the repository
git clone https://github.com/virtee/sevctl.git
# Build
(cd sevctl && cargo build)
```
* CentOS / Fedora / RHEL:
```
sudo dnf install sevctl
```
> **Note** Due to [this bug](https://bugzilla.redhat.com/show_bug.cgi?id=2037963) in sevctl on RHEL and Fedora you might need to build the tool from source to pick up the fix.
If using the SEV kata configuration template file, the SEV certificate chain must be placed in `/opt/sev`. Export the SEV certificate chain using the following commands:
```
sudo mkdir -p /opt/sev
sudo ./sevctl/target/debug/sevctl export --full /opt/sev/cert_chain.cert
```
### Setup and Run the simple-kbs
## Platform Setup
By default, the `kata-qemu-sev` runtime class uses pre-attestation with the
`online-sev-kbc` and [simple-kbs](https://github.com/confidential-containers/simple-kbs) to attest the guest and provision secrets.
`simple-kbs` is a basic prototype key broker which can validate a guest measurement according to a specified policy and conditionally release secrets.
To use encrypted images, signed images, or authenticated registries with SEV, you should set up `simple-kbs`.
If you simply want to run an unencrypted container image, you can disable pre-attestation by adding the following annotation
`io.katacontainers.config.pre_attestation.enabled: "false"` to your pod.
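As an illustration only (the pod name and image below are placeholders, not taken from this guide), a pod with pre-attestation disabled could look like:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sev-demo            # placeholder name
  annotations:
    io.katacontainers.config.pre_attestation.enabled: "false"
spec:
  runtimeClassName: kata-qemu-sev
  containers:
  - name: workload
    image: nginx            # placeholder image
```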
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
If you are using pre-attestation, you will need to add an annotation to your pod configuration which contains the URI of a `simple-kbs` instance.
This annotation should be of the form `io.katacontainers.config.pre_attestation.uri: "<KBS IP>:44444"`.
Port 44444 is the default port per the directions below, but it may be configured to use another port.
The KBS IP must be accessible from inside the guest.
Usually it should be the public IP of the node where `simple-kbs` runs.
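For example, a pod metadata fragment carrying this annotation might look like the following sketch; the address shown is a placeholder for your own `simple-kbs` host:
```yaml
metadata:
  annotations:
    # Placeholder address; use the IP of the node running simple-kbs
    io.katacontainers.config.pre_attestation.uri: "192.0.2.10:44444"
```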
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will unnecessarily build AMD compatible guest kernel, OVMF, and QEMU components. The additional components can be used with the script utility to test launch and attest a base QEMU SNP guest. However, for the CoCo use case, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
The SEV policy can also be set by adding `io.katacontainers.config.sev.policy: "<SEV POLICY>"` to your pod configuration (a small example follows the note below). The default policy values for SEV and SEV-ES are, respectively, "3" and "7", where the following bits are enabled:
## Getting Started
| Bit| Name| Description |
| --- | --- | --- |
|0|NODBG| Debugging of the guest is disallowed |
|1|NOKS| Sharing keys with other guests is disallowed |
|2|ES| SEV-ES is required |
This guide covers platform-specific setup for SEV and walks through the complete flows for the different CoCo use cases:
For more information about SEV policy, see chapter 3 of the [Secure Encrypted Virtualization API](https://www.amd.com/system/files/TechDocs/55766_SEV-KM_API_Specification.pdf) (PDF).
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
- [Pre-Attestation Utilizing Signed and Encrypted Images](#pre-attestation-utilizing-signed-and-encrypted-images)
>Note: the SEV policy is not the same as the policies that drive `simple-kbs`.
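For instance, to require SEV-ES (policy value "7" per the bit table above), the annotation could be set as in this sketch:
```yaml
metadata:
  annotations:
    io.katacontainers.config.sev.policy: "7"   # NODBG + NOKS + ES
```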
## Container Launch With Memory Encryption
The CoCo project has created a sample encrypted container image ([ghcr.io/confidential-containers/test-container:encrypted](https://github.com/orgs/confidential-containers/packages/container/test-container/82546314?tag=encrypted)). This image is encrypted using a key that comes already provisioned inside the `simple-kbs` for ease of testing. No `simple-kbs` policy is required to get things running.
### Launch a Confidential Service
The image encryption key and key for SSH access have been attached to the CoCo sample encrypted container image as docker labels. This image is meant for TEST purposes only as these keys are published publicly. In a production use case, these keys would be generated by the workload administrator and kept secret. For further details, see the section on how to [Create an Encrypted Image](#create-an-encrypted-image).
To launch a container with SEV memory encryption, the SEV runtime class (`kata-qemu-sev`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-firmware-measurement).
Here is a sample service yaml specifying the SEV runtime class:
`docker compose` is required to run the `simple-kbs` and its database in docker containers. Installation instructions are available on [Docker's website](https://docs.docker.com/compose/install/linux/).
Clone the repository at the specified tag:
```
simple_kbs_tag="0.1.1"
git clone https://github.com/confidential-containers/simple-kbs.git
(cd simple-kbs && git checkout -b "branch_${simple_kbs_tag}" "${simple_kbs_tag}")
```
Run the service with `docker compose`:
```
(cd simple-kbs && sudo docker compose up -d)
```
### Launch the Pod and Verify SEV Encryption
Here is a sample kubernetes service yaml for an encrypted image:
```
```yaml
kind: Service
apiVersion: v1
metadata:
name: encrypted-image-tests
name: "confidential-unencrypted"
spec:
selector:
app: encrypted-image-tests
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: encrypted-image-tests
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: encrypted-image-tests
app: "confidential-unencrypted"
template:
metadata:
labels:
app: encrypted-image-tests
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-sev
spec:
runtimeClassName: kata-qemu-sev
containers:
- name: encrypted-image-tests
image: ghcr.io/fitzthum/encrypted-image-tests:encrypted
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
```
Save this service yaml to a file named `encrypted-image-tests.yaml`. Notice the image URL specified points to the previously described CoCo sample encrypted container image. `kata-qemu-sev` must also be specified as the `runtimeClassName`.
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```
kubectl apply -f encrypted-image-tests.yaml
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for pod errors:
Check for errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors, a CoCo encrypted container with SEV has been successfully launched!
If there are no errors in the Events section, then the container has been successfully created with SEV memory encryption.
### Verify SEV Memory Encryption
### Validate SEV Memory Encryption
The container `dmesg` report can be parsed to verify SEV memory encryption.
The container dmesg log can be parsed to indicate that SEV memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get pod IP:
Get the pod IP:
```
pod_ip=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $6;}')
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Get the CoCo sample encrypted container image SSH access key from docker image label and save it to a file.
Currently the docker client cannot pull encrypted images. We can inspect the unencrypted image instead,
which has the same labels. You could also use `skopeo inspect` to get the labels from the encrypted image.
Download and save the [SSH private key](https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted) and set the permissions.
```
docker pull ghcr.io/fitzthum/encrypted-image-tests:unencrypted
docker inspect ghcr.io/fitzthum/encrypted-image-tests:unencrypted | \
jq -r '.[0].Config.Labels.ssh_key' \
| sed "s|\(-----BEGIN OPENSSH PRIVATE KEY-----\)|\1\n|g" \
| sed "s|\(-----END OPENSSH PRIVATE KEY-----\)|\n\1|g" \
> encrypted-image-tests
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
Set permissions on the SSH private key file:
The following command will run a remote SSH command on the container to check if SEV memory encryption is active:
```
chmod 600 encrypted-image-tests
```
Run a SSH command to parse the container `dmesg` output for SEV enabled messages:
```
ssh -i encrypted-image-tests \
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep SEV'
'dmesg | grep "Memory Encryption Features"'
```
The output should look something like this:
```
If SEV is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SEV
```
@@ -203,7 +111,7 @@ The output should look something like this:
If SSH access to the container is desired, create a keypair:
```
```shell
ssh-keygen -t ed25519 -f encrypted-image-tests -P "" -C "" <<< y
```
@@ -211,7 +119,7 @@ The above command will save the keypair in a file named `encrypted-image-tests`.
Here is a sample Dockerfile to create a docker image:
```
```Dockerfile
FROM alpine:3.16
# Update and install openssh-server
@@ -236,13 +144,13 @@ Store this `Dockerfile` in the same directory as the `encrypted-image-tests` ssh
Build image:
```
```shell
docker build -t encrypted-image-tests .
```
Tag and upload this unencrypted docker image to a registry:
```
```shell
docker tag encrypted-image-tests:latest [REGISTRY_URL]:unencrypted
docker push [REGISTRY_URL]:unencrypted
```
@@ -255,7 +163,7 @@ Be sure to replace `[REGISTRY_URL]` with the desired registry URL.
The Attestation Agent hosts a grpc service to support encrypting the image. Clone the repository:
```
```shell
attestation_agent_tag="v0.1.0"
git clone https://github.com/confidential-containers/attestation-agent.git
(cd attestation-agent && git checkout -b "branch_${attestation_agent_tag}" "${attestation_agent_tag}")
@@ -263,14 +171,14 @@ git clone https://github.com/confidential-containers/attestation-agent.git
Run the offline_fs_kbs:
```
```shell
(cd attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs \
&& cargo run --release --features offline_fs_kbs -- --keyprovider_sock 127.0.0.1:50001 &)
```
Create the Attestation Agent keyprovider:
```
```shell
cat > attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf <<EOF
{
"key-providers": {
@@ -282,13 +190,13 @@ EOF
Set a desired value for the encryption key; it should be a 32-byte, base64-encoded value:
```
```shell
enc_key="RcHGava52DPvj1uoIk/NVDYlwxi0A6yyIZ8ilhEX3X4="
```
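If you prefer to generate a fresh key instead of reusing the sample value above, something like the following should work, assuming `openssl` is available:
```shell
# Generate 32 random bytes and base64-encode them
enc_key="$(openssl rand -base64 32)"
echo "${enc_key}"
```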
Create a Key file:
```
```shell
cat > keys.json <<EOF
{
"key_id1":"${enc_key}"
@@ -298,7 +206,7 @@ EOF
Run skopeo to encrypt the image created in the previous section:
```
```shell
sudo OCICRYPT_KEYPROVIDER_CONFIG=$(pwd)/attestation-agent/sample_keyprovider/src/enc_mods/offline_fs_kbs/ocicrypt.conf \
skopeo copy --insecure-policy \
docker:[REGISTRY_URL]:unencrypted \
@@ -316,27 +224,27 @@ response. A remote registry known to support encrypted images like GitHub Contai
At this point it is a good idea to verify that the image was really encrypted, as skopeo can silently leave it unencrypted. Use
`skopeo inspect` as shown below to check that the layer MIME types are **application/vnd.oci.image.layer.v1.tar+gzip+encrypted**:
```
```shell
skopeo inspect docker-daemon:[REGISTRY_URL]:encrypted
```
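Depending on your skopeo version, one way to list the layer media types directly is to inspect the raw manifest and filter it with `jq`; treat this as a sketch rather than the only method:
```shell
# Every layer should report a media type ending in +encrypted
skopeo inspect --raw docker-daemon:[REGISTRY_URL]:encrypted | jq -r '.layers[].mediaType'
```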
Push the encrypted image to the registry:
```
```shell
docker push [REGISTRY_URL]:encrypted
```
`mysql-client` is required to insert the key into the `simple-kbs` database. `jq` is required to json parse responses on the command line.
* Debian / Ubuntu:
```
```shell
sudo apt install mysql-client jq
```
* CentOS / Fedora / RHEL:
```
```shell
sudo dnf install [ mysql | mariadb | community-mysql ] jq
```
@@ -344,7 +252,7 @@ The `mysql-client` package name may differ depending on OS flavor and version.
The `simple-kbs` uses default settings and credentials for the MySQL database. These settings can be changed by the `simple-kbs` administrator and saved into a credential file. For the purposes of this quick start, set them in the environment for use with the MySQL client command line:
```
```shell
KBS_DB_USER="kbsuser"
KBS_DB_PW="kbspassword"
KBS_DB="simple_kbs"
@@ -353,7 +261,7 @@ KBS_DB_TYPE="mysql"
Retrieve the host address of the MySQL database container:
```
```shell
KBS_DB_HOST=$(docker network inspect simple-kbs_default \
| jq -r '.[].Containers[] | select(.Name | test("simple-kbs[_-]db.*")).IPv4Address' \
| sed "s|/.*$||g")
@@ -361,7 +269,7 @@ KBS_DB_HOST=$(docker network inspect simple-kbs_default \
Add the key to the `simple-kbs` database without any verification policy:
```
```shell
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', NULL);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', NULL);
@@ -376,159 +284,3 @@ Return to step [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-ve
To learn more about creating custom policies, see the section on [Creating a simple-kbs Policy to Verify the SEV Firmware Measurement](#creating-a-simple-kbs-policy-to-verify-the-sev-guest-firmware-measurement).
## Creating a simple-kbs Policy to Verify the SEV Guest Firmware Measurement
The `simple-kbs` can be configured with a policy that requires the kata shim to provide a matching SEV guest firmware measurement to release the key for decrypting the image. At launch time, the kata shim will collect the SEV guest firmware measurement and forward it in a key request to the `simple-kbs`.
These steps will use the CoCo sample encrypted container image, but the image URL can be replaced with a user created image registry URL.
To create the policy, the value of the SEV guest firmware measurement must be calculated.
`pip` is required to install the `sev-snp-measure` utility.
* Debian / Ubuntu:
```
sudo apt install python3-pip
```
* CentOS / Fedora / RHEL:
```
sudo dnf install python3
```
[sev-snp-measure](https://github.com/IBM/sev-snp-measure) is a utility used to calculate the SEV guest firmware measurement with provided ovmf, initrd, kernel and kernel append input parameters. Install it using the following command:
```
sudo pip install sev-snp-measure
```
The paths to the guest binaries required for measurement are specified in the kata configuration. Set them:
```
ovmf_path="/opt/confidential-containers/share/ovmf/OVMF.fd"
kernel_path="/opt/confidential-containers/share/kata-containers/vmlinuz-sev.container"
initrd_path="/opt/confidential-containers/share/kata-containers/kata-containers-initrd.img"
```
The kernel append line parameters are included in the SEV guest firmware measurement. A placeholder will be initially set, and the actual value will be retrieved later from the qemu command line:
```
append="PLACEHOLDER"
```
Use the `sev-snp-measure` utility to calculate the SEV guest firmware measurement using the binary variables previously set:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
```
If the container image is not already present, pull it:
```
encrypted_image_url="ghcr.io/fitzthum/encrypted-image-tests:unencrypted"
docker pull "${encrypted_image_url}"
```
Retrieve the encryption key from docker image label:
```
enc_key=$(docker inspect ${encrypted_image_url} \
| jq -r '.[0].Config.Labels.enc_key')
```
Add the key, keyset and policy with measurement to the `simple-kbs` database:
```
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
Using the same service yaml from the section on [Launch the Pod and Verify SEV Encryption](#launch-the-pod-and-verify-sev-encryption), launch the service:
```
kubectl apply -f encrypted-image-tests.yaml
```
Check for pod errors:
```
pod_name=$(kubectl get pod -o wide | grep encrypted-image-tests | awk '{print $1;}')
kubectl describe pod ${pod_name}
```
The pod will error out on the key retrieval request to the `simple-kbs` because the policy verification failed due to a mismatch in the SEV guest firmware measurement. This is the error message that should display:
```
Policy validation failed: fw digest not valid
```
The `PLACEHOLDER` value that was set for the kernel append line when the SEV guest firmware measurement was calculated does not match what was measured by the kata shim. The kernel append line parameters can be retrieved from the qemu command line using the following scripting commands, as long as kubernetes is still trying to launch the pod:
```
duration=$((SECONDS+30))
append=""  # initialize so the final echo works even if no qemu process is found
while [ $SECONDS -lt $duration ]; do
qemu_process=$(ps aux | grep qemu | grep append || true)
if [ -n "${qemu_process}" ]; then
append=$(echo ${qemu_process} \
| sed "s|.*-append \(.*$\)|\1|g" \
| sed "s| -.*$||")
break
fi
sleep 1
done
echo "${append}"
```
The above check will only work if the `encrypted-image-tests` guest is the only running qemu process with an `-append` option.
Now, recalculate the SEV guest firmware measurement and store the `simple-kbs` policy in the database:
```
measurement=$(sev-snp-measure --mode=sev --output-format=base64 \
--ovmf="${ovmf_path}" \
--kernel="${kernel_path}" \
--initrd="${initrd_path}" \
--append="${append}" \
)
mysql -u${KBS_DB_USER} -p${KBS_DB_PW} -h ${KBS_DB_HOST} -D ${KBS_DB} <<EOF
REPLACE INTO secrets VALUES (10, 'key_id1', '${enc_key}', 10);
REPLACE INTO keysets VALUES (10, 'KEYSET-1', '["key_id1"]', 10);
REPLACE INTO policy VALUES (10, '["${measurement}"]', '[]', 0, 0, '[]', now(), NULL, 1);
EOF
```
The pod should now show a successful launch:
```
kubectl describe pod ${pod_name}
```
If the service is hung up, delete the pod and try to launch again:
```
# Delete
kubectl delete -f encrypted-image-tests.yaml
# Verify pod cleaned up
kubectl describe pod ${pod_name}
# Relaunch
kubectl apply -f encrypted-image-tests.yaml
```
Testing the SEV encrypted container launch can be completed by returning to the section on how to [Verify SEV Memory Encryption](#verify-sev-memory-encryption).

guides/snp.md (new file)

@@ -0,0 +1,107 @@
# SNP Guide
## Platform Setup
> [!WARNING]
>
> In order to launch SEV or SNP memory encrypted guests, the host must be prepared with a compatible kernel, `6.8.0-rc5-next-20240221-snp-host-cc2568386`. AMD custom changes and required components and repositories will eventually be upstreamed.
> [Sev-utils](https://github.com/amd/sev-utils/blob/coco-202402240000/docs/snp.md) is an easy way to install the required host kernel, but it will unnecessarily build AMD compatible guest kernel, OVMF, and QEMU components. The additional components can be used with the script utility to test launch and attest a base QEMU SNP guest. However, for the CoCo use case, they are already packaged and delivered with Kata.
Alternatively, refer to the [AMDESE guide](https://github.com/confidential-containers/amdese-amdsev/tree/amd-snp-202402240000?tab=readme-ov-file#prepare-host) to manually build the host kernel and other components.
## Getting Started
This guide covers platform-specific setup for SNP and walks through the complete flows for the different CoCo use cases:
- [Container Launch with Memory Encryption](#container-launch-with-memory-encryption)
## Container Launch With Memory Encryption
### Launch a Confidential Service
To launch a container with SNP memory encryption, the SNP runtime class (`kata-qemu-snp`) must be specified as an annotation in the yaml. A base alpine docker container ([Dockerfile](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/Dockerfile)) has been previously built for testing purposes. This image has also been prepared with SSH access and provisioned with a [SSH public key](https://github.com/kata-containers/kata-containers/blob/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted.pub) for validation purposes.
Here is a sample service yaml specifying the SNP runtime class:
```yaml
kind: Service
apiVersion: v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
app: "confidential-unencrypted"
ports:
- port: 22
---
kind: Deployment
apiVersion: apps/v1
metadata:
name: "confidential-unencrypted"
spec:
selector:
matchLabels:
app: "confidential-unencrypted"
template:
metadata:
labels:
app: "confidential-unencrypted"
annotations:
io.containerd.cri.runtime-handler: kata-qemu-snp
spec:
runtimeClassName: kata-qemu-snp
containers:
- name: "confidential-unencrypted"
image: ghcr.io/kata-containers/test-images:unencrypted-nightly
imagePullPolicy: Always
```
Save the contents of this yaml to a file called `confidential-unencrypted.yaml`.
Start the service:
```shell
kubectl apply -f confidential-unencrypted.yaml
```
Check for errors:
```shell
kubectl describe pod confidential-unencrypted
```
If there are no errors in the Events section, then the container has been successfully created with SNP memory encryption.
### Validate SNP Memory Encryption
The container dmesg log can be parsed to indicate that SNP memory encryption is enabled and active. The container image defined in the yaml sample above was built with a predefined key that is authorized for SSH access.
Get the pod IP:
```shell
pod_ip=$(kubectl get pod -o wide | grep confidential-unencrypted | awk '{print $6;}')
```
Download and save the SSH private key and set the permissions.
```shell
wget https://github.com/kata-containers/kata-containers/raw/main/tests/integration/kubernetes/runtimeclass_workloads/confidential/unencrypted/ssh/unencrypted -O confidential-image-ssh-key
chmod 600 confidential-image-ssh-key
```
The following command will run a remote SSH command on the container to check if SNP memory encryption is active:
```shell
ssh -i confidential-image-ssh-key \
-o "StrictHostKeyChecking no" \
-t root@${pod_ip} \
'dmesg | grep "Memory Encryption Features"'
```
If SNP is enabled and active, the output should return:
```shell
[ 0.150045] Memory Encryption Features active: AMD SNP
```

(Binary image files changed: three images removed, one added; contents not shown.)


@@ -31,7 +31,7 @@ To run the operator you must have an existing Kubernetes cluster that meets the
- Ensure a minimum of 8GB RAM and 4 vCPU for the Kubernetes cluster node
- Only containerd runtime based Kubernetes clusters are supported with the current CoCo release
- The minimum Kubernetes version should be 1.24
- Ensure at least one Kubernetes node in the cluster is having the label `node-role.kubernetes.io/worker=`
- Ensure at least one Kubernetes node in the cluster has the label `node-role.kubernetes.io/worker=` or `node.kubernetes.io/worker=` (see the example after this list). This will assign the worker role to the node, making it responsible for running your applications and services
- Ensure SELinux is disabled or not enforced (https://github.com/confidential-containers/operator/issues/115)
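If the worker label mentioned above is missing, it can be added manually. A minimal sketch, assuming a node named `worker-node-1`:
```shell
# Label the node so it is treated as a worker by the operator
kubectl label node worker-node-1 node-role.kubernetes.io/worker=
```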
For more details on the operator, including the custom resources managed by the operator, refer to the operator [docs](https://github.com/confidential-containers/operator).
@@ -59,18 +59,18 @@ on the worker nodes is **not** on an overlayfs mount but the path is a `hostPath
Deploy the operator by running the following command where `<RELEASE_VERSION>` needs to be substituted
with the desired [release tag](https://github.com/confidential-containers/operator/tags).
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=<RELEASE_VERSION>
```
For example, to deploy the `v0.8.0` release run:
```
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.8.0
For example, to deploy the `v0.10.0` release run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/release?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```
```shell
kubectl get pods -n confidential-containers-system --watch
```
@@ -81,25 +81,25 @@ kubectl get pods -n confidential-containers-system --watch
Creating a custom resource installs the required CC runtime pieces into the cluster node and creates
the `RuntimeClasses`
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/<CCRUNTIME_OVERLAY>?ref=<RELEASE_VERSION>
```
The currently available overlays are: `default` and `s390x`
For example, to deploy the `v0.8.0` release for `x86_64`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.8.0
For example, to deploy the `v0.10.0` release for `x86_64`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/default?ref=v0.10.0
```
And to deploy `v0.8.0` release for `s390x`, run:
```
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.8.0
And to deploy `v0.10.0` release for `s390x`, run:
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/ccruntime/s390x?ref=v0.10.0
```
Wait until each pod has the STATUS of Running.
```
```shell
kubectl get pods -n confidential-containers-system --watch
```
@@ -114,11 +114,11 @@ Please see the [enclave-cc guide](./guides/enclave-cc.md) for more information.
`enclave-cc` is a form of Confidential Containers that uses process-based isolation.
`enclave-cc` can be installed with the following custom resources.
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/sim?ref=<RELEASE_VERSION>
```
or
```
```shell
kubectl apply -k github.com/confidential-containers/operator/config/samples/enclave-cc/hw?ref=<RELEASE_VERSION>
```
for the **simulated** SGX mode build or **hardware** SGX mode build, respectively.
@@ -127,290 +127,84 @@ for the **simulated** SGX mode build or **hardware** SGX mode build, respectivel
Check the `RuntimeClasses` that got created.
```
```shell
kubectl get runtimeclass
```
Output:
```
NAME HANDLER AGE
kata kata 9m55s
kata-clh kata-clh 9m55s
kata-clh-tdx kata-clh-tdx 9m55s
kata-qemu kata-qemu 9m55s
kata-qemu-tdx kata-qemu-tdx 9m55s
kata-qemu-sev kata-qemu-sev 9m55s
```shell
NAME HANDLER AGE
kata kata-qemu 8d
kata-clh kata-clh 8d
kata-qemu kata-qemu 8d
kata-qemu-coco-dev kata-qemu-coco-dev 8d
kata-qemu-sev kata-qemu-sev 8d
kata-qemu-snp kata-qemu-snp 8d
kata-qemu-tdx kata-qemu-tdx 8d
```
Details on each of the runtime classes:
- *kata* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh* - standard kata runtime using the cloud hypervisor including all CoCo building blocks for a non CC HW
- *kata-clh-tdx* - using the Cloud Hypervisor, with TD-Shim, and support for Intel TDX CC HW
- *kata* - Convenience runtime that uses the handler of the default runtime
- *kata-clh* - standard kata runtime using the cloud hypervisor
- *kata-qemu* - same as kata
- *kata-qemu-tdx* - using QEMU, with TDVF, and support for Intel TDX CC HW, prepared for using Verdictd and EAA KBC.
- *kata-qemu-coco-dev* - standard kata runtime using the QEMU hypervisor including all CoCo building blocks for a non CC HW
- *kata-qemu-sev* - using QEMU, and support for AMD SEV HW
- *kata-qemu-snp* - using QEMU, and support for AMD SNP HW
- *kata-qemu-tdx* - using QEMU, with support for Intel TDX CC HW based on what's provided by [Ubuntu](https://github.com/canonical/tdx) and [CentOS 9 Stream](https://sigs.centos.org/virt/tdx/).
If you are using `enclave-cc` you should see the following runtime classes.
```
```shell
kubectl get runtimeclass
```
Output:
```
```shell
NAME HANDLER AGE
enclave-cc enclave-cc 9m55s
```
The CoCo operator environment has been set up and deployed!
### Platform Setup
While the operator deploys all the required binaries and artifacts and sets up runtime classes that use them,
certain platforms may require additional configuration to enable confidential computing. For example, the host
kernel and firmware might need to be configured.
See the [guides](./guides) for more information.
certain platforms may require additional configuration to enable confidential computing. For example, a specific
host kernel or firmware may be required. See the [guides](./guides/) for more information.
# Running a workload
## Using CoCo
## Creating a sample CoCo workload
Below is a brief summary and description of some of the CoCo use cases and features:
Once you've used the operator to install Confidential Containers, you can run a pod with CoCo by simply adding a runtime class.
First, we will use the `kata` runtime class which uses CoCo without hardware support.
Initially we will try this with an unencrypted container image.
- **Container Launch with Only Memory Encryption (No Attestation)** - Launch a container with memory encryption
- **Container Launch with Encrypted Image** - Launch an encrypted container by proving the workload is running
in a TEE in order to retrieve the decryption key
- **Container Launch with Image Signature Verification** - Launch a container and verify the authenticity and
integrity of an image by proving the workload is running in a TEE
- **Sealed secret** - Implement wrapped kubernetes secrets that are confidential to the workload owner and are
automatically decrypted by proving the workload is running in a TEE
- **Ephemeral Storage** - Temporary storage that is used during the lifecycle of the container but is cleared out
when a pod is restarted or finishes its task. At the moment, only ephemeral storage of the container itself is
supported and it has to be explicitly configured.
- **Authenticated Registries** - Create secure container registries that require authentication to access and manage container
images, ensuring that only trusted images are deployed in the Confidential Container. The host must have access
to the registry credentials.
- **Secure Storage** - Mechanisms and technologies used to protect data at rest, ensuring that sensitive information
remains confidential and tamper-proof.
- **Peer Pods** - Enable the creation of VMs on any environment without requiring bare metal servers or nested
virtualization support. More information about this feature can be found [here](https://github.com/confidential-containers/cloud-api-adaptor/tree/main).
## Platforms
In our example we will be using the bitnami/nginx image as described in the following yaml:
```
apiVersion: v1
kind: Pod
metadata:
labels:
run: nginx
name: nginx
annotations:
io.containerd.cri.runtime-handler: kata
spec:
containers:
- image: bitnami/nginx:1.22.0
name: nginx
dnsPolicy: ClusterFirst
runtimeClassName: kata
```
With some TEEs, the CoCo use cases and/or configurations are implemented differently. Those are described in each corresponding
[guide](./guides) section. To get started using CoCo without TEE hardware, follow the CoCo-dev guide below:
Setting the runtimeClassName is usually the only change needed to the pod yaml, but some platforms
support additional annotations for configuring the enclave. See the [guides](./guides) for
more details.
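For example, Kata exposes annotations for sizing the pod VM; whether they are honored depends on your runtime configuration, so treat the following fragment as a sketch only:
```yaml
metadata:
  annotations:
    # Request a larger pod VM; the allowed annotations depend on the runtime configuration
    io.katacontainers.config.hypervisor.default_vcpus: "2"
    io.katacontainers.config.hypervisor.default_memory: "4096"
```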
With Confidential Containers, the workload container images are never downloaded on the host.
To verify that the container image doesn't exist on the host, log into the k8s node and ensure the following command returns an empty result:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
You will run this command again after the container has started.
Create a pod YAML file as previously described (we named it `nginx.yaml`).
Create the workload:
```
kubectl apply -f nginx.yaml
```
Output:
```
pod/nginx created
```
Ensure the pod was created successfully (in running state):
```
kubectl get pods
```
Output:
```
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 3m50s
```
Now go back to the k8s node and ensure that you still don't have any bitnami/nginx images on it:
```
root@cluster01-master-0:/home/ubuntu# crictl -r unix:///run/containerd/containerd.sock image ls | grep bitnami/nginx
```
## Encrypted and/or signed images with attestation
The previous example does not involve any attestation because the workload container isn't signed or encrypted
and the workload itself does not require any secrets.
This is not the case for most real workloads. It is recommended to use CoCo with signed and/or encrypted images.
The workload itself can also request secrets from the attestation agent in the guest.
Secrets are provisioned to the guest in conjunction with an attestation, which is based on hardware evidence.
The rest of this guide focuses on setting up more substantial encrypted/signed workloads using attestation
and confidential hardware.
See [this guide](./guides/nontee_demo.md) if you would like to deploy an example encrypted image without
confidential hardware.
CoCo has a modular attestation interface and there are a few options for attestation.
CoCo provides a generic Key Broker Service (KBS) that the rest of this guide will be focused on.
The SEV runtime class uses `simple-kbs`, which is described in the [SEV guide](./guides/sev.md).
There is also `eaa_kbc`/`verdictd` which is described [here](./guides/eaa_verdictd.md).
### Select Runtime Class
To use CoCo with confidential hardware, first switch to the appropriate runtime class.
TDX has two runtime classes, `kata-qemu-tdx` and `kata-clh-tdx`. One uses QEMU as VMM and TDVF as firmware. The other uses Cloud Hypervisor as VMM and TD-Shim as firmware.
For SEV(-ES) use the `kata-qemu-sev` runtime class and follow the [SEV guide](./guides/sev.md).
For `enclave-cc` follow the [enclave-cc guide](./guides/enclave-cc.md).
### Deploy and Configure tenant-side CoCo Key Broker System cluster
The following describes how to run and provision the generic KBS.
The KBS should be run in a trusted environment. The KBS is not just one service,
but a combination of several.
A tenant-side CoCo Key Broker System cluster includes:
- Key Broker Service (KBS): Brokering service for confidential resources.
- Attestation Service (AS): Verifier for remote attestation.
- Reference Value Provider Service (RVPS): Provides reference values for the AS.
- CoCo Keyprovider: Component to encrypt the images following ocicrypt spec.
To quickly start the KBS cluster, a `docker-compose` yaml is provided:
```shell
# Clone KBS git repository
git clone https://github.com/confidential-containers/kbs.git
cd kbs
export KBS_DIR_PATH=$(pwd)
# Generate a user auth key pair
openssl genpkey -algorithm ed25519 > config/private.key
openssl pkey -in config/private.key -pubout -out config/public.pub
# Start KBS cluster
docker-compose up -d
```
If configuration of the KBS cluster is required, edit the following config files and restart the KBS cluster with `docker-compose`:
- `$KBS_DIR_PATH/config/kbs-config.json`: configuration for Key Broker Service.
- `$KBS_DIR_PATH/config/as-config.json`: configuration for Attestation Service.
- `$KBS_DIR_PATH/config/sgx_default_qcnl.conf`: configuration for Intel TDX/SGX verification.
When the KBS cluster is running, you can modify the policy file used by the AS policy engine ([OPA](https://www.openpolicyagent.org/)) at any time:
- `$KBS_DIR_PATH/data/attestation-service/opa/policy.rego`: Policy file for evidence verification of AS; refer to [AS Policy Engine](https://github.com/confidential-containers/attestation-service#policy-engine) for more information.
### Encrypting an Image
[skopeo](https://github.com/containers/skopeo) is required to encrypt the container image.
Follow the [instructions](https://github.com/containers/skopeo/blob/main/install.md) to install `skopeo`.
Use `skopeo` to encrypt an image on the same node of the KBS cluster (use busybox:latest for example):
```shell
# edit ocicrypt.conf
tee > ocicrypt.conf <<EOF
{
"key-providers": {
"attestation-agent": {
"grpc": "127.0.0.1:50000"
}
}
}
EOF
# encrypt the image
OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://library/busybox oci:busybox:encrypted
```
The image will be encrypted. In the background, the following happens in the KBS cluster:
- CoCo Keyprovider generates a random key and a key-id, then encrypts the image using the key.
- CoCo Keyprovider registers the key with the key-id into KBS.
Then push the image to the registry:
```shell
skopeo copy oci:busybox:encrypted [SCHEME]://[REGISTRY_URL]:encrypted
```
Be sure to replace `[SCHEME]` with a registry scheme type such as `docker`, and `[REGISTRY_URL]` with the desired registry URL, for example `docker.io/encrypt_test/busybox`.
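For example, using the sample values mentioned above, the push could look like:
```shell
# Push the encrypted OCI layout to Docker Hub using the example repository above
skopeo copy oci:busybox:encrypted docker://docker.io/encrypt_test/busybox:encrypted
```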
### Signing an Image
[cosign](https://github.com/sigstore/cosign) is required to sign the container image. Follow the instructions here to install `cosign`:
[cosign installation](https://docs.sigstore.dev/cosign/installation/)
Generate a cosign key pair and register the public key to KBS storage:
```shell
cosign generate-key-pair
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/cosign-key && cp cosign.pub $KBS_DIR_PATH/data/kbs-storage/default/cosign-key/1
```
Sign the encrypted image with cosign private key:
```shell
cosign sign --key cosign.key [REGISTRY_URL]:encrypted
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous steps.
Then edit an image pulling validation policy file.
Here is a sample policy file `security-policy.json`:
```json
{
"default": [{"type": "reject"}],
"transports": {
"docker": {
"[REGISTRY_URL]": [
{
"type": "sigstoreSigned",
"keyPath": "kbs:///default/cosign-key/1"
}
]
}
}
}
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image.
Register the image pulling validation policy file to KBS storage:
```shell
mkdir -p $KBS_DIR_PATH/data/kbs-storage/default/security-policy
cp security-policy.json $KBS_DIR_PATH/data/kbs-storage/default/security-policy/test
```
### Deploy encrypted image as a CoCo workload on CC HW
Here is a sample yaml for encrypted image deploying:
```shell
cat << EOT | tee encrypted-image-test-busybox.yaml
apiVersion: v1
kind: Pod
metadata:
labels:
run: encrypted-image-test-busybox
name: encrypted-image-test-busybox
annotations:
io.containerd.cri.runtime-handler: [RUNTIME_CLASS]
spec:
containers:
- image: [REGISTRY_URL]:encrypted
name: busybox
dnsPolicy: ClusterFirst
runtimeClassName: [RUNTIME_CLASS]
EOT
```
Be sure to replace `[REGISTRY_URL]` with the desired registry URL of the encrypted image generated in previous step, replace `[RUNTIME_CLASS]` with kata runtime class for CC HW.
Then configure `/opt/confidential-containers/share/defaults/kata-containers/configuration-<RUNTIME_CLASS_SUFFIX>.toml` to add `agent.aa_kbc_params=cc_kbc::<KBS_URI>` to the kernel parameters. Here `RUNTIME_CLASS_SUFFIX` is something like `qemu-tdx`, and `KBS_URI` is the address of the Key Broker Service in the KBS cluster, like `http://123.123.123.123:8080`.
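As an illustrative sketch only, the edit for the `qemu-tdx` runtime class might be done with `sed`, assuming the file contains an uncommented `kernel_params` entry and using the placeholder KBS address from above:
```shell
# Prepend agent.aa_kbc_params to the guest kernel parameters for kata-qemu-tdx
sudo sed -i 's|^kernel_params = "|kernel_params = "agent.aa_kbc_params=cc_kbc::http://123.123.123.123:8080 |' \
  /opt/confidential-containers/share/defaults/kata-containers/configuration-qemu-tdx.toml
```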
Deploy encrypted image as a workload:
```shell
kubectl apply -f encrypted-image-test-busybox.yaml
```
- [CoCo-dev](./guides/coco-dev.md)
- [SEV(-ES)](./guides/sev.md)
- [SNP](./guides/snp.md)
- TDX: No additional steps required.
- [SGX](./guides/enclave-cc.md)
- [IBM Secure Execution](./guides/ibm-se.md)
- ...

releases/v0.10.0.md (new file)

@@ -0,0 +1,92 @@
# Release Notes for v0.10.0
Release Date: September 27th, 2024
This release is based on [3.9.0](https://github.com/kata-containers/kata-containers/releases/tag/3.9.0) of Kata Containers
and [v0.10.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.10.0) of enclave-cc.
This is the first release of Confidential Containers which has feature parity with CCv0.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Support to pull and verify `cosign`-signed images
* Trusted image storage on guest
* Support Intel Tiber Trust Services as the verifier with Trustee for both Kata bare metal and peer-pods deployments
* Init-data support for peer pods
* Image-rs support for whiteouts and for layers with hard-link filenames over 100 characters
* enclave-cc updated to Ubuntu 22.04 based runtime instance
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.30.1 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (k3s)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* SEV(-ES) does not support attestation.
* Sealed secrets only supports secrets in environment variables.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside guest may have negative performance implications including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopting open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which remained at 75% at the time of this release.
* Community has adopted a security reporting protocol. The status of this is:
* The operator now uses CodeQL for static scans, and it will be added for all other Go-based repositories in the next release.
* Dependencies are now better handled with automatic updates using dependabot.
* Static scan for Rust-based repos will be "N/A".
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None

releases/v0.11.0.md (new file)

@@ -0,0 +1,84 @@
# Release Notes for v0.11.0
Release Date: November 25th, 2024
This release is based on [3.11.0](https://github.com/kata-containers/kata-containers/releases/tag/3.11.0) of Kata Containers
and [v0.11.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.11.0) of enclave-cc.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Sealed secrets can be exposed as volumes
* TDX guests support measured rootfs with dm-verity
* Policy generation improvements
* Test coverage improved for s390x
* Community reached 100% ("passing") on OpenSSF best practices badge
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.30.1 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (Kubeadm)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* SEV(-ES) does not support attestation.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside guest may have negative performance implications including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopting open source security best practices, but not all practices are adopted yet.
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None

releases/v0.9.0-alpha0.md (new file)

@@ -0,0 +1,79 @@
# Release Notes for v0.9.0-alpha
Release Date: May 2nd, 2024
This is our first release based on Kata Containers main, but it is an alpha release that supports
only a subset of features.
This release is based on [3.4.0](https://github.com/kata-containers/kata-containers/releases/tag/3.4.0) of Kata Containers
and [v0.9.0](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.9.0) of enclave-cc.
This release supports pulling images inside of the guest with some caveats but it does
not support pulling encrypted or signed images inside the guest.
This release supports attestation and includes the guest components in the rootfs,
but it does not support any TEE platform.
Peer pods is also not supported.
**This release was created mainly for development purposes. For a full feature set,
consider returning to v0.8.0 or using the next release.**
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for a
definition of the acronyms used in this document.
## What's new
* This release is built from the main branch of Kata Containers.
* Non-tee attestation is now based on a sample attester and verifier rather than on `offline_fs_kbc`.
* Resources can be dynamically delivered in confidential environments.
* Trustee is integrated into the Kata Containers CI.
* All platforms now share one confidential rootfs.
* All platforms share one confidential guest kernel.
* Image request timeout is configurable to facilitate pulling large images.
* Attestation Agent now supports generic `configfs-tsm` ABI for collecting evidence.
* Enclave-cc moves to unified LibOS bundle for secure rootfs key handling and to the latest Occlum v0.30.1 release that adds SGX EDMM support for dynamically adjusting the enclave size.
* Adoption of a project-wide security reporting protocol
## Hardware Support
This release does not officially support any hardware platforms.
It is mainly intended for testing in non-tee environments.
Future releases will return to previous levels of support.
## Limitations
The following are known limitations of this release:
* Nydus snapshotter support is not mature.
* Nydus snapshot sometimes conflicts with existing node configuration.
* You may need to remove existing container images/snapshots before installing Nydus snapshotter.
* Nydus snapshotter may not support pulling one image with multiple runtime handler annotations even across different pods.
* These limitations can apply to the pause image when filesystem passthrough is not enabled.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside guest may have negative performance implications including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing
* SELinux is not supported on the host and must be set to permissive if in use.
* The format of encrypted container images is still subject to change
* The [oci-crypt](https://github.com/containers/ocicrypt) container image format itself may still change
* The tools to generate images are not in their final form
* The image format itself is subject to change in upcoming releases
* Not all image repositories support encrypted container images.
* Complete integration with Kubernetes is still in progress.
* OpenShift support is not yet complete.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work, or incorrectly expose information to the host
* The CoCo community aspires to adopting open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which improved to 75% at the time of this release.
* Community has adopted a security reporting protocol, but application and documentation of static and dynamic analysis still needed.
* Container metadata such as environment variables are not measured.
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
## CVE Fixes
None

releases/v0.9.0-alpha1.md (new file)

@@ -0,0 +1,68 @@
# Release Notes for v0.9.0-alpha1
Release Date: June 21st, 2024
This release is based on [3.6.0](https://github.com/kata-containers/kata-containers/releases/tag/3.4.0) of Kata Containers
and [v0.9.1](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.9.0) of enclave-cc.
This is an alpha release, with some limitations, but it builds significantly on the previous release.
For a full feature set, please use v0.8.0 or the upcoming v0.9.0.
For details of what has been added and what does not yet work,
see the following sections.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Cloud API Adaptor (peer pods) is included in the release and based on Kata main.
* Image pulling inside the guest via the Nydus snapshotter is more stable.
* Platform support has been partially restored.
* Confidential runtime classes ship with filesystem sharing disabled by default.
* `kata-qemu-coco-dev` runtime class introduced for non-TEE development (see the example after this list).
* Several improvements have been made to the guest policy.
* `guest_components_procs` option introduced to control which guest components are started.
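As a quick illustration, a workload can opt into the new development runtime class simply by setting `runtimeClassName`. The manifest below is a minimal sketch; the pod and image names are hypothetical placeholders.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: coco-dev-test              # hypothetical name
spec:
  # Runtime class for non-TEE development introduced in this release.
  runtimeClassName: kata-qemu-coco-dev
  containers:
  - name: app
    image: quay.io/example/hello:latest   # placeholder image
```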
## Hardware Support
This release tentatively supports multiple TEE platforms, although
not all features are enabled for all platforms and test coverage
is limited.
## Limitations
The following are known limitations of this release:
* Encrypted images and signed images are not yet supported.
* Sealed secrets and encrypted volumes are not yet supported.
* SEV(-ES) and SEV-SNP do not support attestation.
* Authenticated registries are not supported.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Nydus snapshotter cannot handle pods with init containers.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing.
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* OpenShift support is not yet complete.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work or incorrectly expose information to the host.
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which remained at 75% at the time of this release.
* The community has adopted a security reporting protocol, but static and dynamic analysis still need to be applied and documented.
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints.
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None

releases/v0.9.0.md Normal file

@@ -0,0 +1,94 @@
# Release Notes for v0.9.0
Release Date: July 26th, 2024
This release is based on [3.7.0](https://github.com/kata-containers/kata-containers/releases/tag/3.7.0) of Kata Containers
and [v0.9.1](https://github.com/confidential-containers/enclave-cc/releases/tag/v0.9.1) of enclave-cc.
This is the first non-alpha release of Confidential Containers to be based on the main branch of Kata Containers.
This release does not have complete parity with releases based on CCv0, but it supports most features.
See the limitations section for more details.
Please see the [quickstart guide](../quickstart.md) for details on how to try out Confidential
Containers.
Please refer to our [Acronyms](https://github.com/confidential-containers/documentation/wiki/Acronyms)
and [Glossary](https://github.com/confidential-containers/documentation/wiki/Glossary) pages for
definitions of the acronyms used in this document.
## What's new
* Attestation is supported on SEV-SNP and IBM SE.
* Encrypted container images are supported.
* Authenticated registries are supported.
* Pods with init containers can be run with Nydus.
* Sealed secrets (as environment variables) are supported; see the sketch after this list.
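Sealed secrets are consumed like ordinary Kubernetes secrets and injected as environment variables. The sketch below is illustrative only: the `sealed.` value format, the secret/pod names, the image, and the choice of runtime class are assumptions; any attestation-capable CoCo runtime class should be substituted as appropriate.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: sealed-demo                # hypothetical name
stringData:
  # Assumed sealed-secret format: an opaque string beginning with "sealed."
  # that the guest unseals via attestation before exposing it to the workload.
  api-key: sealed.example-opaque-sealed-secret-blob
---
apiVersion: v1
kind: Pod
metadata:
  name: sealed-secret-pod          # hypothetical name
spec:
  runtimeClassName: kata-qemu-snp  # assumed attestation-capable runtime class
  containers:
  - name: app
    image: quay.io/example/app:latest   # placeholder image
    env:
    - name: API_KEY
      valueFrom:
        secretKeyRef:
          name: sealed-demo
          key: api-key
```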
## Hardware Support
Attestation is supported and tested on three platforms: Intel TDX, AMD SEV-SNP, and IBM SE.
Not all features have been tested on every platform, but those based on attestation
are expected to work on the platforms above.
Make sure your host platform is compatible with the hypervisor and guest kernel
provisioned by CoCo.
This release has been tested on the following stacks:
### AMD SEV-SNP
* Processor: AMD EPYC 7413
* Kernel: [6.8.0-rc5-next-20240221-snp-host-cc2568386](https://github.com/confidential-containers/linux/tree/amd-snp-host-202402240000)
* OS: Ubuntu 22.04.4 LTS
* k8s: v1.24.0 (Kubeadm)
* Kustomize: v4.5.4
### Intel TDX
* Kernel: [6.8.0-1004-intel](https://git.launchpad.net/~kobuk-team/ubuntu/+source/linux-intel/tree/?h=noble-main-next)
* OS: Ubuntu 24.04 LTS
* k8s: v1.30.2 (Kubeadm)
* Kustomize: v5.0.4-0.20230601165947-6ce0bf390ce3
### Secure Execution on IBM zSystems (s390x) running LinuxONE
* Hardware: IBM Z16 LPAR
* Kernel: 5.15.0-113-generic
* OS: Ubuntu 22.04.1 LTS
* k8s: v1.28.4 (k3s)
* Kustomize: v5.3.0
## Limitations
The following are known limitations of this release:
* Signed images are not yet supported.
* Secure storage is not yet supported.
* SEV(-ES) does not support attestation.
* Sealed secrets are only supported as environment variables.
* Credentials for authenticated registries are exposed to the host.
* Not all features are tested on all platforms.
* Nydus snapshotter support is not mature.
* Nydus snapshotter sometimes fails to pull an image.
* Host pulling with Nydus snapshotter is not yet enabled.
* Nydus snapshotter is not supported with enclave-cc.
* Pulling container images inside the guest may have negative performance implications, including greater resource usage and slower startup.
* `crio` support is still evolving.
* Platform support is rapidly changing.
* SELinux is not supported on the host and must be set to permissive if in use.
* Complete integration with Kubernetes is still in progress.
* Existing APIs do not fully support the CoCo security and threat model. [More info](https://github.com/confidential-containers/community/issues/53)
* Some commands accessing confidential data, such as `kubectl exec`, may either fail to work or incorrectly expose information to the host.
* The CoCo community aspires to adopt open source security best practices, but not all practices are adopted yet.
* We track our status with the OpenSSF Best Practices Badge, which remained at 75% at the time of this release.
* The community has adopted a security reporting protocol, but static and dynamic analysis still need to be applied and documented.
* Container metadata such as environment variables are not measured.
* The Kata Agent allows the host to call several dangerous endpoints.
* Kata Agent does not validate mount requests. A malicious host might be able to mount a shared filesystem into the PodVM.
* Policy can be used to block endpoints, but it is not yet tied to the hardware evidence.
## CVE Fixes
None


@@ -1,87 +1,27 @@
# Confidential Containers Roadmap
When looking at the project's roadmap we distinguish between the short-term roadmap (2-4 months) vs.
the mid/long-term roadmap (4-12 months):
- The **short-term roadmap** is focused on achieving an end-to-end, easy to deploy confidential
containers solution using at least one HW encryption solution and integrated to k8s (with forked
versions if needed)
- The **mid/long-term solutions** focuses on maturing the components of the short-term solution
and adding a number of enhancements both to the solution and the project (such as CI,
interoperability with other projects etc.)
# Short-Term Roadmap
The short-term roadmap aims to achieve the following:
- MVP stack for running confidential containers
- Based on and compatible with Kata Containers 2
- Based on at least one confidential computing implementation (SEV, TDX, SE, etc)
- Integration with Kubernetes: kubectl apply -f confidential-pod.yaml
When looking at the project's roadmap we distinguish between the short-term roadmap (2-6 months) vs. the mid/long-term roadmap (6-18 months):
- The [short term roadmap](#short-term-roadmap) is focused on achieving an end-to-end, easy-to-deploy, and stable confidential containers solution. We track this work on a number of GitHub boards.
- The [mid and long term roadmap](#mid-and-long-term-roadmap) focuses on use case driven development.
The work is targeted to be completed by end of November 2021 and includes 3 milestones:
![September 2021](./images/RoadmapSept2021.jpg)
- **September 2021**
- Unencrypted image pulled inside the guest, kept in tmpfs
- Pod/Container runs from pulled image
- Agent API is restricted
- crictl only
![October 2021](./images/RoadmapOct2021.jpg)
- **October 2021**
- Encrypted image pulled inside the guest, kept in tmpfs
- Image is decrypted with a pre-provisioned key (No attestation)
![November 2021](./images/RoadmapNov2021.jpg)
- **November 2021**
- Image is optionally stored on an encrypted, ephemeral block device
- Image is decrypted with a key obtained from a key brokering service (KBS)
- Integration with kubelet
# Short Term Roadmap
The short-term roadmap is based on our GitHub boards and delivered through our ongoing releases.
For additional details on each milestone see [Confidential Containers v0](https://docs.google.com/presentation/d/1SIqLogbauLf6lG53cIBPMOFadRT23aXuTGC8q-Ernfw/edit#slide=id.p).
- [Confidential containers github board](https://github.com/orgs/confidential-containers/projects/6/views/22)
- [Trustee github board](https://github.com/orgs/confidential-containers/projects/10/views/1)
Tasks are tracked on a weekly basis through a dedicated spreadsheet.
For more information see [Confidential Containers V0 Plan](https://docs.google.com/spreadsheets/d/1M_MijAutym4hMg8KtIye1jIDAUMUWsFCri9nq4dqGvA/edit#gid=0&fvid=1397558749).
# Mid and Long Term Roadmap
In CoCo use-case-driven development we identify the main functional requirements by focusing on key use cases.
# Mid-Term Roadmap
This helps the community deliver releases that address the real use cases customers require and focus on the right priorities. The use-case-driven development approach also includes developing the relevant CI/CD pipelines to ensure that the end-to-end use cases the community delivers keep working over time.
Continue our journey using the knowledge and support of Subject Matter Experts (SMEs) in other
projects to form stronger opinions on what is needed from components which can be integrated to
deliver the confidential containers objectives.
We target the following use cases:
- Harden the code used for the demos
- Improve CI/CD pipeline
- Clarify the release process
- Establish processes and tools to support planning, prioritisation, and work in progress
- Simple process to get up and running regardless of underlying Trusted Execution Environment
technology
- Develop a small, simple, secure, lightweight and high performance OCI container image
management library [image-rs](https://github.com/confidential-containers/image-rs) for
confidential containers.
- Develop small, simple shim firmware ([td-shim](https://github.com/confidential-containers/td-shim))
in support of trusted execution environment for use with cloud native confidential containers.
- Document threat model and trust model, what are we protecting, how are we achieving it.
- Identify technical convergence points with other confidential computing projects both inside
and outside CNCF.
- Confidential Federated Learning
- Multi-party Computing (data clean room, confidential spaces etc)
- Trusted Pipeline (Supply Chain)
- Confidential RAG LLMs
# Longer-Term Roadmap
Focused meetings to discuss architecture and the priority of longer-term objectives are in
the process of being set up.
Each meeting will have an agreed focus with people sharing material/thoughts ahead of time.
Topics under consideration:
- CI/CD + repositories
- Community structure and expectations
- 2 on Mid-Term Architecture
- Attestation
- Images
- Runtimes
Proposed Topics to influence long-term direction/architecture:
- Baremetal / Peer Pod
- Composability of alternative technologies to deliver confidential containers
- Performance
- Identity / Service Mesh
- Reproducible builds/demos
- Edge Computing
- Reduce footprint of image pull
A dedicated working group leads this effort. For additional details we recommend reviewing the working group's notes: [Confidential containers use cases driven development](https://docs.google.com/document/d/1LnGNeyUyPM61Iv4kBKFbfgmBr3RmxHYZ7Ev88obN0_E/edit?tab=t.0#heading=h.b0rnn2bw76n)


@@ -37,7 +37,7 @@ Further documentation will highlight specific [threat vectors](./threats_overvie
considering risk,
impact, mitigation etc as the project progresses. The Security Assurance section, Page 31, of
Cloud Native Computing Foundation (CNCF)
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
will guide this more detailed threat vector effort.
### Related Prior Effort
@@ -51,7 +51,7 @@ For example:
"[A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)"
section 5 of which defines the threat model for confidential computing.
- CNCF Security Technical Advisory Group published
"[Cloud Native Security Whitepaper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)"
"[Cloud Native Security Whitepaper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)"
- Kubernetes provides documentation :
"[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)"
- Open Web Application Security Project -
@@ -69,7 +69,25 @@ This means our trust and threat modelling should
- Consider existing Cloud Native technologies and the role they can play for confidential containers.
- Consider additional technologies to fulfil a role in Cloud Native exploitation of TEEs.
### Out of Scope
## Illustration
The following diagram shows which components in a Confidential Containers setup
are part of the TEE (green boxes labeled TEE). The hardware and guest work in
tandem to establish a TEE for the pod, which provides the isolation and
integrity protection for data in use.
![Threat model](./images/coco-threat-model.png)
Not depicted: Process-based isolation from the enclave-cc runtime class. That isolation model further removes the guest operating system from the trust boundary. See the enclave-cc sub-project for more details:
https://github.com/confidential-containers/enclave-cc/
Untrusted components include:
1. The host operating system, including its hypervisor, KVM
2. Other Cloud Provider host software beyond the host OS and hypervisor
3. Other virtual machines (and their processes) resident on the same host
4. Any other processes on the host machine (including the Kubernetes control plane).
## Out of Scope
The following items are considered out-of-scope for the trust/threat modelling within confidential
containers:
@@ -82,7 +100,7 @@ containers :
and will only highlight them where they become relevant to the trust model or threats we
consider.
### Summary
## Summary
In practice, those deploying workloads into TEE environments may have varying levels of trust
in the personas who have privileges regarding orchestration or hosting the workload. This trust


@@ -5,7 +5,7 @@ Otherwise referred to as actors or agents, these are individuals or groups capab
carrying out a particular threat.
In identifying personas we consider:
- The Runtime Environment, Figure 5, Page 19 of CNCF
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf).
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf).
This highlights three layers, Cloud/Environment, Workload Orchestration, Application.
- The Kubernetes
[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)
@@ -19,7 +19,7 @@ In identifying personas we consider :
In considering personas we recognise that a trust boundary exists between each persona and we
explore how the least privilege principle (as described on Page 40 of
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
) should apply to any actions which cross these boundaries.
Confidential containers can provide enhancements to ensure that the expected code/containers
@@ -136,7 +136,7 @@ the images they need but also support the verification method they require. A k
relationship is the Workload Provider applying Supply Chain
Security practices (as
described on Page 42 of
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/main/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
) when considering Container
Image Providers. So the Container Image Provider must support the Workload Provider's
ability to provide assurance to the Data Owner regarding the integrity of the code.