8 Commits

Author SHA1 Message Date
Chris Porter
5ecb274601 Delete .md files associated with trust model
This documentation is being migrated to the website

Signed-off-by: Chris Porter <porter@ibm.com>
2025-10-23 09:07:09 -05:00
Tobin Feldman-Fitzthum
e057107751 sc: add Tobin to SC for NVIDIA
NVIDIA has been a major contributor to Confidential Containers and more
contributions are coming.

As such, let's expand the NVIDIA representation on the SC to two seats.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2025-10-21 09:04:31 -04:00
Tobin Feldman-Fitzthum
bfcdf18bfa Revert "sc: add Tobin to SC for NVIDIA"
This has to be approved by two-thirds of the SC.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2025-10-17 09:18:09 -04:00
Ariel Adam
c771c13f06 Merge pull request #324 from fitzthum/add-sc
sc: add Tobin to SC for NVIDIA
2025-10-17 12:37:45 +03:00
Tobin Feldman-Fitzthum
595f5a4dd4 sc: add Tobin to SC for NVIDIA
NVIDIA has been a major contributor to Confidential Containers and more
contributions are coming.

As such, let's expand the NVIDIA representation on the SC to two seats.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2025-10-16 09:42:50 -07:00
Tobin Feldman-Fitzthum
746a505f20 governance: replace myself on the steering committee
Since I no longer work at IBM, I can no longer occupy an IBM seat on the
steering committee. Pursuant to the replacement clause of the governance
document, I am replacing myself with Nina Goradia from IBM. This does
not require a steering committee vote, but it must be approved by the
other IBM representative, James Magowan.

I think Nina will be a great fit for the steering committee.

Thanks for a wonderful chapter.

Signed-off-by: Tobin Feldman-Fitzthum <tfeldmanfitz@nvidia.com>
2025-10-13 10:51:20 -04:00
Tobin Feldman-Fitzthum
0c97c4b0a7 release: update release checklist
Remove the step where we poke Wainer. His arm is getting sore and it
doesn't seem like the project is widely consumed via operator hub.

Also, add post-release steps for guest-components.

Note that we are not updating the Trustee k8s yaml to point to the
release version. If we want to do this, it has to happen much earlier in
the process (before we bump Kata to use the new version of Trustee).

Signed-off-by: Tobin Feldman-Fitzthum <tobinf@protonmail.com>
2025-10-08 09:06:29 -04:00
Dan Middleton
718fee9f11 Add unit test requirement
Documenting best practice that all new features need to come with unit
tests. This satisfies https://www.bestpractices.dev/en/projects/5719#quality

Signed-off-by: Dan Middleton <dmiddleton@nvidia.com>
2025-09-30 07:51:47 +03:00
8 changed files with 41 additions and 339 deletions

View File

@@ -20,6 +20,7 @@ flowchart LR
 Guest-Components .-> Client-tool
 Guest-Components --> enclave-agent
 enclave-cc --> kustomization.yaml
+Operator --> versions.yaml
 Guest-Components --> versions.yaml
 Trustee --> versions.yaml
 Kata --> versions.yaml
@@ -47,7 +48,8 @@ flowchart LR
 Starting with v0.9.0 the release process no longer involves centralized dependency management.
 In other words, when doing a CoCo release, we don't push the most recent versions of the subprojects
 into Kata and enclave-cc. Instead, dependencies should be updated during the normal process of development.
-Releases of most subprojects are now decoupled from releases of the CoCo project.
+After the release, we typically cut a release of the subprojects that reflects whatever commit was used
+in the Kata release.
 ## The Steps
@@ -72,13 +74,9 @@ Identify/create the bundles that we will release for Kata and enclave-cc.
 If you absolutely cannot use a Kata release,
 you can consider releasing one of these bundles.
-- [ ] 3. :eyes: **Create a peer pods release**
-Create a peer pods release based on the Kata release, by following the [documented flow](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md).
-### Test Release with Operator
-- [ ] 4. :eyes: **Check operator pre-installation and open PR if needed**
+### Update the Operator
+- [ ] 3. :eyes: **Check operator pre-installation and open PR if needed**
 The operator uses a pre-install container to setup the node.
 Check that the container matches the dependencies used in Kata
@@ -88,7 +86,7 @@ Identify/create the bundles that we will release for Kata and enclave-cc.
 * Compare the `nydus-snapshotter` version in Kata [versions.yaml](https://github.com/kata-containers/kata-containers/blob/main/versions.yaml) (search for `nydus-snapshotter` and check its `version` field) with the [Makefile](https://github.com/confidential-containers/operator/blob/main/install/pre-install-payload/Makefile) (check the `NYDUS_SNAPSHOTTER_VERSION` value) for the operator pre-install container.
 * **If they do not match, stop and open a PR now. In the PR, update the operator's Makefile to match the version used in kata. After the PR is merged, continue.**
-- [ ] 5. :wrench: **Open a PR to the operator to update the release artifacts**
+- [ ] 4. :wrench: **Open a PR to the operator to update the release artifacts**
 Update the operator to use the payloads identified in steps 1, 2, 3, and 4.
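The version comparison in the step above can be scripted. A rough sketch follows; the file contents are made-up samples, and the real `versions.yaml` nesting may differ slightly:

```shell
# Create sample inputs mimicking the (assumed) layouts of Kata's versions.yaml
# and the operator's pre-install Makefile.
cd "$(mktemp -d)"
cat > versions.yaml <<'EOF'
externals:
  nydus-snapshotter:
    url: "https://github.com/containerd/nydus-snapshotter"
    version: "v0.13.14"
EOF
cat > Makefile <<'EOF'
NYDUS_SNAPSHOTTER_VERSION = v0.13.14
EOF

# Pull the version that follows the nydus-snapshotter entry in versions.yaml.
kata_ver=$(grep -A3 'nydus-snapshotter:' versions.yaml | grep 'version:' | head -1 | tr -d ' "' | cut -d: -f2)
# Pull NYDUS_SNAPSHOTTER_VERSION from the operator Makefile.
op_ver=$(grep '^NYDUS_SNAPSHOTTER_VERSION' Makefile | cut -d= -f2 | tr -d ' ')

if [ "$kata_ver" = "$op_ver" ]; then
  echo "versions match: $kata_ver"
else
  echo "MISMATCH: kata=$kata_ver operator=$op_ver (open a PR to the operator)"
fi
```

In practice you would fetch both files from the `main` branches of the two repos instead of writing samples.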
@@ -114,13 +112,39 @@ Identify/create the bundles that we will release for Kata and enclave-cc.
 ### Final Touches
-- [ ] 6. :trophy: **Cut an operator release using the GitHub release tool**
+- [ ] 5. :trophy: **Cut an operator release using the GitHub release tool**
+- [ ] 6. :wrench: **Create a peer pods release**
+Create a peer pods release based on the Kata release, by following the [documented flow](https://github.com/confidential-containers/cloud-api-adaptor/blob/main/docs/Release-Process.md).
 - [ ] 7. :green_book: **Make sure to update the [release notes](https://github.com/confidential-containers/confidential-containers/tree/main/releases) and tag/release the confidential-containers repo using the GitHub release tool.**
-- [ ] 8. :hammer: **Poke Wainer Moschetta (@wainersm) to update the release to the OperatorHub. Find the documented flow [here](https://github.com/confidential-containers/operator/blob/main/docs/OPERATOR_HUB.md).**
 ### Post-release
-- [ ] 9. :wrench: **Open a PR to the operator to go back to latest payloads after release**
+- [ ] 8. :wrench: **Open a PR to the operator to go back to latest payloads after release**
 After the release, the operator's payloads need to go back to what they were (e.g. using "latest" instead of a specific commit sha). As an example, the v0.9.0-alpha1 release applied [these changes](https://github.com/confidential-containers/operator/pull/389/files). You should use `git revert -s` for this.
+- [ ] 9. :pushpin: **Tag the version of guest-components used in the release**.
+Go look at [versions.yaml](https://github.com/kata-containers/kata-containers/blob/main/versions.yaml)
+in Kata Containers and find the version of the guest-components that was used in the Kata release.
+Tag this commit in guest-components with the latest version of guest components.
+Note that the version of guest-components might not be the same as the version of CoCo.
+- [ ] 10. :scissors: **Cut a release of guest-components using GitHub release tool**
+- [ ] 11. :pushpin: **Tag the version of Trustee used in the release**
+Follow the same process as step 9 but for Trustee.
+- [ ] 12. :scissors: **Cut a release of Trustee using GitHub release tool**
+- [ ] 13. :wrench: **Tag the Trustee release images**
+Use the Trustee release helper script to push the CI images corresponding to the released hash
+as the release images.
+- [ ] 14. :pushpin: **Tag the latest version of the website for the release**
+Make sure the website is up-to-date for the latest release, and then tag the repo.
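The post-release git mechanics (the `git revert -s` of the payload-pinning PR, and tagging the exact commit a release consumed) can be sketched with a throwaway local repo. The tag name, file name, and commit messages below are hypothetical:

```shell
# Local stand-in for the operator repo: pin payloads, "release", then revert.
cd "$(mktemp -d)" && git init -q demo && cd demo
git config user.name "Release Manager" && git config user.email rel@example.com

echo "payloadImage: latest" > ccruntime.yaml
git add ccruntime.yaml && git commit -qm "use latest payloads"

sed -i 's/latest/v0.99.0/' ccruntime.yaml
git commit -aqm "operator: pin payloads for release"
release_sha=$(git rev-parse HEAD)

# Going back to latest payloads: revert the pinning commit with a
# signed-off revert (-s adds the Signed-off-by trailer).
git revert -s --no-edit HEAD
grep latest ccruntime.yaml                      # back to "payloadImage: latest"
git log -1 --format=%B | grep '^Signed-off-by'  # trailer added by -s

# Tagging the commit a release consumed (sha taken from Kata's versions.yaml
# in the real flow): tag the sha directly, not HEAD.
git tag v0.99.0 "$release_sha"
git tag --points-at "$release_sha"              # prints: v0.99.0
```

Pushing the tag (`git push origin v0.99.0`) then makes it available as the base for a GitHub release.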

View File

@@ -78,6 +78,8 @@ with the community as early as possible. Consider making an `RFC` issue
 that explains the changes. You might also try to break large contributions
 into smaller steps.
+Any new feature must be accompanied by new unit tests.
 ### Making a Pull Request
 If you aren't familiar with Git or the GitHub PR workflow, take a look at [this section](https://github.com/kata-containers/community/blob/main/CONTRIBUTING.md#github-workflow)

View File

@@ -7,11 +7,11 @@ bpradipt, Pradipta Banerjee, Redhat
 peterzcst, Peter Zhu, Intel
 mythi, Mikko Ylinen, Intel
 magowan, James Magowan, IBM
-fitzthum, Tobin Feldman-Fitzthum, IBM
 jiazhang0, Zhang Jia, Alibaba
 jiangliu, Jiang Liu, Alibaba
 ryansavino, Ryan Savino, AMD
 sameo, Samuel Ortiz, Rivos
 zvonkok, Zvonko Kaiser, NVIDIA
+fitzthum, Tobin Feldman-Fitzthum, NVIDIA
 vbatts, Vincent Batts, Microsoft
 danmihai1, Dan Mihai, Microsoft

View File

@@ -87,11 +87,11 @@ The current members of the SC are:
 * Ryan Savino (@ryansavino) - AMD
 * Jiang Liu (@jiangliu) and Jia Zhang (@jiazhang0) - Alibaba
-* James Magowan (@magowan) and Tobin Feldman-Fitzthum (@fitzthum) - IBM
+* James Magowan (@magowan) and Nina Goradia (@ninag) - IBM
 * Peter Zhu (@peterzcst) and Mikko Ylinen (@mythi) - Intel
 * Pradipta Banerjee (@bpradipt) and Ariel Adam (@ariel-adam) - Red Hat
 * Samuel Ortiz (@sameo) - Rivos
-* Zvonko Kaiser (@zvonkok) - NVIDIA
+* Zvonko Kaiser (@zvonkok) and Tobin Feldman-Fitzthum (@fitzthum) - NVIDIA
 * Vincent Batts (@vbatts) and Dan Mihai (@danmihai1) - Microsoft
 ### Emeritus Members

File diff suppressed because one or more lines are too long

(Deleted binary image, 20 KiB)

View File

@@ -1,6 +0,0 @@
# Threat Vectors/Profiles
Links to further documentation detailing specific threats and how Confidential Containers uses
the trust concepts described in the context of the [Trust Model](./trust_model.md) will be added here.
Current TODO List for Threats to be covered is tracked under Issues [#2](https://github.com/confidential-containers/documentation/issues/29)

View File

@@ -1,115 +0,0 @@
# Trust Model for Confidential Containers
A clear definition of trust for the confidential containers project is needed to ensure the
components and architecture deliver the security principles expected for cloud native
confidential computing. It provides the solid foundations and unifying security principles
against which we can assess architecture and implementation ideas and discussions.
## Trust Model Definition
The [Trust Modeling for Security Architecture Development article](https://www.informit.com/articles/article.aspx?p=31546)
defines Trust Modeling as:
> A trust model identifies the specific mechanisms that are necessary to respond to a specific
> threat profile.
> A trust model must include implicit or explicit validation of an entity's identity or the
> characteristics necessary for a particular event or transaction to occur.
## Trust Boundary
The trust model also helps determine the location and direction of the trust boundaries where a
[trust boundary](https://en.wikipedia.org/wiki/Trust_boundary) describes a location where
program data or execution changes its level of "trust", or where two principals with different
capabilities exchange data or commands. Specific to Confidential Containers is the trust
boundary that corresponds to the boundary of the Trusted Execution Environment (TEE). The TEE
side of the trust boundary will be hardened to prevent the violation of the trust
boundary.
## Required Documentation
In order to describe and understand particular threats we need to establish trust boundaries and
trust models relating to the key aspects, components and actors involved in Cloud Native
Confidential Computing. We explore trust from several orthogonal angles, considering cloud
native approaches when they use an underlying TEE technology and identifying where care is
needed to preserve the value of using a TEE.
### Trust Model Considerations
- [Personas](./trust_model_personas.md)
Further documentation will highlight specific [threat vectors](./threats_overview.md) in detail,
considering risk, impact, mitigation, etc. as the project progresses. The Security Assurance
section (Page 31) of the Cloud Native Computing Foundation (CNCF)
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)
will guide this more detailed threat vector effort.
### Related Prior Effort
Confidential Containers brings confidential computing into a cloud native context and should
therefore refer to and build on trust and security models already defined.
For example:
- Confidential Computing Consortium (CCC) published
"[A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)"
section 5 of which defines the threat model for confidential computing.
- CNCF Security Technical Advisory Group published
"[Cloud Native Security Whitepaper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf)"
- Kubernetes provides documentation:
"[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)"
- Open Web Application Security Project -
"[Docker Security Threat Modeling](https://github.com/OWASP/Docker-Security/blob/main/001%20-%20Threats.md)"
The common goal of the Confidential Containers project and confidential computing is to reduce
the ability of unauthorised parties to access data and code inside TEEs sufficiently that this
path is not an economically or logically viable attack during execution (Section 5.1, Goal, of
the CCC publication
[A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)).
This means our trust and threat modelling should
- Focus on which aspects of code and data have integrity and/or confidentiality protections.
- Focus on enhancing existing Cloud Native models in the context of exploiting TEEs.
- Consider existing Cloud Native technologies and the role they can play for confidential containers.
- Consider additional technologies to fulfil a role in Cloud Native exploitation of TEEs.
## Illustration
The following diagram shows which components in a Confidential Containers setup
are part of the TEE (green boxes labeled TEE). The hardware and guest work in
tandem to establish a TEE for the pod, which provides the isolation and
integrity protection for data in use.
![Threat model](./images/coco-threat-model.png)
Not depicted: Process-based isolation from the enclave-cc runtime class. That isolation model further removes the guest operating system from the trust boundary. See the enclave-cc sub-project for more details:
https://github.com/confidential-containers/enclave-cc/
Untrusted components include:
1. The host operating system, including its hypervisor (KVM)
2. Other Cloud Provider host software beyond the host OS and hypervisor
3. Other virtual machines (and their processes) resident on the same host
4. Any other processes on the host machine (including the Kubernetes control plane)
## Out of Scope
The following items are considered out-of-scope for the trust/threat modelling within confidential
containers:
- Vulnerabilities within the application/code which has been requested to run inside a TEE.
- Availability part of the Confidentiality/Integrity/Availability in CIA Triad.
- Software TEEs. At this time we are focused on hardware TEEs.
- Certain security guarantees are defined by the underlying TEE and these
may vary between TEEs and generations of the same TEE. We take these guarantees at face value
and will only highlight them where they become relevant to the trust model or threats we
consider.
## Summary
In practice, those deploying workloads into TEE environments may have varying levels of trust
in the personas who have privileges regarding orchestration or hosting the workload. This trust
may be based on factors such as the relationship with the owner or operator of the host, the
software and hardware it comprises, and the likelihood of physical, software, or social
engineering compromise.
Confidential Containers has a specific focus on preventing potential security threats at
the TEE boundary, and on ensuring that privileges which the cloud native environment accepts as
crossing the boundary are mitigated against threats within the boundary. We cannot allow the
security of the TEE to be under the control of operations outside the TEE or of areas not trusted
by the TEE.

View File

@@ -1,199 +0,0 @@
# Trust Model Considerations - Personas
## Personas
Otherwise referred to as actors or agents, these are individuals or groups capable of
carrying out a particular threat.
In identifying personas we consider:
- The Runtime Environment, Figure 5, Page 19 of CNCF
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf).
This highlights three layers, Cloud/Environment, Workload Orchestration, Application.
- The Kubernetes
[Overview of Cloud Native Security](https://kubernetes.io/docs/concepts/security/overview/)
identifies the 4C's of Cloud Native
Security as Cloud, Cluster, Container and Code. However, for confidential
containers, data rather than code is the core concern.
- The Confidential Computing Consortium paper
[A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)
defines Confidential Computing as the protection of data in use by performing computations in a
hardware-based Trusted Execution Environment (TEE).
In considering personas we recognise that a trust boundary exists between each persona and we
explore how the least privilege principle (as described on Page 40 of the
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf))
should apply to any actions which cross these boundaries.
Confidential containers can provide enhancements to ensure that the expected code/containers
are the only code that can operate over the data. However, any vulnerabilities within this code
are not mitigated by using confidential containers. The Cloud Native Security Whitepaper
details lifecycle aspects that relate to the security of the code being placed into containers,
such as static/dynamic analysis, security tests, code review, etc., which must still be followed.
![Personas Model](./images/persona_model.svg)
Any of these personas could attempt to perform malicious actions:
### Infrastructure Operator
This persona has privileges within the Cloud Infrastructure which includes the hardware and
firmware used to provide compute, network and storage to the Cloud Native solution.
They are responsible for availability of infrastructure used by the cloud native environment.
- Has access to the physical hardware.
- Has access to the processes involved in the deployment of compute/storage/memory used by any
orchestration components and by the workload.
- Has control over TEE hardware availability/type.
- Is responsible for applying firmware updates to infrastructure, including the TEE technology.
Example: Cloud Service Provider (CSP), Infrastructure as a Service (IaaS), Site Reliability
Engineer (SRE), etc.
### Orchestration Operator
This persona has privileges within the Orchestration/Cluster.
They are responsible for deploying a solution into a particular cloud native environment and
managing the orchestration environment.
For a managed cluster this would also include the administration of the cluster control plane.
- Control availability of service.
- Control webhooks and deployment of workloads.
- Control availability of cluster resources (data/networking/storage) and cluster
services (Logging/Monitoring/Load Balancing) for the workloads.
- Control the deployment of runtime artifacts initially required by the TEE during
initialisation. These boot images, once initialised, will receive the workload.
Example: A Kubernetes administrator responsible for deploying pods to a cluster and
maintaining the cluster.
### Workload Provider
This persona designs and creates the orchestration objects comprising the solution (e.g.
Kubernetes pod descriptions, etc.). These objects reference containers published by Container Image Providers.
In some cases the Workload and Container Image Providers may be the same entity.
The solution defined is intended to provide the Application or Workload which in turn provides
value to the data owners (customers and clients).
The workload provider and data owner could be part of the same company/organisation, but
following the least privilege principle the workload provider should not be able to view or
manipulate end user data without informed consent.
- Needs to prove aspects of compliance to the customer.
- Defines what the solution requires in order to run and maintain compliance (resources, utility
containers/services, storage).
- Chooses the method of verifying the container images (from those supported by the Container Image
Provider) and obtains artifacts needed to allow verification to be completed within
the TEE.
- Provides the boot images initially required by the TEE during
initialisation, or designates a trusted party to do so.
- Provides the attestation verification service, or designates a trusted party to provide the
attestation verification service.
Example: 3rd party software vendor, cloud solution provider
### Container Image Provider
This persona is responsible for the part of the supply chain that builds container images and
provides them for use by the solution. Since a workload can
be composed of multiple containers, there may be multiple container image providers; some
will be closely connected to the workload provider (business logic containers), others more
independent of the workload provider (sidecar containers). The container image provider is expected
to use a mechanism that allows the provenance of container images to be established when a
workload pulls in these images at deployment time. This can take the form of signing or encrypting
the container images.
- Builds container images.
- Owner of business logic containers. These may contain proprietary algorithms, models or secrets.
- Signs or encrypts the images.
- Defines the methods available for verifying the container images to be used.
- Publishes the signature verification key (public key).
- Provides any decryption keys through a secure channel (generally to a key management system
controlled by a Key Broker Service).
- Provides other required verification artifacts (secure channel may be considered).
- Protects the keys used to sign or encrypt the container images.
It is recognised that hybrid options exist surrounding workload provider and container provider.
For example the workload provider may choose to protect their supply chain by
signing/encrypting their own container images after following the build patterns already
established by the container image provider.
Example: Sidecar Container Provider
### Data Owner
Owner of data used and manipulated by the application.
- Concerned with visibility and integrity of their data.
- Concerned with compliance and protection of their data.
- Uses and shares data with solutions.
- Wishes to ensure no visibility or manipulation of data is possible by
Orchestration Operator or Cloud Operator personas.
## Discussion
### Data Owner vs. All Other Personas
The key trust relationship here is between the Data Owner and the other personas. The Data Owner
trusts the code, in the form of container images chosen by the Workload Provider, to operate across
their data; however, they do not trust the Orchestration Operator or Cloud Operator with their
data and wish to ensure data confidentiality.
### Workload Provider vs. Container Image Provider
The Workload Provider is free to choose Container Image Providers that will provide not only
the images they need but also support the verification method they require. A key aspect to this
relationship is the Workload Provider applying Supply Chain
Security practices (as described on Page 42 of the
[Cloud Native Security Paper](https://github.com/cncf/tag-security/blob/3e57e7c472f7053c693292281419ab926155fe2d/community/resources/security-whitepaper/v2/CNCF_cloud-native-security-whitepaper-May2022-v2.pdf))
when considering Container Image Providers. So the Container Image Provider must support the
Workload Provider's ability to provide assurance to the Data Owner regarding the integrity of the code.
With Confidential Containers we match the TEE boundary to the most restrictive boundary, which is
between the Workload Provider and the Orchestration Operator.
### Orchestration Operator vs. Infrastructure Operator
Outside the TEE we distinguish between the Orchestration Operator and the Infrastructure
Operator due to the nature of how they can impact the TEE and the concerns of the Workload
Provider and Data Owner. Direct threats exist from the Orchestration Operator, as some
orchestration actions must be permitted to cross the TEE boundary, otherwise orchestration
cannot occur. A key goal is to *deprivilege orchestration* and restrict the
Orchestration Operator's privileges across the boundary. However, indirect threats exist
from the Infrastructure Operator, who would not be permitted to exercise orchestration APIs but
could exploit low-level hardware or firmware capabilities to access or impact the contents
of a TEE.
### Workload Provider vs. Data Owner
Inside the TEE we need to be able to distinguish between the Workload Provider and Data Owner, in recognition that
the same workload (or parts, such as logging/monitoring etc.) can be re-used with different data
sets to provide a service/solution. In the case of a bespoke workload, the Workload Provider and
Data Owner may be the same persona. As mentioned, the Data Owner must have a level of
trust in the Workload Provider to use and expose the data provided in an expected and approved
manner. Page 10 of [A Technical Analysis of Confidential Computing](https://confidentialcomputing.io/wp-content/uploads/sites/10/2023/03/CCC-A-Technical-Analysis-of-Confidential-Computing-v1.3_unlocked.pdf)
suggests some approaches to establish trust between them.
The TEE boundary allows the introduction of secrets, but just as we recognised that the
TEE does not provide protection from code vulnerabilities, we also recognise that a TEE cannot
enforce complete distrust between Workload Provider and Data Owner. This means secrets within
the TEE are at risk from both the Workload Provider and the Data Owner, and keeping secrets
which protect the workload (container encryption etc.) separated from secrets which protect the
data (data encryption) is not achieved simply by using a TEE.
Recognising that Data Owner and Workload Provider are separate personas helps us to
identify threats to both data and workload independently and to recognise that any solution must
consider the potential independent nature of these personas.
Two examples of trust between Data Owner and Workload Provider are:
- AI models which are proprietary and protected require the workload to be encrypted and not
shared with the Data Owner. In this case secrets private to the Workload Provider are needed
to access the workload, while secrets required to access the data are provided by the Data Owner,
who trusts the workload/model without having direct access to how the workload functions.
The Data Owner completely trusts the workload and Workload Provider, whereas the Workload
Provider does not trust the Data Owner with the full details of their workload.
- The Data Owner verifies and approves certain versions of a workload; the workload provides the
data owner with secrets in order to fulfil this. These secrets are available in the TEE for
use by the Data Owner to verify the workload. Once this is achieved, the data owner will then provide
secrets and data into the TEE for use by the workload, in full confidence of what the workload
will do with their data. The Data Owner will independently verify versions of the workload and
will only trust specific versions of the workload with the data, whereas the Workload Provider
completely trusts the Data Owner.
### Data Owner vs. End User
We do not draw a distinction between data owner and end user, though we do recognise that in
some cases these may not be identical. For example, data may be provided to a workload to allow
analysis, with results made available to an end user. The original data is never provided
directly to the end user but the derived data is; in this case the data owner can be different
from the end user and may wish to protect this data from the end user.