If someone gains the ability to create static pods, they might try to use that
ability to run code that gets access to the resources associated with an
existing claim that was previously allocated for some other pod. Such an
attempt already fails: the claim status tracks which pods are allowed to
use the claim, the static pod is not in that list, the node is not authorized
to add it, and the kubelet checks that list before starting the pod in
195803cde5/pkg/kubelet/cm/dra/manager.go (L218-L222).
Even if the pod were started, DRA drivers typically manage node-local resources
which can already be accessed via such an attack without involving DRA. DRA
drivers which manage non-node-local resources have to consider access by a
compromised node as part of their threat model.
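For illustration, the reservation check amounts to looking up the pod's UID in the claim's `status.reservedFor` list. The types below are simplified stand-ins for the resource.k8s.io API, not the actual kubelet code:

```go
package dra

import "k8s.io/apimachinery/pkg/types"

// consumerRef is a simplified stand-in for the resource.k8s.io
// ResourceClaimConsumerReference type (assumption, not the real API struct).
type consumerRef struct {
	Resource string
	UID      types.UID
}

// podMayUseClaim reports whether a pod's UID appears in the claim's
// status.reservedFor list; a static pod injected on the node is never in
// that list, so the kubelet refuses to prepare the claim for it.
func podMayUseClaim(reservedFor []consumerRef, podUID types.UID) bool {
	for _, ref := range reservedFor {
		if ref.Resource == "pods" && ref.UID == podUID {
			return true
		}
	}
	return false
}
```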
Nonetheless, it is better not to accept static pods which reference
ResourceClaims or ResourceClaimTemplates in the first place, because there
is no valid use case for it.
This is done at different levels for defense in depth:
- configuration validation in the kubelet
- admission checking of node restrictions
- API validation
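As a sketch of what the rejection looks like (function name and error message are illustrative, not the actual kubelet or apiserver code), it comes down to refusing any static pod whose spec references claims:

```go
package validation

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// rejectStaticPodClaims is an illustrative check, not the real validation
// code: a static (mirror) pod that references any ResourceClaim or
// ResourceClaimTemplate via spec.resourceClaims is rejected outright.
func rejectStaticPodClaims(pod *v1.Pod) error {
	if len(pod.Spec.ResourceClaims) > 0 {
		return fmt.Errorf("static pod %s/%s must not reference resourceclaims", pod.Namespace, pod.Name)
	}
	return nil
}
```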
Co-authored-by: Jordan Liggitt <liggitt@google.com>
Code changes by Jordan, with one small change (resourceClaims -> resourceclaims).
Unit tests by Patrick.
The conntrack reconciler keeps the conntrack table on each node consistent
with the desired state of Kubernetes UDP services.
A valid entry matches a service's ClusterIP, LoadBalancerIP, or ExternalIP plus the service port,
or any IP matching a NodePort, and has a reverse source IP matching an active endpoint for
that service. All other entries are deleted.
Services without endpoints and traffic not handled by kube-proxy are ignored.
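A minimal sketch of that predicate, using hypothetical local types rather than kube-proxy's real conntrack structures:

```go
package conntrack

// flow and serviceInfo are hypothetical stand-ins for kube-proxy's internal
// types; they carry just enough data to express the reconciler's rule.
type flow struct {
	DestIP   string // original destination IP of the conntrack entry
	DestPort int32
	SourceIP string // reverse source IP (the replying endpoint)
}

type serviceInfo struct {
	ClusterIP      string
	LoadBalancerIP string
	ExternalIP     string
	Port           int32
	NodePort       int32
	Endpoints      map[string]bool // active endpoint IPs
}

// keepEntry reports whether a UDP conntrack entry is still valid for the
// service: the destination must match one of the service IPs plus the
// service port, or any IP on the NodePort, and the reverse source must be
// an active endpoint. Entries failing this check are deleted.
func keepEntry(f flow, svc serviceInfo) bool {
	matchesServiceIP := (f.DestIP == svc.ClusterIP ||
		f.DestIP == svc.LoadBalancerIP ||
		f.DestIP == svc.ExternalIP) && f.DestPort == svc.Port
	matchesNodePort := svc.NodePort != 0 && f.DestPort == svc.NodePort
	return (matchesServiceIP || matchesNodePort) && svc.Endpoints[f.SourceIP]
}
```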
Co-authored-by: Daman Arora <aroradaman@gmail.com>
Before, containers with the PostStart sleep lifecycle hook would cause
nil pointer panics due to a typo in the field name being checked. This
commit fixes that.
The check also needs to be done on the oldPodSpec, rather than the
podSpec, so that existing workloads that use the zero value continue
to function as before.
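Roughly, the compatibility check looks like the sketch below (the helper name and its placement are assumptions, not the actual validation code):

```go
package validation

import v1 "k8s.io/api/core/v1"

// oldPodUsesZeroSleep reports whether the old pod spec already has a
// PostStart sleep hook with a zero duration; if so, the zero value remains
// allowed on update so existing workloads keep functioning.
func oldPodUsesZeroSleep(oldPodSpec *v1.PodSpec) bool {
	if oldPodSpec == nil {
		return false
	}
	for _, c := range oldPodSpec.Containers {
		if c.Lifecycle != nil &&
			c.Lifecycle.PostStart != nil &&
			c.Lifecycle.PostStart.Sleep != nil &&
			c.Lifecycle.PostStart.Sleep.Seconds == 0 {
			return true
		}
	}
	return false
}
```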
Added tests to check that if NodeRules, ClusterRoles, and NamespaceRoles
include `List`, they also include `Watch`.
Signed-off-by: Mitsuru Kariya <mitsuru.kariya@nttdata.com>
Starting from version 1.32, the client feature `WatchListClient` has been
set to `true` in `kube-controller-manager`.
(commit 06a15c5cf9)
As a result, when the `kube-controller-manager` executes the `List` method,
it uses `Watch` under the hood. However, some existing controller roles
include `List` but not `Watch`. Therefore, when processes using these
controller roles execute the `List` method, the `Watch` request is attempted
first, fails with a permission error, and the client falls back to a plain `List`.
This PR adds `Watch` to the controller roles that include `List` but do not
include `Watch`.
The affected roles are as follows (prefixed with `system:controller:`):
- `cronjob-controller`
- `endpoint-controller`
- `endpointslice-controller`
- `endpointslicemirroring-controller`
- `horizontal-pod-autoscaler`
- `node-controller`
- `pod-garbage-collector`
- `storage-version-migrator-controller`
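The shape of the change, illustrated with an assumed resource and verb set rather than the exact bootstrap-policy rules:

```go
package bootstrappolicy

import rbacv1 "k8s.io/api/rbac/v1"

// examplePodGCRule illustrates the shape of the fix: any rule that grants
// "list" on a resource also grants "watch", so the WatchListClient path does
// not hit a permission error and fall back. The resource and verb set here
// are assumptions, not the exact rules from the Kubernetes bootstrap policy.
var examplePodGCRule = rbacv1.PolicyRule{
	APIGroups: []string{""},
	Resources: []string{"pods"},
	Verbs:     []string{"list", "watch", "delete"},
}
```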
Signed-off-by: Mitsuru Kariya <mitsuru.kariya@nttdata.com>