If someone gains the ability to create static pods, they might try to
use that ability to run code that gets access to the resources
associated with an existing claim that was previously allocated for
some other pod. Such an attempt already fails: the claim status tracks
which pods are allowed to use the claim, the static pod is not in that
list, the node is not authorized to add it, and the kubelet checks that
list in 195803cde5/pkg/kubelet/cm/dra/manager.go (L218-L222) before
starting the pod.
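As a rough illustration of that check, here is a minimal sketch (not
the actual kubelet code; the API version and helper name are
assumptions) of letting a pod use a claim only if it appears in the
claim's status.reservedFor list:

```go
package dracheck

import (
	resourceapi "k8s.io/api/resource/v1beta1" // assumed API version
	"k8s.io/apimachinery/pkg/types"
)

// claimReservedForPod reports whether the given pod UID is listed in
// the claim's status.reservedFor, i.e. whether the claim has been
// reserved for this pod. The kubelet consults this list before
// allowing a pod to use the claim.
func claimReservedForPod(claim *resourceapi.ResourceClaim, podUID types.UID) bool {
	for _, consumer := range claim.Status.ReservedFor {
		if consumer.UID == podUID {
			return true
		}
	}
	return false
}
```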
Even if the pod were started, DRA drivers typically manage node-local resources
which can already be accessed via such an attack without involving DRA. DRA
drivers which manage non-node-local resources have to consider access by a
compromised node as part of their threat model.
Nonetheless, it is better not to accept static pods which reference
ResourceClaims or ResourceClaimTemplates in the first place, because
there is no valid use case for it.
This is done at different levels for defense in depth:
- configuration validation in the kubelet
- admission checking of node restrictions
- API validation
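A minimal sketch of what the kubelet-level part of this can look like
(hypothetical function name; the node-restriction and API-validation
layers are separate changes):

```go
package staticpod

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// validateNoResourceClaims is a hypothetical sketch of the kubelet-side
// configuration validation: a static pod must not reference resource
// claims (directly or via a claim template), because those can only be
// handled through the API server.
func validateNoResourceClaims(pod *v1.Pod) error {
	if len(pod.Spec.ResourceClaims) > 0 {
		return fmt.Errorf("static pods may not reference resourceclaims")
	}
	return nil
}
```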
Co-authored-by: Jordan Liggitt <liggitt@google.com>
Code changes by Jordan, with one small change (resourceClaims -> resourceclaims).
Unit tests by Patrick.
We don't build these tests for Windows, so let's remove this skip.
We should never have added that skip; we should have skipped the
entire suite on Windows.
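For reference, a sketch of one way to exclude the whole suite on
Windows with a build constraint (hypothetical file name; the actual
change may use a different mechanism):

```go
//go:build !windows

// userns_suite_test.go (hypothetical): with this build constraint the
// whole test file is left out of Windows builds, so no per-test skip
// is needed.
package userns_test
```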
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
This reverts commit 8597b343fa.
I wrote in the Kubernetes documentation:
In practice this means you need at least Linux 6.3, as tmpfs started
supporting idmap mounts in that version. This is usually needed as
several Kubernetes features use tmpfs (the service account token that is
mounted by default uses a tmpfs, Secrets use a tmpfs, etc.)
The check is wrong for several reasons:
* Pods can use userns before 6.3; they just need to be careful not
to use a tmpfs (like the service account token volume). Most users
will probably need 6.3, but it is possible to use earlier kernel
versions: 5.19 probably works fine, and with improvements in the
runtime 5.12 can probably be supported too.
* Several distros backport changes, and the recommended way is
usually to try the syscall instead of testing kernel versions (see
the sketch below). I expect support for a simple filesystem like
tmpfs will be backported in several distros, but with this check in
place that can generate confusion.
* Today a clear error is shown when the pod is created, so it is
unlikely that a user will not understand why it fails.
* Returning an error when utilkernel fails to detect the running
kernel version is also too strict (we only log a warning even when
the version is not the expected one).
* We are switching to enabled by default, which will log a warning
for every user running on a kernel older than 6.3, adding noise to
the logs.
For these reasons, let's just remove the hardcoded kernel version check.
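For illustration only, a minimal sketch of the "try the syscall"
approach mentioned above (this commit simply removes the version
check; the helper name is made up, and detecting idmap support for a
specific filesystem such as tmpfs would still require actually
attempting an idmapped mount):

```go
package userns

import (
	"errors"

	"golang.org/x/sys/unix"
)

// kernelSupportsMountSetattr probes for the mount_setattr(2) syscall by
// calling it with deliberately invalid arguments: ENOSYS means the
// syscall does not exist, any other error (EBADF, EINVAL, ...) means
// the kernel implements it. No kernel version parsing involved.
func kernelSupportsMountSetattr() bool {
	err := unix.MountSetattr(-1, "", 0, &unix.MountAttr{})
	return !errors.Is(err, unix.ENOSYS)
}
```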
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
This reverts commit fd06dcd604.
The revert is not meant to make this a hard error again; it is needed
to cleanly revert the commit that added this as an error in the first
place.
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
This makes it clear that the error is due to the user namespace
configuration. Otherwise the returned error looks too generic and its
cause is not clear.
Before this PR, the error was:
Warning FailedCreatePodSandBox 1s kubelet Failed to create pod sandbox: the handler "" is not known
Now it is:
Warning FailedCreatePodSandBox 1s kubelet Failed to create pod sandbox: runtime does not support user namespaces
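A minimal sketch of the shape of this change (hypothetical names; the
real check goes through the CRI, see the following commits):

```go
package userns

import (
	"context"
	"errors"
	"fmt"
)

// runtimeQuerier stands in for however the kubelet asks the runtime
// about user namespace support.
type runtimeQuerier interface {
	SupportsUserNamespaces(ctx context.Context, runtimeHandler string) (bool, error)
}

// checkUserNamespaces turns a negative or failed runtime check into an
// error that explicitly names user namespaces, so the event shown to
// the user is no longer the generic "handler not known" message.
func checkUserNamespaces(ctx context.Context, rt runtimeQuerier, handler string) error {
	supported, err := rt.SupportsUserNamespaces(ctx, handler)
	if err != nil {
		return fmt.Errorf("checking user namespace support: %w", err)
	}
	if !supported {
		return errors.New("runtime does not support user namespaces")
	}
	return nil
}
```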
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
When using an old runtime like containerd 1.7, this message is not
implemented and what we get back is an empty, non-nil slice. Let's
check the length of the slice instead.
While we are at it, let's just return false and no error. In the
following commits we will wrap the error, and we didn't find any more
information worth adding here.
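A minimal sketch of the resulting check, with approximated types (the
real code reads the CRI runtime handler features):

```go
package userns

// runtimeHandler approximates the CRI RuntimeHandler message: each
// handler reports whether it supports user namespaces.
type runtimeHandler struct {
	Name           string
	SupportsUserns bool
}

// handlersSupportUserns treats "no handlers reported" (containerd 1.7
// returns an empty, non-nil slice) as "not supported" rather than as
// an error, and otherwise looks up the requested handler.
func handlersSupportUserns(handlers []runtimeHandler, handlerName string) (bool, error) {
	if len(handlers) == 0 {
		return false, nil
	}
	for _, h := range handlers {
		if h.Name == handlerName {
			return h.SupportsUserns, nil
		}
	}
	return false, nil
}
```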
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>