If the node authorizer is active, RBAC rules are not needed. But if it is
disabled, the kubelet needs to get permission through RBAC. In contrast to
the node authorizer code, which is a bit more flexible and not directly tied
to the current kubelet implementation (it allows list+delete instead of just
deletecollection), the RBAC entry covers only what the current kubelet
actually does, because an RBAC rule is easier to change later.
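For illustration, a minimal sketch of such a narrow entry using the
k8s.io/api/rbac/v1 types; the exact rule in the bootstrap policy may differ:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Only the verb that the current kubelet implementation uses, rather
	// than the broader list+delete that the node authorizer would permit.
	rule := rbacv1.PolicyRule{
		APIGroups: []string{"resource.k8s.io"},
		Resources: []string{"resourceslices"},
		Verbs:     []string{"deletecollection"},
	}
	fmt.Printf("kubelet rule: %+v\n", rule)
}
```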
In practice, the kubelet plugin of a DRA driver is meant to be deployed as a
DaemonSet with a service account that limits its
permissions. https://kubernetes.io/docs/reference/access-authn-authz/service-accounts-admin/#additional-metadata-in-pod-bound-tokens
ensures that the node name is bound to the pod, which can then be used
in a validating admission policy (VAP) to ensure that the operations are
limited to that node.
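A rough sketch of what such a policy could look like, written with the
k8s.io/api/admissionregistration/v1 types; the policy name, matched
operations, the CEL expression, and the nodeName field path are assumptions
for illustration (a ValidatingAdmissionPolicyBinding would also be needed):

```go
package main

import (
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admissionregistrationv1.Fail
	policy := admissionregistrationv1.ValidatingAdmissionPolicy{
		ObjectMeta: metav1.ObjectMeta{Name: "dra-driver-node-restriction"},
		Spec: admissionregistrationv1.ValidatingAdmissionPolicySpec{
			FailurePolicy: &fail,
			MatchConstraints: &admissionregistrationv1.MatchResources{
				ResourceRules: []admissionregistrationv1.NamedRuleWithOperations{{
					RuleWithOperations: admissionregistrationv1.RuleWithOperations{
						Operations: []admissionregistrationv1.OperationType{
							admissionregistrationv1.Create,
							admissionregistrationv1.Update,
							admissionregistrationv1.Delete,
						},
						Rule: admissionregistrationv1.Rule{
							APIGroups:   []string{"resource.k8s.io"},
							APIVersions: []string{"*"},
							Resources:   []string{"resourceslices"},
						},
					},
				}},
			},
			Validations: []admissionregistrationv1.Validation{{
				// The extra key is the one added by pod-bound tokens; the
				// nodeName field path depends on the resource.k8s.io version.
				Expression: `request.userInfo.extra["authentication.kubernetes.io/node-name"][0] == object.spec.nodeName`,
			}},
		},
	}
	fmt.Println("policy:", policy.Name)
}
```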
In E2E testing, we emulate that via impersonation. This ensures that the plugin
does not accidentally depend on additional permissions.
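A minimal sketch of that impersonation setup with client-go; the namespace
and service account name are placeholders, not the actual test values:

```go
package e2e

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// pluginClient returns a clientset that impersonates the plugin's service
// account on one node, so a test fails if the plugin code relies on broader
// permissions than the deployed service account would have.
func pluginClient(base *rest.Config, nodeName string) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.Impersonate = rest.ImpersonationConfig{
		UserName: "system:serviceaccount:dra-test:plugin-service-account",
		Extra: map[string][]string{
			// Same extra datum that a pod-bound token would carry.
			"authentication.kubernetes.io/node-name": {nodeName},
		},
	}
	return kubernetes.NewForConfig(cfg)
}
```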
When a resource gets deleted during migration, the SVM SSA patch
calls are interpreted as a logical create request. Since the object
from storage is nil, the merged result is just a type meta object,
which lacks a name in the body. This fails when the API server
checks that the name from the request URL and the body are the same.
Note that a create request is something that the SVM controller should
never issue.
Once the UID is set on the patch, the API server will fail the
request at a slightly earlier point with an "uid mismatch" conflict
error, which the SVM controller can handle gracefully.
Setting the UID by itself is not sufficient. When a resource gets
deleted and recreated, if the RV is not set but the UID is, we would get
an immutable field validation error for attempting to update the
UID. To address this, we set the resource version on the SSA patch
as well. This causes that update request to also fail with a
conflict error.
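A sketch of the resulting pattern, assuming a dynamic client and
illustrative names (not the controller's actual code): the no-op SSA patch
pins both UID and resourceVersion, so a delete (and possible recreate)
during migration surfaces as a retriable conflict instead of a logical
create.

```go
package svm

import (
	"context"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
)

// migrateOne rewrites one object in storage via an empty apply patch that
// carries name, uid, and resourceVersion. Field manager name is illustrative.
func migrateOne(ctx context.Context, client dynamic.Interface, gvr schema.GroupVersionResource, obj *unstructured.Unstructured) error {
	patch := []byte(fmt.Sprintf(
		`{"apiVersion":%q,"kind":%q,"metadata":{"name":%q,"uid":%q,"resourceVersion":%q}}`,
		obj.GetAPIVersion(), obj.GetKind(), obj.GetName(), obj.GetUID(), obj.GetResourceVersion(),
	))
	_, err := client.Resource(gvr).Namespace(obj.GetNamespace()).Patch(
		ctx, obj.GetName(), types.ApplyPatchType, patch,
		metav1.PatchOptions{FieldManager: "storage-version-migrator"},
	)
	if apierrors.IsConflict(err) {
		return nil // object changed or went away; safe to skip or retry later
	}
	return err
}
```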
Added the create verb on all resources for the SVM controller RBAC, as
otherwise the API server would reject the request before it fails
with a conflict error.
The change addresses a host of other issues with the SVM controller:
1. Include failure message in SVM resource
2. Do not block forever on unsynced GC monitor
3. Do not immediately fail on GC monitor being missing, allow for
a grace period since discovery may be out of sync
4. Set higher QPS and burst to handle large migrations (see the sketch below)
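For item 4, a sketch of the kind of client tuning this means; the values
here are illustrative, not the ones chosen in the patch:

```go
package svm

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newMigrationClient bumps the client-side rate limits so large migrations
// are not throttled by the client-go defaults (QPS 5, burst 10).
func newMigrationClient(base *rest.Config) (*kubernetes.Clientset, error) {
	cfg := rest.CopyConfig(base)
	cfg.QPS = 40
	cfg.Burst = 80
	return kubernetes.NewForConfig(cfg)
}
```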
Test changes:
1. Clean up CRD webhook converter logs
2. Allow SVM tests to be run multiple times to make finding flakes easier
3. Create and delete CRs during CRD test to force out any flakes
4. Add a stress test with multiple parallel migrations
5. Enable RBAC on KAS
6. Run KCM directly to exercise wiring and RBAC
7. Better logs during CRD migration
8. Scan audit logs to confirm that the SVM controller never issues create requests
Signed-off-by: Monis Khan <mok@microsoft.com>
Adding the required Kubernetes API so that the kubelet can start using
it. This patch also adds the corresponding alpha feature gate as
outlined in KEP 4639.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
Validating that an endpoint is reachable from one part of the cluster
is not a sufficient condition to conclude that it will be reachable from
any node, because the service proxies on different nodes see the
EndpointSlice and Service information with different propagation delays.
A Kubernetes Node is required to have at least one IP address,
because those addresses are used in the Pod field HostIPs and, in some
cases, when pods use hostNetwork: true, as PodIPs.
Nodes whose hostname is an IP address are interpreted as IP
addresses, so it is possible that some nodes do not have any DNS name.
The feature gate AllowDNSOnlyNodeCSR allows users to opt in to
the old behavior.
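A sketch of how such a gate is typically declared and checked; the real
definition lives with the other Kubernetes feature gates and may differ in
detail:

```go
package kubeletcertificate

import (
	utilfeature "k8s.io/apiserver/pkg/util/feature"
	"k8s.io/component-base/featuregate"
)

// AllowDNSOnlyNodeCSR is the gate named above; declared here only for
// illustration of the opt-in check.
const AllowDNSOnlyNodeCSR featuregate.Feature = "AllowDNSOnlyNodeCSR"

func allowDNSOnlyNode() bool {
	// When enabled, nodes that have only a DNS name and no IP address
	// keep the old behavior.
	return utilfeature.DefaultFeatureGate.Enabled(AllowDNSOnlyNodeCSR)
}
```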
Change-Id: I094531d87246f1e7a5ef4fe57bd5d9840cb1375d
This is the second and final step towards making kubelet independent of the
resource.k8s.io API versioning because it now doesn't need to copy structs
defined by that API from the driver to the API server.
This is a first step towards making kubelet independent of the resource.k8s.io
API versioning because it now doesn't need to copy structs defined by that API
from the driver to the API server. The next step is removing the other
direction (reading ResourceClaim status and passing the resource handle to
drivers).
The drivers must get deployed so that they have their own connection to the API
server. Securing at least the writes via a validating admission policy should
be possible.
As before, the kubelet removes all ResourceSlices for its node at startup, then
DRA drivers recreate them if (and only if) they start up again. This ensures
that there are no orphaned ResourceSlices when a driver gets removed while the
kubelet was down.
While at it, logging gets cleaned up and updated to use structured, contextual
logging as much as possible. gRPC requests and streams now use a shared,
per-process request ID and streams also get logged.
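A simplified sketch of what such a client-side interceptor can look like
with klog contextual logging; the actual implementation differs:

```go
package plugin

import (
	"context"
	"sync/atomic"

	"google.golang.org/grpc"
	"k8s.io/klog/v2"
)

// requestID is shared by all gRPC calls in the process, matching the
// "shared, per-process request ID" described above.
var requestID atomic.Int64

// unaryLogger satisfies grpc.UnaryClientInterceptor and logs each call
// with its ID at high verbosity, passing the enriched logger downstream.
func unaryLogger(ctx context.Context, method string, req, reply interface{}, cc *grpc.ClientConn, invoker grpc.UnaryInvoker, opts ...grpc.CallOption) error {
	logger := klog.FromContext(ctx).WithValues("requestID", requestID.Add(1), "method", method)
	logger.V(5).Info("GRPC call", "request", req)
	err := invoker(klog.NewContext(ctx, logger), method, req, reply, cc, opts...)
	logger.V(5).Info("GRPC call done", "err", err)
	return err
}
```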
2e34e187c9 enabled the kubelet to do List and Watch
requests with the caveat that the kubelet should preferably use a field
selector (which it does). The same is now also needed for DeleteCollection,
because the kubelet will use it to clean up in one operation instead of
several.
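A sketch of that one-request cleanup; the typed client accessor and the
field selector name depend on the resource.k8s.io version in use, so treat
both as assumptions:

```go
package kubelet

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// wipeResourceSlices removes all ResourceSlices for this node in a single
// DeleteCollection call, restricted by field selector to the node's slices.
func wipeResourceSlices(ctx context.Context, client kubernetes.Interface, nodeName string) error {
	return client.ResourceV1beta1().ResourceSlices().DeleteCollection(
		ctx,
		metav1.DeleteOptions{},
		metav1.ListOptions{FieldSelector: "spec.nodeName=" + nodeName},
	)
}
```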