If something goes wrong during Azure cloud detection, trying to cast
the returned value results in the following panic, which gives no clue
as to what the actual error was.
```
panic: interface conversion: cloudprovider.Interface is nil, not *azure.Cloud
goroutine 1 [running]:
k8s.io/kubernetes/test/e2e/framework/providers/azure.newProvider()
test/e2e/framework/providers/azure/azure.go:50 +0x2b5
k8s.io/kubernetes/test/e2e/framework.SetupProviderConfig({0xc0007966b8, 0x5})
test/e2e/framework/provider.go:82 +0x1a6
```
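A minimal sketch of the safer pattern, using simplified stand-ins for
cloudprovider.Interface and *azure.Cloud rather than the actual patch:
propagate the detection error and use a comma-ok type assertion instead of
letting the interface conversion panic.
```
package main

import "fmt"

// Interface and Cloud stand in for cloudprovider.Interface and *azure.Cloud.
type Interface interface{ ProviderName() string }

type Cloud struct{}

func (c *Cloud) ProviderName() string { return "azure" }

// newProvider checks the error from detection and uses a comma-ok type
// assertion, so a failed detection surfaces as an error instead of a panic.
func newProvider(detected Interface, err error) (*Cloud, error) {
	if err != nil {
		return nil, fmt.Errorf("creating Azure cloud provider: %w", err)
	}
	cloud, ok := detected.(*Cloud)
	if !ok {
		return nil, fmt.Errorf("expected *Cloud, got %T", detected)
	}
	return cloud, nil
}

func main() {
	// Simulate a failed detection: the error is reported, nothing panics.
	if _, err := newProvider(nil, fmt.Errorf("missing credentials")); err != nil {
		fmt.Println("error instead of panic:", err)
	}
}
```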
As discussed during the KEP and code review of dynamic resource allocation for
Kubernetes 1.26, to increase security the kubelet should only get read access
to those ResourceClaim objects that are needed by a pod on its node.
This can be enforced easily by the node authorizer:
- the names of the objects are defined by the pod status
- there is no indirection as with volumes (pod -> pvc -> pv)
Normally the graph only gets updated while a pod is not scheduled yet.
Resource claim status changes can still happen after scheduling, so they
must also trigger an update.
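A rough sketch of the idea, with simplified stand-in types rather than the real
node authorizer graph code: the claim names a kubelet may read can be derived
directly from the pod, with no indirection, and turned into per-node edges.
```
package main

import "fmt"

// claimStatus stands in for the per-claim entry recorded on the pod.
type claimStatus struct {
	ResourceClaimName string // the generated or referenced claim object name
}

type pod struct {
	Name, Namespace, Node string
	ClaimStatuses         []claimStatus
}

// edgesForPod returns "claim -> node" relations; with them in the graph, a
// kubelet may only read claims that some pod on its node actually uses.
func edgesForPod(p pod) []string {
	var edges []string
	for _, cs := range p.ClaimStatuses {
		if cs.ResourceClaimName == "" {
			continue // claim not recorded yet, nothing to authorize
		}
		edges = append(edges, fmt.Sprintf("resourceclaim %s/%s -> node %s",
			p.Namespace, cs.ResourceClaimName, p.Node))
	}
	return edges
}

func main() {
	p := pod{
		Name: "pod-a", Namespace: "default", Node: "node-1",
		ClaimStatuses: []claimStatus{{ResourceClaimName: "pod-a-gpu"}},
	}
	for _, e := range edgesForPod(p) {
		fmt.Println(e)
	}
}
```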
Because it is redundant, has a bad name, and its HELP string was outdated.
Also note the intended retention period for request_concurrency_in_use.
Signed-off-by: Mike Spreitzer <mspreitz@us.ibm.com>
The recommendation and default in the controller helper code is to set
ReservedFor to the pod which triggered delayed allocation. However, this
is neither required nor enforced. Therefore we should also test the fallback
path where kube-scheduler itself adds the pod to ReservedFor.
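A minimal sketch of that fallback, using simplified stand-in types rather than
the real resource.k8s.io API objects: the scheduler appends the pod as a
consumer reference if the driver's controller helper has not already done so.
```
package main

import "fmt"

// consumerRef stands in for a ResourceClaim consumer reference.
type consumerRef struct {
	Resource string // e.g. "pods"
	Name     string
	UID      string
}

type resourceClaim struct {
	Name        string
	ReservedFor []consumerRef
}

// reserveForPod is the scheduler-side fallback: if the allocating driver did
// not already add the pod, append it to ReservedFor so the claim cannot be
// deallocated while the pod still needs it.
func reserveForPod(claim *resourceClaim, podName, podUID string) {
	for _, ref := range claim.ReservedFor {
		if ref.UID == podUID {
			return // already reserved by the controller helper
		}
	}
	claim.ReservedFor = append(claim.ReservedFor,
		consumerRef{Resource: "pods", Name: podName, UID: podUID})
}

func main() {
	claim := &resourceClaim{Name: "gpu-claim"}
	reserveForPod(claim, "pod-a", "1234")
	fmt.Printf("%+v\n", claim.ReservedFor)
}
```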
We don't need to remember that a pod got deleted when it had no resource claims,
because the code which checks the cached UIDs only does so for pods which have
resource claims.
This patch adds a few unit tests to assert that the webhook accessors are
only recreated when they are updated in the API server.
In order to test this, we had to make a few changes to the webhook manager
that allow us to mock the external `NewValidatingWebhookAccessor` function.
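A small sketch of the mocking technique, with made-up names that do not match
the real webhook manager exactly: the accessor constructor is called through a
settable function value, so a test can substitute a fake and count how many
accessors are actually (re)created.
```
package main

import "fmt"

type accessor struct{ uid, name string }

type manager struct {
	// newValidatingWebhookAccessor can be replaced in tests.
	newValidatingWebhookAccessor func(uid, name string) *accessor
	cache                        map[string]*accessor
}

func newManager() *manager {
	return &manager{
		newValidatingWebhookAccessor: func(uid, name string) *accessor {
			return &accessor{uid: uid, name: name}
		},
		cache: map[string]*accessor{},
	}
}

// get reuses a cached accessor for an unchanged webhook (same uid) and only
// creates a new one when the object changed in the API server.
func (m *manager) get(uid, name string) *accessor {
	if a, ok := m.cache[uid]; ok {
		return a
	}
	a := m.newValidatingWebhookAccessor(uid, name)
	m.cache[uid] = a
	return a
}

func main() {
	m := newManager()
	created := 0
	// The test swaps in a fake constructor that counts invocations.
	m.newValidatingWebhookAccessor = func(uid, name string) *accessor {
		created++
		return &accessor{uid: uid, name: name}
	}
	m.get("uid-1", "webhook-a")
	m.get("uid-1", "webhook-a") // served from cache, no recreation
	fmt.Println("accessors created:", created)
}
```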
Combining all prepare/unprepare operations for a pod enables plugins to
optimize their execution. Plugins can continue to use the v1beta2 API for now,
but should switch. The new API is designed so that plugins which want to work
on each claim one by one can do so and then report errors for each claim
separately, i.e. partial success is supported.
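A rough sketch of that shape, not the actual kubelet plugin API: one call
carries all claims of a pod, and the result reports success or failure per
claim, so a plugin can batch its work while still allowing partial success.
```
package main

import "fmt"

type claim struct{ Namespace, Name, UID string }

// claimResult reports the outcome for a single claim of the pod.
type claimResult struct {
	CDIDevices []string // devices prepared for this claim, if any
	Err        error
}

// prepareResourcesForPod prepares all claims of one pod in a single operation
// and returns a per-claim result keyed by claim UID.
func prepareResourcesForPod(claims []claim) map[string]claimResult {
	results := make(map[string]claimResult, len(claims))
	for _, c := range claims {
		// A real plugin could set up all claims together (one transaction,
		// one device reconfiguration); here each claim is handled in turn.
		if c.Name == "" {
			results[c.UID] = claimResult{Err: fmt.Errorf("claim has no name")}
			continue
		}
		results[c.UID] = claimResult{CDIDevices: []string{"vendor.example/device=" + c.Name}}
	}
	return results
}

func main() {
	out := prepareResourcesForPod([]claim{
		{Namespace: "default", Name: "gpu", UID: "u1"},
		{Namespace: "default", Name: "", UID: "u2"},
	})
	for uid, r := range out {
		fmt.Println(uid, r.CDIDevices, r.Err)
	}
}
```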
Change the name to make it compliant with Prometheus guidelines.
Calculate it on demand instead of periodically, to comply with Prometheus standards.
Replace the "endpoint" label with "server" to make it semantically consistent with the storage factory.
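A minimal illustration of the on-demand calculation, using the plain
client_golang API rather than Kubernetes' own metrics wrappers; the metric name
and backing value are made up for the example. A GaugeFunc evaluates its
callback at scrape time, so no periodic background updater is needed.
```
package main

import (
	"log"
	"net/http"
	"sync/atomic"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// inUse is whatever state the metric is derived from.
var inUse atomic.Int64

func main() {
	reg := prometheus.NewRegistry()
	// The callback runs on every scrape of /metrics, so the reported value is
	// always current instead of being refreshed on a timer.
	reg.MustRegister(prometheus.NewGaugeFunc(
		prometheus.GaugeOpts{
			Name: "example_requests_in_use",
			Help: "Number of requests currently in use, computed at scrape time.",
		},
		func() float64 { return float64(inUse.Load()) },
	))
	http.Handle("/metrics", promhttp.HandlerFor(reg, promhttp.HandlerOpts{}))
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```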