Since the filter status is not recorded during the preemption phase, there
is no way to tell why preemption failed, and those reasons can differ from
the status reported by the main scheduling process (the first failed plugin
hides the other failures in the chain). This change surfaces verbose
information based on the node statuses generated during pod preemption;
that information helps us diagnose issues that occur during pod preemption.
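A minimal sketch of the idea (illustrative names only, not the actual
scheduler change): fold the per-node filter statuses collected during
preemption into a single message, so every rejection reason is visible
rather than only the first failed plugin's:

    // Illustrative sketch; the map mirrors the scheduler's NodeToStatusMap
    // (node name -> reason the node was rejected as a preemption candidate).
    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    func preemptionFailureMessage(nodeToStatus map[string]string) string {
        // Group nodes by reason so repeated reasons are summarized once.
        byReason := map[string][]string{}
        for node, reason := range nodeToStatus {
            byReason[reason] = append(byReason[reason], node)
        }
        parts := make([]string, 0, len(byReason))
        for reason, nodes := range byReason {
            sort.Strings(nodes)
            parts = append(parts, fmt.Sprintf("%d node(s) %s: %s",
                len(nodes), reason, strings.Join(nodes, ", ")))
        }
        sort.Strings(parts) // deterministic output
        return "preemption failed: " + strings.Join(parts, "; ")
    }

    func main() {
        fmt.Println(preemptionFailureMessage(map[string]string{
            "node-a": "would violate a PodDisruptionBudget",
            "node-b": "cannot fit the pod even after preemption",
            "node-c": "cannot fit the pod even after preemption",
        }))
    }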
Signed-off-by: Dave Chen <dave.chen@arm.com>
The DNS autoscaler test was not correctly counting tainted but
schedulable nodes, meaning that the target count was not correct for
clusters with multiple control-plane nodes (which are often tainted
but schedulable).
The cluster-proportional-autoscaler ignores non-schedulable nodes, but
does not consider taints.
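A sketch of the counting rule the test has to mirror, assuming k8s.io/api
types (countSchedulableNodes is a hypothetical helper, not the test's real
code): count every node that is not marked unschedulable, and deliberately
ignore taints:

    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // countSchedulableNodes follows the cluster-proportional-autoscaler's
    // rule: skip nodes with Spec.Unschedulable set, but do not look at
    // taints, so tainted control-plane nodes still count toward the target.
    func countSchedulableNodes(nodes []v1.Node) int {
        n := 0
        for _, node := range nodes {
            if node.Spec.Unschedulable {
                continue // cordoned/unschedulable: ignored by the autoscaler
            }
            n++ // counted even if tainted
        }
        return n
    }

    func main() {
        nodes := []v1.Node{
            {Spec: v1.NodeSpec{Taints: []v1.Taint{{
                Key:    "node-role.kubernetes.io/control-plane",
                Effect: v1.TaintEffectNoSchedule,
            }}}}, // tainted but schedulable: counted
            {Spec: v1.NodeSpec{Unschedulable: true}}, // cordoned: skipped
            {},                                       // plain worker: counted
        }
        fmt.Println(countSchedulableNodes(nodes)) // 2
    }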
Migrate how the resource lock and leader election config are generated to the new approach, hiding the kubeClient. This also halves the kubeClient timeout, making it a useful value.
If the timeout is equal to RenewDeadline and we hit the client timeout on a request, there will be no retry: the RenewDeadline logic will cancel the context and leader election will be lost. So setting the timeout to a value greater than or equal to RenewDeadline is pointless.
Setting it to half of RenewDeadline is a heuristic that resolves this missing-retry problem without adding an additional parameter.
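A rough sketch of the heuristic, assuming client-go's rest.Config
(buildLockConfig and the deadline value are illustrative, not the real
wiring):

    package main

    import (
        "fmt"
        "time"

        "k8s.io/client-go/rest"
    )

    const renewDeadline = 10 * time.Second // illustrative value

    // buildLockConfig derives the rest.Config used for the resource-lock
    // client. A timeout equal to renewDeadline would never allow a retry,
    // because the deadline's context is cancelled at the same moment the
    // request gives up; half the deadline leaves room for one full retry.
    func buildLockConfig(base *rest.Config) *rest.Config {
        cfg := rest.CopyConfig(base)
        cfg.Timeout = renewDeadline / 2
        return cfg
    }

    func main() {
        cfg := buildLockConfig(&rest.Config{Host: "https://example:6443"})
        fmt.Println(cfg.Timeout) // 5s
    }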
Assume the following CRDs exist (ordered by priority, highest first):
- authentications.migration.k8s.io (K=Authentications, G=migration.k8s.io)
- authentications.metal3.io (K=Authentications, G=metal3.io)
- authentications.whereabouts.cni.cncf.io (K=Authentications, G=whereabouts.cni.cncf.io)
- authentications.snapshot.storage.k8s.io (K=Authentications, G=snapshot.storage.k8s.io)
If 'kubectl explain authentications' is run, the highest-priority definition (in this case authentications.migration.k8s.io)
is returned. If a user wants to explain the authentications CRD of a different group, the --api-version flag has to be set
as well to point to a specific group and version, e.g. --api-version=metal3.io/v1.
This PR allows omitting the --api-version flag and performs a prefix check to select a resource (e.g. a CRD) whose (resource, group) pair
fully prefixes the requested resource. E.g. running 'kubectl explain authentications.metal3.io' will return the
description of authentications.metal3.io. The same holds for the optional field path,
i.e. 'kubectl explain authentications.metal3.io.spec' will return the description of the spec field
of authentications.metal3.io. If no resource match is found, the search falls back
to selecting the highest-priority GVR that matches the resource.
When --api-version is set, no prefix matching is performed. This covers cases
such as 'kubectl explain authentications.metal3.io --api-version=authentications.metal3.io/v1', where
the field path coincides with a resource's fully specified name (here, accessing the .metal3.io field path
of authentications rather than the authentications.metal3.io resource itself). See the sketch below.
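A simplified sketch of this resolution order (plain string matching;
not kubectl's actual implementation):

    package main

    import (
        "fmt"
        "strings"
    )

    // crds is ordered by priority, highest first, matching the example above.
    var crds = []string{
        "authentications.migration.k8s.io",
        "authentications.metal3.io",
        "authentications.whereabouts.cni.cncf.io",
        "authentications.snapshot.storage.k8s.io",
    }

    // resolve returns the selected "resource.group" name and the remaining
    // field path for a 'kubectl explain' argument.
    func resolve(arg, apiVersion string) (crd, fieldPath string) {
        if apiVersion == "" {
            // Prefix check: a fully specified name wins, and anything
            // after it is treated as the field path.
            for _, c := range crds {
                if arg == c || strings.HasPrefix(arg, c+".") {
                    return c, strings.TrimPrefix(strings.TrimPrefix(arg, c), ".")
                }
            }
            // Fallback: treat the first segment as the resource and pick
            // the highest-priority group that carries it.
            resource, rest, _ := strings.Cut(arg, ".")
            for _, c := range crds {
                if strings.HasPrefix(c, resource+".") {
                    return c, rest
                }
            }
            return "", ""
        }
        // --api-version set: no prefix matching. The first segment is the
        // resource, everything after the first dot is the field path, and
        // the group comes from the flag.
        resource, rest, _ := strings.Cut(arg, ".")
        group := strings.SplitN(apiVersion, "/", 2)[0]
        return resource + "." + group, rest
    }

    func main() {
        fmt.Println(resolve("authentications", ""))                // authentications.migration.k8s.io
        fmt.Println(resolve("authentications.metal3.io.spec", "")) // authentications.metal3.io spec
        // With --api-version set, "metal3.io" becomes a field path:
        fmt.Println(resolve("authentications.metal3.io", "authentications.metal3.io/v1"))
    }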