Since the filter status is not recorded for the preemption phase, there is
no way to tell why preemption failed, and the reasons can differ from the
status reported by the main scheduling cycle (the first failed plugin hides
the other failures in the chain). This change adds verbose information based
on the node statuses generated during pod preemption, which helps diagnose
issues that occur while preempting pods.
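The gist of the aggregation is sketched below; the type and helper names are
hypothetical (not the scheduler framework API) and only illustrate grouping
per-node filter statuses by reason and folding them into a single diagnostic
message instead of discarding them:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // nodeStatus is a stand-in for the per-node filter result collected while
    // searching for preemption candidates (hypothetical type, for illustration).
    type nodeStatus struct {
        Node   string
        Reason string
    }

    // summarize folds the per-node statuses into one diagnostic string so the
    // preemption error explains why each candidate node was rejected, instead
    // of only surfacing the first failure.
    func summarize(statuses []nodeStatus) string {
        byReason := map[string][]string{}
        for _, s := range statuses {
            byReason[s.Reason] = append(byReason[s.Reason], s.Node)
        }
        parts := make([]string, 0, len(byReason))
        for reason, nodes := range byReason {
            sort.Strings(nodes)
            parts = append(parts, fmt.Sprintf("%d node(s) %s: %s",
                len(nodes), reason, strings.Join(nodes, ", ")))
        }
        sort.Strings(parts)
        return strings.Join(parts, "; ")
    }

    func main() {
        fmt.Println(summarize([]nodeStatus{
            {Node: "node-a", Reason: "didn't satisfy pod anti-affinity"},
            {Node: "node-b", Reason: "had insufficient memory"},
            {Node: "node-c", Reason: "had insufficient memory"},
        }))
    }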
Signed-off-by: Dave Chen <dave.chen@arm.com>
Fix range loop when using jsonpath
Without patch:
kubectl get -n openshift-oauth-apiserver po -o jsonpath='{range .items[?(.status.phase=="Running")]}{.metadata.name}{" is Running\n"}'
apiserver-7d9cc97649-79c2x is Running
apiserver-7d9cc97649-lgks6 is Running
apiserver-7d9cc97649-qgkxn is Running
is Running
With patch:
kubectl get -n openshift-oauth-apiserver po -o jsonpath='{range .items[?(.status.phase=="Running")]}{.metadata.name}{" is Running\n"}'
apiserver-7d9cc97649-79c2x is Running
apiserver-7d9cc97649-lgks6 is Running
apiserver-7d9cc97649-qgkxn is Running
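The behavior can also be exercised directly against the jsonpath library with
a small program along these lines; the sample data and template below are made
up for illustration, the point being that a filtered range must not emit an
extra, empty iteration for non-matching items:

    package main

    import (
        "os"

        "k8s.io/client-go/util/jsonpath"
    )

    func main() {
        // Minimal stand-in for a pod list in unstructured (map) form.
        podList := map[string]interface{}{
            "items": []interface{}{
                map[string]interface{}{
                    "metadata": map[string]interface{}{"name": "apiserver-7d9cc97649-79c2x"},
                    "status":   map[string]interface{}{"phase": "Running"},
                },
                map[string]interface{}{
                    "metadata": map[string]interface{}{"name": "apiserver-7d9cc97649-zzzzz"},
                    "status":   map[string]interface{}{"phase": "Pending"},
                },
            },
        }

        j := jsonpath.New("filtered-range")
        tmpl := `{range .items[?(@.status.phase=="Running")]}{.metadata.name}{" is Running\n"}{end}`
        if err := j.Parse(tmpl); err != nil {
            panic(err)
        }
        // Only the Running pod should be printed; the Pending item must not
        // produce a trailing " is Running" line.
        if err := j.Execute(os.Stdout, podList); err != nil {
            panic(err)
        }
    }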
The DNS autoscaler test did not correctly count tainted but schedulable
nodes, so its expected target count was wrong for clusters with multiple
control-plane nodes (which are typically tainted but still schedulable).
The cluster-proportional-autoscaler ignores unschedulable (cordoned) nodes,
but does not consider taints.
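The counting rule the test needs to match is, in effect, the following (a
sketch with a made-up helper name, not the test's actual code):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // schedulableNodeCount mirrors the autoscaler's view of the cluster:
    // unschedulable (cordoned) nodes are skipped, while taints are
    // deliberately ignored, so tainted but schedulable control-plane nodes
    // still count towards the target. Illustrative helper only.
    func schedulableNodeCount(nodes []corev1.Node) int {
        count := 0
        for _, n := range nodes {
            if n.Spec.Unschedulable {
                continue
            }
            count++
        }
        return count
    }

    func main() {
        nodes := []corev1.Node{
            // Tainted control-plane node: still counted.
            {Spec: corev1.NodeSpec{Taints: []corev1.Taint{{
                Key:    "node-role.kubernetes.io/control-plane",
                Effect: corev1.TaintEffectNoSchedule,
            }}}},
            // Regular worker node: counted.
            {},
            // Cordoned node: ignored.
            {Spec: corev1.NodeSpec{Unschedulable: true}},
        }
        fmt.Println(schedulableNodeCount(nodes)) // prints 2
    }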
Migrate how the resource lock and leader election config are generated to the new approach, hiding the kubeClient. This also halves the kubeClient timeout, making it a useful value.
If the timeout equals RenewDeadline and a request hits the client timeout, there is no retry: the RenewDeadline logic cancels the context and leadership is lost. So setting the timeout to a value greater than or equal to RenewDeadline is pointless.
Setting it to half of RenewDeadline is a heuristic that resolves this missing-retry problem without adding an additional parameter.
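A sketch of the resulting wiring is below; the names, namespace and durations
are illustrative, not the actual code being migrated:

    package main

    import (
        "context"
        "time"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
        "k8s.io/client-go/tools/leaderelection"
        "k8s.io/client-go/tools/leaderelection/resourcelock"
    )

    // newLeaderElectionConfig builds the lock client with a request timeout of
    // RenewDeadline/2, so one timed-out renew call still leaves room for a
    // retry before the deadline expires and leadership is lost.
    func newLeaderElectionConfig(restCfg *rest.Config, renewDeadline time.Duration) (*leaderelection.LeaderElectionConfig, error) {
        lockCfg := rest.CopyConfig(restCfg)
        lockCfg.Timeout = renewDeadline / 2 // half of RenewDeadline, never equal to it

        client, err := kubernetes.NewForConfig(lockCfg)
        if err != nil {
            return nil, err
        }

        lock, err := resourcelock.New(
            resourcelock.LeasesResourceLock,
            "kube-system",
            "example-controller",
            client.CoreV1(),
            client.CoordinationV1(),
            resourcelock.ResourceLockConfig{Identity: "example-holder"},
        )
        if err != nil {
            return nil, err
        }

        return &leaderelection.LeaderElectionConfig{
            Lock:          lock,
            LeaseDuration: 15 * time.Second,
            RenewDeadline: renewDeadline,
            RetryPeriod:   2 * time.Second,
            Callbacks: leaderelection.LeaderCallbacks{
                OnStartedLeading: func(ctx context.Context) { /* start work */ },
                OnStoppedLeading: func() { /* shut down */ },
            },
        }, nil
    }

    func main() {
        restCfg, err := rest.InClusterConfig() // assumes running in a cluster
        if err != nil {
            panic(err)
        }
        cfg, err := newLeaderElectionConfig(restCfg, 10*time.Second)
        if err != nil {
            panic(err)
        }
        leaderelection.RunOrDie(context.Background(), *cfg)
    }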