Currently, if a group is specified for an impersonated user,
'system:authenticated' is not added to the 'Groups' list inside the
request context.
This causes Priority and Fairness matching to fail: the catch-all
FlowSchema requires the user to be in either the 'system:authenticated'
or the 'system:unauthenticated' group, and an impersonated user with a
specified group is in neither.
As a general rule, if an impersonated user has passed authorization
checks, we should consider them authenticated.
Currently, the InterPodAffinity plugin processes all nodes whenever the
incoming pod has affinity terms. In fact, it only needs to look at all
nodes when the incoming pod has preferred affinity; in the other cases
the set of nodes that must be processed can be reduced.
This is a performance optimization that reduces the overhead of the
inter-pod affinity PreFilter calculations, essentially eliminating it
when no pods in the cluster use required pod anti-affinity. It showed a
20% improvement in the preferred anti-affinity benchmarks on 5k-node
clusters.
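A rough sketch of the gating logic, with simplified stand-in types (all
names below are illustrative, not the scheduler's actual identifiers):
```
package main

import "fmt"

// Pod and Node are simplified stand-ins for the scheduler's types.
type Pod struct {
	PreferredAffinityTerms []string // soft (preferred) affinity terms
	RequiredAffinityTerms  []string // hard (required) affinity terms
}

type Node struct {
	Name string
	Pods []Pod
}

// nodesToProcess picks the node set an affinity pass must examine:
// only preferred affinity on the incoming pod forces a scan of every
// node; otherwise a smaller pre-filtered subset suffices.
func nodesToProcess(incoming Pod, allNodes, nodesWithAffinityPods []Node) []Node {
	if len(incoming.PreferredAffinityTerms) > 0 {
		return allNodes
	}
	return nodesWithAffinityPods
}

func main() {
	incoming := Pod{PreferredAffinityTerms: []string{"zone-spread"}}
	all := []Node{{Name: "n1"}, {Name: "n2"}}
	withAffinity := []Node{{Name: "n1"}}
	// Preferred affinity present, so every node is examined.
	fmt.Println(len(nodesToProcess(incoming, all, withAffinity)))
}
```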
Before this fix, a Service with a loadBalancerSourceRanges value that
included a space would cause kube-proxy to crashloop. This updates
kube-proxy to trim any spaces from that field.
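A minimal sketch of the tolerant parsing, as a standalone helper
(kube-proxy's actual code path differs; `parseSourceRanges` is a
hypothetical name):
```
package main

import (
	"fmt"
	"net"
	"strings"
)

// parseSourceRanges parses loadBalancerSourceRanges entries as CIDRs,
// trimming surrounding whitespace first so a value like
// "10.0.0.0/8, 192.168.0.0/16" no longer fails to parse.
func parseSourceRanges(specs []string) ([]*net.IPNet, error) {
	nets := make([]*net.IPNet, 0, len(specs))
	for _, spec := range specs {
		_, ipnet, err := net.ParseCIDR(strings.TrimSpace(spec))
		if err != nil {
			return nil, fmt.Errorf("invalid source range %q: %v", spec, err)
		}
		nets = append(nets, ipnet)
	}
	return nets, nil
}

func main() {
	ranges, err := parseSourceRanges([]string{"10.0.0.0/8", " 192.168.0.0/16"})
	fmt.Println(ranges, err)
}
```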
Make the buckets of the apiserver_request_duration_seconds and
etcd_request_duration_seconds histograms similar, so that the two
metrics are more comparable side by side.
etcd_request_duration_seconds uses the default buckets provided by the
Prometheus client library:
DefBuckets = []float64{.005, .01, .025, .05, .1, .25, .5, 1, 2.5, 5, 10}
apiserver_request_duration_seconds, on the other hand, uses finer
grained buckets with a maximum bucket boundary of 60s. Aligning the
bucket layouts makes the two histograms directly comparable.
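For illustration, here is how a histogram gets explicit buckets with
the upstream Prometheus Go client. The bucket boundaries below are an
example layout, not necessarily the exact values adopted, and
kube-apiserver registers its metrics through component-base wrappers
rather than this raw API:
```
package main

import "github.com/prometheus/client_golang/prometheus"

// etcdRequestDuration replaces prometheus.DefBuckets with an explicit
// list whose upper boundary matches the 60s maximum used by
// apiserver_request_duration_seconds. The exact values are illustrative.
var etcdRequestDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "etcd_request_duration_seconds",
		Help:    "Etcd request latency in seconds for each operation and object type.",
		Buckets: []float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 15.0, 30.0, 60.0},
	},
	[]string{"operation", "type"},
)

func main() {
	prometheus.MustRegister(etcdRequestDuration)
	etcdRequestDuration.WithLabelValues("get", "pods").Observe(0.042)
}
```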
We're seeing legacy-cloud-providers/azure/clients
(in our case, the synchronized copy used by cluster-autoscaler)
segfaulting under heavy pressure and ARM throttling, like so:
```
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x1b595b5]
cluster-autoscaler-all-79b9478bf5-cgkg8 cluster-autoscaler
goroutine 82 [running]:
k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/clients/armclient.(*Client).Send(0xc00052f520, 0x3b6dc40, 0xc000937440, 0xc000933f00, 0x0, 0x0)
/home/jb/go/src/k8s.io/autoscaler/cluster-autoscaler/vendor/k8s.io/legacy-cloud-providers/azure/clients/armclient/azure_armclient.go:122 +0xb5
```
The reason is that the ARM client expects `sendRequest()` to return
either a non-nil *retry.Error or a non-nil *http.Response, while in
fact both can be nil.
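A self-contained sketch of the defensive fix, with simplified
signatures (the real `Send` lives in azure_armclient.go and returns a
*retry.Error rather than a plain error):
```
package main

import (
	"errors"
	"fmt"
	"net/http"
)

// sendRequest stands in for the armclient helper that, under heavy
// throttling, can return both a nil response and a nil error.
func sendRequest() (*http.Response, error) {
	return nil, nil // simulate the pathological case
}

// send guards against the (nil, nil) case instead of dereferencing the
// response unconditionally, which is what produced the SIGSEGV above.
func send() (*http.Response, error) {
	resp, err := sendRequest()
	if err != nil {
		return resp, err
	}
	if resp == nil {
		return nil, errors.New("empty response received from ARM")
	}
	return resp, nil
}

func main() {
	if _, err := send(); err != nil {
		fmt.Println("handled gracefully:", err)
	}
}
```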