Without this change it fails after Deployments were switched from the
extensions API group to apps, with:
```
E0902 11:25:51.197420 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: unknown (get nodes)
E0902 11:25:53.118490 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: unknown (get nodes)
E0902 11:25:54.997493 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: unknown (get nodes)
E0902 11:25:57.097423 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: unknown (get nodes)
E0902 11:25:59.097417 1 reflector.go:283] github.com/kubernetes-incubator/cluster-proportional-autoscaler/pkg/autoscaler/k8sclient/k8sclient.go:96: Failed to watch *v1.Node: unknown (get nodes)
I0902 11:25:59.697325 1 k8sclient.go:221] Falling back to extensions/v1beta1, error using apps/v1: deployments.apps "calico-typha" is forbidden: User "system:serviceaccount:kube-system:typha-cpha" cannot get resource "deployments/scale" in API group "apps" in the namespace "kube-system"
E0902 11:25:59.699833 1 autoscaler_server.go:120] Update failure: the server could not find the requested resource
```
Ref. https://github.com/kubernetes/test-infra/pull/13709
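The denial in that log is an RBAC error: the `typha-cpha` service account has no access to the `deployments/scale` subresource in the `apps` API group. As a rough illustration only (the exact rule lives in the manifest touched by the referenced PR; the verbs shown here are assumptions), the missing permission expressed with the Go RBAC types looks roughly like:

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// typhaCPHARole is an illustrative sketch of the permission the typha-cpha
// service account needs once the autoscaler scales Deployments through the
// apps API group instead of extensions/v1beta1.
var typhaCPHARole = rbacv1.Role{
	ObjectMeta: metav1.ObjectMeta{Name: "typha-cpha", Namespace: "kube-system"},
	Rules: []rbacv1.PolicyRule{
		{
			// The log shows `cannot get resource "deployments/scale" in API
			// group "apps"`, so the rule must cover that subresource.
			APIGroups: []string{"apps"},
			Resources: []string{"deployments/scale"},
			Verbs:     []string{"get", "update"},
		},
	},
}

func main() {
	fmt.Printf("%s: %+v\n", typhaCPHARole.Name, typhaCPHARole.Rules)
}
```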
Work around a Linux kernel bug that sometimes causes multiple flows to
get mapped to the same IP:PORT, causing some of them to suffer packet
drops.
Also made the same update in kubelet.
Also added cross-pointers between the two bodies of code, in comments.
Some day we should eliminate the duplicate code. But today is not
that day.
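The symptom described here matches SNAT source-port collisions under MASQUERADE, and the usual mitigation is the iptables `--random-fully` flag; assuming that is the mechanism used, a minimal sketch of the shape of the change (the function name and feature-detection flag are hypothetical, not the real kube-proxy/kubelet code):

```go
package main

import "fmt"

// buildMasqueradeArgs sketches the workaround: when the local iptables
// supports it, append --random-fully to the MASQUERADE target so the kernel
// fully randomizes SNAT source ports instead of letting two concurrent flows
// race onto the same IP:PORT mapping.
func buildMasqueradeArgs(randomFullySupported bool) []string {
	args := []string{"-j", "MASQUERADE"}
	if randomFullySupported {
		args = append(args, "--random-fully")
	}
	return args
}

func main() {
	fmt.Println(buildMasqueradeArgs(true)) // [-j MASQUERADE --random-fully]
}
```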
This changes the retry logic in DisruptionController so that it
reconciles update conflicts. Previously, any PDB status update failure
was retried with the same status, regardless of the error.
Now the status update itself is not retried; the error is passed up the
stack, where the PDB can be requeued for processing (sketched below).
If the PDB status update error is a conflict error, there are some new
special cases:
- failSafe is not triggered, since this is considered a retryable error
- the PDB is requeued immediately (ignoring the rate limiter), because we
assume the conflict can be resolved by fetching the latest version
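A minimal sketch of that control flow, using illustrative names rather than the controller's actual identifiers:

```go
package main

import (
	"errors"
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/util/workqueue"
)

// requeuePDB shows how a caller could react to a status update error that was
// passed up the stack instead of being retried in place.
func requeuePDB(queue workqueue.RateLimitingInterface, key string, err error) {
	switch {
	case err == nil:
		queue.Forget(key)
	case apierrors.IsConflict(err):
		// A conflict only means our copy of the PDB was stale: do not trip
		// failSafe, and requeue immediately (bypassing the rate limiter) so
		// the next sync starts from the latest version.
		queue.Add(key)
	default:
		// All other errors keep the normal rate-limited requeue behavior.
		queue.AddRateLimited(key)
	}
}

func main() {
	q := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	conflict := apierrors.NewConflict(
		schema.GroupResource{Group: "policy", Resource: "poddisruptionbudgets"},
		"example-pdb", errors.New("the object has been modified"))
	requeuePDB(q, "kube-system/example-pdb", conflict)
	fmt.Println("requeued immediately, queue length:", q.Len()) // 1
}
```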