* Added a custom error message when a wrong file is provided via KUBECONFIG
* Modified test case
* Updated the code to warn about missing files
* Renamed the variable
The metadata client uses protobuf and returns only a subset of object
data (the metadata), which allows operations that act generically on
objects to run much faster. Use the metadata client in the namespace
controller to reduce the amount of work it has to do in large
namespaces.
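For illustration, a minimal sketch of what a caller of the metadata client looks like, assuming an in-cluster config; the GroupVersionResource and namespace below are just examples:

```
// Sketch of listing objects through the metadata client. Only object
// metadata (names, labels, etc.) crosses the wire, encoded as protobuf,
// so large lists are much cheaper than full-object lists.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := metadata.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Example resource; the namespace controller iterates over all
	// namespaced resources rather than a single hard-coded one.
	gvr := schema.GroupVersionResource{Version: "v1", Resource: "configmaps"}
	list, err := client.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```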
When a client requests a PartialObjectMetadata returned from the
ObjectReaction type, if the object has a GVK set, use that instead
of what the scheme returns, since the majority of clients getting
partial object metadata will be doing so via the metadata client
or server-side conversion.
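A simplified sketch of that lookup order (an illustration, not the actual client-go testing code; the function name is hypothetical):

```
// Sketch of the precedence described above: prefer a GVK already set on
// the object, and only fall back to asking the scheme.
package fixture

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func gvkForResponse(obj runtime.Object, scheme *runtime.Scheme) (schema.GroupVersionKind, error) {
	// A GVK set on the object itself wins.
	if gvk := obj.GetObjectKind().GroupVersionKind(); !gvk.Empty() {
		return gvk, nil
	}
	// Otherwise fall back to the scheme lookup.
	gvks, _, err := scheme.ObjectKinds(obj)
	if err != nil {
		return schema.GroupVersionKind{}, err
	}
	if len(gvks) == 0 {
		return schema.GroupVersionKind{}, fmt.Errorf("no kind registered for %T", obj)
	}
	return gvks[0], nil
}
```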
This change adds pending pods to the ignored set before selecting
pods that are missing metrics. Pending pods are always ignored when
calculating scale.
When the HPA decides which pods and metric values to take into account
when scaling, it divides the pods into three disjoint subsets: 1)
ready, 2) missing metrics, and 3) ignored. First the HPA selects pods
which are missing metrics. Then it selects pods that should be ignored
because they are not ready yet, or are still consuming CPU during
initialization. All the remaining pods go into the ready set. After
the HPA has decided what direction it wants to scale based on the
ready pods, it considers what might have happened if it had the
missing metrics. It makes a conservative guess about what the missing
metrics might have been: 0% if it wants to scale up, 100% if it wants
to scale down. This is a good thing when scaling up, because newly
added pods will likely help reduce the usage ratio, even though their
metrics are missing at the moment. The HPA should wait to see the
results of its previous scale decision before it makes another
one. However when scaling down, it means that many missing metrics can
pin the HPA at high scale, even when load is completely removed. In
particular, when there are many unschedulable pods due to insufficient
cluster capacity, the many missing metrics (assumed to be 100%) can
cause the HPA to avoid scaling down indefinitely.
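A simplified sketch of the classification order after this change (the helper below is hypothetical and elides the real readiness and CPU-initialization checks):

```
// Sketch of the grouping described above: pending pods go into the
// ignored set first, so an unschedulable pod can never be counted as a
// conservative 100% missing metric when scaling down.
package hpa

import v1 "k8s.io/api/core/v1"

func groupPods(pods []v1.Pod, metrics map[string]int64) (ready, ignored, missing map[string]bool) {
	ready, ignored, missing = map[string]bool{}, map[string]bool{}, map[string]bool{}
	for _, pod := range pods {
		// Pending pods are always ignored, checked before the
		// missing-metrics test.
		if pod.Status.Phase == v1.PodPending {
			ignored[pod.Name] = true
			continue
		}
		// Pods without a metric sample get the conservative guess.
		if _, ok := metrics[pod.Name]; !ok {
			missing[pod.Name] = true
			continue
		}
		// Everything else counts toward the scaling decision.
		ready[pod.Name] = true
	}
	return ready, ignored, missing
}
```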
As the benchmark shows, this speeds up the method roughly 4x and
reduces its memory consumption roughly 20x.
```
benchmark                            old ns/op     new ns/op     delta
BenchmarkGetPodMapForDeployment-12   276121        72591         -73.71%

benchmark                            old allocs    new allocs    delta
BenchmarkGetPodMapForDeployment-12   241           238           -1.24%

benchmark                            old bytes     new bytes     delta
BenchmarkGetPodMapForDeployment-12   554025        28956         -94.77%
```
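For reference, output in this format comes from comparing two `go test -bench` runs with the benchcmp tool; the package path below is an assumption:

```
go test -run=NONE -bench=BenchmarkGetPodMapForDeployment -benchmem \
  ./pkg/controller/deployment/... > new.txt
benchcmp old.txt new.txt
```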