Bump client_golang to v1.12.1 to fix a concurrency issue in the Go
Collector that was introduced by the library in v1.12.0.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
When it initially landed in kubernetes/kubernetes@c6e9ad066e (Initial
node drain implementation for #3885, 2015-08-30,
kubernetes/kubernetes#16698), the drain logic looked in a created-by
annotation for recognized kinds [1], so listing the set of recognized
kinds was a clear approach.
Sometime later, the controller information moved from the annotation into
ownerReferences, but the hard-coded set of recognized controller kinds
remained.
When kubernetes/kubernetes@2f1108451f (Remove hard-coded
pod-controller check, 2017-12-05, kubernetes/kubernetes#56864) removed
the hard-coded set of recognized controller kinds, it should have also
updated these messages to remove stale references to the previous
hard-coded values. This commit catches the message strings up with
that commit.
[1]: c6e9ad066e (diff-211259b8a8ec42f105264c10897dad48029badb538684e60e43eaead68c3d219R216)
The restclient metrics were updated to track only the host field of the
URL. finalURLTemplate is no longer needed; its only purpose was to replace
the name and namespace in the path to avoid high label cardinality.
Current request latency metrics have the following buckets:
0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512
That is too much granularity for HTTP requests, and it caps out at roughly
half a second, losing visibility into the most interesting requests, the
ones that take more than 1 second.
Use the same buckets used for etcd request latency, with the same upper and
lower limits as the ones used in the apiserver, but adding one more bucket:
[]float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
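A minimal sketch of how this bucket layout plugs into a histogram
definition (the metric name is illustrative, not necessarily client-go's):

    requestLatency := prometheus.NewHistogram(prometheus.HistogramOpts{
        Name:    "rest_client_request_duration_seconds",
        Help:    "Request latency in seconds.",
        Buckets: []float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
    })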
The primary use case for the allocator is to reduce cost of object serialization.
Initially it will be used by the protobuf serializer.
This approach puts less load on GC and leads to less fragmented memory in general.
It allows us to allocate a single buffer for the entire watch session and release it when a watch connection is closed.
Previously, memory was allocated for every object serialization, putting a
lot of pressure on the GC and consuming more memory than needed.
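A minimal sketch of the idea, assuming a simple grow-on-demand allocator;
this illustrates the approach rather than the exact upstream implementation:

    // Allocator hands out byte slices backed by a single growable buffer.
    // Callers must finish with the returned slice before the next Allocate
    // call, since the memory is reused.
    type Allocator struct {
        buf []byte
    }

    // Allocate returns a slice of length n, reusing the existing buffer
    // when it is large enough and growing it otherwise.
    func (a *Allocator) Allocate(n uint64) []byte {
        if uint64(cap(a.buf)) >= n {
            a.buf = a.buf[:n]
            return a.buf
        }
        // Grow to at least n; doubling amortizes future growth.
        size := 2 * uint64(cap(a.buf))
        if size < n {
            size = n
        }
        a.buf = make([]byte, size)
        return a.buf[:n]
    }

A watch handler can then own one Allocator for the lifetime of the
connection and release the whole buffer when the watch closes.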
- actual_state_of_world_test.go: test the new method GetVolumesToReportAttachedForNode
for an existing node and a non-existent node
- node_status_updater_test.go: test UpdateNodeStatuses in the nominal
case with 2 nodes getting one volume each. Test UpdateNodeStatuses with the first call
to node.patch failing but the following one succeeding (see the sketch after this list)
- add comment in node_status_updater.go
- fix log line in reconciler.go
- rename variable in actual_state_of_world.go
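For the fail-then-succeed patch case, a common pattern with client-go's
fake clientset is a prepended reactor that errors once and then falls
through to the default behavior; this is a hedged sketch, not the test's
actual code:

    import (
        "errors"

        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/client-go/kubernetes/fake"
        core "k8s.io/client-go/testing"
    )

    // newFlakyClient fails the first patch against nodes and lets every
    // subsequent patch succeed via the default reactor chain.
    func newFlakyClient() *fake.Clientset {
        client := fake.NewSimpleClientset()
        failed := false
        client.PrependReactor("patch", "nodes", func(action core.Action) (bool, runtime.Object, error) {
            if !failed {
                failed = true
                return true, nil, errors.New("transient patch failure")
            }
            return false, nil, nil // fall through to the default reactor
        })
        return client
    }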