The restclient metrics were updated to track only the host field of the
URL. The finalURLTemplate is no longer needed; its only purpose was to
replace the name and namespace in the path to avoid high cardinality.
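As a purely illustrative sketch (the helper name below is hypothetical, not the actual client-go code), reducing a request URL to its host before using it as a metric label could look like this:

package main

import (
	"fmt"
	"net/url"
)

// hostOnly is a hypothetical helper: it keeps only the scheme and host of a
// request URL so metric labels do not explode in cardinality with every
// distinct path (names, namespaces, resource versions, ...).
func hostOnly(raw string) string {
	u, err := url.Parse(raw)
	if err != nil {
		return "unknown"
	}
	return u.Scheme + "://" + u.Host
}

func main() {
	// Both requests collapse into the same metric label value.
	fmt.Println(hostOnly("https://10.0.0.1:6443/api/v1/namespaces/default/pods/foo"))
	fmt.Println(hostOnly("https://10.0.0.1:6443/api/v1/namespaces/kube-system/pods/bar"))
}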
Current request latency metrics have the following buckets:
0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512
That is too much granularity for HTTP requests, and it is capped at
approximately half a second, losing visibility into the requests that are
most interesting: the ones that take more than 1 second.
Use the same buckets as the ones used for etcd request latency, with the same
upper and lower limits as the ones used in the apiserver, but adding one
more bucket:
[]float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
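As a hedged illustration, not the actual client-go metric definition (the metric name and labels below are assumptions), a Prometheus histogram using these buckets could be registered like this:

package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

// requestLatency is illustrative only; the real client-go metric has its own
// name, help text, and label set.
var requestLatency = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "rest_client_request_duration_seconds",
		Help:    "Request latency in seconds, broken down by verb and host.",
		Buckets: []float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
	},
	[]string{"verb", "host"},
)

func main() {
	prometheus.MustRegister(requestLatency)

	start := time.Now()
	// ... perform the HTTP request ...
	requestLatency.WithLabelValues("GET", "10.0.0.1:6443").Observe(time.Since(start).Seconds())
}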
The primary use case for the allocator is to reduce the cost of object serialization.
Initially it will be used by the protobuf serializer.
This approach puts less load on GC and leads to less fragmented memory in general.
It allows us to allocate a single buffer for the entire watch session and release it when a watch connection is closed.
Previously, memory was allocated for every object serialization, putting a lot of pressure on the GC and consuming more memory than needed.
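A minimal sketch of the idea, assuming a simple Allocate-style interface (the type and method names below are illustrative, not necessarily the real runtime package API): one buffer lives for the whole watch session and is reused for every object, growing only when needed.

package main

import "fmt"

// MemoryAllocator is an illustrative interface: the serializer asks it for a
// buffer instead of allocating a fresh slice itself.
type MemoryAllocator interface {
	Allocate(n uint64) []byte
}

// SimpleAllocator keeps a single buffer for the lifetime of a watch session
// and hands out a slice of it, growing only when an object needs more space.
type SimpleAllocator struct {
	buf []byte
}

func (a *SimpleAllocator) Allocate(n uint64) []byte {
	if uint64(cap(a.buf)) >= n {
		a.buf = a.buf[:n]
		return a.buf
	}
	// Grow once; subsequent objects of similar size reuse this allocation.
	a.buf = make([]byte, n)
	return a.buf
}

func main() {
	a := &SimpleAllocator{}
	for _, size := range []uint64{128, 64, 256, 200} {
		buf := a.Allocate(size)
		fmt.Printf("requested %d bytes, capacity now %d\n", size, cap(buf))
	}
}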
- actual_state_of_world_test.go: test the new method GetVolumesToReportAttachedForNode
for an existing node and a non-existing node
- node_status_updater_test.go: test UpdateNodeStatuses and UpdateNodeStatusForNode in the nominal
case with 2 nodes getting one volume each. Test UpdateNodeStatuses with the first call
to node.patch failing but the following one succeeding
- add comment in node_status_updater.go
- fix log line in reconciler.go
- rename variable in actual_state_of_world.go
The UpdateNodeStatuses code stops too early if there is
an error when calling updateNodeStatus. It returns immediately,
which means any remaining nodes won't have their update status put back
to true.
Looking at the call sites for UpdateNodeStatuses, it appears this is
not the only issue. If the lister call fails with anything but a Not Found
error, it's silently ignored which is wrong in the detach path.
Also, the reconciler detach path calls UpdateNodeStatuses, but the real intent
is to only update the node currently being processed in the loop, and to not
proceed with the detach call if there is an error updating that specific
node's volumesAttached property. With the current implementation, it will not
proceed if there is an error updating another node (which is not completely
bad, but not ideal) and, worse, it will proceed if there is a lister error on
that node, which means the node's volumesAttached property won't have been
updated.
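A condensed, hypothetical sketch of the problematic control flow described above (not the literal Kubernetes code): the loop bails out on the first failure, so later nodes are never processed.

package main

import (
	"errors"
	"fmt"
)

// updateNodeStatuses is an illustrative stand-in for the real updater: it
// stops at the first error, so nodes after the failing one are never updated.
func updateNodeStatuses(nodes []string, updateNode func(string) error) error {
	for _, n := range nodes {
		if err := updateNode(n); err != nil {
			return err // remaining nodes keep a stale volumesAttached status
		}
	}
	return nil
}

func main() {
	nodes := []string{"node-a", "node-b", "node-c"}
	err := updateNodeStatuses(nodes, func(n string) error {
		if n == "node-a" {
			return errors.New("patch failed")
		}
		fmt.Println("updated", n)
		return nil
	})
	fmt.Println("result:", err) // node-b and node-c were never updated
}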
To fix those issues, introduce the following changes:
- [node_status_updater] introduce UpdateNodeStatusForNode which does what
UpdateNodeStatuses does but only for the provided node
- [node_status_updater] if the node lister call fails with anything but a Not
Found error, return the error instead of silently ignoring it
- [node_status_updater] if the update of a node's volumesAttached property fails,
continue processing the other nodes
- [actual_state_of_world] introduce GetVolumesToReportAttachedForNode which
does what GetVolumesToReportAttached does, but only for the node whose name is
provided. It returns a bool indicating whether the node in question needs an
update, as well as the volumesAttached list. It is used by
UpdateNodeStatusForNode (see the sketch after this list)
- [actual_state_of_world] use a write lock in updateNodeStatusUpdateNeeded since
we're modifying the map content
- [reconciler] use UpdateNodeStatusForNode in the detach loop
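A condensed, hypothetical sketch of the new per-node flow (the function names follow the commit, but the bodies and signatures are illustrative, not the real attach/detach controller code):

package main

import "fmt"

// attachedVolume is an illustrative stand-in for the volumes reported in a
// node's status.volumesAttached field.
type attachedVolume struct{ Name string }

// getVolumesToReportAttachedForNode mirrors the new accessor: it reports
// whether the given node needs an update and, if so, the volumes to report.
func getVolumesToReportAttachedForNode(needsUpdate map[string]bool, attached map[string][]attachedVolume, nodeName string) (bool, []attachedVolume) {
	if !needsUpdate[nodeName] {
		return false, nil
	}
	return true, attached[nodeName]
}

// updateNodeStatusForNode mirrors the new updater: it patches only the node
// currently being processed, so the reconciler can skip the detach call when
// this specific update fails, instead of reacting to failures on other nodes.
func updateNodeStatusForNode(nodeName string, patch func(string, []attachedVolume) error, needsUpdate map[string]bool, attached map[string][]attachedVolume) error {
	update, volumes := getVolumesToReportAttachedForNode(needsUpdate, attached, nodeName)
	if !update {
		return nil
	}
	if err := patch(nodeName, volumes); err != nil {
		return err
	}
	needsUpdate[nodeName] = false
	return nil
}

func main() {
	needsUpdate := map[string]bool{"node-a": true}
	attached := map[string][]attachedVolume{"node-a": {{Name: "vol-1"}}}
	err := updateNodeStatusForNode("node-a", func(n string, v []attachedVolume) error {
		fmt.Printf("patching %s with %v\n", n, v)
		return nil
	}, needsUpdate, attached)
	fmt.Println("err:", err)
}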
For each iptables-restore call, log the number of services, endpoints,
filter chains, filter rules, NAT chains, and NAT rules in the update
at V(2), in addition to logging the actual rules if V(9).
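A hedged sketch of what such logging could look like with klog (the message, keys, and counts are illustrative, not the exact kube-proxy log lines):

package main

import (
	"flag"

	"k8s.io/klog/v2"
)

func main() {
	klog.InitFlags(nil)
	flag.Set("v", "2")
	flag.Parse()

	// Illustrative counts; in kube-proxy these would be derived from the
	// rule buffers built for the iptables-restore call.
	numServices, numEndpoints := 12, 40
	filterChains, filterRules := 4, 9
	natChains, natRules := 30, 85

	klog.V(2).InfoS("Reloading service iptables data",
		"numServices", numServices,
		"numEndpoints", numEndpoints,
		"numFilterChains", filterChains,
		"numFilterRules", filterRules,
		"numNATChains", natChains,
		"numNATRules", natRules)

	// The full rule set is only logged at very high verbosity.
	if klog.V(9).Enabled() {
		klog.InfoS("Restoring iptables", "rules", "<full iptables-save format dump>")
	}
}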