When translating an in-tree PV to a CSI PV, we fall back to the
"default" secret namespace when none is set on the in-tree PV.
Using the default is not ideal for several reasons:
1) It can cause pod creation to fail after users migrate to a cluster
with CSI enabled, because existing in-tree PVs might not have the
namespace defined. In that case "default" is used and the mount fails
because the secret cannot be found.
2) Falling back to the "default" namespace can reference a secret from
a different namespace, which is a security risk.
However, there is another object whose presence can safely be assumed
and which identifies the correct namespace: the ClaimRef. A volume is
mounted only through a bound PVC; binding adds the ClaimRef to the PV,
and only then is the volume mounted, which is when the translation
code runs, as sketched below.
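A minimal sketch of this fallback, with illustrative names (not the
actual csi-translation-lib code):

    package csitranslate

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // secretNamespace prefers the namespace recorded on the in-tree PV
    // and falls back to the bound claim's namespace instead of "default".
    func secretNamespace(pv *v1.PersistentVolume, inTreeNamespace string) (string, error) {
        if inTreeNamespace != "" {
            return inTreeNamespace, nil
        }
        // Mounting happens only through a bound PVC, so ClaimRef is set
        // by the time the translation code runs.
        if pv.Spec.ClaimRef != nil && pv.Spec.ClaimRef.Namespace != "" {
            return pv.Spec.ClaimRef.Namespace, nil
        }
        return "", fmt.Errorf("cannot determine secret namespace for PV %q", pv.Name)
    }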
Bump client_golang to v1.12.1 to fix a concurrency issue in the Go
Collector that was introduced by the library in v1.12.0.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>
When it initially landed in kubernetes/kubernetes@c6e9ad066e (Initial
node drain implementation for #3885, 2015-08-30,
kubernetes/kubernetes#16698), the drain logic looked in a created-by
annotation for recognized kinds [1], so listing the set of recognized
kinds was a clear approach.
Sometime later, the controller information moved into ownerReferences,
but the hard-coded set of recognized controller kinds remained.
When kubernetes/kubernetes@2f1108451f (Remove hard-coded
pod-controller check, 2017-12-05, kubernetes/kubernetes#56864) removed
the hard-coded set of recognized controller kinds, it should have also
updated these messages to remove stale references to the previous
hard-coded values. This commit catches the message strings up with
that commit.
[1]: c6e9ad066e (diff-211259b8a8ec42f105264c10897dad48029badb538684e60e43eaead68c3d219R216)
The restclient metrics were updated to track only the host field of
the URL. finalURLTemplate is no longer needed; its only goal was to
replace the name and namespace in the path to avoid high cardinality.
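A hedged sketch of the resulting shape (metric and label names are
illustrative): with only the host recorded, there is no path left to
template.

    package metrics

    import (
        "net/url"

        "github.com/prometheus/client_golang/prometheus"
    )

    // requestLatency is labeled only by verb and host; because the path
    // (which carried names and namespaces) is dropped, no URL templating
    // is needed to keep cardinality bounded.
    var requestLatency = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "rest_client_request_duration_seconds",
            Help: "Request latency in seconds, by verb and host.",
        },
        []string{"verb", "host"},
    )

    func observe(verb string, u *url.URL, seconds float64) {
        requestLatency.WithLabelValues(verb, u.Host).Observe(seconds)
    }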
Current request latency metrics have the following buckets:
0.001, 0.002, 0.004, 0.008, 0.016, 0.032, 0.064, 0.128, 0.256, 0.512
That is too much granularity for HTTP requests, and the histogram is
capped at approximately half a second, losing visibility into the most
interesting requests: the ones that take more than 1 second.
Use the same buckets as the etcd request latency metric, with the same
upper and lower limits as the ones used in the apiserver, adding only
one more bucket:
[]float64{0.005, 0.025, 0.1, 0.25, 0.5, 1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
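Extending the sketch above, wiring the proposed buckets in is a matter
of setting Buckets in the histogram options:

    var requestLatency = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name: "rest_client_request_duration_seconds",
            Help: "Request latency in seconds, by verb and host.",
            Buckets: []float64{0.005, 0.025, 0.1, 0.25, 0.5,
                1.0, 2.0, 4.0, 8.0, 15.0, 30.0, 60.0},
        },
        []string{"verb", "host"},
    )

With the old buckets a 1.2s request fell into +Inf together with every
other slow request; with these it lands in the 2.0 bucket, keeping the
slow tail visible.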
The primary use case for the allocator is to reduce the cost of object
serialization. Initially it will be used by the protobuf serializer.
This approach puts less load on the GC and leads to less fragmented
memory in general. It allows us to allocate a single buffer for an
entire watch session and release it when the watch connection is
closed. Previously, memory was allocated for every object
serialization, putting a lot of pressure on the GC and consuming more
memory than needed.
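A simplified sketch of the idea (the real apimachinery code differs in
details):

    package allocator

    // Allocator reuses one backing buffer across many serializations
    // within a single watch session. It is not safe for concurrent use.
    type Allocator struct {
        buf []byte
    }

    // Allocate returns a slice of n bytes, growing the backing buffer
    // only when the request exceeds its current capacity. The returned
    // memory is not zeroed; callers are expected to overwrite it.
    func (a *Allocator) Allocate(n uint64) []byte {
        if uint64(cap(a.buf)) >= n {
            a.buf = a.buf[:n]
            return a.buf
        }
        a.buf = make([]byte, n)
        return a.buf
    }

The serializer calls Allocate once per object and writes into memory
that lives for the whole watch connection, instead of allocating a
fresh buffer per object.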