This is the first step towards being able to support a new plugin API version
in parallel with the existing one.
Signed-off-by: Kevin Klues <kklues@nvidia.com>
The following investigation occurred during development.
Add TimingHistogram impl that shares lock with WeightedHistogram
Benchmarking and profiling show that two layers of locking are
noticeably more expensive than one.
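To make the difference concrete, here is a minimal sketch of the two locking styles. It is not the actual prometheusextension code; all type and field names are hypothetical, and the bucket logic is reduced to the bare minimum needed to show where the locks are taken.
```
package main

import (
	"fmt"
	"sync"
	"time"
)

// weightedBuckets is a stand-in for the WeightedHistogram state; the real
// type tracks more than just bucket counts.
type weightedBuckets struct {
	mu     sync.Mutex
	upper  []float64 // ascending bucket upper bounds
	counts []uint64
}

func (w *weightedBuckets) observeWithWeight(v float64, weight uint64) {
	w.mu.Lock()
	defer w.mu.Unlock()
	for i, ub := range w.upper {
		if v <= ub {
			w.counts[i] += weight
			return
		}
	}
}

// layeredTimingHistogram wraps a weightedBuckets and adds its own mutex,
// so every Set acquires two locks.
type layeredTimingHistogram struct {
	mu          sync.Mutex
	value       float64
	lastSetTime time.Time
	inner       *weightedBuckets
}

func (h *layeredTimingHistogram) Set(v float64, now time.Time) {
	h.mu.Lock()
	defer h.mu.Unlock()
	// The inner histogram takes its own lock here: second acquisition.
	h.inner.observeWithWeight(h.value, uint64(now.Sub(h.lastSetTime)))
	h.value, h.lastSetTime = v, now
}

// directTimingHistogram keeps the timing state and the bucket counts behind
// a single mutex, so every Set acquires exactly one lock.
type directTimingHistogram struct {
	mu          sync.Mutex
	value       float64
	lastSetTime time.Time
	upper       []float64
	counts      []uint64
}

func (h *directTimingHistogram) Set(v float64, now time.Time) {
	h.mu.Lock()
	defer h.mu.Unlock()
	weight := uint64(now.Sub(h.lastSetTime))
	for i, ub := range h.upper {
		if h.value <= ub {
			h.counts[i] += weight
			break
		}
	}
	h.value, h.lastSetTime = v, now
}

func main() {
	bounds := []float64{1, 2, 4, 8}
	layered := &layeredTimingHistogram{
		lastSetTime: time.Now(),
		inner:       &weightedBuckets{upper: bounds, counts: make([]uint64, len(bounds))},
	}
	direct := &directTimingHistogram{
		lastSetTime: time.Now(),
		upper:       bounds,
		counts:      make([]uint64, len(bounds)),
	}
	layered.Set(3, time.Now())
	direct.Set(3, time.Now())
	fmt.Println(layered.inner.counts, direct.counts)
}
```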
After adding this new alternative, I now get the following benchmark
results.
```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16 22232037 52.79 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.404s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16 22190997 54.50 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.435s
```
and
```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16 28863244 40.99 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.890s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16 27994173 40.37 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.384s
```
So the new implementation is roughly 20% faster than the original.
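The numbers above come from ordinary Go testing benchmarks. The real BenchmarkTimingHistogram lives in the prometheusextension package; the following is only a hypothetical sketch of its shape, using a stand-in type rather than the actual implementation.
```
// Place in a file ending in _test.go and run with:
//   go test -benchmem -run=^$ -bench Sketch .
package histsketch

import (
	"testing"
	"time"
)

// settable is the only behavior the benchmark needs from a timing histogram.
type settable interface {
	Set(v float64, now time.Time)
}

// noopHistogram stands in for a real TimingHistogram implementation.
type noopHistogram struct{ value float64 }

func (h *noopHistogram) Set(v float64, _ time.Time) { h.value = v }

// benchmarkSet drives Set in a tight loop, advancing the clock artificially
// so every observation carries a nonzero weight.
func benchmarkSet(b *testing.B, h settable) {
	now := time.Now()
	b.ReportAllocs()
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		now = now.Add(time.Nanosecond)
		h.Set(float64(i%10), now)
	}
}

func BenchmarkTimingHistogramSketch(b *testing.B) {
	benchmarkSet(b, &noopHistogram{})
}
```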
Add overlooked exception, rename timingHistogram to timingHistogramLayered
Use the direct (one mutex) style of TimingHistogram impl
This is about a 20% gain in CPU speed on my development machine, in
benchmarks without lock contention. Following are two consecutive
trials.
```
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16 21650905 51.91 ns/op 0 B/op 0 allocs/op
BenchmarkTimingHistogramDirect-16 29876860 39.33 ns/op 0 B/op 0 allocs/op
BenchmarkWeightedHistogram-16 49227044 24.13 ns/op 0 B/op 0 allocs/op
BenchmarkHistogram-16 41063907 28.82 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 5.432s
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16 22483816 51.72 ns/op 0 B/op 0 allocs/op
BenchmarkTimingHistogramDirect-16 29697291 39.39 ns/op 0 B/op 0 allocs/op
BenchmarkWeightedHistogram-16 48919845 24.03 ns/op 0 B/op 0 allocs/op
BenchmarkHistogram-16 41153044 29.26 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 5.044s
```
Remove layered implementation of TimingHistogram
Bump cAdvisor to v0.44.1 to pick up fix for containerd task timeout
which resulted in empty network metrics.
Signed-off-by: David Porter <david@porter.me>
The read-only port could be disabled. Since we are only using the
/healthz endpoint, we can use the healthz port for this instead.
Change-Id: Ie0e05a5ab4ec6f51e4d3c63226aa23c1b3a69956
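As a small illustration of that change, a health check against the kubelet can target the healthz port directly; the address and port numbers below are the common defaults and are only assumptions for this sketch, not taken from the change itself.
```
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// 10248 is the kubelet's default healthz port; with only /healthz in use,
	// the read-only port (default 10255) can stay disabled.
	resp, err := http.Get("http://127.0.0.1:10248/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
}
```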
This change has been generated by the `update-rules` command in
`publishing-bot` repository. Since this is the first time we are
updating the rules using a script, there is a considerable amount of
diff, which is caused by the YAML marshaller.
Signed-off-by: Nabarun Pal <pal.nabarun95@gmail.com>
containerd v1.6.0 introduced HostProcessContainers support [1], which
is required for e2e tests that exercise that feature.
This addresses some of the permafailing tests for Windows GCE E2E test runs.
[1] https://github.com/containerd/containerd/pull/5131
When running integration tests without an external etcd, the framework
spawns an etcd instance in its own process and kills it once the test
stops. Instead of killing it directly, allow etcd to exit gracefully,
and kill it only if it has not exited after 5 seconds.
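A minimal sketch of that shutdown pattern follows; the function name, the choice of SIGTERM, and the direct call to the etcd binary are assumptions for illustration, not the framework's actual code.
```
package main

import (
	"os"
	"os/exec"
	"syscall"
	"time"
)

// stopEtcd asks the spawned etcd process to exit gracefully and falls back
// to a hard kill if it is still running after 5 seconds.
func stopEtcd(cmd *exec.Cmd) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited on its own within the grace period
	case <-time.After(5 * time.Second):
		_ = cmd.Process.Kill() // grace period expired; kill outright
		return <-done
	}
}

func main() {
	// Assumes an etcd binary on PATH; the test framework manages this itself.
	cmd := exec.Command("etcd", "--data-dir", os.TempDir())
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	time.Sleep(time.Second) // give it a moment to start (illustrative only)
	_ = stopEtcd(cmd)
}
```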