Leverage the usedPVCSet in the snapshot to determine
whether a PVC with ReadWriteOncePod access mode is being
used by another scheduled pod, achieving an O(1) lookup in
PreFilter and avoiding the need to process all nodes in
parallel.
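For illustration, here is a minimal sketch of that fast path, assuming a
lister with the IsPVCUsedByPods method described further below; the plugin
wiring and the ReadWriteOncePod access-mode check are simplified away, so
treat the names as illustrative rather than the exact scheduler code.
```go
// Minimal sketch of the PreFilter fast path; not the real plugin code.
package sketch

import (
	v1 "k8s.io/api/core/v1"
)

// storageInfoLister is a simplified stand-in for the scheduler's StorageInfoLister.
type storageInfoLister interface {
	// IsPVCUsedByPods reports whether the PVC (keyed "namespace/name") is
	// used by at least one scheduled pod.
	IsPVCUsedByPods(key string) bool
}

// rwopPVCInUse reports whether any PVC referenced by the pod is already used
// by a scheduled pod. Each PVC costs a single map lookup in the snapshot
// instead of a walk over all nodes.
func rwopPVCInUse(pod *v1.Pod, lister storageInfoLister) bool {
	for _, vol := range pod.Spec.Volumes {
		if vol.PersistentVolumeClaim == nil {
			continue
		}
		key := pod.Namespace + "/" + vol.PersistentVolumeClaim.ClaimName
		if lister.IsPVCUsedByPods(key) {
			return true
		}
	}
	return false
}
```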
Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
This change creates a StorageInfoLister interface
and adds it to the scheduler SharedLister.
The initial StorageInfoLister interface has an
IsPVCUsedByPods method which returns true/false
depending on whether the PVC (keyed by namespace/name)
has at least one scheduled pod using it.
In the snapshot's real implementation, add a usedPVCSet
keyed by PVC namespace/name which contains all PVCs
that have at least one scheduled pod using them.
During a snapshot update, populate this set based on
whether any node's PVCRefCounts map has been
updated since the last snapshot.
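As a rough sketch of the snapshot side (types and field names are simplified
stand-ins for the real scheduler snapshot and NodeInfo, not the exact code),
the set can be rebuilt from the per-node PVC reference counts and then serve
the O(1) lookup:
```go
// Simplified snapshot types; the real scheduler snapshot carries far more state.
package sketch

type nodeInfo struct {
	// PVCRefCounts maps a PVC key ("namespace/name") to the number of
	// scheduled pods on this node that reference it.
	PVCRefCounts map[string]int
}

type snapshot struct {
	nodeInfos []*nodeInfo
	// usedPVCSet holds the key of every PVC used by at least one scheduled pod.
	usedPVCSet map[string]struct{}
}

// updateUsedPVCSet rebuilds usedPVCSet from the nodes' PVCRefCounts; the real
// implementation only does this when some node's PVCRefCounts has changed
// since the last snapshot.
func (s *snapshot) updateUsedPVCSet() {
	s.usedPVCSet = make(map[string]struct{})
	for _, n := range s.nodeInfos {
		for key, count := range n.PVCRefCounts {
			if count > 0 {
				s.usedPVCSet[key] = struct{}{}
			}
		}
	}
}

// IsPVCUsedByPods implements the StorageInfoLister method as an O(1) lookup.
func (s *snapshot) IsPVCUsedByPods(key string) bool {
	_, ok := s.usedPVCSet[key]
	return ok
}
```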
Signed-off-by: Yibo Zhuang <yibzhuang@gmail.com>
* feat: Provide previous replica count for deployment/replica set scale up/down event
Signed-off-by: GitHub <noreply@github.com>
* change format of event
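For illustration only, the wording and replica set name below are a guess at
the shape of the resulting message, not the exact string emitted by the
controller:
```
Scaled up replica set nginx-7cf55fb7bb to 3 from 1
```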
Co-authored-by: Maciej Szulik <soltysh@gmail.com>
In a number of tests, the underlying storage backend interaction will
return the revision (logical clock underpinning the MVCC implementation)
at the call-time of the RPC. Previously, the tests validated that this
returned revision was exactly equal to some previously seen revision.
This assertion is only true in systems where no other events are
advancing the logical clock. For instance, when using a single etcd
cluster as a shared fixture for these tests, the assertion is not valid
any longer. By checking that the returned revision is no older than the
previously seen revision, the validation logic is correct in all cases.
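A hedged sketch of the relaxed assertion, using generic testify calls rather
than the exact helpers these tests use:
```go
// Sketch of the assertion change; not the real test code.
package sketch

import (
	"testing"

	"github.com/stretchr/testify/require"
)

func assertRevision(t *testing.T, lastSeenRevision, returnedRevision int64) {
	// Before: only holds when nothing else advances etcd's logical clock.
	//   require.Equal(t, lastSeenRevision, returnedRevision)
	// After: still holds when the etcd cluster is shared with other tests.
	require.GreaterOrEqual(t, returnedRevision, lastSeenRevision)
}
```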
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
This is the first step towards being able to support a new plugin API version
in parallel with the existing one.
Signed-off-by: Kevin Klues <kklues@nvidia.com>
The following investigation occurred during development.
Add TimingHistogram impl that shares lock with WeightedHistogram
Benchmarking and profiling show that two layers of locking are
noticeably more expensive than one.
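A sketch of the two locking shapes, with the histogram bookkeeping reduced to
a single sum (the real WeightedHistogram and TimingHistogram in
prometheusextension track buckets and implement the Prometheus interfaces):
```go
// Simplified illustration of layered vs. direct locking; not the real types.
package sketch

import (
	"sync"
	"time"
)

type weightedHistogram struct {
	lock sync.Mutex
	sum  float64 // stand-in for the real bucket/sum state
}

func (w *weightedHistogram) ObserveWithWeight(value float64, weight uint64) {
	w.lock.Lock()
	defer w.lock.Unlock()
	w.sum += value * float64(weight)
}

// Layered: the timing wrapper takes its own lock, then ObserveWithWeight
// takes the histogram's lock — two lock acquisitions per Set.
type timingHistogramLayered struct {
	lock        sync.Mutex
	weighted    *weightedHistogram
	lastSetTime time.Time
	value       float64
	clock       func() time.Time
}

func (h *timingHistogramLayered) Set(v float64) {
	h.lock.Lock()
	defer h.lock.Unlock()
	now := h.clock()
	h.weighted.ObserveWithWeight(h.value, uint64(now.Sub(h.lastSetTime)))
	h.lastSetTime, h.value = now, v
}

// Direct: the timing state lives next to the histogram state and shares its
// single lock — one lock acquisition per Set.
type timingHistogramDirect struct {
	lock        sync.Mutex
	sum         float64
	lastSetTime time.Time
	value       float64
	clock       func() time.Time
}

func (h *timingHistogramDirect) Set(v float64) {
	h.lock.Lock()
	defer h.lock.Unlock()
	now := h.clock()
	h.sum += h.value * float64(now.Sub(h.lastSetTime))
	h.lastSetTime, h.value = now, v
}
```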
After adding this new alternative, I now get the following benchmark
results.
```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16 22232037 52.79 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.404s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogram$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogram-16 22190997 54.50 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.435s
```
and
```
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16 28863244 40.99 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.890s
(base) mspreitz@mjs12 kubernetes % go test -benchmem -run=^$ -bench ^BenchmarkTimingHistogramDirect$ k8s.io/component-base/metrics/prometheusextension
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramDirect-16 27994173 40.37 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 1.384s
```
So the new implementation is roughly 20% faster than the original.
Add overlooked exception, rename timingHistogram to timingHistogramLayered
Use the direct (one mutex) style of TimingHistogram impl
This is about a 20% gain in CPU speed on my development machine, in
benchmarks without lock contention. Following are two consecutive
trials.
```
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16 21650905 51.91 ns/op 0 B/op 0 allocs/op
BenchmarkTimingHistogramDirect-16 29876860 39.33 ns/op 0 B/op 0 allocs/op
BenchmarkWeightedHistogram-16 49227044 24.13 ns/op 0 B/op 0 allocs/op
BenchmarkHistogram-16 41063907 28.82 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 5.432s
(base) mspreitz@mjs12 prometheusextension % go test -benchmem -run=^$ -bench Histogram .
goos: darwin
goarch: amd64
pkg: k8s.io/component-base/metrics/prometheusextension
cpu: Intel(R) Core(TM) i9-9880H CPU @ 2.30GHz
BenchmarkTimingHistogramLayered-16 22483816 51.72 ns/op 0 B/op 0 allocs/op
BenchmarkTimingHistogramDirect-16 29697291 39.39 ns/op 0 B/op 0 allocs/op
BenchmarkWeightedHistogram-16 48919845 24.03 ns/op 0 B/op 0 allocs/op
BenchmarkHistogram-16 41153044 29.26 ns/op 0 B/op 0 allocs/op
PASS
ok k8s.io/component-base/metrics/prometheusextension 5.044s
```
Remove layered implementation of TimingHistogram
Previous work by liggitt in 01760927b8 improved the boilerplate
required to run an embedded etcd server for tests as well as set up the
`*etcd3.store{}` for testing. A number of tests were not ported to use the
new helpers, though, either due to custom setup or due to inconsistent
use of setup options. A follow-up by stevekuznetsov in 6aa37eb062
removed much of the inconsistency, meaning that most callers to
`newStore()` were simply using the default boilerplate and options that
`testSetup()` used.
This patch moves all users to `testSetup()`, adding options as necessary
to enable some fringe setup use-cases. With a unified setup, new tests
will not copy boilerplate they do not need and it will be immediately
obvious when reading a test if the client or storage setup is *not*
default, improving readability.
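As a hedged usage sketch, assuming the helper returns the context, store, and
client (withPrefix is an illustrative option name, not necessarily a real
one):
```go
// Hypothetical tests illustrating the pattern; testSetup is the helper
// described above, and the option name is made up for the example.
func TestDefaultSetup(t *testing.T) {
	ctx, store, client := testSetup(t) // embedded etcd, client, and *etcd3.store in one call
	_, _, _ = ctx, store, client       // elided: the actual test body
}

func TestNonDefaultSetup(t *testing.T) {
	// Deviations from the default setup are visible at the call site as options.
	ctx, store, client := testSetup(t, withPrefix("/custom"))
	_, _, _ = ctx, store, client // elided: the actual test body
}
```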
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>