- add a new header, "X-Load-Balancing-Endpoint-Weight", returned from service health. The value of the header is the number of local endpoints. The header can be used in weighted load balancing. Parsing the header for the number of endpoints is faster than unmarshalling the JSON content body (see the sketch below).
- add missing unit tests for the new and old headers returned from service health
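A minimal sketch of how a consumer might read the header (the helper and the URL are illustrative, not part of the change):

    package main

    import (
        "fmt"
        "net/http"
        "strconv"
    )

    // endpointWeight reads the local endpoint count straight from the
    // header instead of unmarshalling the JSON health response body.
    func endpointWeight(resp *http.Response) (int, error) {
        v := resp.Header.Get("X-Load-Balancing-Endpoint-Weight")
        if v == "" {
            return 0, fmt.Errorf("X-Load-Balancing-Endpoint-Weight header not set")
        }
        return strconv.Atoi(v)
    }

    func main() {
        // URL is illustrative; point it at the service health endpoint.
        resp, err := http.Get("http://example.invalid/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        weight, err := endpointWeight(resp)
        if err != nil {
            panic(err)
        }
        fmt.Println("endpoint weight:", weight)
    }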
Doing the initialization once was not good enough because it was not guaranteed that RunCustomEtcd got called early enough, i.e. before other goroutines started using gRPC. The data race for test/integration/apiserver.TestWatchCacheUpdatedByEtcd was:
WARNING: DATA RACE
Read at 0x00000cfffb90 by goroutine 140052:
  k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog.V()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog/grpclog.go:41 +0x30
  k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog.(*componentData).V()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog/component.go:103 +0x4e
  k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).Close()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:955 +0xca
  k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1619 +0xbfb
  k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func11()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:394 +0x47

Previous write at 0x00000cfffb90 by goroutine 145643:
  k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog.SetLoggerV2()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/grpclog/loggerv2.go:75 +0x104
  k8s.io/kubernetes/test/integration/framework.RunCustomEtcd.func2()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/etcd.go:157 +0x33
  sync.(*Once).doSlow()
      /usr/local/go/src/sync/once.go:74 +0x101
  sync.(*Once).Do()
      /usr/local/go/src/sync/once.go:65 +0x46
  k8s.io/kubernetes/test/integration/framework.RunCustomEtcd()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/framework/etcd.go:156 +0xb97
  k8s.io/kubernetes/test/integration/apiserver.multiEtcdSetup()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/apiserver/watchcache_test.go:41 +0xc4
  k8s.io/kubernetes/test/integration/apiserver.TestWatchCacheUpdatedByEtcd()
      /home/prow/go/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/integration/apiserver/watchcache_test.go:92 +0xa9
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1576 +0x216
  testing.(*T).Run.func1()
      /usr/local/go/src/testing/testing.go:1629 +0x47
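A sketch of the shape of the fix (assuming the one-time setup moves into a package init(), which Go runs before main and therefore before any test goroutine exists; the logger construction itself is illustrative):

    package framework

    import (
        "os"

        "google.golang.org/grpc/grpclog"
    )

    // init runs during program start-up, before tests spawn goroutines
    // that use gRPC, so SetLoggerV2 cannot race with grpclog readers.
    func init() {
        grpclog.SetLoggerV2(grpclog.NewLoggerV2(os.Stderr, os.Stderr, os.Stderr))
    }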
This commit removes the legacy networkpolicy tests since the new netpol suite now provides complete coverage for them.
Signed-off-by: Andrew Stoycos <astoycos@redhat.com>
We have an e2e test which wants to trigger a rate limit error. To do so, we
send an abnormally high number of calls in a tight loop.
The relevant test itself is reportedly fine, but we need to play nicer
with *other* tests which may run just after it and which need to query the API.
If the test suite runs "too fast", an innocent test may fall in the
same rate limit window that was saturated by the ratelimit test,
so the innocent test can still be throttled because the throttling period
is not exhausted yet, yielding false negatives and leading to flakes.
We can't reset the rate limit period, so we just wait "long enough" to
make sure we absorb the burst and other legitimate queries are not rejected.
Signed-off-by: Francesco Romani <fromani@redhat.com>
event is not passed to QueueingHintFn, but a comment about it still exists.
event is unnecessary in QueueingHintFn because QueueingHintFn is used in
ClusterEventWithHint, and ClusterEventWithHint already has the ClusterEvent.
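A simplified schematic of that relationship (shapes are abbreviated, not the scheduler framework's literal definitions):

    // ClusterEvent describes a cluster change the scheduler reacts to.
    type ClusterEvent struct {
        Resource   string
        ActionType string
    }

    // QueueingHintFn decides whether a change may make the pod schedulable.
    // It does not need the event passed in, because it is always registered
    // next to one.
    type QueueingHintFn func(pod, oldObj, newObj interface{}) (bool, error)

    // ClusterEventWithHint pairs the event with its hint function, so the
    // event is already known wherever the hint is invoked.
    type ClusterEventWithHint struct {
        Event          ClusterEvent
        QueueingHintFn QueueingHintFn
    }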
Signed-off-by: Shingo Omura <everpeace@gmail.com>
This releases the underlying resource sooner and ensures that another consumer
can get scheduled without being influenced by a decision that was made for the
previous consumer.
An alternative would have been to have the apiserver trigger the deallocation
whenever it sees the `status.reservedFor` getting reduced to zero. But that
then also triggers deallocation when kube-scheduler removes the last
reservation after a failed scheduling cycle. In that case we want to keep the
claim allocated and let the kube-scheduler decide on a case-by-case basis which
claim should get deallocated.
add integration test to wait for JSON without a value
refactor JSON condition value parsing and validation
adjust tests to reflect the error message refactoring
* client-go: add DNS resolver latency metrics
* client-go: add locking to DNS latency metrics
* client-go: add locking for whole DNSStart and DNSDone
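A rough sketch of the pattern these commits describe, using net/http/httptrace (the type and function names are illustrative, not client-go's actual ones):

    package dnsmetrics

    import (
        "context"
        "net/http/httptrace"
        "sync"
        "time"
    )

    // latencyTracer records DNS lookup latency for one request. The mutex
    // guards start against DNSStart and DNSDone firing on different
    // goroutines, mirroring the locking added above.
    type latencyTracer struct {
        mu    sync.Mutex
        start time.Time
    }

    // WithDNSLatency returns a context whose HTTP requests report DNS
    // resolution latency through observe.
    func WithDNSLatency(ctx context.Context, observe func(time.Duration)) context.Context {
        t := &latencyTracer{}
        trace := &httptrace.ClientTrace{
            DNSStart: func(httptrace.DNSStartInfo) {
                t.mu.Lock()
                defer t.mu.Unlock()
                t.start = time.Now()
            },
            DNSDone: func(httptrace.DNSDoneInfo) {
                t.mu.Lock()
                defer t.mu.Unlock()
                observe(time.Since(t.start))
            },
        }
        return httptrace.WithClientTrace(ctx, trace)
    }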
Signed-off-by: Vu Dinh <vudinh@outlook.com>
* Fix a mismatched ctx on the request
Signed-off-by: Vu Dinh <vudinh@outlook.com>
* Clean up request code and fix comments
Signed-off-by: Vu Dinh <vudinh@outlook.com>
---------
Signed-off-by: Vu Dinh <vudinh@outlook.com>
Co-authored-by: Vu Dinh <vudinh@outlook.com>
ginkgo.By should be used for steps in the test flow. Creating and deleting CDI
files happens in parallel to that. If that work is reported via ginkgo.By,
progress reports look odd because they contain e.g. the step "waiting for...."
(from the main test, which is still ongoing) and end with "creating CDI file"
(which has already completed).
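A sketch of the convention being applied (helper names and the test body are illustrative):

    package e2e

    import (
        "github.com/onsi/ginkgo/v2"

        "k8s.io/kubernetes/test/e2e/framework"
    )

    func createCDIFile(path string) {
        // Runs in parallel to the main test flow, so it only logs;
        // ginkgo.By is reserved for steps of the linear flow.
        framework.Logf("creating CDI file %s", path)
        // ... write the file ...
    }

    var _ = ginkgo.It("plugin registers the device", func() {
        go createCDIFile("/tmp/example.json") // background work, not a step

        ginkgo.By("waiting for the driver to report the device")
        // ... poll until the device shows up ...
    })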