Merge pull request #63657 from shyamjvs/remove-gc-qps-bump

Automatic merge from submit-queue (batch tested with PRs 63424, 63657). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove 20x factor in garbage-collector qps

Fixes https://github.com/kubernetes/kubernetes/issues/63610

I discussed this offline with @wojtek-t. Of the two options:

- Increasing the qps 20x to compensate for the earlier "bloated" qps (this kind of assumes that we have ~20 resource types in the cluster)
- Keeping the qps same to make sure that we don't overwhelm the apiserver with the new re-adjusted qps (like what seems to happen with our performance tests where we mostly just have ~1 resource type)

we agreed that the latter seems less risky, as it's probably better to have the GC run slower than to make our API call latencies shoot up.
That said, we can try to increase it later if it's justifiable.
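For context on what the removed 20x multiplier was scaling: the `QPS` and `Burst` fields on the client config are token-bucket rate-limiter parameters. Below is a minimal, self-contained sketch of those semantics (a hypothetical standalone model for illustration, not the actual client-go rate limiter): tokens refill at `qps` per second up to a cap of `burst`, and each request consumes one token.

```go
package main

import (
	"fmt"
	"time"
)

// tokenBucket models the client-side rate-limiter semantics behind the
// client config's QPS and Burst fields.
type tokenBucket struct {
	qps, burst, tokens float64
	last               time.Time
}

func newTokenBucket(qps, burst float64, start time.Time) *tokenBucket {
	// Start full so a fresh client can burst immediately.
	return &tokenBucket{qps: qps, burst: burst, tokens: burst, last: start}
}

// allow reports whether a request at time now may proceed, consuming a
// token if so.
func (b *tokenBucket) allow(now time.Time) bool {
	b.tokens += now.Sub(b.last).Seconds() * b.qps
	if b.tokens > b.burst {
		b.tokens = b.burst
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

// burstAllowed counts how many of `attempts` back-to-back requests at
// the same instant get through: only a burst-sized prefix passes.
func burstAllowed(qps, burst float64, attempts int) int {
	start := time.Now()
	b := newTokenBucket(qps, burst, start)
	allowed := 0
	for i := 0; i < attempts; i++ {
		if b.allow(start) {
			allowed++
		}
	}
	return allowed
}

func main() {
	// With QPS=5 and Burst=10, 10 of 20 simultaneous requests pass;
	// afterwards throughput settles to ~5 requests per second.
	fmt.Println(burstAllowed(5, 10, 20))
}
```

Multiplying both fields by 20 thus let the GC's dynamic client issue 20x more requests per second, which is what was overwhelming the apiserver in the performance tests.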

cc @kubernetes/sig-api-machinery-misc @deads2k @wojtek-t 

```release-note
GC is now bound by QPS (it wasn't before), so if you need more QPS to avoid rate-limiting GC, you'll have to set it.
```
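Following the release note, operators who see the garbage collector being throttled can raise the controller-manager's client rate limits via its `--kube-api-qps` and `--kube-api-burst` flags. A hedged sketch (the values below are illustrative, not recommendations; tune for your cluster):

```shell
# Illustrative invocation: raise the client QPS/Burst used by the
# controller manager so GC is less likely to be rate-limited.
kube-controller-manager \
  --kube-api-qps=100 \
  --kube-api-burst=150
```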
This commit is contained in:
Kubernetes Submit Queue 2018-05-10 06:31:19 -07:00 committed by GitHub
commit d5ab3559ad


```diff
@@ -350,9 +350,6 @@ func startGarbageCollectorController(ctx ControllerContext) (bool, error) {
 	discoveryClient := cacheddiscovery.NewMemCacheClient(gcClientset.Discovery())
 	config := ctx.ClientBuilder.ConfigOrDie("generic-garbage-collector")
-	// bump QPS limits on our dynamic client that we use to GC every deleted object
-	config.QPS *= 20
-	config.Burst *= 20
 	dynamicClient, err := dynamic.NewForConfig(config)
 	if err != nil {
 		return true, err
```