Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-24 20:24:09 +00:00)
Merge pull request #63657 from shyamjvs/remove-gc-qps-bump
Automatic merge from submit-queue (batch tested with PRs 63424, 63657). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

**Remove 20x factor in garbage-collector qps**

Fixes https://github.com/kubernetes/kubernetes/issues/63610

I discussed this offline with @wojtek-t. Between the two options:

- increasing the QPS 20x to compensate for the earlier "bloated" QPS (which assumes the cluster has roughly 20 resource types), and
- keeping the QPS the same to make sure the re-adjusted QPS doesn't overwhelm the apiserver (as seems to happen in our performance tests, where we mostly have only about one resource type),

we agreed that the latter is the less risky choice: it's probably better to have a slower GC than to make our API call latencies shoot up. That said, we can try increasing it later if that turns out to be justified.

cc @kubernetes/sig-api-machinery-misc @deads2k @wojtek-t

```release-note
GC is now bound by QPS (it wasn't before), so if you need more QPS to avoid rate-limiting GC, you'll have to set it.
```
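For illustration, here is a minimal, standalone Go sketch (not code from this PR) of the client-side token-bucket rate limiter that a `rest.Config`'s `QPS` and `Burst` feed into. With the 20x multiplier removed, whatever QPS/Burst the controller manager's client config carries is what bounds the garbage collector's dynamic client. The `qps=20` / `burst=30` values below are assumed defaults used only for illustration, not numbers taken from this PR.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/flowcontrol"
)

func main() {
	// Assumed values for illustration; they mirror common kube-controller-manager
	// client defaults (--kube-api-qps=20, --kube-api-burst=30), not values from this PR.
	qps := float32(20)
	burst := 30

	// rest.Config.QPS and rest.Config.Burst are turned into a token-bucket limiter
	// like this one inside client-go; with the 20x bump removed, the GC's dynamic
	// client is throttled by exactly these settings.
	limiter := flowcontrol.NewTokenBucketRateLimiter(qps, burst)

	start := time.Now()
	const calls = 100
	for i := 0; i < calls; i++ {
		limiter.Accept() // blocks until the bucket has a token for this simulated "API call"
	}
	elapsed := time.Since(start)
	fmt.Printf("%d simulated calls took %v (~%.0f effective qps)\n",
		calls, elapsed, float64(calls)/elapsed.Seconds())
}
```

Under the new behavior, an operator who sees GC falling behind would raise the controller manager's client QPS/Burst settings explicitly rather than relying on the old implicit 20x bump.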
This commit is contained in: commit d5ab3559ad
```diff
@@ -350,9 +350,6 @@ func startGarbageCollectorController(ctx ControllerContext) (bool, error) {
 	discoveryClient := cacheddiscovery.NewMemCacheClient(gcClientset.Discovery())
 
 	config := ctx.ClientBuilder.ConfigOrDie("generic-garbage-collector")
-	// bump QPS limits on our dynamic client that we use to GC every deleted object
-	config.QPS *= 20
-	config.Burst *= 20
 	dynamicClient, err := dynamic.NewForConfig(config)
 	if err != nil {
 		return true, err
```