mirror of https://github.com/k3s-io/kubernetes.git
kube-controller-manager: Add configure-cloud-routes option
This allows kube-controller-manager to allocate CIDRs to nodes (with allocate-node-cidrs=true) without trying to configure them on the cloud provider, even if the cloud provider supports Routes. The default is configure-cloud-routes=true, and routes are only configured when allocate-node-cidrs is also set, so the default behaviour is unchanged.

This is useful because on AWS the cloud provider configures routes by setting up VPC routing table entries, but there is a limit of 50 entries. Setting configure-cloud-routes=false on AWS therefore lets us continue to allocate node CIDRs as today, while replacing the VPC route-table mechanism with something that is not limited to 50 nodes. We can't simply turn off the cloud provider entirely, because it also controls other things: node discovery, load balancer creation, etc.

Fix #25602
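As an illustrative sketch only (not taken from this change), a controller manager on AWS could then keep allocating pod CIDRs to nodes while leaving route programming to some other mechanism; the cluster CIDR below is an arbitrary example value:

    # Sketch: allocate pod CIDRs to nodes, but do not program the AWS VPC route table.
    # Route propagation for the allocated CIDRs must then be handled elsewhere
    # (e.g. an overlay network or a separate route controller).
    kube-controller-manager \
      --cloud-provider=aws \
      --allocate-node-cidrs=true \
      --cluster-cidr=10.244.0.0/16 \
      --configure-cloud-routes=false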
@@ -67,6 +67,7 @@ kube-controller-manager
 --concurrent-replicaset-syncs=5: The number of replica sets that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
 --concurrent-resource-quota-syncs=5: The number of resource quotas that are allowed to sync concurrently. Larger number = more responsive quota management, but more CPU (and network) load
 --concurrent_rc_syncs=5: The number of replication controllers that are allowed to sync concurrently. Larger number = more responsive replica management, but more CPU (and network) load
+--configure-cloud-routes[=true]: Should CIDRs allocated by allocate-node-cidrs be configured on the cloud provider.
 --controller-start-interval=0: Interval between starting controller managers.
 --daemonset-lookup-cache-size=1024: The size of lookup cache for daemonsets. Larger number = more responsive daemonsets, but more MEM load.
 --deleting-pods-burst=10: Number of nodes on which pods are bursty deleted in case of node failure. For more details look into RateLimiter.