Merge pull request #18819 from wojtek-t/flag_gate_second_etcd

Auto commit by PR queue bot
k8s-merge-robot 2015-12-20 00:36:58 -08:00
commit 2eea4c0e8f
3 changed files with 18 additions and 1 deletion

@@ -67,6 +67,8 @@ touch /var/log/etcd-events.log:
         server_port: 2380
         cpulimit: '"200m"'
 
+# Switch on second etcd instance if there are more than 50 nodes.
+{% if pillar['num_nodes'] is defined and pillar['num_nodes'] > 50 -%}
 /etc/kubernetes/manifests/etcd-events.manifest:
   file.managed:
     - source: salt://etcd/etcd.manifest
@@ -81,3 +83,4 @@ touch /var/log/etcd-events.log:
         port: 4002
         server_port: 2381
         cpulimit: '"100m"'
+{% endif -%}
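For readers who don't read Salt/Jinja, the gate above boils down to a simple threshold check: one etcd instance always runs, and a second, events-only instance (client port 4002, peer port 2381, a smaller CPU limit) is added once `num_nodes` exceeds 50. Below is a minimal, illustrative Go sketch of that decision; the type and function names are invented for this example, while the threshold, ports, and CPU limits are taken from the manifest context in the diff.

```go
// Illustrative only: mirrors the pillar gate above in plain Go.
// The threshold (50), ports, and CPU limits come from the diff;
// nothing here is actual Kubernetes code.
package main

import "fmt"

const largeClusterThreshold = 50

type etcdInstance struct {
	name       string
	clientPort int
	peerPort   int
	cpuLimit   string
}

// etcdInstancesFor returns the etcd instances the salt state above would
// lay down as static-pod manifests for a cluster of the given size.
func etcdInstancesFor(numNodes int) []etcdInstance {
	instances := []etcdInstance{
		{name: "etcd", clientPort: 4001, peerPort: 2380, cpuLimit: "200m"},
	}
	if numNodes > largeClusterThreshold {
		// Second, dedicated instance for events, as gated in the salt state.
		instances = append(instances, etcdInstance{
			name: "etcd-events", clientPort: 4002, peerPort: 2381, cpuLimit: "100m",
		})
	}
	return instances
}

func main() {
	for _, n := range []int{3, 51} {
		fmt.Printf("num_nodes=%d -> %v\n", n, etcdInstancesFor(n))
	}
}
```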

@@ -41,7 +41,11 @@
 {% endif -%}
 {% set etcd_servers = "--etcd-servers=http://127.0.0.1:4001" -%}
-{% set etcd_servers_overrides = "--etcd-servers-overrides=/events#http://127.0.0.1:4002" -%}
+{% set etcd_servers_overrides = "" -%}
+# If there are more than 50 nodes, there is a dedicated etcd instance for events.
+{% if pillar['num_nodes'] is defined and pillar['num_nodes'] > 50 -%}
+  {% set etcd_servers_overrides = "--etcd-servers-overrides=/events#http://127.0.0.1:4002" -%}
+{% endif -%}
 {% set service_cluster_ip_range = "" -%}
 {% if pillar['service_cluster_ip_range'] is defined -%}
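The value being gated here is kube-apiserver's `--etcd-servers-overrides` flag, which routes selected resources to different etcd servers than the default `--etcd-servers` list. The following is a simplified, standalone Go sketch of how such an override value can be interpreted, assuming the flag's comma-separated `group/resource#server;server` convention; the real parsing and routing happen inside kube-apiserver, so treat this only as an illustration.

```go
// Simplified sketch of how an "--etcd-servers-overrides" value like
// "/events#http://127.0.0.1:4002" can be interpreted. This is not the
// apiserver's actual implementation, just an illustration of the
// "group/resource#server;server" convention used by the flag.
package main

import (
	"fmt"
	"strings"
)

// parseOverrides maps a resource key (e.g. "/events") to the etcd servers
// that should be used for it instead of the default --etcd-servers list.
func parseOverrides(flagValue string) map[string][]string {
	overrides := map[string][]string{}
	if flagValue == "" {
		return overrides
	}
	for _, entry := range strings.Split(flagValue, ",") {
		parts := strings.SplitN(entry, "#", 2)
		if len(parts) != 2 {
			continue // malformed entry; the real apiserver rejects these
		}
		overrides[parts[0]] = strings.Split(parts[1], ";")
	}
	return overrides
}

func main() {
	defaultServers := []string{"http://127.0.0.1:4001"}
	overrides := parseOverrides("/events#http://127.0.0.1:4002")

	for _, resource := range []string{"/pods", "/events"} {
		servers := defaultServers
		if o, ok := overrides[resource]; ok {
			servers = o
		}
		fmt.Printf("%-8s -> %v\n", resource, servers)
	}
}
```

With the gate in place, clusters of 50 nodes or fewer get an empty override string, so events stay in the single etcd on port 4001.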

@@ -62,6 +62,16 @@ To avoid running into cloud provider quota issues, when creating a cluster with
* Target pools
* Gating the setup script so that it brings up new node VMs in smaller batches with waits in between, because some cloud providers rate limit the creation of VMs.
### Etcd storage
To improve the performance of large clusters, we store events in a separate, dedicated etcd instance.
When creating a cluster, the existing salt scripts:
* start and configure an additional etcd instance
* configure the api-server to use it for storing events
However, this is done only for clusters with more than 50 nodes.
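Assuming the conventional `/registry/events` key prefix that Kubernetes used with etcd v2 at the time (an assumption, not something this PR changes), a quick way to confirm the split on the master of a large cluster is to probe both instances over the etcd v2 HTTP API. This is an operator-style spot check, not part of the salt scripts:

```go
// Rough spot check: on the master of a >50-node cluster, events should show
// up under the dedicated instance on port 4002, while the rest of the
// registry stays on port 4001. Ports come from the diff; the
// "/registry/events" prefix is an assumption about the key layout.
package main

import (
	"fmt"
	"io"
	"net/http"
)

func probe(url string) {
	resp, err := http.Get(url)
	if err != nil {
		fmt.Printf("%s -> error: %v\n", url, err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s -> %s (%d bytes)\n", url, resp.Status, len(body))
}

func main() {
	// Main etcd instance: everything except events.
	probe("http://127.0.0.1:4001/v2/keys/registry")
	// Dedicated events instance, only present on clusters with >50 nodes.
	probe("http://127.0.0.1:4002/v2/keys/registry/events")
}
```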
### Addon Resources
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory they can consume (see PRs [#10653](http://pr.k8s.io/10653/files) and [#10778](http://pr.k8s.io/10778/files)).
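As a rough illustration of what such a limit looks like in Kubernetes' own Go types, the sketch below builds a container spec with CPU and memory limits. It uses today's `k8s.io/api` import paths, which postdate this commit, and example values rather than the ones set for any particular addon:

```go
// Illustrative only: a container spec with CPU/memory limits, expressed with
// current Kubernetes Go types. Names and values are examples, not taken from
// the addon PRs referenced above.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	container := v1.Container{
		Name:  "addon-example", // hypothetical addon container
		Image: "example/addon:latest",
		Resources: v1.ResourceRequirements{
			Limits: v1.ResourceList{
				v1.ResourceCPU:    resource.MustParse("100m"),
				v1.ResourceMemory: resource.MustParse("50Mi"),
			},
		},
	}
	fmt.Printf("limits: cpu=%s memory=%s\n",
		container.Resources.Limits.Cpu(), container.Resources.Limits.Memory())
}
```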