bridge-nf-call-iptables appears to be relevant only when the containers are
attached to a Linux bridge, which is usually the case with default Kubernetes
setups, Docker, and Flannel. That ensures the container traffic is actually
subject to the iptables rules, since it traverses a Linux bridge and bridged
traffic is only subject to iptables when bridge-nf-call-iptables=1.
But with other networking solutions (like openshift-sdn) that don't use Linux
bridges, bridge-nf-call-iptables may not be relevant, because iptables is
invoked at other points that don't involve a Linux bridge.
The decision to set bridge-nf-call-iptables should be influenced by the
networking plugin, so push the responsibility out to the plugins. If no
network plugin is specified, fall back to the existing
bridge-nf-call-iptables=1 behavior.
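For illustration, a minimal Go sketch of that fallback, assuming a
hypothetical ensureBridgeNFCallIptables helper (the real kubelet call site
and plugin interface differ):

```go
package network

import (
	"fmt"
	"os"
)

// ensureBridgeNFCallIptables is a hypothetical helper illustrating the
// fallback: when no network plugin is configured, keep the existing
// behavior of setting bridge-nf-call-iptables=1 so that bridged container
// traffic is still processed by iptables. A configured plugin that does
// not use a Linux bridge can skip this.
func ensureBridgeNFCallIptables(networkPluginName string) error {
	if networkPluginName != "" {
		// Defer the decision to the plugin.
		return nil
	}
	const sysctlPath = "/proc/sys/net/bridge/bridge-nf-call-iptables"
	if err := os.WriteFile(sysctlPath, []byte("1"), 0644); err != nil {
		return fmt.Errorf("can't set sysctl %s: %v", sysctlPath, err)
	}
	return nil
}
```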
Add aws cloud config:
[global]
disableSecurityGroupIngress = true
The aws provider creates an inbound rule per load balancer on the node
security group. However, this can quickly run into the AWS security
group rule limit of 50.
This disables the automatic ingress creation. It requires that the user
has set up a rule that allows inbound traffic on kubelet ports from the
local VPC subnet (so load balancers can access it), e.g. `10.82.0.0/16
30000-32000`.
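For illustration only, a Go sketch of how the flag could gate the rule
creation (the struct and method names are assumptions, not the provider's
actual API):

```go
package aws

import "fmt"

// CloudConfig mirrors the [global] section of the cloud config file;
// only the field relevant to this change is shown.
type CloudConfig struct {
	Global struct {
		DisableSecurityGroupIngress bool
	}
}

// Cloud is a stripped-down stand-in for the AWS cloud provider.
type Cloud struct {
	cfg CloudConfig
}

// ensureLoadBalancerIngress is a hypothetical helper: when automatic ingress
// creation is disabled, skip adding a per-load-balancer rule to the node
// security group and rely on a pre-existing rule that allows the VPC subnet
// to reach the kubelet ports (e.g. 10.82.0.0/16, ports 30000-32000).
func (c *Cloud) ensureLoadBalancerIngress(nodeSG, lbSG string) error {
	if c.cfg.Global.DisableSecurityGroupIngress {
		// The operator manages node ingress rules manually, staying under
		// the per-security-group rule limit.
		return nil
	}
	return c.addIngressRule(nodeSG, lbSG)
}

// addIngressRule stands in for the call that creates the inbound rule.
func (c *Cloud) addIngressRule(nodeSG, lbSG string) error {
	return fmt.Errorf("not implemented: would allow %s to accept traffic from %s", nodeSG, lbSG)
}
```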
Limits: http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Appendix_Limits.html#vpc-limits-security-groups
Authors: @jsravn, @balooo
When finding an instance by node name in AWS, only retrieve running
instances. Otherwise, terminated old nodes can show up with the same tag
when rebuilding nodes in the cluster.
Another improvement is to filter instances by the provided node names,
rather than selecting all instances and filtering in code.
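As a rough illustration (using aws-sdk-go, which may not be the library the
provider actually uses; the function name is hypothetical), the server-side
filtering could look like:

```go
package aws

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/ec2"
)

// findRunningInstancesByNodeNames asks EC2 to filter server-side: only
// running instances whose private DNS name matches one of the given node
// names are returned, so terminated instances with stale tags are excluded.
// Pagination (NextToken) is omitted for brevity.
func findRunningInstancesByNodeNames(svc *ec2.EC2, nodeNames []string) ([]*ec2.Instance, error) {
	input := &ec2.DescribeInstancesInput{
		Filters: []*ec2.Filter{
			{Name: aws.String("instance-state-name"), Values: []*string{aws.String("running")}},
			{Name: aws.String("private-dns-name"), Values: aws.StringSlice(nodeNames)},
		},
	}
	out, err := svc.DescribeInstances(input)
	if err != nil {
		return nil, err
	}
	var instances []*ec2.Instance
	for _, reservation := range out.Reservations {
		instances = append(instances, reservation.Instances...)
	}
	return instances, nil
}
```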
Authors: @jsravn, @chbatey, @balooo
cleanupTerminatedPods is responsible for checking whether a pod has been
terminated and forcing a status update to trigger the pod deletion. However,
this function is called in the periodic cleanup routine, which runs every 2
seconds. In other words, it forces a status update for each non-running (and
not yet deleted in the apiserver) pod. When batch deleting tens of pods, the
rate of new updates surpasses what the status manager can handle, causing
numerous redundant requests (and the status channel to be full).
This change forces a status update only when detecting that the
DeletionTimestamp is set for a terminated pod. Note that for other,
non-terminated pods, the pod workers are responsible for setting the correct
status after killing all the containers.
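A simplified Go sketch of the new condition (the helper names and the
terminated check are illustrative, not the kubelet's exact code):

```go
package status

import v1 "k8s.io/api/core/v1"

// needsForcedStatusUpdate reports whether the cleanup loop should force a
// status update for this pod: only when the pod is already terminated AND
// the apiserver has set its DeletionTimestamp. Everything else is left to
// the pod workers, avoiding a flood of redundant updates on every
// 2-second cleanup tick.
func needsForcedStatusUpdate(pod *v1.Pod) bool {
	if pod.DeletionTimestamp == nil {
		// Not marked for deletion; nothing to force.
		return false
	}
	return isTerminated(pod)
}

// isTerminated treats a pod in a terminal phase as terminated; the kubelet's
// real check also inspects container statuses.
func isTerminated(pod *v1.Pod) bool {
	return pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed
}
```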
- Add Makefile targets
- Start services in the test harness and connect locally
- Build the test into a binary and copy it to the remote host to start services
- Use tar to copy binaries to remote hosts to simplify the design