The iptables monitor was using iptables -L to list the chains,
without the -n option, so it was doing reverse DNS lookups.
A side effect is that it held the xtables lock while doing so,
so other components could not use it.
We can use -S instead of -L -n to avoid this, since we only want
to check that the chain exists.
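
As a rough illustration (a minimal sketch, not the monitor's actual code; the chain name is just an example), the check can shell out to iptables -S, which exits non-zero when the chain is missing:

```go
package main

import (
	"fmt"
	"os/exec"
)

// chainExists checks for a chain with "iptables -S <chain>", which prints the
// chain's rules in iptables-save syntax and exits non-zero if the chain is
// missing, without the reverse DNS lookups that a plain "-L" triggers.
// "-w" makes iptables wait for the xtables lock instead of failing outright.
func chainExists(table, chain string) bool {
	return exec.Command("iptables", "-w", "-t", table, "-S", chain).Run() == nil
}

func main() {
	fmt.Println(chainExists("nat", "KUBE-SERVICES")) // example chain name
}
```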
iptables has two options to modify its behaviour when trying to
acquire the lock:

  --wait          -w [seconds]  maximum time to wait for the xtables
                                lock before giving up
  --wait-interval -W [usecs]    interval between attempts to acquire
                                the xtables lock (default: 1 second)
Kubernetes uses -w 5, which means iptables waits up to 5 seconds to
acquire the lock. If it cannot acquire it, kube-proxy fails and
retries 30 seconds later, which is a significant penalty for
latency-sensitive applications.
We can be a bit more aggressive and retry the lock every 100 msec
(-W 100000); that way we would have to fail 50 times in a row before
giving up.
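
A sketch of how the two flags combine when shelling out to iptables (the helper name and the listed chain are illustrative assumptions, not kube-proxy's real code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// lockWaitArgs shows how the two flags fit together:
//   -w 5       give up on the xtables lock after 5 seconds total
//   -W 100000  retry every 100000 microseconds (100 ms), i.e. up to ~50
//              attempts before the 5-second budget runs out.
func lockWaitArgs() []string {
	return []string{"-w", "5", "-W", "100000"}
}

func main() {
	args := append(lockWaitArgs(), "-t", "nat", "-S", "KUBE-SERVICES")
	out, err := exec.Command("iptables", args...).CombinedOutput()
	fmt.Println(string(out), err)
}
```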
Kubelet and kube-proxy both had loops to ensure that their iptables
rules didn't get deleted, by repeatedly recreating them. But on
systems with lots of iptables rules (i.e., thousands of services), this
can be very slow (and thus might end up holding the iptables lock for
several seconds, blocking other operations, etc.).
The specific threat that they need to worry about is
firewall-management commands that flush *all* dynamic iptables rules.
So add a new iptables.Monitor() function that handles this by creating
iptables-flush canaries and only triggering a full rule reload after
noticing that someone has deleted those chains.
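
A rough sketch of the canary idea follows; the chain and table names, the polling interval, and the function shape are assumptions for illustration, not the real Monitor() signature:

```go
package main

import (
	"os/exec"
	"time"
)

// monitorCanary is a rough sketch of the canary approach: create a dedicated
// chain, then poll cheaply for its existence; if someone flushes all dynamic
// rules the canary disappears, and we trigger one full reload instead of
// unconditionally rewriting every rule on every loop iteration.
func monitorCanary(reload func(), stopCh <-chan struct{}) {
	const table, canary = "nat", "KUBE-PROXY-CANARY" // assumed names
	exec.Command("iptables", "-w", "-t", table, "-N", canary).Run() // ignore "already exists"

	ticker := time.NewTicker(10 * time.Second) // assumed interval
	defer ticker.Stop()
	for {
		select {
		case <-stopCh:
			return
		case <-ticker.C:
			// -S lists only this one chain, so the check is cheap and does no DNS lookups.
			if err := exec.Command("iptables", "-w", "-t", table, "-S", canary).Run(); err != nil {
				reload()                                                         // someone flushed our rules
				exec.Command("iptables", "-w", "-t", table, "-N", canary).Run() // recreate the canary
			}
		}
	}
}

func main() {
	stop := make(chan struct{})
	go monitorCanary(func() { /* resync all iptables rules here */ }, stop)
	time.Sleep(30 * time.Second)
	close(stop)
}
```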
The firewalld monitoring code was not well tested (and not easily
testable), would never be triggered on most platforms, and was only
used in one place (kube-proxy), which didn't need it anyway since it
already has its own resync loop.
Since the firewalld monitoring was the only consumer of pkg/util/dbus,
we can also now delete that.
Work around a Linux kernel bug that sometimes causes multiple flows to
get mapped to the same IP:PORT, with the consequence that some of them
suffer packet drops.
Also made the same update in kubelet.
Also added cross-pointers between the two bodies of code, in comments.
Some day we should eliminate the duplicate code. But today is not
that day.
The iptables code was doing version detection on the iptables binary
but feature detection on the iptables-restore binary, to try to
support the version of iptables in RHEL 7, which claims to be 1.4.21
but has certain features from iptables 1.6.
The problem is that this particular set of versions and checks
resulted in the code passing "-w" ("wait forever for the lock") to
iptables, but "-w 5" ("wait at most 5 seconds for the lock") to
iptables-restore. On systems with very very many iptables rules, this
could result in the kubelet periodic resyncs (which use "iptables")
blocking kube-proxy (which uses "iptables-restore") and causing it to
time out.
We already have code to grab the lock file by hand when using a
version of iptables-restore that doesn't support "-w", and it works
fine. So just use that instead, and only pass "-w 5" to
iptables-restore when iptables reports a version that actually
supports it.
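
The by-hand lock grab amounts to flock()ing the same lock file iptables itself uses. A minimal sketch, assuming the conventional /run/xtables.lock path and a simple retry loop:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
	"time"
)

// grabXtablesLock takes the xtables lock by hand, for use with an
// iptables-restore binary that has no "-w" support. Holding the open, flocked
// file descriptor is what keeps other iptables invocations out; the caller
// releases the lock by closing the file.
func grabXtablesLock(timeout time.Duration) (*os.File, error) {
	f, err := os.OpenFile("/run/xtables.lock", os.O_CREATE, 0600)
	if err != nil {
		return nil, err
	}
	deadline := time.Now().Add(timeout)
	for {
		if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err == nil {
			return f, nil
		}
		if time.Now().After(deadline) {
			f.Close()
			return nil, fmt.Errorf("timed out waiting for xtables lock")
		}
		time.Sleep(100 * time.Millisecond) // assumed retry interval
	}
}

func main() {
	lock, err := grabXtablesLock(5 * time.Second)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer lock.Close()
	// run iptables-restore here while holding the lock
}
```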
- Move from the old github.com/golang/glog to k8s.io/klog
- klog has an explicit InitFlags(), so we add its flags as necessary
- Update the other vendored repositories that made a similar change
from glog to klog:
* github.com/kubernetes/repo-infra
* k8s.io/gengo/
* k8s.io/kube-openapi/
* github.com/google/cadvisor
- Entirely remove all references to glog
- Fix some tests by calling InitFlags explicitly in their init() methods (see the sketch below)
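
For the last bullet, the pattern is roughly the following (the package name and the flag value are only examples):

```go
package example

import (
	"flag"

	"k8s.io/klog"
)

// init registers klog's flags on the default FlagSet so tests that set
// logging flags keep working after the switch from glog.
func init() {
	klog.InitFlags(nil)
	flag.Set("logtostderr", "true") // example value only
}
```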
Change-Id: I92db545ff36fcec83afe98f550c9e630098b3135
In cases where iptables gets stuck forever for some reason, we lose
the availability of kube-proxy. This change adds a timeout to the
rule-checking operations; as a result, it should give us a more
reliably working iptables proxy.
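
A sketch of that idea using a context deadline around the exec call; the 5-second timeout and the example rule are assumptions, not necessarily the values the change uses:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// checkRuleWithTimeout bounds a single rule-check call so a wedged iptables
// can no longer hang the caller forever: the context kills the child process
// once the deadline passes, even if it is still waiting on the xtables lock.
func checkRuleWithTimeout(table, chain string, rulespec ...string) (bool, error) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second) // assumed timeout
	defer cancel()

	args := append([]string{"-w", "-t", table, "-C", chain}, rulespec...)
	err := exec.CommandContext(ctx, "iptables", args...).Run()
	if ctx.Err() == context.DeadlineExceeded {
		return false, fmt.Errorf("iptables rule check timed out")
	}
	return err == nil, nil
}

func main() {
	exists, err := checkRuleWithTimeout("filter", "FORWARD", "-j", "ACCEPT")
	fmt.Println(exists, err)
}
```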
Automatic merge from submit-queue (batch tested with PRs 55009, 55532, 55601, 52569, 55533). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).
Kube-proxy adds forward rules to ensure NodePorts work
**What this PR does / why we need it**:
Updates kube-proxy to set up proper forwarding so that NodePorts work with docker 1.13 without depending on iptables FORWARD being changed manually/externally.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #39823
**Special notes for your reviewer**:
@thockin I used option number 2 that I mentioned in issue #39823; please let me know what you think about this change. If you are happy with it, I can try to add tests, but I may need a little direction about what to add and where.
**Release note**:
```release-note
Add iptables rules to allow Pod traffic even when default iptables policy is to reject.
```
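
For illustration, a rough sketch of the shape of such forwarding rules driven from Go; the KUBE-FORWARD chain name and the 0x4000/0x4000 mark are assumptions here, not necessarily the exact rules this PR installs:

```go
package main

import (
	"log"
	"os/exec"
)

// ensureRule appends a rule only if a check (-C) says it is not there yet,
// so repeated syncs stay idempotent.
func ensureRule(table, chain string, rulespec ...string) {
	check := append([]string{"-w", "-t", table, "-C", chain}, rulespec...)
	if exec.Command("iptables", check...).Run() == nil {
		return // rule already present
	}
	add := append([]string{"-w", "-t", table, "-A", chain}, rulespec...)
	if out, err := exec.Command("iptables", add...).CombinedOutput(); err != nil {
		log.Printf("failed to add rule %v: %v (%s)", rulespec, err, out)
	}
}

func main() {
	// Create a dedicated chain (ignoring the error if it already exists), hook
	// it into FORWARD, and accept packets kube-proxy has marked for forwarding.
	exec.Command("iptables", "-w", "-t", "filter", "-N", "KUBE-FORWARD").Run()
	ensureRule("filter", "FORWARD",
		"-m", "comment", "--comment", "kubernetes forwarding rules", "-j", "KUBE-FORWARD")
	ensureRule("filter", "KUBE-FORWARD",
		"-m", "mark", "--mark", "0x4000/0x4000", "-j", "ACCEPT")
}
```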
There were a few places that the last PR (https://github.com/kubernetes/kubernetes/pull/54763) missed, because that PR only covered flags of the form -w2 while some of the code used --wait=2 instead. This changes that code to use the same global variable for the wait setting so that everything is consistent.
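
A minimal sketch of that idea (the constant name and package are hypothetical):

```go
package example

// waitString is a single shared value for the lock-wait setting, so every
// caller builds "-w 5" the same way instead of hard-coding -w2 or --wait=2.
const waitString = "5"

// iptablesArgs prepends the shared wait flag to any iptables invocation.
func iptablesArgs(rest ...string) []string {
	return append([]string{"-w", waitString}, rest...)
}
```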
For iptables save and restore operations, kube-proxy currently uses
the IPv4 versions of the iptables save and restore utilities
(iptables-save and iptables-restore, respectively). For IPv6 operation,
the IPv6 versions of these utilities need to be used
(ip6tables-save and ip6tables-restore, respectively).
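
A minimal sketch of the binary selection; how the IP family is detected or configured is an assumption here:

```go
package main

import "fmt"

// saveRestoreBinaries picks the save/restore utilities by IP family.
func saveRestoreBinaries(isIPv6 bool) (save, restore string) {
	if isIPv6 {
		return "ip6tables-save", "ip6tables-restore"
	}
	return "iptables-save", "iptables-restore"
}

func main() {
	fmt.Println(saveRestoreBinaries(true))
}
```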
Both this change and PR #48551 are needed to get Kubernetes services
to work in an IPv6-only Kubernetes cluster (along with setting
'--bind-address ::0' on the kube-proxy command line). This change
was alluded to in a discussion on services for issue #1443.
fixes #50474