We don't need to parse out the counter values from the iptables-save
output (since they are always 0 for the chains we care about). Just
parse the chain names themselves.
Also, all of the callers of GetChainLines() pass it input that
contains only a single table, so just assume that, rather than
carefully parsing only a single table's worth of the input.
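For illustration, the simplified parsing boils down to something like the
following sketch (a hypothetical helper, not the actual GetChainLines()
code): each ":CHAIN POLICY [packets:bytes]" declaration line yields a chain
name, and the counters are ignored entirely.

    package example

    import "strings"

    // parseChainNames collects the chain names from the ":CHAIN POLICY
    // [packets:bytes]" declaration lines of iptables-save output,
    // assuming the input contains exactly one table and ignoring the
    // counter values entirely.
    func parseChainNames(save string) []string {
        var chains []string
        for _, line := range strings.Split(save, "\n") {
            if !strings.HasPrefix(line, ":") {
                continue
            }
            // ":KUBE-SERVICES - [0:0]" -> "KUBE-SERVICES"
            if fields := strings.Fields(line[1:]); len(fields) > 0 {
                chains = append(chains, fields[0])
            }
        }
        return chains
    }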
The test was calling GetChainLines() on invalid pseudo-iptables-save
output where most of the lines were indented. GetChainLines() happened
to still parse this "correctly", but it would be better to be testing
it on actually-correct data.
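For reference, actually-correct iptables-save output for a single table is
unindented and looks roughly like this (the chains and rule here are only
examples):

    *nat
    :KUBE-SERVICES - [0:0]
    :KUBE-NODEPORTS - [0:0]
    -A KUBE-SERVICES -j KUBE-NODEPORTS
    COMMIT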
FakeIPTables barely implemented any of the iptables interface, and the
main part that it did implement, it implemented incorrectly. Fix it:
- Implement EnsureChain, DeleteChain, EnsureRule, and DeleteRule, not
just SaveInto/Restore/RestoreAll.
- Restore/RestoreAll now correctly merge the provided state with the
existing state, rather than simply overwriting it.
- SaveInto now returns the contents of the requested table, rather than
  just echoing back whatever was passed to Restore/RestoreAll (see the
  sketch below).
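As a rough illustration of the intended semantics (a sketch with
hypothetical types, not the real FakeIPTables code), a faithful fake has to
track per-table chain and rule state and mutate it the way the real
iptables binary would:

    package example

    // fakeTable maps chain names to their rules for a single table.
    type fakeTable map[string][]string

    // EnsureChain creates the chain if it does not already exist and
    // reports whether it already existed.
    func (t fakeTable) EnsureChain(chain string) bool {
        if _, existed := t[chain]; existed {
            return true
        }
        t[chain] = nil
        return false
    }

    // Restore merges the provided chains into the existing state
    // instead of overwriting the whole table.
    func (t fakeTable) Restore(chains map[string][]string) {
        for chain, rules := range chains {
            t[chain] = rules
        }
    }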
There were previously some strange iptables-rule-parsing functions
that were only used by two unit tests in pkg/proxy/ipvs. Get rid of
them and replace them with some much better iptables-rule-parsing
functions.
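To give a flavor of what such helpers look like, parsing a single rule can
be sketched as grouping each flag with the arguments that follow it
(hypothetical code; quoted comments and negated matches are ignored for
brevity):

    package example

    import "strings"

    // ruleArg is one "-flag arg..." group from an iptables rule.
    type ruleArg struct {
        flag string
        args []string
    }

    // parseRule splits a rule such as
    //   "-A KUBE-SERVICES -m comment --comment foo -j ACCEPT"
    // into its per-flag argument groups.
    func parseRule(rule string) []ruleArg {
        var parsed []ruleArg
        for _, tok := range strings.Fields(rule) {
            if strings.HasPrefix(tok, "-") {
                parsed = append(parsed, ruleArg{flag: tok})
            } else if len(parsed) > 0 {
                last := &parsed[len(parsed)-1]
                last.args = append(last.args, tok)
            }
        }
        return parsed
    }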
Fix some warnings from go-staticcheck.
"should use buffer.String() instead of string(buffer.Bytes()) (S1030)"
This warning is explained at https://staticcheck.io/docs/checks#S1030
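The fix is mechanical; for example:

    package example

    import "bytes"

    func rulesAsString() string {
        var buffer bytes.Buffer
        buffer.WriteString("-A KUBE-SERVICES -j ACCEPT\n")
        // Before (flagged by S1030):
        //     return string(buffer.Bytes())
        // After:
        return buffer.String()
    }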
The iptables restore function creates a lock file to mimic the iptables
behaviour when it decides that the --wait flag is not supported. The test
should take this into account and remove the file.
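In the test this can be handled with a small cleanup step along these lines
(lockfilePath and the helper name are hypothetical; the real test may
differ):

    package example

    import (
        "os"
        "testing"
    )

    // cleanupLockfile removes the lock file that the restore fallback
    // created after deciding --wait was unsupported, so that later test
    // cases start from a clean state.
    func cleanupLockfile(t *testing.T, lockfilePath string) {
        t.Helper()
        if err := os.Remove(lockfilePath); err != nil && !os.IsNotExist(err) {
            t.Errorf("error removing lock file %q: %v", lockfilePath, err)
        }
    }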
1. For iptables mode, add a KUBE-NODEPORTS chain in the filter table, with
   rules to allow healthcheck node port traffic (see the example rule below).
2. For ipvs mode, add a KUBE-NODE-PORT chain in the filter table, and add a
   KUBE-HEALTH-CHECK-NODE-PORT ipset to allow traffic to healthcheck node
   ports.
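For the iptables mode, the generated rule is along these lines (the comment
and port number are placeholders, and the exact match arguments may differ):

    -A KUBE-NODEPORTS -m comment --comment "ns/svc health check node port" -m tcp -p tcp --dport 30000 -j ACCEPT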
The iptables monitor was using iptables -L to list the chains, without the
-n option, so it was trying to do reverse DNS lookups. A side effect is
that it was holding the xtables lock while doing so, so other components
could not use iptables.
We can use -S instead of -L -n to avoid this, since we only want to check
that the chain exists.
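That is, instead of the first command below, the monitor can use the
second, which prints rule specs without resolving any addresses (the chain
name is just an example):

    # does reverse DNS lookups unless -n is added
    iptables -w -t nat -L KUBE-PROXY-CANARY
    # lists rule specs only; still exits non-zero if the chain is missing
    iptables -w -t nat -S KUBE-PROXY-CANARY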
iptables has two options to modify its behaviour when trying to acquire
the xtables lock:
    --wait          -w [seconds]   maximum wait to acquire xtables lock
                                   before give up
    --wait-interval -W [usecs]     wait time to try to acquire xtables lock
                                   interval to wait for xtables lock
                                   default is 1 second
Kubernetes uses -w 5, which means it waits up to 5 seconds to acquire the
lock. If it cannot acquire it, kube-proxy fails and retries 30 seconds
later, which is a significant penalty for sensitive applications. We can be
a bit more aggressive and try to acquire the lock every 100 msec, which
means we would have to fail 50 times before giving up.
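With -W taking microseconds, that amounts to invoking iptables along these
lines (the table and chain are just examples):

    # wait up to 5 seconds for the xtables lock, probing every 100000 usec (100 msec)
    iptables -w 5 -W 100000 -t nat -N KUBE-SERVICES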
Kubelet and kube-proxy both had loops to ensure that their iptables
rules didn't get deleted, by repeatedly recreating them. But on
systems with lots of iptables rules (i.e., thousands of services), this
can be very slow (and thus might end up holding the iptables lock for
several seconds, blocking other operations, etc).
The specific threat that they need to worry about is
firewall-management commands that flush *all* dynamic iptables rules.
So add a new iptables.Monitor() function that handles this by creating
iptables-flush canaries and only triggering a full rule reload after
noticing that someone has deleted those chains.
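A rough sketch of the idea (hypothetical names and signatures, not the
actual iptables.Monitor() API):

    package example

    import "time"

    // canaryChecker abstracts the two operations the monitor needs.
    type canaryChecker interface {
        EnsureCanaryChains() error
        CanaryChainsExist() bool
    }

    // monitorCanaries creates flush-canary chains and then polls for their
    // existence. When they disappear (i.e. someone flushed the tables), it
    // calls reload to rebuild all rules, recreates the canaries, and resumes.
    func monitorCanaries(c canaryChecker, interval time.Duration, reload func(), stop <-chan struct{}) {
        for {
            if err := c.EnsureCanaryChains(); err != nil {
                // Could not create the canaries yet; retry after a delay.
                select {
                case <-stop:
                    return
                case <-time.After(interval):
                }
                continue
            }
            // Poll until the canaries vanish or we are told to stop.
            for c.CanaryChainsExist() {
                select {
                case <-stop:
                    return
                case <-time.After(interval):
                }
            }
            reload()
        }
    }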