All of the tests used a localDetector that considered the pod IP range
to be 10.0.0.0/24, but many of the tests used pod IPs in 10.180.0.0/16
or 10.0.1.0/24, so the generated iptables rules were somewhat
inconsistent. Fix this by expanding the localDetector's pod IP range
to 10.0.0.0/8. (Changing the pod IPs to all be in 10.0.0.0/24 instead
would be a much larger change since it would result in the SEP chain
names changing.)
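For illustration only (this standalone snippet is not part of the test
code, and the example pod IPs are just values drawn from the ranges
mentioned above), the expanded range covers the pod IPs the tests
actually use, while the old /24 did not:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        // Example pod IPs checked against the old and new localDetector
        // ranges.
        _, oldRange, _ := net.ParseCIDR("10.0.0.0/24")
        _, newRange, _ := net.ParseCIDR("10.0.0.0/8")
        for _, ip := range []string{"10.0.0.2", "10.0.1.1", "10.180.0.1"} {
            parsed := net.ParseIP(ip)
            fmt.Printf("%-12s old(/24)=%-5v new(/8)=%v\n",
                ip, oldRange.Contains(parsed), newRange.Contains(parsed))
        }
    }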
Meanwhile, the different tests were also horribly inconsistent about
what values they used for other IPs, and some of them even used the
same IPs (or ports) for different things in the same test case. Fix
these all up and create a consistent set of IP assignments:
// Pod IPs: 10.0.0.0/8
// Service ClusterIPs: 172.30.0.0/16
// Node IPs: 192.168.0.0/24
// Local Node IP: 192.168.0.2
// Service ExternalIPs: 192.168.99.0/24
// LoadBalancer IPs: 1.2.3.4, 5.6.7.8, 9.10.11.12
// Non-cluster IPs: 203.0.113.0/24
// LB Source Range: 203.0.113.0/25
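As a hypothetical illustration of the convention (these exact constants
do not appear in the tests), a test case would draw each of its values
from the ranges above:

    // Hypothetical example values, one per range above.
    const (
        testPodIP      = "10.180.0.1"    // Pod IP
        testClusterIP  = "172.30.0.41"   // Service ClusterIP
        testNodeIP     = "192.168.0.2"   // the local node
        testExternalIP = "192.168.99.11" // Service ExternalIP
        testLBIP       = "1.2.3.4"       // LoadBalancer ingress IP
        testClientIP   = "203.0.113.9"   // non-cluster source IP
    )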
Only run assertIPTablesRuleJumps() on the expected output, not on the
actual output, since if there's a problem with the actual output, we'd
rather see it as the diff from the expected output.
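A sketch of the intended shape of the checks; the helper signature and
the diffing call are assumptions (go-cmp is used here purely for
illustration), not the test's actual code:

    // Sanity-check only the expected ruleset; a broken actual ruleset
    // should surface as an expected-vs-actual diff rather than as an
    // opaque jump-check failure.
    // assumes: import "github.com/google/go-cmp/cmp"
    assertIPTablesRuleJumps(t, expectedRules)
    if diff := cmp.Diff(expectedRules, actualRules); diff != "" {
        t.Errorf("rules do not match (-expected +actual):\n%s", diff)
    }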
In large clusters, the iptables-restore input will be tens of
thousands of lines long, and logging it at V(5) essentially means that
"kube-proxy -v=5" cannot be used in such clusters to see _other_
things that get logged at V(5), because logs will get rolled over far
too quickly. So bump the full-rules logging output down to V(9).
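A minimal sketch of the resulting verbosity split (the actual kube-proxy
call site, message, and variable names differ):

    // assumes: import "k8s.io/klog/v2"; rulesBytes holds the full
    // iptables-restore input.
    // Keep a small summary at normal verbosity, but gate the full
    // (potentially tens of thousands of lines) payload behind -v=9.
    klog.V(4).InfoS("Syncing iptables rules", "numBytes", len(rulesBytes))
    klog.V(9).InfoS("Full iptables-restore input", "rules", string(rulesBytes))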
In this test, the current implementation uses a nebulous "RV 1" for
some queries. The intent of this hard-coded value is to probe etcd at a
revision from before any writes occurred in the test. The particular
etcd test fixture in use starts at revision 1, so 1 happens to work.
However, the meaning of this choice is hard for readers to understand,
and it is not valid for any other etcd fixture used by the tests. To
improve the readability of the test and make it more resilient to the
underlying store, this change updates the test to read the revision of
the underlying storage before making any writes and to use that
revision when querying the storage in the tests.
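A hedged sketch of the approach (the etcd client, test context, and
surrounding helpers are assumptions, not the test's actual code):

    // Capture the store's current revision before the test performs
    // any writes, instead of hard-coding "1".
    // assumes: client is an etcd clientv3.Client; import "strconv".
    resp, err := client.Get(ctx, "/")
    if err != nil {
        t.Fatal(err)
    }
    initialRV := strconv.FormatInt(resp.Header.Revision, 10)

    // ... the test's writes happen here ...

    // Queries that want the pre-write state then pass initialRV as the
    // resource version instead of the literal "1".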
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
This was the last test not using sub-tests, so we can now also remove
the index parameters that the expectation functions took.
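Illustrative shape of the change (the test case and helper names here
are hypothetical):

    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            got := runCase(t, tc)
            // The sub-test name identifies the failing case, so the
            // expectation helper no longer needs an index parameter.
            expectResult(t, got, tc.expected) // was: expectResult(t, i, got, tc.expected)
        })
    }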
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
When comparing EndpointSubsets and Endpoints, we ignore differences in
the Pods' ResourceVersions to avoid unnecessary updates caused by Pod
updates that we don't care about, e.g. annotation updates.
Otherwise the periodic Service resync would intensively update
Endpoints or EndpointSlices whose Pods have irrelevant changes between
two resyncs, delaying the processing of newly created Services. In a
large-scale cluster with thousands of such Endpoints, we observed 2
minutes of delay when the resync happens.
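A minimal sketch of such a comparison, assuming the helper names and
that only TargetRef.ResourceVersion needs clearing (the real code may
differ):

    // subsetsEqualIgnoringPodRV compares EndpointSubsets while ignoring
    // the Pod ResourceVersion recorded in each address's TargetRef, so
    // that irrelevant Pod updates (e.g. annotations) don't look like
    // changes.
    // assumes: v1 "k8s.io/api/core/v1",
    //          apiequality "k8s.io/apimachinery/pkg/api/equality"
    func subsetsEqualIgnoringPodRV(a, b []v1.EndpointSubset) bool {
        return apiequality.Semantic.DeepEqual(stripPodRV(a), stripPodRV(b))
    }

    // stripPodRV returns a deep copy of the subsets with every address's
    // TargetRef.ResourceVersion cleared.
    func stripPodRV(subsets []v1.EndpointSubset) []v1.EndpointSubset {
        out := make([]v1.EndpointSubset, 0, len(subsets))
        for i := range subsets {
            ss := *subsets[i].DeepCopy()
            for j := range ss.Addresses {
                if ss.Addresses[j].TargetRef != nil {
                    ss.Addresses[j].TargetRef.ResourceVersion = ""
                }
            }
            for j := range ss.NotReadyAddresses {
                if ss.NotReadyAddresses[j].TargetRef != nil {
                    ss.NotReadyAddresses[j].TargetRef.ResourceVersion = ""
                }
            }
            out = append(out, ss)
        }
        return out
    }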