Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This moves the FW chain creation to the end (rather than having it in
the middle of adding the jump rules for the LB IPs).
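For illustration, the intended shape of the loop, as a rough Go sketch
(simplified, hypothetical identifiers; the real proxier hashes chain
names and writes into iptables rule buffers):

    package main

    import "fmt"

    func main() {
        services := []string{"ns1/svc1:http", "ns2/svc2:https"}
        activeNATChains := map[string]bool{}
        var jumpRules, chainRules []string

        for _, svc := range services {
            // 1. figure out which chains this servicePort needs; mark them active
            svcChain := "KUBE-SVC-" + svc // the real code uses a hash of the name
            activeNATChains[svcChain] = true

            // 2. write the servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
            jumpRules = append(jumpRules, "-A KUBE-SERVICES ... -j "+svcChain)

            // 3. write the servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
            //    at the bottom, after all of the jump rules
            chainRules = append(chainRules, "-A "+svcChain+" ... -j KUBE-SEP-...")
        }
        fmt.Println(len(activeNATChains), jumpRules, chainRules)
    }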
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the jump rules for internal traffic. Previously we were
handling "jumping from kubeServices to internalTrafficChain" and
"adding masquerade rules to internalTrafficChain" in the same place.
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the EXT chain.
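For context, a hypothetical example of what an EXT chain contains
(hashes and comments invented): it masquerades externally-originated
traffic and then hands off to the SVC chain:

    -A KUBE-EXT-XXXXXXXXXXXXXXXX -m comment --comment "masquerade traffic for ns/svc external destinations" -j KUBE-MARK-MASQ
    -A KUBE-EXT-XXXXXXXXXXXXXXXX -j KUBE-SVC-XXXXXXXXXXXXXXXX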
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the SVC and SVL chains. We were already
filling them in at the end of the loop; now we also create them at the
bottom of the loop rather than at the top.
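For reference, the SVC chain is the one that load-balances across the
endpoint chains, along these lines (hashes and probability invented):

    -A KUBE-SVC-XXXXXXXXXXXXXXXX -m statistic --mode random --probability 0.5000000000 -j KUBE-SEP-AAAAAAAAAAAAAAAA
    -A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-BBBBBBBBBBBBBBBB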
Part of reorganizing the syncProxyRules loop to do:
1. figure out what chains are needed, mark them in activeNATChains
2. write servicePort jump rules to KUBE-SERVICES/KUBE-NODEPORTS
3. write servicePort-specific chains (SVC, SVL, EXT, FW, SEP)
This fixes the handling of the endpoint chains. Previously they were
handled entirely at the top of the loop. Now we record which ones are
in use at the top but don't create them and fill them in until the
bottom.
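For reference, an endpoint (SEP) chain looks roughly like this
(addresses invented): a masquerade rule for hairpin traffic, followed
by the DNAT to the endpoint:

    -A KUBE-SEP-AAAAAAAAAAAAAAAA -s 10.244.1.5 -j KUBE-MARK-MASQ
    -A KUBE-SEP-AAAAAAAAAAAAAAAA -p tcp -m tcp -j DNAT --to-destination 10.244.1.5:8080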
We now figure out early on whether we're going to end up outputting no
endpoints, so we update the metrics at that point.
(Also remove a redundant feature gate check; svcInfo already checks
the ServiceInternalTrafficPolicy feature gate itself and so
svcInfo.InternalPolicyLocal() will always return false if the gate is
not enabled.)
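Roughly (a reconstruction, not the exact code), the redundant check
that gets removed:

    // before: gate checked both here and inside svcInfo
    if utilfeature.DefaultFeatureGate.Enabled(features.ServiceInternalTrafficPolicy) && svcInfo.InternalPolicyLocal() { ... }
    // after: svcInfo.InternalPolicyLocal() already returns false when the gate is disabled
    if svcInfo.InternalPolicyLocal() { ... }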
Fix a TODO by plumbing an update filter down from above into the
resource quota monitor code that handles update events for quota-able
objects, instead of hard-coding the logic in the resource quota
monitor.
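A hedged sketch of the idea (types simplified; the real code uses
schema.GroupVersionResource and informer event handlers):

    package main

    import "fmt"

    // UpdateFilter decides whether an update event for a quota-able object
    // should trigger quota replenishment; it is supplied by the caller
    // rather than hard-coded in the monitor.
    type UpdateFilter func(resource string, oldObj, newObj interface{}) bool

    type monitor struct {
        filter UpdateFilter
    }

    func (m *monitor) onUpdate(resource string, oldObj, newObj interface{}) {
        if m.filter != nil && m.filter(resource, oldObj, newObj) {
            fmt.Println("replenish quota for", resource)
        }
    }

    func main() {
        m := &monitor{
            // the caller plumbs the policy in from above,
            // e.g. only reacting to pod updates
            filter: func(resource string, _, _ interface{}) bool {
                return resource == "pods"
            },
        }
        m.onUpdate("pods", nil, nil)
    }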
Signed-off-by: Andy Goldstein <andy.goldstein@redhat.com>
Some operations with Azure Disk volumes can sometimes be arbitrarily
slow. This causes CI flakes that are hard to debug. Bumping the
timeouts has been shown to improve or eliminate this issue.
Ginkgo has been migrated to V2; add the old version to the unwanted
dependencies so that it won't show up as a dependency again in the future.
Signed-off-by: Dave Chen <dave.chen@arm.com>
The alias for vendor/github.com/onsi/ginkgo/ginkgo ensures that code like
30e99cb2a9/experiment/kind-conformance-image-e2e.sh (L110)
continues to work. The alias without "vendor/" is kept just in case it
was used, since it also worked.
Long term, "ginkgo" is a nicer, version independent alias. It gets used
internally to avoid future churn and gets documented also publicly in the
Makefile help.
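For illustration, the aliases could look roughly like this in the
Makefile (a sketch, not the exact targets):

    ginkgo: ## build the Ginkgo CLI (version-independent alias)
    	hack/make-rules/build.sh github.com/onsi/ginkgo/v2/ginkgo

    # compatibility aliases for existing invocations
    vendor/github.com/onsi/ginkgo/ginkgo: ginkgo
    github.com/onsi/ginkgo/ginkgo: ginkgo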
The caveat is that there's no guarantee that a future v3 CLI will be compatible
with current invocations. But the most common usage is through
hack/ginkgo-e2e.sh, which can deal with such differences.
Ginkgo now writes the JUnit file itself. The -report-dir parameter is
used as a fallback for enabling JUnit output, in case users haven't
migrated to the new -junit-report parameter.
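Hedged example of the two ways to end up with JUnit output (paths and
exact invocation shape may differ):

    # new parameter: Ginkgo writes the JUnit file itself
    hack/ginkgo-e2e.sh --junit-report=/tmp/results/junit_01.xml
    # fallback: the old parameter still enables JUnit output
    hack/ginkgo-e2e.sh --report-dir=/tmp/results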
Co-authored-by: Patrick Ohly <patrick.ohly@intel.com>
Signed-off-by: Dave Chen <dave.chen@arm.com>