This PR gives the `--share-processes` flag higher priority than the
provided profile.
For example, when a user runs copy-to debugging with the restricted
profile, the share process namespace should be false if the user explicitly
disables it via `--share-processes=false`.
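
A minimal sketch of the intended precedence, with hypothetical names
(the real kubectl debug code is structured differently):

```go
package main

import "fmt"

// applyShareProcesses picks the final value of ShareProcessNamespace:
// an explicitly set --share-processes flag wins over the profile default.
// All names here are illustrative, not the actual kubectl code.
func applyShareProcesses(profileDefault, flagWasSet, flagValue bool) bool {
	if flagWasSet {
		return flagValue
	}
	return profileDefault
}

func main() {
	// The profile would enable sharing, but the user passed
	// --share-processes=false, so the flag wins.
	fmt.Println(applyShareProcesses(true, true, false)) // prints: false
}
```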
The test creates a Service exposing two protocols on the same port
and a backend that replies on both protocols (see the sketch after the steps).
1. Test that the Service works for both protocols
2. Update Service to expose only the TCP port
3. Verify that TCP works and UDP does not work
4. Update Service to expose only the UDP port
5. Verify that TCP does not work and UDP does work
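
A sketch of such a Service object, assuming illustrative names rather
than the actual e2e test code:

```go
package servicetest

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// multiProtocolService exposes TCP and UDP on the same port number.
// The test then drops one ServicePort entry at a time and re-checks
// connectivity for each protocol.
func multiProtocolService(name string, port int32) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.ServiceSpec{
			Selector: map[string]string{"app": name},
			Ports: []corev1.ServicePort{
				{Name: "tcp", Protocol: corev1.ProtocolTCP, Port: port, TargetPort: intstr.FromInt(int(port))},
				{Name: "udp", Protocol: corev1.ProtocolUDP, Port: port, TargetPort: intstr.FromInt(int(port))},
			},
		},
	}
}
```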
Change-Id: Ic4f3a6509e332aa5694d20dfc3b223d7063a7871
The long-term goal is that when "make verify" is invoked in a pull job, it will
also run golangci-lint with the strict configuration and write an
$ARTIFACTS/golangci-lint-githubactions.log file with GitHub Actions
annotations. How to get those annotations published on the GitHub PR is still
an open question.
When "make verify" is invoked manually or in any other job, the stricter check
will be skipped. That works because "PR_NUMBER" is only set for pre-merge
jobs (https://github.com/kubernetes/test-infra/blob/master/prow/jobs.md#job-environment-variables).
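
The gating amounts to the following check. The real implementation is the
shell script verify-golangci-lint-pr.sh; this Go sketch only illustrates
the idea:

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// Prow only sets PR_NUMBER for pre-merge (pull) jobs, so its absence
	// means "make verify" was run manually or in some other job.
	if os.Getenv("PR_NUMBER") == "" {
		fmt.Println("SKIP: strict golangci-lint only runs in pre-merge jobs")
		return
	}
	fmt.Println("running golangci-lint with the strict configuration")
}
```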
Because strict linting is still new and might not be useful without a better
UI (= GitHub annotations), this PR does not fully enable the integration into
"make verify". Instead, the new verify-golangci-lint-pr.sh is excluded from it
and needs to be run in a separate job.
It is useful to check new code with a stricter configuration because we want it
to be of higher quality. Code reviews also become easier when reviewers don't
need to point out inefficient code manually.
What exactly should be enabled is up for debate. The current config uses the
golangci-lint defaults plus everything that is enabled explicitly by the normal
golangci.yaml, just to be on the safe side.
Test 2 scenarios (sketched below):
- a pod can connect to a terminating pod
- a terminating pod can connect to other pods
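
One common way to get a pod that keeps serving while it terminates is a
long grace period plus a preStop sleep. A hedged sketch, assuming a recent
k8s.io/api (where the lifecycle type is LifecycleHandler); the image, names,
and durations are illustrative, not necessarily what the test uses:

```go
package e2e

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminatingPod stays alive and reachable for a while after deletion is
// requested: the preStop sleep runs under a long grace period.
func terminatingPod(name string) *corev1.Pod {
	grace := int64(600)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			TerminationGracePeriodSeconds: &grace,
			Containers: []corev1.Container{{
				Name:  "server",
				Image: "registry.k8s.io/e2e-test-images/agnhost:2.39",
				Args:  []string{"netexec", "--http-port=8080"},
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.LifecycleHandler{
						Exec: &corev1.ExecAction{Command: []string{"sleep", "600"}},
					},
				},
			}},
		},
	}
}
```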
Change-Id: Ia5dc4e7370cc055df452bf7cbaddd9901b4d229d
v1.Container is still changing a lot, which caused the test to fail each time a
new field was added. To test loading, it is better to use something that is
unlikely to change. The runtimev1.VersionResponse gets logged by the kubelet and
seems to be stable.
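
For illustration, logging the stable type might look like this (a sketch,
not the actual test fixture):

```go
package logtest

import (
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
	"k8s.io/klog/v2"
)

// logVersion emits the same kind of message the kubelet logs. Because
// VersionResponse rarely gains new fields, golden test output stays stable.
func logVersion() {
	resp := &runtimev1.VersionResponse{
		Version:           "0.1.0",
		RuntimeName:       "containerd",
		RuntimeVersion:    "v1.6.0",
		RuntimeApiVersion: "v1",
	}
	klog.InfoS("Got version", "response", resp)
}
```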
The benchmarks and unit tests used custom APIs for each log format. This made
them less realistic because there were subtle differences between the
benchmarks and a real Kubernetes component. Now all logging configuration is
done with the official k8s.io/component-base/logs/api/v1 API.
To make the different test cases more comparable, "messages/s" is now reported
instead of the generic "ns/op".
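
A sketch of the pattern, under the assumption that the JSON format has been
registered; the actual benchmarks are more elaborate:

```go
package benchmark

import (
	"testing"
	"time"

	logsapi "k8s.io/component-base/logs/api/v1"
	_ "k8s.io/component-base/logs/json/register" // registers the "json" format
	"k8s.io/klog/v2"
)

// BenchmarkInfoS configures logging through the official API, exactly as a
// real component would, and reports throughput as messages/s.
func BenchmarkInfoS(b *testing.B) {
	c := logsapi.NewLoggingConfiguration()
	c.Format = "json"
	if err := logsapi.ValidateAndApply(c, nil /* no feature gates */); err != nil {
		b.Fatal(err)
	}
	start := time.Now()
	for i := 0; i < b.N; i++ {
		klog.InfoS("example", "count", i)
	}
	b.ReportMetric(float64(b.N)/time.Since(start).Seconds(), "messages/s")
}
```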