rules.yaml captures the dependencies of the repo and is used by
the publishing-bot to publish the staging repository.
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
- use `ginkgo.DeferCleanup` instead of cleaning up in an AfterEach block (see the sketch after this list)
- encourage use of ginkgo by not extending expect.go
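Roughly what that pattern looks like (illustrative sketch only, not the actual test code; the temp directory stands in for whatever resource the spec creates):
```
package e2e_sketch

import (
	"os"

	"github.com/onsi/ginkgo/v2"
	"github.com/onsi/gomega"
)

var _ = ginkgo.Describe("cleanup style", func() {
	ginkgo.It("creates a scratch directory for the test", func() {
		dir, err := os.MkdirTemp("", "e2e-sketch")
		gomega.Expect(err).NotTo(gomega.HaveOccurred())

		// Cleanup is registered right next to the setup; it runs after the
		// spec finishes (even on failure), replacing a separate AfterEach.
		ginkgo.DeferCleanup(os.RemoveAll, dir)

		// ... test body using dir ...
	})
})
```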
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
Since https://github.com/kubernetes/kubernetes/pull/112648, we can
efficiently handle selectors from pre-existing `map[string]string`,
making the cache obsolete.
Benchmark:
```
name                         old time/op    new time/op    delta
GetPodServiceMemberships-48     189µs ± 1%     193µs ± 1%  +2.10%  (p=0.000 n=10+10)

name                         old alloc/op   new alloc/op   delta
GetPodServiceMemberships-48    59.0kB ± 0%    58.9kB ± 0%  -0.09%  (p=0.000 n=9+9)

name                         old allocs/op  new allocs/op  delta
GetPodServiceMemberships-48     1.02k ± 0%     1.02k ± 0%    ~     (all equal)
```
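For illustration, a hedged sketch of what building a selector straight from a pre-existing `map[string]string` can look like with the `k8s.io/apimachinery/pkg/labels` helpers; the exact call sites and API touched by the PR may differ:
```
package sketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
)

// podMatchesService reports whether a pod's labels satisfy a service's
// selector, building the selector directly from the service's
// map[string]string instead of going through a cached conversion.
func podMatchesService(svc *v1.Service, pod *v1.Pod) bool {
	if len(svc.Spec.Selector) == 0 {
		// Services without a selector never select pods here.
		return false
	}
	sel := labels.SelectorFromValidatedSet(labels.Set(svc.Spec.Selector))
	return sel.Matches(labels.Set(pod.Labels))
}
```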
Add a test case with a DaemonSet behind a simple load balancer whose
address is being constantly hit via HTTP requests.
The test passes if there are no errors when making HTTP requests to the
load balancer address during DaemonSet `RollingUpdate` operations.
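A rough sketch of the polling side of such a test (hypothetical helper, not the actual test code):
```
package sketch

import (
	"fmt"
	"net/http"
	"time"
)

// pollLoadBalancer keeps issuing GET requests against the load balancer
// address until stop is closed, reporting the first failure on errCh.
// errCh should be buffered so the send never blocks.
func pollLoadBalancer(address string, stop <-chan struct{}, errCh chan<- error) {
	client := &http.Client{Timeout: 5 * time.Second}
	for {
		select {
		case <-stop:
			return
		default:
		}
		resp, err := client.Get("http://" + address)
		if err != nil {
			errCh <- err
			return
		}
		resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			errCh <- fmt.Errorf("unexpected status %d from %s", resp.StatusCode, address)
			return
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```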
Signed-off-by: Ionut Balutoiu <ibalutoiu@cloudbasesolutions.com>
Some node e2e tests check for the expected number of pods running
on the node to verify the correct state of that node after running
test scenarios. An example of such a check is in the device plugin
end to end test here: [1].
If the node is not left in a clean state after an e2e test finishes
running, it can lead to flaky tests because unexpected pods might
still be running on the node.
In order to avoid that, we make sure that the test pods are
cleaned up after the test runs.
[1]: https://github.com/kubernetes/kubernetes/blob/master/test/e2e_node/device_plugin_test.go#L189-L190
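A hedged sketch of what such a cleanup step could look like with client-go (the helper name and timeouts are made up):
```
package sketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// cleanupTestPod deletes a test pod and waits until it is actually gone,
// so the node is left in a clean state for the next test.
func cleanupTestPod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	err := cs.CoreV1().Pods(ns).Delete(ctx, name, metav1.DeleteOptions{})
	if err != nil && !apierrors.IsNotFound(err) {
		return err
	}
	return wait.PollImmediateWithContext(ctx, time.Second, 2*time.Minute,
		func(ctx context.Context) (bool, error) {
			_, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil
			}
			return false, nil
		})
}
```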
Signed-off-by: Swati Sehgal <swsehgal@redhat.com>
wait.Until catches panics and logs them, which leads to confusing
output. Besides, the test is written so that failures must get reported to the
main goroutine.
Looking up the expected nodes in the goroutine raced with the test making
changes to the configuration. When doing (unrelated?) changes, the test started
to fail:
```
Oct 23 15:47:03.092: INFO: Unexpected error:
    <*errors.errorString | 0xc001154c70>: {
        s: "no subset of available IP address found for the endpoint test-rolling-update-with-lb within timeout 2m0s",
    }
Oct 23 15:47:03.092: FAIL: no subset of available IP address found for the endpoint test-rolling-update-with-lb within timeout 2m0s
```
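One way to structure that, sketched with made-up names: the background goroutine only sends the error back, and the main test goroutine decides how to report it.
```
package sketch

import "fmt"

// runChecker runs check repeatedly in a goroutine and reports the first
// failure back to the caller, so the main test goroutine stays in charge
// of reporting instead of panicking or failing inside the goroutine.
func runChecker(check func() error, stop <-chan struct{}) (wait func() error) {
	errCh := make(chan error, 1)
	go func() {
		defer close(errCh)
		for {
			select {
			case <-stop:
				return
			default:
			}
			if err := check(); err != nil {
				errCh <- fmt.Errorf("background check failed: %w", err)
				return
			}
		}
	}()
	return func() error {
		// Returns the first error, or nil once the channel is closed
		// without one (i.e. the checker was stopped cleanly).
		return <-errCh
	}
}
```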
Now that everything is connected to a per-test context, the gRPC server might
encounter an error before it gets shut down normally. We must not panic in that
case because it would kill the entire Ginkgo worker process. This is not even
an error, so just log it as an info message.
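A minimal sketch of the intended behavior, assuming a plain `google.golang.org/grpc` server and klog (names are illustrative):
```
package sketch

import (
	"net"

	"google.golang.org/grpc"
	"k8s.io/klog/v2"
)

// serve runs the gRPC server until the listener is closed. An error here is
// expected when the per-test context shuts everything down, so it is logged
// as info rather than turned into a panic that would kill the Ginkgo worker.
func serve(server *grpc.Server, listener net.Listener) {
	if err := server.Serve(listener); err != nil {
		klog.InfoS("gRPC server stopped", "err", err)
	}
}
```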
The wrapper can be used in combination with ginkgo.DeferCleanup to ignore
harmless "not found" errors during delete operations.
Original code suggested by Onsi Fakhouri.
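A simplified sketch of such a wrapper (the real helper's name and signature may differ):
```
package sketch

import (
	"context"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// ignoreNotFound wraps a delete function so that a "not found" error is
// treated as success; handy with ginkgo.DeferCleanup when the test itself
// may already have deleted the object.
func ignoreNotFound(del func(ctx context.Context) error) func(ctx context.Context) error {
	return func(ctx context.Context) error {
		if err := del(ctx); err != nil && !apierrors.IsNotFound(err) {
			return err
		}
		return nil
	}
}
```
It would then be registered as, for example, `ginkgo.DeferCleanup(ignoreNotFound(deletePod))`, where `deletePod` is whatever delete call the test would otherwise make in its cleanup block.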
It is set in all of the test/e2e* suites, but not in the ginkgo output
tests. This check is needed before adding a test case there that would trigger
this nil pointer access.
`rollout restart` command calculates patches according to the
`kubectl.kubernetes.io/restartedAt` annotation whose time format is
RFC3339. That is sufficient for users. However, if automated scripts
execute `rollout restart` multiple times within the same second, the
command fails with an "empty patch" error.
This PR changes the error message to a more descriptive one to warn users
that `rollout restart` does not work when executed again within the same second.
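To illustrate the second-granularity issue (standalone sketch, not kubectl code):
```
package main

import (
	"fmt"
	"time"
)

func main() {
	// `kubectl rollout restart` sets the kubectl.kubernetes.io/restartedAt
	// annotation to the current time in RFC3339 format, which only has
	// second precision.
	first := time.Now().Format(time.RFC3339)
	second := time.Now().Format(time.RFC3339)

	// Two restarts within the same second produce the same annotation value,
	// so the second patch has nothing to change ("empty patch").
	fmt.Println(first == second) // very likely true when run back to back
}
```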