before:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 3.792s
after:
go test -v -race -count 1 -run ^TestCacheWatcherDrainingNoBookmarkAfterResourceVersionReceived$
ok k8s.io/apiserver/pkg/storage/cacher 1.783s
before:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 6.775s
after:
go test -v -race -count 1 -run ^TestWatchNotHangingOnStartupFailure$
ok k8s.io/apiserver/pkg/storage/cacher 2.781s
Runtime classes are an apiserver concept, while the handlers are a kubelet concept.
For NodeStatus, it makes more sense to return the latter here.
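A minimal sketch of the kind of status field this adds (the field and type names are illustrative assumptions; see the modified files listed below for the actual definitions):

```go
// Sketch only: names are assumptions, not copied from the tree. The idea is that
// NodeStatus reports the handlers the kubelet actually knows about, together
// with the features each handler supports.
type NodeRuntimeHandlerFeatures struct {
	// RecursiveReadOnlyMounts is set if the handler supports recursive read-only mounts.
	RecursiveReadOnlyMounts *bool
}

type NodeRuntimeHandler struct {
	// Name of the handler as reported by the runtime ("" means the default handler).
	Name string
	// Features supported by this handler.
	Features *NodeRuntimeHandlerFeatures
}
```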
This commit modifies the following files:
- pkg/apis/core/types.go
- staging/src/k8s.io/api/core/v1/types.go
- pkg/kubelet/nodestatus/setters.go
- pkg/kubelet/kubelet_node_status.go
- pkg/registry/core/node/strategy.go
- test/e2e_node/mount_rro_linux_test.go
Other changes were auto-generated by running `make update`.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
* Add e2e tests for Service.spec.trafficDistribution
* Fix linting issue
* Fix spelling
* Add integration tests for trafficDistribution
* Use nodeSelection instead of nodeName to schedule pods on a specific zonal node
* Fix import alias corev1 -> v1 in e2e test
* Address comments
* Add a way to print log lines only in case of errors, which the e2e test guidelines deem good behaviour (see the sketch below)
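A minimal sketch of that logging pattern (a hypothetical helper, not the code added in this PR): buffer the log lines while the checks run and emit them only when a check fails.

```go
import (
	"fmt"
	"strings"

	"k8s.io/kubernetes/test/e2e/framework"
)

// Sketch only: hypothetical helper, not the actual test code.
// Log lines are buffered while the checks run and printed only if a check fails.
func runChecksQuietly(checks []func() error) error {
	var logs []string
	for i, check := range checks {
		logs = append(logs, fmt.Sprintf("running check %d", i)) // recorded, not printed yet
		if err := check(); err != nil {
			// Only on failure do the buffered lines reach the test output.
			framework.Logf("checks failed:\n%s\nerror: %v", strings.Join(logs, "\n"), err)
			return err
		}
	}
	return nil
}
```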
It turns out that kube has a custom test timeout of 3 minutes.
The tests in the cacher package were using nearly the
entire budget and being terminated, resulting in failing jobs.
Before the change, TestWatchSemantics took ~43s to run. With this simple change, it now takes ~18s.
When we created the tests, we didn't measure the running time and assumed that waiting 1 second on a watch channel
to make sure no more events are received was sufficient.
This PR decreases the waiting time to 300 milliseconds.
Modern computers can perform many tasks within that time.
In addition to that, the tests are serial in nature, meaning that there is no other
actor that could add items to the database and cause new events to be received.
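A minimal sketch of the waiting pattern being tuned here (a hypothetical helper, not the actual cacher test code):

```go
import (
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/watch"
)

// Sketch only: because the tests are serial, nothing else can write to the
// store, so a short bounded wait is enough to conclude no further events are coming.
func expectNoMoreEvents(t *testing.T, ch <-chan watch.Event) {
	t.Helper()
	select {
	case ev := <-ch:
		t.Errorf("unexpected event received: %#v", ev)
	case <-time.After(300 * time.Millisecond): // previously 1 second
	}
}
```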
After the change, the total running time decreased by ~17%:
the tests needed ~176s before and ~146s after.
The changes also improved TestWatchSemanticInitialEventsExtended.
Tests that were using AddDate(+y, 0, 0) and then time.Sub were
sensitive to the specific date of their execution.
An example is a test with AddDate(-5, 0, 0) when executed
on the 28th of February 2024 versus the 1st of March 2024.
The difference seen by Sub is 5y1d in the first case
and 5y2d in the second, because both 2020 and 2024 are leap years:
the 29th of Feb 2020 falls within the 5-year span in both cases,
while the 29th of Feb 2024 falls within it only in the second.
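A small standalone example of the sensitivity (illustrative only, not the original test code):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	for _, now := range []time.Time{
		time.Date(2024, time.February, 28, 0, 0, 0, 0, time.UTC), // span contains only 29 Feb 2020
		time.Date(2024, time.March, 1, 0, 0, 0, 0, time.UTC),     // span contains 29 Feb 2020 and 29 Feb 2024
	} {
		fiveYearsAgo := now.AddDate(-5, 0, 0)
		// Prints 43824h0m0s (5y1d) for the first date and 43848h0m0s (5y2d) for the second.
		fmt.Println(now.Format("2006-01-02"), now.Sub(fiveYearsAgo))
	}
}
```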
Signed-off-by: Martin Sivak <msivak@redhat.com>
Updates the test to wait 300 ms instead of 3 s.
The watch has already been established at that point; otherwise
we would still be blocking on the call to cache.Watch(...).
In addition to that, the tests are serial in nature,
meaning that there is no other actor
that could add items to the database
and cause new events to be received.
Before:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 8.450s
After:
go test -race -run TestEmptyWatchEventCache
ok k8s.io/apiserver/pkg/storage/cacher 2.635s
The individual cases can be safely run in parallel.
Before:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 10.787s
After:
go test -race -run TestWaitUntilWatchCacheFreshAndForceAllEvents
ok k8s.io/apiserver/pkg/storage/cacher 4.857s
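A minimal sketch of the parallelization (illustrative only, not the actual test file):

```go
import "testing"

// Sketch only: illustrates running independent, table-driven cases in parallel.
func TestScenariosInParallel(t *testing.T) {
	scenarios := []struct{ name string }{
		{name: "watch cache is fresh"},
		{name: "watch cache needs to catch up"},
	}
	for _, scenario := range scenarios {
		scenario := scenario // capture the range variable (needed before Go 1.22)
		t.Run(scenario.name, func(t *testing.T) {
			t.Parallel() // the cases are independent, so they can run concurrently
			_ = scenario // ... actual assertions would go here ...
		})
	}
}
```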
During the PR to get "Forensic Container Checkpointing" enabled in
containerd, the decision was made not to correctly report the case where
containerd cannot find the CRIU binary. The reason was that the e2e_node
checkpoint test did not understand the error message.
The e2e_node checkpoint test is skipped if the container runtime (CRI-O
or containerd) does not enable checkpoint support or if checkpoint
support is not implemented.
This commit adds another reason to skip the test: if the underlying OS
used to test "Forensic Container Checkpointing" in combination
with containerd or CRI-O is missing the CRIU binary.
This was encountered in Google's Container-Optimized OS (COS) based
tests, where CRIU was not installed.
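A rough sketch of the kind of skip this adds (the helper name and the matched error text are assumptions, not the real e2e_node code):

```go
import (
	"strings"

	"github.com/onsi/ginkgo/v2"
)

// Sketch only: the helper name and the matched error text are assumptions.
// If checkpointing failed because the runtime cannot find the CRIU binary,
// skip the test instead of failing it.
func skipIfCRIUMissing(err error) {
	if err != nil && strings.Contains(strings.ToLower(err.Error()), "criu") {
		ginkgo.Skip("checkpoint support unavailable: container runtime reports that CRIU is missing")
	}
}
```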
With this change merged, it is possible for containerd to return the
correct error message without breaking Kubernetes e2e tests.
Signed-off-by: Adrian Reber <areber@redhat.com>