This releases the underlying resource sooner and ensures that another consumer
can get scheduled without being influenced by a decision that was made for the
previous consumer.
An alternative would have been to have the apiserver trigger the deallocation
whenever it sees `status.reservedFor` drop to zero. But that would also
trigger deallocation when kube-scheduler removes the last reservation after a
failed scheduling cycle. In that case we want to keep the claim allocated and
let kube-scheduler decide on a case-by-case basis which claim should get
deallocated.
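A minimal sketch of the scheduler-side decision, assuming the
resource.k8s.io/v1alpha2 API with its reservedFor and deallocationRequested
status fields; the function name is made up for illustration and this is not
the actual plugin code:

    package sketch

    import (
        resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
    )

    // shouldDeallocate: an empty reservedFor list alone is not a reason to
    // deallocate, because kube-scheduler may have just dropped its reservation
    // after a failed scheduling cycle and still wants to reuse the allocation.
    // Deallocation only happens when it was explicitly requested.
    func shouldDeallocate(claim *resourcev1alpha2.ResourceClaim) bool {
        if claim.Status.Allocation == nil {
            return false // nothing allocated, nothing to deallocate
        }
        return claim.Status.DeallocationRequested && len(claim.Status.ReservedFor) == 0
    }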
* client-go: add DNS resolver latency metrics
* client-go: add locking to DNS latency metrics
* client-go: add locking for whole DNSStart and DNSDone
Signed-off-by: Vu Dinh <vudinh@outlook.com>
* Fix a mismatched ctx on the request
Signed-off-by: Vu Dinh <vudinh@outlook.com>
* Clean up request code and fix comments
Signed-off-by: Vu Dinh <vudinh@outlook.com>
---------
Signed-off-by: Vu Dinh <vudinh@outlook.com>
Co-authored-by: Vu Dinh <vudinh@outlook.com>
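A rough sketch of the instrumentation pattern behind the client-go DNS latency
commits above, assuming nothing about client-go's actual metrics plumbing (the
tracer type and observe callback are made up). It shows DNSStart/DNSDone hooks
guarded by a single mutex and the trace context being derived from the
request's own context, which is the kind of mismatch the ctx fix addresses:

    package sketch

    import (
        "net/http"
        "net/http/httptrace"
        "sync"
        "time"
    )

    // dnsLatencyTracer records DNS resolution latency for one request. The
    // mutex is needed because the httptrace callbacks can run on different
    // goroutines than the code issuing the request.
    type dnsLatencyTracer struct {
        mu       sync.Mutex
        host     string
        dnsStart time.Time
        observe  func(host string, latency time.Duration) // e.g. feeds a histogram
    }

    func (t *dnsLatencyTracer) trace() *httptrace.ClientTrace {
        return &httptrace.ClientTrace{
            DNSStart: func(info httptrace.DNSStartInfo) {
                t.mu.Lock()
                defer t.mu.Unlock()
                t.host = info.Host
                t.dnsStart = time.Now()
            },
            DNSDone: func(httptrace.DNSDoneInfo) {
                t.mu.Lock()
                defer t.mu.Unlock()
                t.observe(t.host, time.Since(t.dnsStart))
            },
        }
    }

    // withDNSTrace attaches the trace to the request's own context, so the
    // traced context and the request cannot get out of sync.
    func withDNSTrace(req *http.Request, t *dnsLatencyTracer) *http.Request {
        return req.WithContext(httptrace.WithClientTrace(req.Context(), t.trace()))
    }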
ginkgo.By should be used for steps in the test flow. Creating and deleting CDI
files happen in parallel to that. If they were reported via ginkgo.By, progress
reports would look weird because they would contain e.g. the step "waiting
for...." (from the main test, which is still ongoing) and end with "creating
CDI file" (which has already completed).
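A hypothetical Ginkgo test body sketching that convention (test name, file
path and helpers are made up): ginkgo.By only for steps of the main flow,
plain logging for work that runs concurrently:

    package dra_test

    import (
        "context"

        "github.com/onsi/ginkgo/v2"
        "k8s.io/kubernetes/test/e2e/framework"
    )

    var _ = ginkgo.Describe("CDI file handling", func() {
        ginkgo.It("runs a pod that uses the claim", func(ctx context.Context) {
            cdiFileName := "/var/run/cdi/example.json" // hypothetical path
            go func() {
                // Not a ginkgo.By step: this runs in parallel to the main flow
                // and would otherwise show up as the most recent step in
                // progress reports while "waiting for ..." is still ongoing.
                framework.Logf("creating CDI file %s", cdiFileName)
                // ... write the CDI JSON file here ...
            }()

            ginkgo.By("waiting for the pod to start")
            // ... e2e wait helpers ...
        })
    })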
The test required two APIs to be available to test migration.
Keep it simple and use a variable "gv" at the top of the function body
to easily swap the version to be tested once an old API is deleted,
e.g. currently v1beta3 is the "old" API and v1beta4 is the "new" one.
Ultimately, this test only makes sure that the expected kinds are
available post-migration.
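A sketch of that pattern, assuming this refers to a config API like kubeadm's;
the group, kinds and helper name below are illustrative, not the actual test:

    package sketch

    import (
        "testing"

        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    // checkMigratedKinds only verifies that the expected kinds are registered
    // for the version under test. Swapping the single "gv" variable is all
    // that is needed once the old API (currently v1beta3) is deleted.
    func checkMigratedKinds(t *testing.T, s *runtime.Scheme) {
        gv := schema.GroupVersion{Group: "kubeadm.k8s.io", Version: "v1beta4"}

        kinds := []string{"InitConfiguration", "ClusterConfiguration", "JoinConfiguration"}
        for _, kind := range kinds {
            if !s.Recognizes(gv.WithKind(kind)) {
                t.Errorf("kind %s is not available in %s", kind, gv)
            }
        }
    }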
This runs workloads that are labeled as "integration-test". The apiserver and
scheduler are only started once per unique configuration, followed by each
workload using that configuration. This makes execution faster. In contrast to
benchmarking, we care less about starting with a clean slate for each test.
Still, merely deleting the namespace between workloads is not enough (see the
sketch after this list):
- Workloads might rely on the garbage collector to get rid of obsolete objects,
so we should run it to be on the safe side.
- Pods must be force-deleted because kubelet is not running.
- Finally, the namespace controller is needed to get rid of
deleted namespaces.
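A rough sketch of that per-workload cleanup using client-go; the helper name
and error handling are illustrative, and starting the garbage collector and
namespace controller is not shown:

    package sketch

    import (
        "context"
        "testing"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        clientset "k8s.io/client-go/kubernetes"
    )

    // cleanupWorkload force-deletes leftover pods (no kubelet is running, so
    // graceful termination would never finish) and then deletes the namespace,
    // which the separately running namespace controller actually cleans up.
    func cleanupWorkload(ctx context.Context, tb testing.TB,
        client clientset.Interface, namespace string) {
        zero := int64(0)
        pods, err := client.CoreV1().Pods(namespace).List(ctx, metav1.ListOptions{})
        if err != nil {
            tb.Fatalf("list pods: %v", err)
        }
        for _, pod := range pods.Items {
            err := client.CoreV1().Pods(namespace).Delete(ctx, pod.Name,
                metav1.DeleteOptions{GracePeriodSeconds: &zero})
            if err != nil {
                tb.Fatalf("force-delete pod %s: %v", pod.Name, err)
            }
        }
        if err := client.CoreV1().Namespaces().Delete(ctx, namespace, metav1.DeleteOptions{}); err != nil {
            tb.Fatalf("delete namespace %s: %v", namespace, err)
        }
    }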
* Skip terminal Pods with a deletion timestamp from the DaemonSet sync
Change-Id: I64a347a87c02ee2bd48be10e6fff380c8c81f742
* Review comments and fix integration test
Change-Id: I3eb5ec62bce8b4b150726a1e9b2b517c4e993713
* Include deleted terminal pods in history
Change-Id: I8b921157e6be1c809dd59f8035ec259ea4d96301
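A minimal sketch of the filter these commits describe (not the actual
controller code; the helper name is made up): a pod is ignored during the sync
when it is terminal and already marked for deletion, while history building
still includes such pods:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    // ignoreForDaemonSetSync: terminal pods that are already being deleted
    // should not influence the sync decision, but they are still counted
    // when constructing history (see the last commit above).
    func ignoreForDaemonSetSync(pod *v1.Pod) bool {
        terminal := pod.Status.Phase == v1.PodSucceeded || pod.Status.Phase == v1.PodFailed
        return terminal && pod.DeletionTimestamp != nil
    }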
1aeec10efb removed iterating over containers in favor of iterating over pod
claims. This had the unintended consequence that NodePrepareResource gets
called unnecessarily when no container needs the claim. The more natural
behavior is to skip unused resources. This enables (theoretical, at this time)
use cases where some DRA driver relies on the controller part to influence
scheduling, but then doesn't use CDI with containers.
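A sketch of the check this implies, using the core/v1 container claim
references (the helper name is made up): NodePrepareResource should only be
called for a claim that at least one container actually references:

    package sketch

    import (
        v1 "k8s.io/api/core/v1"
    )

    // claimUsedByContainers reports whether any init container or container
    // references the given pod-level resource claim by name. Claims that no
    // container uses can be skipped instead of being prepared via
    // NodePrepareResource.
    func claimUsedByContainers(pod *v1.Pod, claimName string) bool {
        for _, list := range [][]v1.Container{pod.Spec.InitContainers, pod.Spec.Containers} {
            for _, c := range list {
                for _, claim := range c.Resources.Claims {
                    if claim.Name == claimName {
                        return true
                    }
                }
            }
        }
        return false
    }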