The condition methods will eventually all take a context. Since we
have been provided one, alter the accepted condition type and
update the four in-tree references.
Callers of ExponentialBackoffWithContext should use a context-aware
condition function (ConditionWithContextFunc). If the context can be
ignored, the helper ConditionFunc.WithContext can be used to convert
an existing function to the new type.
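A minimal sketch of the conversion, assuming the updated signature described above; `resourceReady` is a hypothetical condition:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// An existing condition that does not need the context.
	var resourceReady wait.ConditionFunc = func() (bool, error) {
		return true, nil // hypothetical readiness check
	}

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	backoff := wait.Backoff{Duration: 10 * time.Millisecond, Factor: 2.0, Steps: 4}

	// WithContext adapts the old signature to a ConditionWithContextFunc.
	err := wait.ExponentialBackoffWithContext(ctx, backoff, resourceReady.WithContext())
	fmt.Println(err)
}
```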
hack/pin-dependency.sh github.com/moby/ipvs v1.1.0
- go to a fixed tag for `vishvananda/netns`
- no more references to `pkg/errors`
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
While refactoring the backoff manager to simplify and unify the code
in wait, a race condition was encountered in
TestSharedInformerWatchDisruption. The new implementation failed
because the fake clock was not propagated to the backoff managers
when the reflector was used in a controller. After ensuring the
managers, reflector, controller, and informer shared the same
clock, the test was updated to avoid the race condition by
advancing the fake clock and adding real sleeps to wait for
asynchronous propagation across the various goroutines in the controller.
Due to the deep structure of informers, it is difficult to inject
hooks to avoid having to perform sleeps. At a minimum, the FakeClock
interface should allow a caller to determine the number of waiting
timers (to avoid the first sleep).
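A minimal sketch of the advance-then-sleep pattern described above, using the fake clock from k8s.io/utils/clock/testing; the goroutine here stands in for the controller's backoff wait and is not the real informer machinery:

```go
package main

import (
	"fmt"
	"time"

	testingclock "k8s.io/utils/clock/testing"
)

func main() {
	fakeClock := testingclock.NewFakeClock(time.Now())

	// The component under test must share this fake clock, e.g. a
	// goroutine waiting on a backoff timer.
	done := make(chan struct{})
	go func() {
		<-fakeClock.After(time.Minute) // backoff wait driven by the fake clock
		close(done)
	}()

	// Wait until the timer is registered before advancing the clock; today
	// HasWaiters only reports whether any timer exists, not how many.
	for !fakeClock.HasWaiters() {
		time.Sleep(time.Millisecond) // real sleep for asynchronous setup
	}

	fakeClock.Step(time.Minute) // advance the fake clock past the backoff
	<-done
	fmt.Println("timer fired")
}
```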
Remove dependencies on the internal fieldmanager from the admission
code. This prepares for moving fieldmanager out; the admission part
will stay here, so it cannot depend directly on internal.
This commit makes the job controller re-honor exponential backoff for
failed pods. Before this commit, the controller created pods without any
backoff. This is a regression: the controller previously created pods
with an exponential backoff delay (10s, 20s, 40s, ...).
The issue occurs only when the JobTrackingWithFinalizers feature is
enabled (which is enabled by default right now). With this feature, we
get an extra pod update event when the finalizer of a failed pod is
removed.
Note that the pod failure detection and new pod creation happen in the
same reconcile loop, so the 2nd pod is created immediately after the 1st
pod fails. The backoff is only applied from the 2nd pod failure onwards,
which means the 3rd pod is created 10s after the 2nd pod, the 4th pod
20s after the 3rd, and so on.
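A rough sketch of the restored schedule; `backoffFor` is illustrative, not the controller's actual code, and the 10s/360s values are assumed from the controller's DefaultJobBackOff and MaxJobBackOff defaults:

```go
package main

import (
	"fmt"
	"time"
)

const (
	defaultJobBackOff = 10 * time.Second  // assumed DefaultJobBackOff
	maxJobBackOff     = 360 * time.Second // assumed MaxJobBackOff
)

// backoffFor returns the delay before creating the replacement for the
// n-th failed pod: 0 for the first failure, then 10s, 20s, 40s, ...
func backoffFor(failures int) time.Duration {
	if failures < 2 {
		// Detection and creation happen in the same reconcile loop, so
		// the replacement for the 1st failure is created immediately.
		return 0
	}
	d := defaultJobBackOff << (failures - 2) // 10s * 2^(failures-2)
	if d < 0 || d > maxJobBackOff {
		return maxJobBackOff
	}
	return d
}

func main() {
	for f := 1; f <= 5; f++ {
		fmt.Printf("after failure %d: wait %v\n", f, backoffFor(f))
	}
}
```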
This commit fixes a few bugs:
1. Right now, each time `uncounted != nil` and the job does not see a
_new_ failure, `forget` is set to true and the job is removed from the
queue. This means the condition is also triggered each time the
finalizer of a failed pod is removed; `NumRequeues` is then reset, which
results in a backoff of 0s (see the workqueue sketch after this list).
2. Updates `updatePod` to apply backoff only when the controller sees a
particular pod fail for the first time. This is necessary to ensure that
the controller does not apply backoff when it sees a pod update event
for the finalizer removal of a failed pod.
3. If the `JobsReadyPods` feature is enabled and backoff is 0s, the job is
now enqueued after `podUpdateBatchPeriod`, instead of immediately. The
unit test for this check also had a few bugs:
- `DefaultJobBackOff` is overwritten to 0 in certain unit tests,
which meant that `DefaultJobBackOff` was considered to be 0,
effectively not running any meaningful checks.
- `JobsReadyPods` was not enabled for test cases that ran tests
which required the feature gate to be enabled.
- The check for expected and actual backoff had incorrect
calculations.
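A minimal sketch of why bug 1 collapses the backoff, using client-go's workqueue rate limiter; the job key is hypothetical:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	limiter := workqueue.NewItemExponentialFailureRateLimiter(10*time.Second, 360*time.Second)
	key := "default/my-job" // hypothetical job key

	fmt.Println(limiter.When(key)) // 10s: first requeue
	fmt.Println(limiter.When(key)) // 20s: second requeue

	// Forget resets NumRequeues; calling it on the finalizer-removal
	// update means the next requeue starts from the base delay again.
	limiter.Forget(key)
	fmt.Println(limiter.NumRequeues(key)) // 0
	fmt.Println(limiter.When(key))        // back to 10s
}
```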
Modify unwrap error utility to make it work with go1.20
This version of Go introduces a new layer of wrapping via
a new error type. The commit accounts for that while remaining
compatible with go1.19.
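A generic sketch of coping with that extra layer, assuming it refers to go1.20's multi-error wrapping (errors.Join and multi-%w fmt.Errorf); `unwrapAll` is hypothetical and is not the utility this commit modifies:

```go
package main

import (
	"errors"
	"fmt"
)

// unwrapAll walks a wrapped error, handling both the classic
// Unwrap() error chain and the Unwrap() []error tree that go1.20
// produces. The function itself compiles on go1.19; only the
// errors.Join call below needs go1.20.
func unwrapAll(err error) []error {
	var leaves []error
	switch x := err.(type) {
	case interface{ Unwrap() []error }:
		for _, e := range x.Unwrap() {
			leaves = append(leaves, unwrapAll(e)...)
		}
	case interface{ Unwrap() error }:
		leaves = append(leaves, unwrapAll(x.Unwrap())...)
	default:
		leaves = append(leaves, err)
	}
	return leaves
}

func main() {
	err := fmt.Errorf("outer: %w", errors.Join(errors.New("a"), errors.New("b")))
	fmt.Println(unwrapAll(err)) // [a b]
}
```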
Signed-off-by: Madhav Jivrajani <madhav.jiv@gmail.com>