Setting a new consumption target in autoscaling.ResourceConsumer caused
the internal sleep between consumption requests to restart, delaying
the next consumption request by a gap of 0-30s.
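A minimal sketch of the shape of the fix (the struct and field names here are illustrative, not the actual ResourceConsumer internals): keep a single ticker running across target changes instead of restarting the sleep whenever a new target arrives.

```go
package resourceconsumer

import "time"

// consumer is a hypothetical stand-in for autoscaling.ResourceConsumer.
type consumer struct {
	target   int      // current consumption target (e.g. millicores)
	targetCh chan int // receives new targets from the test
}

func (rc *consumer) sendConsumeRequest(target int) { /* issue one consumption request */ }

// consumeLoop keeps a single ticker running across target changes, so
// setting a new target no longer delays the next request by up to 30s.
func (rc *consumer) consumeLoop(stop <-chan struct{}) {
	ticker := time.NewTicker(30 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case t := <-rc.targetCh:
			rc.target = t // update the target only; do NOT reset the ticker
		case <-ticker.C:
			rc.sendConsumeRequest(rc.target)
		case <-stop:
			return
		}
	}
}
```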
Now that projected service account tokens do not require the token secret
to be created, drop the wait condition on the token secret and simply
wait for the service account.
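A hedged sketch of what the simplified wait might look like with client-go (the helper name and polling intervals are illustrative, not the e2e framework's actual code):

```go
package e2e

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDefaultServiceAccount polls until the "default" ServiceAccount
// exists, with no condition on a token secret being populated.
func waitForDefaultServiceAccount(ctx context.Context, c kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().ServiceAccounts(ns).Get(ctx, "default", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		return err == nil, err
	})
}
```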
The change from service account secrets to projected tokens, and the
new dependency on kube-root-ca.crt to start pods with those projected
tokens, mean that e2e tests can start before kube-root-ca.crt is
created in a namespace. Wait for the default service account AND the
kube-root-ca.crt configmap in normal e2e tests.
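Extending the sketch above (same imports), the additional wait on the configmap might look like this; the helper name is again illustrative:

```go
// waitForKubeRootCA polls until the kube-root-ca.crt ConfigMap exists,
// which pods mounting projected service account tokens depend on.
func waitForKubeRootCA(ctx context.Context, c kubernetes.Interface, ns string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		_, err := c.CoreV1().ConfigMaps(ns).Get(ctx, "kube-root-ca.crt", metav1.GetOptions{})
		if apierrors.IsNotFound(err) {
			return false, nil // not created yet; keep polling
		}
		return err == nil, err
	})
}
```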
Having only the "master" taint in the list of non-blocking taints
blocks kubeadm / kind clusters from migrating to applying
both the "control-plane" and "master" taints in 1.24.
Add "control-plane" to the list of taints.
Leave a TODO to clean up the "master" taint in 1.25+.
It has to be removed either way as part of the inclusive
language cleanup efforts.
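Sketched as a Go slice for illustration (the real default lives behind the e2e framework's non-blocking-taints setting):

```go
// Taints that should not block e2e tests from treating a node as ready.
var nonBlockingTaints = []string{
	"node-role.kubernetes.io/control-plane",
	// TODO: remove the "master" taint in 1.25+ as part of the
	// inclusive language cleanup.
	"node-role.kubernetes.io/master",
}
```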
Some tests have a short timeout for starting the pods (1 minute), but if
those tests happen to be the first ones to run and the images have to be
pulled, the test could time out, especially with larger images. This
commit allows us to prepull commonly used E2E test images, so this issue
can be avoided.
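A hedged sketch of the prepull idea, assuming agnhost-style images that support a no-op pause command (the helper name and DaemonSet shape are illustrative): one DaemonSet per image forces every node to pull it before the timed tests run.

```go
package e2e

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// prepullImages creates one DaemonSet per image; scheduling a pod on
// every node makes kubelet pull the image ahead of the real tests.
func prepullImages(ctx context.Context, c kubernetes.Interface, ns string, images []string) error {
	for i, image := range images {
		labels := map[string]string{"app": fmt.Sprintf("prepull-%d", i)}
		ds := &appsv1.DaemonSet{
			ObjectMeta: metav1.ObjectMeta{Name: fmt.Sprintf("prepull-%d", i)},
			Spec: appsv1.DaemonSetSpec{
				Selector: &metav1.LabelSelector{MatchLabels: labels},
				Template: v1.PodTemplateSpec{
					ObjectMeta: metav1.ObjectMeta{Labels: labels},
					Spec: v1.PodSpec{
						Containers: []v1.Container{{
							Name:    "prepull",
							Image:   image,
							Command: []string{"/agnhost", "pause"}, // assumption: agnhost-style image
						}},
					},
				},
			},
		}
		if _, err := c.AppsV1().DaemonSets(ns).Create(ctx, ds, metav1.CreateOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```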
For some test failures, checking the pod logs can yield useful
information for investigating certain failures / flakes (for example,
if there are networking issues, we could at least see whether requests
reach the containers, since agnhost logs the connections / requests,
or whether there were any other issues during the container's startup).
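A hedged sketch of fetching those logs with client-go so they can be dumped on failure (the helper name is illustrative; GetLogs is the real client-go call):

```go
package e2e

import (
	"context"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
)

// podLogs returns the logs of one container, e.g. to dump agnhost's
// connection log when a networking test fails.
func podLogs(ctx context.Context, c kubernetes.Interface, ns, pod, container string) (string, error) {
	raw, err := c.CoreV1().Pods(ns).GetLogs(pod, &v1.PodLogOptions{Container: container}).Do(ctx).Raw()
	if err != nil {
		return "", err
	}
	return string(raw), nil
}
```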
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.
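A sketch of the resulting split, with field sets that mirror the core v1 types (ExecAction, HTTPGetAction, TCPSocketAction, and GRPCAction are the existing action types):

```go
// LifecycleHandler: "write"-style actions run on lifecycle events.
type LifecycleHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
}

// ProbeHandler: "read"-style checks; gRPC is probe-only because it is
// explicitly a health check, not an arbitrary RPC.
type ProbeHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
	GRPC      *GRPCAction
}
```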
* Run update scripts