Due to a recent change in k8s.io/gengo/v2, register-gen is missing
2 imports. The imports were previously inserted automatically during
code generation by k8s.io/gengo/v2; now they are imported directly by
register-gen.
An output_tests case has been added to register-gen. It generates an
example that would have been invalid with these gengo changes.
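For illustration only, a generated register file roughly looks like the
following once the imports are emitted explicitly; the group, version,
and type names are placeholders, not the actual output_tests content:

    // Code generated by register-gen. DO NOT EDIT.
    package v1alpha1

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/runtime"
        "k8s.io/apimachinery/pkg/runtime/schema"
    )

    var (
        // SchemeGroupVersion identifies the group/version of these objects.
        SchemeGroupVersion = schema.GroupVersion{Group: "example.com", Version: "v1alpha1"}
        // SchemeBuilder collects the functions that register types into a scheme.
        SchemeBuilder = runtime.NewSchemeBuilder(addKnownTypes)
        // AddToScheme applies all stored functions to a scheme.
        AddToScheme = SchemeBuilder.AddToScheme
    )

    func addKnownTypes(scheme *runtime.Scheme) error {
        scheme.AddKnownTypes(SchemeGroupVersion)
        // Register common meta types (ListOptions etc.) for this group/version.
        metav1.AddToGroupVersion(scheme, SchemeGroupVersion)
        return nil
    }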
Signed-off-by: Lionel Jouin <lionel.jouin@est.tech>
IIUC, before the translator handler was used, ping data could be delivered
from the client to the runtime side, since kube-apiserver does not parse any
client data. With WebSocket, however, the server responds to the client with
a pong without forwarding the data to the runtime side. If a proxy is present,
it may close the connection due to inactivity. SPDY's PingPeriod can help
address this issue.
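As a minimal sketch only (not the apiserver or client-go implementation), a
periodic ping along the lines of SPDY's PingPeriod keeps traffic flowing over
an otherwise idle proxied connection; gorilla/websocket and the period values
are used here purely as an example:

    package keepalive

    import (
        "time"

        "github.com/gorilla/websocket"
    )

    // keepAlive sends a WebSocket ping control frame at a fixed period so
    // that intermediate proxies see traffic even when no application data
    // flows. The deadline of period/2 is arbitrary for this example.
    func keepAlive(conn *websocket.Conn, period time.Duration, stop <-chan struct{}) {
        ticker := time.NewTicker(period)
        defer ticker.Stop()
        for {
            select {
            case <-ticker.C:
                deadline := time.Now().Add(period / 2)
                if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
                    // Connection is gone; let the caller's read loop observe the error.
                    return
                }
            case <-stop:
                return
            }
        }
    }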
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Co-authored-by: Antonio Ojea <aojea@google.com>
(cherry picked from commit dc59c0246f)
Signed-off-by: Wei Fu <fuweid89@gmail.com>
v1beta4 added the Timeouts struct and an EtcdAPICall timeout
field, but the field was never used in the etcd client calls.
This is a bug, so it should be fixed. We also reduced
the timeout from a 200-second exponential backoff to a 2-minute
linear default timeout.
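As a minimal sketch (assumed helper name, not the kubeadm code), wiring such
a timeout into an etcd client call amounts to bounding the call with a flat
deadline instead of wrapping it in an exponential backoff:

    package etcdutil

    import (
        "context"
        "time"

        clientv3 "go.etcd.io/etcd/client/v3"
    )

    // listMembers bounds a single etcd API call with a flat deadline. In
    // kubeadm the duration would come from the configured Timeouts.EtcdAPICall
    // value; here it is just a parameter.
    func listMembers(cli *clientv3.Client, timeout time.Duration) (*clientv3.MemberListResponse, error) {
        ctx, cancel := context.WithTimeout(context.Background(), timeout)
        defer cancel()
        return cli.MemberList(ctx)
    }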
This makes the script identical to current
master (f3cbd79db7). This is needed
because pull-kubernetes-apidiff-client-go is the same for all
branches and assumes that the script automatically determines
the diff based on Prow environment variables.
- Use environment variables to pass string arguments to the node log
query PS command (see the sketch below)
- Split getLoggingCmd into getLoggingCmdEnv and getLoggingCmdArgs
for better modularization
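A minimal sketch of the first point, assuming a Windows PowerShell log query;
the function and variable names are illustrative, not the kubelet
implementation. The string value reaches PowerShell through the process
environment instead of being interpolated into the command line:

    package nodelog

    import (
        "context"
        "os"
        "os/exec"
    )

    // winEventCmd builds a PowerShell command that reads the provider name
    // from an environment variable rather than splicing it into the command
    // string, so the value is never re-parsed by the shell.
    func winEventCmd(ctx context.Context, provider string) *exec.Cmd {
        cmd := exec.CommandContext(ctx, "PowerShell.exe", "-Command",
            "Get-WinEvent -ProviderName $Env:NLQ_PROVIDER | Format-Table -AutoSize")
        cmd.Env = append(os.Environ(), "NLQ_PROVIDER="+provider)
        return cmd
    }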
We want to be sure that the maximum number of pods per claim can actually be
scheduled concurrently. Previously the test only made sure that they ran
eventually.
Running 256 pods only works with more than 2 nodes, so network-attached
resources have to be used. This is what the increased limit is meant for
anyway. Because node selector validation was tightened in 1.32, the E2E test
has to use MatchExpressions, which allow listing node names.
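For illustration, a node selector that lists node names via MatchExpressions
could look roughly like this (field values are placeholders, not the actual
E2E test code):

    package example

    import v1 "k8s.io/api/core/v1"

    // nodeSelectorFor pins pods to an explicit list of node names by matching
    // on the kubernetes.io/hostname label, which remains valid under the
    // tightened 1.32 validation.
    func nodeSelectorFor(nodeNames []string) *v1.NodeSelector {
        return &v1.NodeSelector{
            NodeSelectorTerms: []v1.NodeSelectorTerm{{
                MatchExpressions: []v1.NodeSelectorRequirement{{
                    Key:      "kubernetes.io/hostname",
                    Operator: v1.NodeSelectorOpIn,
                    Values:   nodeNames,
                }},
            }},
        }
    }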
The original limit of 32 seemed sufficient for a single GPU on a node. But for
shared non-local resources it is too low. For example, a ResourceClaim might be
used to allocate an interconnect channel that connects all pods of a workload
running on several different nodes, in which case the number of pods can be
considerably larger.
256 is high enough for currently planned systems. If we need something even
higher in the future, an alternative approach might be needed to avoid
scalability problems.
Normally, increasing such a limit would have to be done incrementally over two
releases. In this case we decided on
Slack (https://kubernetes.slack.com/archives/CJUQN3E4T/p1734593174791519) to
make an exception and apply this change to current master for 1.33 and backport
it to the next 1.32.x patch release for production usage.
This breaks downgrades to a 1.32 release without this change if there are
ResourceClaims with more than 32 consumers in ReservedFor. In practice, this
breakage is very unlikely because there are no workloads yet which need so
many consumers, and such downgrades to a previous patch release are also
unlikely. Downgrades to 1.31 already weren't supported when using DRA v1beta1.