By default, the AWS EBS CSI driver does not increase a volume's IOPS to the
minimum value supported by AWS, since such an increase results in higher
costs, which might surprise users. Instead, it returns an error asking the
user to increase iopsPerGB or the volume size.
The in-tree volume plugin does increase the IOPS. To preserve this behavior
for volumes migrated to CSI, the translated StorageClasses should include
the "allowAutoIOPSPerGBIncrease" option, which allows the driver to
increase IOPS as the in-tree volume plugin would.
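For illustration, a StorageClass translated for the EBS CSI driver might carry the option like this (a minimal Go sketch using the `storage/v1` API types; the name and parameter values other than `allowAutoIOPSPerGBIncrease` are assumptions, not taken from a real translation):

```go
package example

import (
	storagev1 "k8s.io/api/storage/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// MigratedStorageClass sketches what a StorageClass translated from the
// in-tree AWS EBS plugin to the EBS CSI driver might look like. The
// "allowAutoIOPSPerGBIncrease" parameter tells the driver to raise IOPS to
// the AWS minimum instead of returning an error.
func MigratedStorageClass() storagev1.StorageClass {
	return storagev1.StorageClass{
		ObjectMeta:  metav1.ObjectMeta{Name: "io1-migrated"},
		Provisioner: "ebs.csi.aws.com",
		Parameters: map[string]string{
			"type":                       "io1",
			"iopsPerGB":                  "10",
			"allowAutoIOPSPerGBIncrease": "true",
		},
	}
}
```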
- Add log functions to facilitate debug logging.
- Wrap commands called in main with debug logging.
- Configure a systemd service to forward the logs to the serial port.
- Add a 'retry-forever' function to harden download steps (the retry pattern is sketched after this list).
- Add default value support to 'get-metadata-value' function.
- Fix some spellcheck lints.
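The 'retry-forever' helper itself is a shell function in the bootstrap scripts; the Go sketch below only illustrates the retry-until-success pattern it implements, with hypothetical names and intervals:

```go
package main

import (
	"fmt"
	"time"
)

// retryForever keeps calling fn until it succeeds, logging each failure and
// sleeping between attempts. Purely illustrative of the pattern used to
// harden download steps; the real helper is a shell function.
func retryForever(desc string, interval time.Duration, fn func() error) {
	for attempt := 1; ; attempt++ {
		if err := fn(); err != nil {
			fmt.Printf("%s failed (attempt %d): %v; retrying in %s\n", desc, attempt, err, interval)
			time.Sleep(interval)
			continue
		}
		return
	}
}

func main() {
	retryForever("download artifact", 5*time.Second, func() error {
		// Real code would perform the download here.
		return nil
	})
}
```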
This replaces the check of mount propagation to/from the host OS mount
namespace with a similar check against the mount namespace in which the
kubelet is running (which may or may not be the same mount namespace as
the host OS).
This addresses issue #100259.
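As a rough illustration of what such a check involves (not the actual kubelet code), here is a Go sketch that reads `/proc/self/mountinfo`, i.e. the mount table of the namespace the current process runs in, and looks for the `shared:` propagation flag on a mount point:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// isSharedMount reports whether the mountinfo entry for mountPoint in the
// current process's mount namespace carries the "shared:" propagation flag.
// Illustrative only; the real check handles more edge cases.
func isSharedMount(mountPoint string) (bool, error) {
	f, err := os.Open("/proc/self/mountinfo")
	if err != nil {
		return false, err
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		fields := strings.Fields(scanner.Text())
		// Field 5 (index 4) is the mount point; optional fields such as
		// "shared:N" follow from index 6 until the "-" separator.
		if len(fields) < 7 || fields[4] != mountPoint {
			continue
		}
		for _, opt := range fields[6:] {
			if opt == "-" {
				break
			}
			if strings.HasPrefix(opt, "shared:") {
				return true, nil
			}
		}
		return false, nil
	}
	return false, scanner.Err()
}

func main() {
	shared, err := isSharedMount("/var/lib/kubelet")
	fmt.Println(shared, err)
}
```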
It turns out that setting a timeout on the HTTP client affects watch requests made by the delegated authentication component.
With a 10-second timeout, watch requests are re-established after exactly 10 seconds, even though the default request timeout for them is ~5 minutes.
This happens because when multiple timeouts are set, the stdlib applies the smallest one, leaving the others ineffective.
For more details, see a937729c2c/src/net/http/client.go (L364)
Instead of setting a timeout on the HTTP client, we should use a context for cancellation.
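A minimal sketch of the difference (URLs are placeholders): a client-wide `Timeout` also caps long-running watch requests, whereas per-request contexts let short requests carry a deadline while watches stay open:

```go
package main

import (
	"context"
	"net/http"
	"time"
)

func main() {
	// Don't do this: Timeout covers the whole request, including reading the
	// body, so a watch stream would be torn down after 10 seconds.
	// client := &http.Client{Timeout: 10 * time.Second}

	client := &http.Client{} // no client-wide timeout

	// Short-lived requests get a deadline via context...
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	req, _ := http.NewRequestWithContext(ctx, http.MethodGet, "https://example.invalid/tokenreview", nil)
	_, _ = client.Do(req)

	// ...while watch-style requests use a cancellable context without a
	// deadline, so they are not re-established every 10 seconds.
	watchCtx, cancelWatch := context.WithCancel(context.Background())
	defer cancelWatch()
	watchReq, _ := http.NewRequestWithContext(watchCtx, http.MethodGet, "https://example.invalid/watch", nil)
	_, _ = client.Do(watchReq)
}
```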
This changes the `/ephemeralcontainers` subresource of `/pods` to use
the `Pod` kind rather than `EphemeralContainers`.
When we initially designed this API, it seemed preferable to create a new
kind containing only the pod's ephemeral containers, similar to how
binding and scaling work.
It later became clear that this made admission control more difficult
because the controller wouldn't be presented with the entire Pod, so we
updated this to operate on the entire Pod, similar to how `/status`
works.
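With this change, a client updates ephemeral containers by sending and receiving a full `Pod` through the subresource. A client-go sketch, assuming an existing clientset and an already-fetched pod (the container name and image are placeholders):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addDebugContainer appends an ephemeral container to the pod and writes it
// back through the /ephemeralcontainers subresource, which accepts and
// returns a full Pod rather than a separate EphemeralContainers object.
func addDebugContainer(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) (*corev1.Pod, error) {
	pod.Spec.EphemeralContainers = append(pod.Spec.EphemeralContainers, corev1.EphemeralContainer{
		EphemeralContainerCommon: corev1.EphemeralContainerCommon{
			Name:  "debugger",
			Image: "busybox",
		},
	})
	return cs.CoreV1().Pods(pod.Namespace).UpdateEphemeralContainers(ctx, pod.Name, pod, metav1.UpdateOptions{})
}
```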
To allow the tests to run in a single-node cluster, create two pods that each consume 2/5 of the extended resource instead of one pod consuming 2/3.
With the low-priority pod consuming 2/5 of the extended resource, a high-priority pod consuming another 2/5 can still be scheduled even when there is only a single node.
This ensures that once the preemptor pod, which also consumes 2/5 of the extended resource, gets scheduled, only the low-priority pod is preempted while the high-priority pod is left untouched.
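For illustration only, the per-pod request might be built like this (the extended resource name is hypothetical; the node is assumed to advertise a capacity of 5, and extended resources require limits equal to requests):

```go
package example

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// Hypothetical extended resource advertised by the node with capacity 5.
const testExtendedResource corev1.ResourceName = "example.com/fake-resource"

// requestTwoFifths returns requests for 2 out of the node's 5 units, i.e.
// 2/5 of the extended-resource capacity.
func requestTwoFifths() corev1.ResourceRequirements {
	return corev1.ResourceRequirements{
		Requests: corev1.ResourceList{testExtendedResource: resource.MustParse("2")},
		Limits:   corev1.ResourceList{testExtendedResource: resource.MustParse("2")},
	}
}
```

On a node with capacity 5, the low-priority and high-priority pods together use 4 units, so both fit; the preemptor's additional 2 units no longer fit, and evicting only the low-priority pod is enough to make room.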