The way Ginkgo handles interrupts is:
- It starts running AfterSuite hooks in a separate goroutine (this includes
CleanupAction hooks).
- Once the AfterSuite hook is done executing, it calls
os.Exit(1) on the test suite.
So a cleanupFunc() that runs via defer in a test can be interrupted as
follows:
- cleanupFunc starts running via defer (or an AfterEach hook), but the first
thing that function does is remove its cleanupHandle via
framework.RemoveCleanupAction.
- The test suite receives an interrupt from the user and the AfterSuite block
starts executing.
- Remember that while cleanupFunc is running in goroutine #1,
AfterSuite is running concurrently in goroutine #2.
- The AfterSuite hook has a bunch of CleanupActions it needs to run, which
were registered via framework.AddCleanupAction(cleanupFunc). But once
cleanupFunc starts executing via defer in the test, it removes the
cleanupHandle from the framework's AfterSuite hooks.
- So if AfterSuite has nothing left to run (because those actions were
removed via framework.RemoveCleanupAction), it simply proceeds to the last
framework.AfterEach action and calls os.Exit(1).
- So if os.Exit(1) is called before the deferred cleanupFunc has a chance to
finish, the cleanup will not complete (see the sketch after this list).
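A minimal sketch of the registration pattern described above, assuming the
historical framework.AddCleanupAction/RemoveCleanupAction API of the e2e
framework (illustrative, not the exact test code):

    package e2e

    import (
        "k8s.io/kubernetes/test/e2e/framework"
    )

    // testWithCleanup sketches the pattern: register a cleanup action so
    // AfterSuite can run it on interrupt, and defer it so it also runs at
    // the end of a normal, uninterrupted test.
    func testWithCleanup() {
        var cleanupHandle framework.CleanupActionHandle

        cleanupFunc := func() {
            // Unregister first, so AfterSuite will not run this action a
            // second time. This is also what lets an interrupt-driven
            // AfterSuite find nothing to do and call os.Exit(1) while
            // cleanupFunc is still running in the test's goroutine.
            framework.RemoveCleanupAction(cleanupHandle)

            // ... actual resource teardown here; it can be cut short if
            // AfterSuite exits the process first.
        }

        cleanupHandle = framework.AddCleanupAction(cleanupFunc)
        defer cleanupFunc()

        // ... test body ...
    }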
This cluster add-on is required for snapshotting of CSI volumes and
must be installed when bringing up a cluster, because CSI driver
installations depend on it.
It is unclear how many users of the script need CSI snapshotting,
therefore it is disabled by default (= the previous behavior).
Until now, users were always asked to manually convert a component config to a
version supported by kubeadm, if kubeadm did not support its version.
This was true even for configs generated with older kubeadm versions, thus
forcing users to manually convert configs that kubeadm itself had generated.
This is neither appropriate nor user friendly, even though it tends to be the
most common case. Hence, we sign kubeadm generated component configs stored in
config maps with a SHA256 checksum. If a config is loaded by kubeadm from a
config map and has a valid signature, it's considered "kubeadm generated"; if
a version migration is required, this config is automatically discarded and a
new one is generated.
If there is no checksum, or the checksum does not match, the config is
considered "user supplied" and, if a version migration is required, kubeadm
will bail out with an error, requiring manual config migration (as it does
today).
The behavior when supplying component configs on the kubeadm command line
does not change. Kubeadm would still bail out with an error requiring migration
if it can recognize their groups but not their versions.
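A minimal sketch of the signing idea, assuming plain SHA-256 over the config
payload and a hypothetical annotation key (kubeadm's actual signing code may
differ in detail):

    package componentconfigs

    import (
        "crypto/sha256"
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    // checksumAnnotation is a hypothetical annotation key for this sketch.
    const checksumAnnotation = "kubeadm.kubernetes.io/component-config.hash"

    // checksumFor computes a SHA-256 checksum over the config payload
    // stored under dataKey in the config map.
    func checksumFor(cm *v1.ConfigMap, dataKey string) string {
        return fmt.Sprintf("sha256:%x", sha256.Sum256([]byte(cm.Data[dataKey])))
    }

    // sign stores the checksum so that a later load can tell a kubeadm
    // generated config apart from a user supplied one.
    func sign(cm *v1.ConfigMap, dataKey string) {
        if cm.Annotations == nil {
            cm.Annotations = map[string]string{}
        }
        cm.Annotations[checksumAnnotation] = checksumFor(cm, dataKey)
    }

    // isKubeadmGenerated reports whether the stored checksum matches the
    // data. On a version migration: true means discard and regenerate;
    // false means treat as user supplied and bail out with an error.
    func isKubeadmGenerated(cm *v1.ConfigMap, dataKey string) bool {
        return cm.Annotations[checksumAnnotation] == checksumFor(cm, dataKey)
    }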
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
The assumption that 90% of images on Docker Hub fall into the (23~1000)MB
range is based on individual container images, not on the pod. Since a pod
might hold multiple container images, it's better to multiply the assumption
by the number of container images in the pod.
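A minimal sketch of the scaling idea, with hypothetical names and thresholds
(the scheduler's real image-locality scoring differs in detail):

    package main

    import "fmt"

    const (
        mb           int64 = 1024 * 1024
        minThreshold       = 23 * mb   // assumed lower bound per image
        maxThreshold       = 1000 * mb // assumed upper bound per image
    )

    // scaledImageScore clamps the total size of a pod's images into a range
    // whose upper bound is multiplied by the number of container images,
    // since the (23~1000)MB assumption is per image, not per pod.
    func scaledImageScore(totalImageSize int64, numContainers int) float64 {
        upper := maxThreshold * int64(numContainers)
        switch {
        case totalImageSize < minThreshold:
            totalImageSize = minThreshold
        case totalImageSize > upper:
            totalImageSize = upper
        }
        return float64(totalImageSize-minThreshold) / float64(upper-minThreshold)
    }

    func main() {
        // A pod with three containers totalling 1.5GB of images.
        fmt.Printf("score: %.2f\n", scaledImageScore(1536*mb, 3))
    }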
Signed-off-by: Dave Chen <dave.chen@arm.com>
This change removes the `coreCount` variable and associated counting
logic from the cluster size autoscaling test. This variable was used
during the 1.9 release era, but the tests that used it were removed
before the 1.10 release. Please see the referenced commits[0][1] for
more information.
[0] c8b807837a
[1] fd738945b1
The e2e/lifecycle package is owned by SIG Cluster Lifecycle, although maybe
this should be moved to e2e/auth at some point.
- copy the OWNERS from /cmd/kubeadm (minus the area/kubeadm label)
- remove the OWNERS file in /bootstrap, letting the parent OWNERS file
manage this sub-package.