Back up the kubelet config file during `kubeadm upgrade apply`; some
code refactoring is done to de-duplicate redundant logic.
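A minimal sketch of the backup step, assuming an illustrative helper
name and paths (the actual kubeadm code and backup directory differ):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// backupKubeletConfig copies the kubelet config file into backupDir,
// preserving the file name. Hypothetical helper for illustration only.
func backupKubeletConfig(src, backupDir string) error {
	data, err := os.ReadFile(src)
	if err != nil {
		return fmt.Errorf("failed to read %q: %w", src, err)
	}
	dst := filepath.Join(backupDir, filepath.Base(src))
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return fmt.Errorf("failed to write backup %q: %w", dst, err)
	}
	return nil
}

func main() {
	// Assumed paths for illustration only.
	if err := backupKubeletConfig("/var/lib/kubelet/config.yaml", "/etc/kubernetes/tmp"); err != nil {
		fmt.Println(err)
	}
}
```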
Signed-off-by: Dave Chen <dave.chen@arm.com>
Previously the error only said that copying the file failed with
`exit status 1`, without much detail about what went wrong.
Combining stderr and stdout and showing that output makes it easier
to diagnose the problem.
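As a sketch of the idea, assuming the copy is shelled out via os/exec
(the actual kubeadm call site may differ): `CombinedOutput()` captures
stderr together with stdout, so the returned error carries the real
failure reason instead of a bare exit status.

```go
package main

import (
	"fmt"
	"os/exec"
)

// copyFile shells out to "cp" and surfaces the command's combined
// stdout/stderr in the error, instead of just "exit status 1".
func copyFile(src, dst string) error {
	out, err := exec.Command("cp", src, dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to copy %q to %q: %v, output: %s", src, dst, err, out)
	}
	return nil
}

func main() {
	if err := copyFile("/etc/kubernetes/kubelet.conf", "/tmp/kubelet.conf"); err != nil {
		fmt.Println(err)
	}
}
```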
Signed-off-by: Dave Chen <dave.chen@arm.com>
If we are dry-running, do not attempt to fetch the /version
resource; instead return the stored FakeServerVersion, which is set
when constructing the dry-run client in upgrade/common.go#getClient().
The problem here is that during upgrade the dry-run client reactors
are backed by a dynamic client via
NewClientBackedDryRunGetterFromKubeconfig(), and for GetActions there
seems to be no analog to the Discovery().ServerVersion() call for a
dynamic client(?).
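A simplified sketch of the short-circuit; the names and the signature
here are illustrative, not the real kubeadm VersionGetter:

```go
package upgrade

import (
	"k8s.io/apimachinery/pkg/version"
	"k8s.io/client-go/kubernetes"
)

// versionGetter is a simplified stand-in for kubeadm's VersionGetter.
type versionGetter struct {
	client            kubernetes.Interface
	dryRun            bool
	fakeServerVersion *version.Info // stored when building the dry-run client in getClient()
}

// ClusterVersion returns the stored fake version during a dry run
// instead of hitting /version, which the dynamic-client-backed
// reactors cannot serve.
func (g *versionGetter) ClusterVersion() (*version.Info, error) {
	if g.dryRun {
		return g.fakeServerVersion, nil
	}
	return g.client.Discovery().ServerVersion()
}
```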
Currently, some unit tests fail on Windows for various reasons
(a few illustrative snippets follow the list):
- filepath.IsAbs does not consider "/" or "\" as absolute paths, even
though files can be addressed as such.
- paths are not properly joined (filepath.Join should be used).
- files are not closed, which means that they cannot be removed / renamed.
- some assertions fail because slashes / backslashes do not match.
- backslashes need to be escaped in yaml files, or the string put
between '' instead of "".
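Illustrative snippets of the kinds of fixes involved (not the exact
test code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// On Windows, filepath.IsAbs returns false for "/" and "\" rooted
	// paths because they lack a volume name, even though such paths
	// can still address files.
	fmt.Println(filepath.IsAbs("/tmp/kubelet.conf")) // false on Windows, true on Linux

	// Join paths with filepath.Join instead of concatenating with "/",
	// so separators match platform expectations and test assertions.
	cfgPath := filepath.Join(os.TempDir(), "kubelet.conf")

	f, err := os.Create(cfgPath)
	if err != nil {
		panic(err)
	}
	// Close the file before removing or renaming it; Windows refuses
	// to remove/rename a file that still has an open handle.
	if err := f.Close(); err != nil {
		panic(err)
	}
	if err := os.Remove(cfgPath); err != nil {
		panic(err)
	}

	// In YAML, Windows paths need escaped backslashes inside double
	// quotes ("C:\\dir") or single quotes instead ('C:\dir').
}
```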
With phases/kubelet/WriteConfigToDisk() about to support patches,
the function must accept an io.Writer to which the PatchManager can
output, as well as a patches directory.
Modify all call sites of WriteConfigToDisk() to properly prepare and
pass an io.Writer and a patches dir to it. As a result, the command
phases for init/join/upgrade pass the root io.Writer (usually stdout)
and the patchesDir populated either via the config file or the
--patches flag.
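A simplified sketch of the new shape; the types and the patch helper
are illustrative, not the exact kubeadm API:

```go
package kubelet

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// WriteConfigToDisk applies any user patches from patchesDir to the
// marshaled kubelet config and writes the result under kubeletDir.
// PatchManager output goes to the supplied io.Writer.
func WriteConfigToDisk(kubeletCfg []byte, kubeletDir, patchesDir string, output io.Writer) error {
	if patchesDir != "" {
		patched, err := applyPatches(kubeletCfg, patchesDir, output)
		if err != nil {
			return err
		}
		kubeletCfg = patched
	}
	return os.WriteFile(filepath.Join(kubeletDir, "config.yaml"), kubeletCfg, 0644)
}

// applyPatches is a stand-in for the PatchManager flow; this sketch
// only logs and returns the config unchanged.
func applyPatches(cfg []byte, patchesDir string, output io.Writer) ([]byte, error) {
	fmt.Fprintf(output, "[patches] reading patches from directory %q\n", patchesDir)
	return cfg, nil
}
```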
- initconfiguration.go: stop applying the "master" taint
for new clusters; update related unit tests in _test.go
- apply.go: Remove logic related to cleanup of the "master" label
during upgrade
- apply.go: Add cleanup of the "master" taint on CP nodes
during upgrade
- controlplane_nodes_test.go: remove the test for the old "master"
taint on nodes (this needs a backport to 1.24, because we have a
kubeadm 1.25 vs kubernetes test suite 1.24 e2e test)
During upgrade, when a CP node is missing the old / legacy "master"
taint, assume the user has manually removed it to allow workloads
to schedule. In such cases, do not re-taint the node with the new
"control-plane" taint.
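A sketch of that rule; the taint keys are the real kubeadm constants
spelled out, while the helper names are illustrative:

```go
package upgrade

import (
	v1 "k8s.io/api/core/v1"
)

const (
	oldTaintKey = "node-role.kubernetes.io/master"
	newTaintKey = "node-role.kubernetes.io/control-plane"
)

// hasTaint reports whether the node carries a taint with the given key.
func hasTaint(node *v1.Node, key string) bool {
	for _, t := range node.Spec.Taints {
		if t.Key == key {
			return true
		}
	}
	return false
}

// shouldAddControlPlaneTaint implements the rule above: if the user
// has already removed the legacy "master" taint, do not re-taint the
// node; otherwise add the new taint if it is missing.
func shouldAddControlPlaneTaint(node *v1.Node) bool {
	if !hasTaint(node, oldTaintKey) {
		return false
	}
	return !hasTaint(node, newTaintKey)
}
```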
- During "upgrade apply" call a new function AddNewControlPlaneTaint()
that finds all nodes with the new "control-plane" node-role label
and adds the new "control-plane" taint to them.
- The function is called in "apply" and is separate from
the step to remove the old "master" label for better debugging
if errors occur.
- Rename the function in postupgrade.go to better reflect
what is being done.
- During "upgrade apply" find all nodes with the old label
and remove it by calling PatchNode.
- Update the health check for CP nodes to no longer track "master"
labeled nodes. At this point all CP nodes should have the
"control-plane" label, and we can use that selector alone.