Previously the error only said that the file copy failed with `exit status 1`,
with few details about what actually went wrong.
Combining stderr and stdout and including that output in the error
makes such problems much easier to diagnose.
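A minimal sketch of the idea in Go, assuming the copy is shelled out via os/exec (the function name and command here are illustrative, not the actual kubeadm code):

```go
package main

import (
	"fmt"
	"os/exec"
)

// copyFile shells out to cp and folds stdout and stderr into the
// returned error, so a failure carries the tool's full output instead
// of a bare "exit status 1".
func copyFile(src, dst string) error {
	out, err := exec.Command("cp", src, dst).CombinedOutput()
	if err != nil {
		return fmt.Errorf("failed to copy %q to %q: %v, output: %s", src, dst, err, out)
	}
	return nil
}

func main() {
	if err := copyFile("/tmp/does-not-exist", "/tmp/dst"); err != nil {
		fmt.Println(err)
	}
}
```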
Signed-off-by: Dave Chen <dave.chen@arm.com>
A check that verifies that kubeadm does not "upgrade" to an older release was
overly optimized by skipping the upgrade when the new version is the same as the
old one. This somewhat makes sense, but it means that changes to any of the etcd
fields in the ClusterConfiguration won't be applied if the etcd version is not
changed.
Hence, this simple change ensures that the upgrade is done even when no version
change takes place.
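A minimal sketch of the adjusted check, assuming versions are compared with k8s.io/apimachinery's version package; shouldUpgradeEtcd is an illustrative name, not the actual kubeadm function:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/version"
)

// shouldUpgradeEtcd keeps the downgrade protection but no longer skips
// the upgrade when the versions are equal, so changed etcd fields in
// the ClusterConfiguration still make it into a regenerated manifest.
func shouldUpgradeEtcd(current, desired string) (bool, error) {
	cur, err := version.ParseSemantic(current)
	if err != nil {
		return false, err
	}
	des, err := version.ParseSemantic(desired)
	if err != nil {
		return false, err
	}
	if des.LessThan(cur) {
		return false, fmt.Errorf("cannot downgrade etcd from %s to %s", current, desired)
	}
	// Equal versions deliberately fall through: the upgrade proceeds.
	return true, nil
}

func main() {
	ok, err := shouldUpgradeEtcd("v3.4.3", "v3.4.3")
	fmt.Println(ok, err) // true <nil>: an equal version still upgrades
}
```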
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
kubeadm uses image tags (such as `v3.4.3-0`) to specify the version of
etcd. However, the upgrade code in kubeadm uses the etcd client API to
fetch the currently deployed version. The result contains only the etcd
version without the additional information (such as image revision) that
is normally found in the tag. As a result, kubeadm would refuse an upgrade
where the etcd versions match and the only difference is the image
revision number (`v3.4.3-0` to `v3.4.3-1`).
To fix the above issue, the following changes are made:
- Replace the existing etcd version querying code, which uses the etcd
client library, with code that returns the etcd image tag from the
local static pod manifest file (see the sketch after this list).
- If an etcd `imageTag` is specified in the ClusterConfiguration during
upgrade, use that tag instead. This is done regardless of whether the tag
was specified in the configuration stored in the cluster or in a new
configuration supplied via the `--config` command line parameter.
If no custom tag is specified, kubeadm will select one depending on
the desired Kubernetes version.
- `kubeadm upgrade plan` no longer prints upgrade information about
external etcd. It's the user's responsibility to manage it in that
case.
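A hedged sketch of the tag-extraction half of the first change above: pulling the tag out of the image reference found in the local static pod manifest. tagFromImage is illustrative and ignores digest-style references:

```go
package main

import (
	"fmt"
	"strings"
)

// tagFromImage extracts the tag from a container image reference such
// as "k8s.gcr.io/etcd:v3.4.3-0". It is a simplified sketch: real image
// references can also carry digests, which are not handled here.
func tagFromImage(image string) (string, error) {
	// Split on the last colon so a registry port ("host:5000/etcd")
	// is not mistaken for the tag separator.
	i := strings.LastIndex(image, ":")
	if i < 0 || strings.Contains(image[i:], "/") {
		return "", fmt.Errorf("image %q carries no tag", image)
	}
	return image[i+1:], nil
}

func main() {
	tag, err := tagFromImage("k8s.gcr.io/etcd:v3.4.3-0")
	if err != nil {
		panic(err)
	}
	fmt.Println(tag) // v3.4.3-0
}
```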
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
For historical reasons InitConfiguration is used almost everywhere in kubeadm
as a carrier of various configuration components such as ClusterConfiguration,
local API server endpoint, node registration settings, etc.
Since v1alpha2, InitConfiguration is meant to be used solely as a way to supply
the kubeadm init configuration from a config file. Its usage outside of this
context is caused by technical debt; it's clunky and requires hacks to fetch a
working InitConfiguration from the cluster (as it's not stored in the config
map in its entirety).
This change is a small step towards removing all unnecessary usages of
InitConfiguration. It reduces its usage by replacing it in some places with
some of the following (the pattern is sketched below the list):
- ClusterConfiguration only.
- APIEndpoint (as local API server endpoint).
- NodeRegistrationOptions only.
- Some combinations of the above types or, where only single fields from them
are used, just those fields.
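An illustrative before/after sketch of the pattern, using hypothetical helper names against kubeadm's internal API types:

```go
package upgrade

import (
	"net"
	"strconv"

	kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
)

// Before: the whole InitConfiguration is threaded through, even though
// only the local API server endpoint is consumed.
func apiServerURLOld(cfg *kubeadmapi.InitConfiguration) string {
	return "https://" + net.JoinHostPort(
		cfg.LocalAPIEndpoint.AdvertiseAddress,
		strconv.Itoa(int(cfg.LocalAPIEndpoint.BindPort)))
}

// After: the function accepts only the APIEndpoint it actually needs,
// which is also what can reliably be fetched from the cluster.
func apiServerURL(endpoint *kubeadmapi.APIEndpoint) string {
	return "https://" + net.JoinHostPort(
		endpoint.AdvertiseAddress,
		strconv.Itoa(int(endpoint.BindPort)))
}
```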
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
When 'kubeadm init ...' is used with an IPv6 kubeadm configuration,
kubeadm currently generates an etcd.yaml manifest that uses IP:port
combinations where the IP is an IPv6 address that is not enclosed
in square brackets, e.g.:
- --advertise-client-urls=https://fd00:20::2:2379
For IPv6 advertise addresses, this should be of the form:
- --advertise-client-urls=https://[fd00:20::2]:2379
The lack of brackets around IPv6 addresses in cases like this causes
failures to bring up IPv6-only clusters with kubeadm, as described in
kubernetes/kubeadm issue #1212.
This format error is fixed by using net.JoinHostPort() to generate
URLs as shown above.
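For illustration, net.JoinHostPort from the Go standard library adds the brackets only when the host is an IPv6 address:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// IPv6 hosts are wrapped in square brackets before the port is appended.
	fmt.Println("https://" + net.JoinHostPort("fd00:20::2", "2379"))
	// Output: https://[fd00:20::2]:2379

	// IPv4 hosts are left untouched.
	fmt.Println("https://" + net.JoinHostPort("10.0.0.2", "2379"))
	// Output: https://10.0.0.2:2379
}
```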
Fixes kubernetes/kubeadm Issue #1212
When doing upgrades, kubeadm generates a new manifest and
waits until the kubelet restarts the corresponding pod.
However, the kubelet won't restart the pod if there are no changes
in the manifest, which leaves kubeadm stuck waiting for a pod
restart that never happens.
Skipping the upgrade when the new component manifest is the same as
the current manifest solves this.
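A minimal sketch of such a comparison, assuming a plain byte-for-byte check of the manifest files (names and path are illustrative):

```go
package main

import (
	"bytes"
	"fmt"
	"io/ioutil"
)

// manifestUnchanged reports whether the newly generated manifest is
// byte-for-byte identical to the one already on disk. If it is, the
// kubelet will never restart the pod, so the wait (and the upgrade
// step itself) can be skipped.
func manifestUnchanged(currentPath string, newManifest []byte) (bool, error) {
	current, err := ioutil.ReadFile(currentPath)
	if err != nil {
		return false, err
	}
	return bytes.Equal(current, newManifest), nil
}

func main() {
	same, err := manifestUnchanged("/etc/kubernetes/manifests/etcd.yaml", []byte("..."))
	fmt.Println(same, err)
}
```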
Fixes: kubernetes/kubeadm#1054