- `cert-dir` can be set to a value other than the default
- we have tests that should execute successfully on the working cluster
Signed-off-by: Dave Chen <dave.chen@arm.com>
Once we patch a kubelet configuration file, the patched output
is in JSON. Make sure it's converted back to YAML, given
the kubelet config in the cluster and on disk is always in YAML.
Add unit test for the new function applyKubeletConfigPatches()
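A minimal sketch of that conversion step, assuming sigs.k8s.io/yaml is
used for the JSON-to-YAML round trip (the helper name is illustrative):

    package kubelet

    import "sigs.k8s.io/yaml"

    // convertPatchedConfig turns the JSON document produced by the patch
    // manager back into YAML, since the kubelet config in the cluster and
    // on disk is always YAML.
    func convertPatchedConfig(patchedJSON []byte) ([]byte, error) {
        return yaml.JSONToYAML(patchedJSON)
    }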
In phases/kubelet/WriteConfigToDisk() create a patch
manager for the root patches directory and apply
the user patches with a target "kubeletconfiguration".
With phases/kubelet/WriteConfigToDisk() about to support patches
it is required that the function accepts an io.Writer
where the PatchManager can output to and also a patch directory.
Modify all call sites of the function WriteConfigToDisk()
to properly prepare and pass an io.Writer and patches dir to it.
This results in command phases for init/join/upgrade to pass
the root io.Writer (usually stdout) and the patchesDir populated
either via the config file or --patches flag.
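A hedged sketch of the resulting shape; parameter names are assumptions
and not the exact kubeadm signature:

    package kubelet

    import (
        "io"

        kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"
    )

    // WriteConfigToDisk renders the KubeletConfiguration, applies user
    // patches found in patchesDir, lets the PatchManager write its output
    // to out, and persists the result under kubeletDir (sketch only).
    func WriteConfigToDisk(cfg *kubeadmapi.ClusterConfiguration, kubeletDir, patchesDir string, out io.Writer) error {
        // ... render the config, apply patches, write kubeletDir/config.yaml ...
        return nil
    }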
If the user runs "kubeadm upgrade apply", kubeadm can download
a configuration from the cluster. If the configuration contains
the legacy default imageRepository of "k8s.gcr.io", mutate it
to the new default of "registry.k8s.io" and update the
configuration in the config map.
During "upgrade node/diff" download the configuration, mutate the
image repository locally, but do not mutate the in-cluster value.
That is done only on "apply".
This ensures that users are migrated from the old default registry
domain.
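A rough sketch of the mutation; the constant names are illustrative and
the ConfigMap update itself is omitted:

    package upgrade

    import kubeadmapi "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm"

    const (
        oldDefaultImageRepository = "k8s.gcr.io"      // legacy default
        newDefaultImageRepository = "registry.k8s.io" // new default
    )

    // mutateImageRepository rewrites the legacy default registry. During
    // "upgrade apply" the mutated ClusterConfiguration is also uploaded
    // back to the kubeadm-config ConfigMap; "upgrade node/diff" only
    // changes the local copy.
    func mutateImageRepository(cfg *kubeadmapi.ClusterConfiguration) {
        if cfg.ImageRepository == oldDefaultImageRepository {
            cfg.ImageRepository = newDefaultImageRepository
        }
    }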
- lock the FG to true by default
- cleanup wrappers and logic related to versioned vs unversioned
naming of API objects (CMs and RBAC)
- update unit tests
The OldControlPlaneTaint taint (master) can be replaced
with the new ControlPlaneTaint (control-plane) taint.
Adapt unit tests in markcontrolplane_test.go
and cluster_test.go.
- initconfiguration.go: stop applying the "master" taint
for new clusters; update related unit tests in _test.go
- apply.go: Remove logic related to cleanup of the "master" label
during upgrade
- apply.go: Add cleanup of the "master" taint on CP nodes
during upgrade
- controlplane_nodes_test.go: remove test for old "master" taint
on nodes (this needs a backport to 1.24, because we have an e2e test
that runs kubeadm 1.25 against the kubernetes 1.24 test suite)
Use the etcd 3.5.3+ HTTP(S) endpoint "/health?serializable=true"
to allow the kubelet liveness and startup probes in the
kubeadm-generated etcd.yaml (static Pod) to track
individual member health instead of tracking the whole
etcd cluster health.
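A sketch of the resulting probe; the host and port below assume the
probe keeps targeting the local etcd metrics/health listener:

    package etcd

    import (
        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // memberHealthProbe tracks the health of the individual etcd member
    // rather than the whole cluster (requires etcd 3.5.3+).
    func memberHealthProbe() *v1.Probe {
        return &v1.Probe{
            ProbeHandler: v1.ProbeHandler{
                HTTPGet: &v1.HTTPGetAction{
                    Host:   "127.0.0.1",          // assumption
                    Port:   intstr.FromInt(2381), // assumption
                    Path:   "/health?serializable=true",
                    Scheme: v1.URISchemeHTTP,
                },
            },
            InitialDelaySeconds: 10,
            TimeoutSeconds:      15,
            FailureThreshold:    8,
        }
    }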
Given kubeadm 1.25 only supports kubelet 1.25 and 1.24,
1.23-related logic around dockershim can be removed.
- Don't clean the directories
/var/lib/dockershim, /var/run/kubernetes, /var/lib/cni
- Pass the CRISocket directly to the kubelet
--container-runtime-endpoint flag without extra handling
of dockershim
- No longer apply the --container-runtime=remote flag
as that is the only possible value in 1.24 and 1.25
- Update unit tests
Note: we are still passing --pod-infra-container-image
to prevent the pause image from being garbage collected by the kubelet.
During upgrade when a CP node is missing the old / legacy "master"
taint, assume the user has manually removed it to allow
workloads to schedule.
In such cases do not re-taint the node with the new "control-plane"
taint.
Include the flag "--experimental-initial-corrupt-check"
in etcd static pod manifests to ensure
etcd member data consistency.
The etcd feature is planned for graduation in 3.6,
at which point we should switch to using the flag
without the "experimental" prefix.
Some of these changes are cosmetic (repeatedly calling klog.V instead of
reusing the result), others address real issues:
- Logging a message only above a certain verbosity threshold without
recording that verbosity level (i.e. if klog.V(n).Enabled() { klog.Info(...) };
see the sketch below): this matters when using a logging backend which
records the verbosity level.
- Passing a format string with parameters to a logging function that
doesn't do string formatting.
All of these locations were found by the enhanced logcheck tool from
https://github.com/kubernetes/klog/pull/297.
In some cases it reports false positives, but those can be suppressed with
source code comments.
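The sketch below shows the pattern in question; verbosity level and
message are illustrative:

    package app

    import "k8s.io/klog/v2"

    func example(nodeName string) {
        // Problematic: the message is emitted via klog.Infof, so the
        // verbosity level checked in the condition is not recorded with
        // the log entry by backends that track it.
        if klog.V(5).Enabled() {
            klog.Infof("annotating node %s", nodeName)
        }

        // Better: reuse the result of klog.V so the entry carries the level.
        if loggerV := klog.V(5); loggerV.Enabled() {
            loggerV.Infof("annotating node %s", nodeName)
        }
    }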
We now re-use the crictl tool path within the `ContainerRuntime` when
exec'ing into it. This allows introducing a convenience function to
create the crictl command and re-use it where necessary.
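A simplified sketch of such a helper, using os/exec directly; the real
code goes through kubeadm's exec interface and the field names here are
assumptions:

    package utils

    import "os/exec"

    // ContainerRuntime is shown with only the relevant fields.
    type ContainerRuntime struct {
        crictlPath string // resolved once and re-used
        criSocket  string
    }

    // crictl builds a crictl command pre-configured with the runtime
    // endpoint, so callers only pass the subcommand and its arguments.
    func (runtime *ContainerRuntime) crictl(args ...string) *exec.Cmd {
        cmdArgs := append([]string{"-r", runtime.criSocket}, args...)
        return exec.Command(runtime.crictlPath, cmdArgs...)
    }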
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
For the YAML examples, make the indentation consistent
by starting with a space and following with a TAB.
Also adjust the indentation of some fields to place them under
the right YAML field parent - e.g. ignorePreflightErrors
is under nodeRegistration.
- Modify VerifyUnmarshalStrict to use serializer/json instead
of sigs.k8s.io/yaml. In strict mode, the serializers
in serializer/json use the new sigs.k8s.io/json library
that also catches case sensitive errors for field names -
e.g. foo vs Foo. Include test case for that in strict/testdata.
- Move the hardcoded schemes to check to the side of the
caller - i.e. accept a slice of runtime.Scheme.
- Move the klog warnings outside of VerifyUnmarshalStrict
and make them the responsibility of the caller.
- Call VerifyUnmarshalStrict when downloading the configuration
from kubeadm-config or the kube-proxy or kubelet-config CMs.
This validation is useful if the user has manually patched the CMs.
The legacy naming "kubelet-config-x.yy" is no longer the
default behavior. Rename instances in documentation and comments
of "kubelet-config-x.yy" to "kubelet-config".
- Graduate the feature gate to Beta and enable it by default.
- Pre-set the default value for UnversionedKubeletConfigMap
to "true" in test/e2e_kubeadm.
- Fix a couple of typos in "tolerate" introduced in the PR that
added the FG in 1.23.
Comparing two pointers will always show that they are different values,
so the warning message will always be printed.
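A minimal illustration of the problem; the type and field are made up:

    package main

    import "fmt"

    type config struct{ BindAddress string }

    func main() {
        expected := &config{BindAddress: "0.0.0.0"}
        provided := &config{BindAddress: "0.0.0.0"}

        // Comparing the pointers compares addresses, so this is always
        // true and the warning is always printed.
        if expected != provided {
            fmt.Println("warning is always printed")
        }

        // Comparing the pointed-to values gives the intended result.
        if *expected != *provided {
            fmt.Println("warning printed only when the values differ")
        }
    }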
Signed-off-by: Dave Chen <dave.chen@arm.com>
- During "upgrade apply" call a new function AddNewControlPlaneTaint()
that finds all nodes with the new "control-plane" node-role label
and adds the new "control-plane" taint to them.
- The function is called in "apply" and is separate from
the step to remove the old "master" label for better debugging
if errors occur.
- Apply "control-plane" taint during init/join by adding the
taint in SetNodeRegistrationDynamicDefaults(). The old
taint "master" is still applied.
- Clarify API docs (v1beta2 and v1beta3) for nodeRegistration.Taint
to not mention "master" taint and be more generic. Remove
example for taints that includes the word "master".
- Update unit tests.
- Update the markcontrolplane phase used by init and join to
only label the nodes with the new control plane label.
- Cleanup TODOs about the old label.
- Remove outdated comment about selfhosting in staticpod/utils.go.
Selfhosting has not been supported in kubeadm for a while
and the comment also mentions the "master" label.
- Update unit tests.
- Rename the function in postupgrade.go to better reflect
what is being done.
- During "upgrade apply" find all nodes with the old label
and remove it by calling PatchNode.
- Update health check for CP nodes to not track "master"
labeled nodes. At this point all CP nodes should have
"control-plane" and we can use that selector only.
- Throw an error if there is more than one known socket on the host.
- Remove the special handling for docker+containerd.
- Remove the local instances of constants for endpoints for
Windows / Unix and use the defaultKnownCRISockets variable
which is populated from OS specific constants.
- Update error message in detectCRISocketImpl to have more
details.
- Make detectCRISocketImpl accept a list of "known" sockets
- Update unit tests for detectCRISocketImpl and make them
use generic paths such as "unix:///foo/bar.sock".
Change the default container runtime CRI socket endpoint to the
one of containerd. Previously it was the one for Docker.
- Rename constants.DefaultDockerCRISocket to DefaultCRISocket
- Make the constants files include the endpoints for all supported
container runtimes for Unix/Windows.
- Update unit tests related to docker runtime testing.
- In kubelet/flags.go hardcode the legacy docker socket as a check
to allow kubeadm 1.24 to run against kubelet 1.23 if the user
explicitly sets the criSocket field to "npipe:////./pipe/dockershim"
on Windows or "unix:///var/run/dockershim.sock" on Linux.
The API was deprecated in 1.23 when output/v1alpha2 was
added. v1alpha1 is problematic since it embeds kubeadm/v1beta2
BootstrapToken-related types directly. v1alpha2 imports
a new group dedicated to bootstrap tokens, apis/bootstraptoken.
crictl already works with the current state of dockershim.
Using the docker CLI is not required and the DockerRuntime
can be removed from kubeadm. This means that crictl
can connect to the dockershim (or cri-dockerd) socket and
be used to list containers, pull images, remove containers, and
perform all actions that the kubelet can otherwise perform with the socket.
Ensure that crictl is now required for all supported container runtimes
in checks.go. In the help text in waitcontrolplane.go show only
the crictl example.
Remove the check for the docker service from checks.go.
Remove the DockerValidor check from checks.go.
These two checks were special-casing Docker as a container runtime and compensating
for the lack of the same checks in dockershim. With the
extraction of dockershim to cri-dockerd, ideally cri-dockerd
should perform the required checks whether it can support
a given Docker config / version running on a host.
During "upgrade node" and "upgrade apply" read the
kubelet env file from /var/lib/kubelet/kubeadm-flags.env,
patch the --container-runtime-endpoint flag value to
have the appropriate URL scheme prefix (e.g. unix:// on Linux)
and write the file back to disk.
This is a temporary workaround that should be kept only for 1 release
cycle - i.e. remove this in 1.25.
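A sketch of the rewrite applied to the flag value; the helper name is
hypothetical and flag-file parsing is omitted:

    package upgrade

    import "strings"

    // prependSchemeIfMissing adds the default scheme to the value of
    // --container-runtime-endpoint read from kubeadm-flags.env, e.g.
    // prependSchemeIfMissing("/run/containerd/containerd.sock", "unix")
    // returns "unix:///run/containerd/containerd.sock".
    func prependSchemeIfMissing(endpoint, defaultScheme string) string {
        if strings.Contains(endpoint, "://") {
            return endpoint // already has a URL scheme
        }
        return defaultScheme + "://" + endpoint
    }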
The CRI socket that kubeadm writes as an annotation
on a particular Node object can include an endpoint that
does not have a URL scheme. This is undesired as, long term,
the kubelet may stop allowing endpoints without a URL scheme.
For control plane nodes "kubeadm upgrade apply" takes
the locally defaulted / populated NodeRegistration and refreshes
the CRI socket in PerformPostUpgradeTasks. But for secondary
nodes "kubeadm upgrade node" does not.
Adapt "upgrade node" to fetch the NodeRegistration for this node
and fix the CRI socket missing URL scheme if needed in the Node
annotation.
- Update defaults for v1beta2 and v1beta3 to have a URL scheme
- Rename DefaultUrlScheme to DefaultContainerRuntimeURLScheme
- Prepend a missing URL scheme to user-provided sockets and warn users
that this might not be supported in the future
- Update socket validation to exclude IsAbs() testing
(This is broken on Windows). Assume the path is not empty and has
URL scheme at this point (validation happens after defaulting).
- Use net.Dial to open Unix sockets
- Update all related unit tests
Signed-off-by: pacoxu <paco.xu@daocloud.io>
Signed-off-by: Lubomir I. Ivanov <lubomirivanov@vmware.com>
Currently, when the dockershim socket is used, kubeadm only passes
--network-plugin=cni to the kubelet and assumes the built-in
dockershim. This is valid for versions <1.24, but with dockershim
and related flags removed the kubelet will fail.
Use preflight.GetKubeletVersion() to find the version of the host
kubelet and, if the version is <1.24, assume that it has built-in
dockershim. Newer versions will be treated as "remote" even
if the socket is for dockershim, for example, provided by cri-dockerd.
Update related unit tests.
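A sketch of the version gate, assuming the kubelet version was already
obtained (e.g. via preflight.GetKubeletVersion()):

    package kubelet

    import "k8s.io/apimachinery/pkg/util/version"

    // hasBuiltInDockershim: kubelets older than 1.24 still ship the
    // built-in dockershim and need --network-plugin=cni; newer kubelets
    // are treated as using a remote runtime even for a dockershim-style
    // socket (e.g. one provided by cri-dockerd).
    func hasBuiltInDockershim(kubeletVersion *version.Version) bool {
        return kubeletVersion.LessThan(version.MustParseSemantic("1.24.0"))
    }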
Apply a small fix to ensure the kubeconfig files
that kubeadm manages have a CA when printed in the table
of the "check expiration" command. "CAName" is the field used for that.
In practice kubeconfig files can contain multiple credentials
from different CAs, but this is not supported by kubeadm and there
is a single cluster CA that signs the single client cert/key
in kubeadm managed kubeconfigs.
In case stacked etcd is used, the code that does expiration checks
does not validate if the etcd CA is "external" (missing key)
and if the etcd CA signed certificates are valid.
Add a new function UsingExternalEtcdCA() similar to existing functions
for the cluster CA and front-proxy CA, that performs the checks for
missing etcd CA key and certificate validity.
This function only runs for stacked etcd, since if etcd is external
kubeadm does not track any certs signed by that etcd CA.
This fixes a bug where the etcd CA will be reported as local even
if the etcd/ca.key is missing during "certs check-expiration".
When the "kubeadm certs check-expiration" command is used and
if the ca.key is not present, regular on disk certificate reads
pass fine, but fail for kubeconfig files. The reason for the
failure is that reading of kubeconfig files currently
requires reading both the CA key and cert from disk. Reading the CA
is done to ensure that the CA cert in the kubeconfig is not out of date
during renewal.
Instead of requiring both a CA key and cert to be read, only read
the CA cert from disk, as only the cert is needed for kubeconfig files.
This fixes printing the cert expiration table even if the ca.key
is missing on a host (i.e. the CA is considered external).
kubeam:node-proxier -> kubeadm:node-proxier
This causes e2e test failures:
"[area-kubeadm] proxy addon kube-proxy ServiceAccount should
be bound to the system:node-proxier cluster role"
in:
- kubeadm-kinder-latest
- kubeadm-kinder-latest-on-...
- other tests
Before this commit, when setting bindAddress to 1.2.3.4, the warning was:
The recommended value for "bindAddress" in "KubeProxyConfiguration" is: 1.2.3.4; the provided value is: 0.0.0.0
Signed-off-by: Etienne Champetier <e.champetier@ateme.com>
Add the UnversionedKubeletConfigMap feature gate that can
be used to control legacy vs new behavior for naming the
default configmap used to store the KubeletConfiguration.
Update related unit tests.
Instead of returning on each individual error, it's better to aggregate all
the errors so that we can fix them all at once.
Take the chance to fix some comments, since kubeadm is not checking that
the certs are equal across control plane nodes.
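A sketch of the aggregation pattern, using the apimachinery errors
aggregate; the certificate check itself is passed in as a callback:

    package certs

    import (
        "fmt"

        utilerrors "k8s.io/apimachinery/pkg/util/errors"
    )

    // checkAll collects every failure instead of returning on the first
    // one, so all problems can be reported and fixed at once.
    func checkAll(certNames []string, check func(string) error) error {
        var errs []error
        for _, name := range certNames {
            if err := check(name); err != nil {
                errs = append(errs, fmt.Errorf("certificate %q: %w", name, err))
            }
        }
        return utilerrors.NewAggregate(errs) // nil if errs is empty
    }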
Signed-off-by: Dave Chen <dave.chen@arm.com>
The addition of output/v1alpha2 made the converter-gen require
an explicit converter for:
kubeadm/v1beta2.BootstrapToken -> bootstraptoken/v1.BootstrapToken.
Add this converter under kubeadm/v1beta2.
Use the converter in output/v1alpha1.
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies on one of these paths.
Having fields that don't work seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.
* Run update scripts
The API is a copy of output/v1alpha1 with a minor difference
where output/v1alpha2.BootstrapToken embeds
bootstraptoken/v1.BootstrapToken instead of
kubeadm/v1beta2.BootstrapToken.
Embedding the latter is an undesired binding between the "kubeadm"
and "output" groups, preventing the eventual deprecation and removal
of the kubeadm.v1beta2 API.
This new output API version, unlike v1alpha1, does not include
defaulting, which is not needed.
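The structural difference, sketched side by side; in the real packages
both types are simply named BootstrapToken, the suffixes here only keep
them apart:

    package output

    import (
        bootstraptokenv1 "k8s.io/kubernetes/cmd/kubeadm/app/apis/bootstraptoken/v1"
        kubeadmv1beta2 "k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta2"
    )

    // output/v1alpha1: ties the output group to the kubeadm v1beta2 API.
    type BootstrapTokenV1alpha1 struct {
        kubeadmv1beta2.BootstrapToken `json:",inline"`
    }

    // output/v1alpha2: embeds the dedicated bootstrap token group instead,
    // removing the dependency on kubeadm/v1beta2.
    type BootstrapTokenV1alpha2 struct {
        bootstraptokenv1.BootstrapToken `json:",inline"`
    }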
Panicking if not running in a test and if the component-base/version
variables are empty is not ideal. At some point sections
of kubeadm could be exposed as a library and if these sections
import the constants package, they would panic on the library
users unless they set the version information in component-base
with ldflags.
Instead:
- If the component-base version is empty, return a placeholder version
that should indicate to users that build kubeadm that something is not
right (e.g. they did not use 'make'). During library usage or unit
tests this version should not be relevant.
- Update unit tests to use hardcoded versions instead of the versions
from the constants package. Using the constants package for testing
is good but during unit tests these versions are already placeholders
since unit tests do not populate the actual component-base versions
(e.g. 1.23).
Tests under /app and /test would fail if the current/minimum k8s version
is dynamically populated from the version in the kubeadm binary.
Adapt the tests to support that.
Kubeadm requires manual version updates of its current supported k8s
control plane version and minimally supported k8s control plane and
kubelet versions every release cycle.
To avoid that, in constants.go:
- Add the helper function getSkewedKubernetesVersion() that can be
used to retrieve a MAJOR.(MINOR+n).0 version of k8s. It currently
uses the kubeadm version populated in "component-base/version" during
the kubeadm build process.
- Use the function to set existing version constants (variables).
Update util/config/common.go#NormalizeKubernetesVersion() to
tolerate the case where a k8s version in the ClusterConfiguration
is too old for the kubeadm binary to use during code freeze.
Include unit tests for the new utilities.
This change optimizes the kubeadm/etcd `AddMember` client-side function
by stopping early in the backoff loop when a peer conflict is found
(indicating the member has already been added to the etcd cluster). In
this situation, the function will stop early and issue a call to
`ListMembers` to fetch the current list of members to return. With this
optimization, front-loading a `ListMembers` call is no longer necessary,
as this path returns an equivalent response.
This helps reduce the amount of time taken in cases where an
initial client request to add a member is accepted by the server, but
fails client-side.
This can happen, for example, if network latency causes the request to
time out after it was sent and accepted by the cluster. In that
situation, the following loop would fail with an `ErrPeerURLExist`
response, and would be stuck until the backoff timeout was met
(roughly ~2min30sec currently).
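A sketch of the early exit in the retry loop; backoff values are
illustrative and the error matching in the real code may differ:

    package etcd

    import (
        "context"
        "errors"
        "time"

        "go.etcd.io/etcd/api/v3/v3rpc/rpctypes"
        clientv3 "go.etcd.io/etcd/client/v3"
        "k8s.io/apimachinery/pkg/util/wait"
    )

    // addMember retries MemberAdd with backoff, but leaves the loop as
    // soon as the cluster reports that the peer URL already exists.
    func addMember(cli *clientv3.Client, peerURL string) error {
        backoff := wait.Backoff{Steps: 10, Duration: 500 * time.Millisecond, Factor: 2.0}
        return wait.ExponentialBackoff(backoff, func() (bool, error) {
            ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
            defer cancel()
            _, err := cli.MemberAdd(ctx, []string{peerURL})
            if errors.Is(err, rpctypes.ErrPeerURLExist) {
                // The member is already part of the cluster (e.g. an
                // earlier request timed out client-side but was accepted):
                // stop retrying and let the caller ListMembers instead.
                return true, nil
            }
            if err != nil {
                return false, nil // transient failure, retry
            }
            return true, nil
        })
    }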
Testing Done:
* Manual testing with an etcd cluster. The initial `AddMember` call was
successful, and the etcd manifest file was identical to the prior version
of these files. Subsequent calls to add the same member succeeded
immediately (retaining idempotency), and the resulting manifest file
remains identical to the previous version as well. The difference, this
time, is that the call finished ~2min25sec faster in an identical test
in the same environment.
The purell package at github.com/PuerkitoBio/purell is no longer maintained.
In the k/k repo it has been used under the kubeadm package for normalizing URLs.
This commit removes the dependency on this package and creates a local function
for normalizing the URL within the preflight package under cmd/kubeadm.
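A sketch of what a minimal local replacement can look like, using only
net/url; the rules shown are a subset of what purell offered and the
function name is illustrative:

    package preflight

    import (
        "net/url"
        "strings"
    )

    // normalizeURL lower-cases the scheme and host and drops a default
    // port, which is enough for the URLs kubeadm builds for its checks.
    func normalizeURL(raw string) (string, error) {
        u, err := url.Parse(raw)
        if err != nil {
            return "", err
        }
        u.Scheme = strings.ToLower(u.Scheme)
        u.Host = strings.ToLower(u.Host)
        if (u.Scheme == "http" && strings.HasSuffix(u.Host, ":80")) ||
            (u.Scheme == "https" && strings.HasSuffix(u.Host, ":443")) {
            u.Host = u.Host[:strings.LastIndex(u.Host, ":")]
        }
        return u.String(), nil
    }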
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
chore: add new line at end of the file
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
fix: remove unused mod from vendor modules file
Signed-off-by: gkarthiks <github.gkarthiks@gmail.com>
During operations such as "upgrade", kubeadm fetches the
ClusterConfiguration object from the kubeadm ConfigMap.
However, due to requiring node specifics it wraps it in an
InitConfiguration object. The function responsible for that is:
app/util/config#FetchInitConfigurationFromCluster().
A problem with this function (and sub-calls) is that it ignores
the static defaults applied from versioned types
(e.g. v1beta3/defaults.go) and only applies dynamic defaults for:
- API endpoints
- node registration
- etc...
The introduction of Init|JoinConfiguration.ImagePullPolicy now
has static defaulting of the NodeRegistration object with a default
policy of "PullIfNotPresent". Respect this defaulting by constructing
a defaulted internal InitConfiguration from
FetchInitConfigurationFromCluster() and only then apply the dynamic
defaults over it.
This fixes a bug where "kubeadm upgrade ..." fails when pulling images
due to an empty ("") ImagePullPolicy. We could assume that empty
string means default policy on runtime in:
cmd/kubeadm/app/preflight/checks.go#ImagePullCheck()
but that might actually not be the user intent during "init" and "join",
due to e.g. a typo. Similarly, we don't allow empty tokens
on runtime and error out.
Instead of dynamically defaulting NodeRegistration.ImagePullPolicy,
which is common when doing defaulting depending on host state - e.g.
hostname, statically default it in v1beta3/defaults.go.
- Remove defaulting in checks.go
- Add one more unit test in checks_test.go
- Adapt v1beta2 conversion and fuzzer / round tripping tests
This also results in the default being visible when calling:
"kubeadm config print ...".
Given bootstraptoken/v1 is now a separate GV, there is no need
to duplicate the API and utilities inside v1beta3 and the internal
version.
v1beta2 must continue to use its internal copy, since output/v1alpha1
embeds the v1beta2.BootstrapToken object. See issue 2427 in k/kubeadm.
- Make v1beta3 use bootstraptoken/v1 instead of local copies
- Make the internal API use bootstraptoken/v1
- Update validation, /cmd, /util and other packages
- Update v1beta2 conversion
Package bootstraptoken contains an API and utilities wrapping the
"bootstrap.kubernetes.io/token" Secret type to ease its usage in kubeadm.
The API is released as v1, since these utilities have been part of a
GA workflow for 10+ releases.
The "bootstrap.kubernetes.io/token" Secret type is also GA.
During "join" of new control plane machines, kubeadm would
download shared certificates and keys from the cluster stored
in a Secret. Based on the contents of an entry in the Secret,
it would use helper functions from client-go to write it either
as a public key / cert (mode 644) or as a private key (mode 600).
The existing logic is always writing both keys and certs with mode 600.
Allow detecting public readable data properly and writing some files
with mode 644.
First check the data with ParsePrivateKeyPEM(); if this passes
there must be at least one private key and the file should be written
with mode 600 as private. If that fails, validate if the data contains
public keys with ParsePublicKeysPEM() and write the file as public
(mode 644).
As a result of this new logic, and given the current set of managed
kubeadm files, .key files will end up with 600, while .crt and .pub
files will end up with 644.
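A sketch of the detection, assuming client-go's keyutil helpers; the
file-writing details are simplified:

    package copycerts

    import (
        "fmt"
        "os"

        "k8s.io/client-go/util/keyutil"
    )

    // writeCertOrKey picks the file mode based on the PEM contents:
    // anything that parses as a private key stays 0600, public keys and
    // certificates get 0644.
    func writeCertOrKey(path string, data []byte) error {
        if _, err := keyutil.ParsePrivateKeyPEM(data); err == nil {
            return os.WriteFile(path, data, 0600)
        }
        if _, err := keyutil.ParsePublicKeysPEM(data); err == nil {
            return os.WriteFile(path, data, 0644)
        }
        return fmt.Errorf("%s: data is neither a private key nor public PEM data", path)
    }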
Add {Init|Join}Configuration.Patches, which is a structure that
contains patch related options. Currently it only has the "Directory"
field which is the same option as the existing --experimental-patches
flag.
The --[experimental-]patches flag value overrides this value
if both a flag and a config file are passed during "init" or "join".
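The new structure, sketched from the description above; the JSON tag is
an assumption:

    package v1beta3

    // Patches groups patch-related options; for now it only carries the
    // equivalent of the --patches / --experimental-patches flag.
    type Patches struct {
        // Directory points to a directory with patch files using the same
        // naming convention the flag accepts. The flag value overrides
        // this field when both are provided.
        Directory string `json:"directory,omitempty"`
    }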
The feature of "patches" in kubeadm has been in Alpha for a few
releases. It has not received major bug reports from users.
Deprecate the --experimental-patches flag and add --patches.
Both flags are allowed to be mixed with --config.
If the user has not specified a pull policy we must assume a default of
v1.PullIfNotPresent.
Add some extra verbose output to help users monitor what policy is
used and what images are skipped / pulled.
Use "fallthrough" and case handle "v1.PullAlways".
Update unit test.
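A sketch of the resulting switch; the imageExists/pullImage callbacks
are hypothetical stand-ins for the container runtime calls:

    package preflight

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func pullIfNeeded(image string, policy v1.PullPolicy, imageExists func(string) bool, pullImage func(string) error) error {
        if policy == "" {
            policy = v1.PullIfNotPresent // assume the default if unset
        }
        switch policy {
        case v1.PullNever:
            fmt.Printf("skipping pull for %q (policy Never)\n", image)
            return nil
        case v1.PullIfNotPresent:
            if imageExists(image) {
                fmt.Printf("image %q already present\n", image)
                return nil
            }
            fallthrough // not present: pull it, same as Always
        case v1.PullAlways:
            fmt.Printf("pulling %q\n", image)
            return pullImage(image)
        default:
            return fmt.Errorf("unsupported pull policy %q", policy)
        }
    }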