The podSubnet should be validated against the node CIDR mask size,
because incorrect values can cause the kube-controller-manager to fail.
We don't need to calculate the node-cidr-mask sizes ourselves: they
should be provided by the user, and if they are wrong we fail in
validation.
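A minimal sketch of such a check, assuming a hypothetical helper
(illustrative name, not the actual kubeadm validation function):

```go
package validationsketch

import (
	"fmt"
	"net"
)

// validatePodSubnetNodeMask is an illustrative sketch: the per-node CIDR
// mask must be at least as long as the podSubnet prefix, otherwise the
// kube-controller-manager cannot carve per-node CIDRs out of the pod subnet.
func validatePodSubnetNodeMask(podSubnet string, nodeCIDRMaskSize int) error {
	_, cidr, err := net.ParseCIDR(podSubnet)
	if err != nil {
		return fmt.Errorf("couldn't parse podSubnet %q: %v", podSubnet, err)
	}
	prefix, _ := cidr.Mask.Size()
	if nodeCIDRMaskSize < prefix {
		return fmt.Errorf("node CIDR mask size %d is smaller than the podSubnet prefix /%d",
			nodeCIDRMaskSize, prefix)
	}
	return nil
}
```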
The kubeadm setup of kube-controller-manager and kube-scheduler lacks
the --port=0 option, which causes these components to enable the
insecure port by default and serve insecurely on the default node
interface.
Add --port=0 by default to both components. Users can still explicitly
set the flag (via extraArgs), which allows them to override this
default kubeadm behavior and enable the insecure port.
NOTE: the flag is deprecated and should be removed from kubeadm manifests
once it's removed from core.
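A rough sketch of the override behavior, assuming the default flags and
the user extraArgs are merged as flat maps with the user values winning
(the helper name is hypothetical):

```go
package main

import "fmt"

// mergeComponentArgs sketches how a kubeadm default such as "port=0" could
// be combined with user-supplied extraArgs, letting the user value override
// the default.
func mergeComponentArgs(defaults, extraArgs map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range extraArgs {
		merged[k] = v // user-provided extraArgs win over defaults
	}
	return merged
}

func main() {
	defaults := map[string]string{"port": "0"}      // insecure port disabled by default
	extraArgs := map[string]string{"port": "10251"} // user explicitly re-enables it
	fmt.Println(mergeComponentArgs(defaults, extraArgs)["port"]) // prints "10251"
}
```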
Add PatchStaticPod() in staticpod/utils.go
Apply patches to static Pods in:
- phases/controlplane/CreateStaticPodFiles()
- phases/etcd/CreateLocalEtcdStaticPodManifestFile() and
CreateStackedEtcdStaticPodManifestFile()
Add unit tests and update Bazel.
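A simplified sketch of what a PatchStaticPod-style helper can do with a
strategic merge patch; the real helper works through a patch manager,
while this illustration just takes raw patch JSON:

```go
package staticpodsketch

import (
	"encoding/json"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

// patchStaticPod applies a strategic merge patch (JSON form) on top of a
// generated static Pod spec and returns the patched Pod.
func patchStaticPod(pod *v1.Pod, patchJSON []byte) (*v1.Pod, error) {
	original, err := json.Marshal(pod)
	if err != nil {
		return nil, err
	}
	patched, err := strategicpatch.StrategicMergePatch(original, patchJSON, v1.Pod{})
	if err != nil {
		return nil, err
	}
	out := &v1.Pod{}
	if err := json.Unmarshal(patched, out); err != nil {
		return nil, err
	}
	return out, nil
}
```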
kubeadm init prints:
W0410 23:02:10.119723 13040 manifests.go:225] the default kube-apiserver
authorization-mode is "Node,RBAC"; using "Node,RBAC"
Add a new function compareAuthzModes() and a unit test for it.
Make sure the warning is printed only if the user modes don't match
the defaults.
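A minimal sketch of such a comparison, assuming the check is
order-sensitive over the comma-separated mode lists (that assumption is
part of the sketch, not the source):

```go
package apiserversketch

import "strings"

// compareAuthzModes sketches comparing the default and user-provided
// authorization modes; equality here is order-sensitive.
func compareAuthzModes(a, b []string) bool {
	if len(a) != len(b) {
		return false
	}
	for i := range a {
		if a[i] != b[i] {
			return false
		}
	}
	return true
}

// shouldWarn returns true only when the user modes differ from the defaults,
// so the warning is not printed for an identical configuration.
func shouldWarn(defaultModes, userModes string) bool {
	return !compareAuthzModes(strings.Split(defaultModes, ","), strings.Split(userModes, ","))
}
```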
After the shift to init phases, GetStaticPodSpecs() from
app/phases/controlplane/manifests.go gets called on each control-plane
component sub-phase. This ends up calling the Printf from
AddExtraHostPathMounts() in app/phases/controlplane/volumes.go
multiple times, printing the same volumes for different components.
- Remove the Printf call from AddExtraHostPathMounts().
- Print all volumes for a component in CreateStaticPodFiles() using klog
V(2).
Perhaps in the future a bigger refactor is needed here, where a
single control-plane component spec can be requested instead of a
map[string]v1.Pod.
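A sketch of the klog-based printing, assuming a small helper invoked
from CreateStaticPodFiles() that walks the final component spec (names
are illustrative):

```go
package controlplanesketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/klog/v2"
)

// logComponentVolumes prints a component's HostPath volumes once, at klog
// verbosity 2, instead of printing them on every component spec lookup.
func logComponentVolumes(componentName string, spec v1.Pod) {
	for _, vol := range spec.Spec.Volumes {
		if vol.HostPath != nil {
			klog.V(2).Infof("[control-plane] adding HostPath volume %q for component %q, path %q",
				vol.Name, componentName, vol.HostPath.Path)
		}
	}
}
```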
While `ClusterStatus` will be maintained and uploaded, it won't be
used by the internal `kubeadm` logic in order to determine the etcd
endpoints anymore.
The only exception is during the first upgrade cycle (`kubeadm upgrade
apply`, `kubeadm upgrade node`), in which we will fall back to the
ClusterStatus to let the upgrade path add the required annotations to
the newly created static pods.
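As an illustration, discovering etcd endpoints from the static Pod
annotations could look roughly like this; the label selector and the
annotation key used below are assumptions, not necessarily the exact
ones kubeadm uses:

```go
package etcdsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// etcdEndpointsFromPodAnnotations sketches reading etcd client endpoints
// from annotations on the etcd static Pods instead of from the
// ClusterStatus ConfigMap.
func etcdEndpointsFromPodAnnotations(client kubernetes.Interface) ([]string, error) {
	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{
		LabelSelector: "component=etcd,tier=control-plane",
	})
	if err != nil {
		return nil, err
	}
	endpoints := []string{}
	for _, pod := range pods.Items {
		// Annotation key assumed for illustration purposes.
		if url, ok := pod.Annotations["kubeadm.kubernetes.io/etcd.advertise-client-urls"]; ok {
			endpoints = append(endpoints, url)
		}
	}
	return endpoints, nil
}
```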
kubeadm always uses the IPv4 localhost address by default for etcd.
The probe hostname is obtained before the generation of the etcd
parameters, so it can't detect the right IP family for the host of
the probe.
As a result, IPv6 clusters don't work, because the probe uses the
IPv4 localhost address.
This patch configures the right localhost address based on the
AdvertiseAddress IP family.
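A minimal sketch of the address selection, assuming a helper that
inspects the AdvertiseAddress IP family:

```go
package etcdsketch

import "net"

// probeHostname picks the localhost address for the etcd probe based on the
// IP family of the advertise address.
func probeHostname(advertiseAddress string) string {
	ip := net.ParseIP(advertiseAddress)
	if ip != nil && ip.To4() == nil {
		return "::1" // IPv6 advertise address -> IPv6 loopback
	}
	return "127.0.0.1" // default to the IPv4 loopback
}
```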
Etcd v3.3.0 added the --listen-metrics-urls flag, which allows serving
the already present /health and /metrics endpoints on additional URLs.
While /health and /metrics are enabled for URLs defined with
--listen-client-urls (v3+ ?), those URLs do require HTTPS.
Replace the present etcdctl based liveness probe with a standard HTTP
GET v1.Probe that connects to http://127.0.0.1:2381/health.
These endpoints are not reachable from the outside and only available
for localhost connections.
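A sketch of the replacement probe; the delay/timeout/threshold values
are illustrative, only the HTTP GET against http://127.0.0.1:2381/health
comes from the change described above:

```go
package etcdsketch

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthProbe builds an HTTP GET liveness probe against the etcd metrics
// endpoint exposed via --listen-metrics-urls.
func healthProbe() *v1.Probe {
	probe := &v1.Probe{
		InitialDelaySeconds: 15,
		TimeoutSeconds:      15,
		FailureThreshold:    8,
	}
	// HTTPGet is promoted from the embedded handler struct, so this compiles
	// with both older (Handler) and newer (ProbeHandler) k8s.io/api releases.
	probe.HTTPGet = &v1.HTTPGetAction{
		Host:   "127.0.0.1",
		Port:   intstr.FromInt(2381),
		Path:   "/health",
		Scheme: v1.URISchemeHTTP,
	}
	return probe
}
```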
Secure serving was already enabled for kube-controller-manager.
Do the same for kube-scheduler, by passing the flags
"authentication-kubeconfig" and "authorization-kubeconfig"
to the binary in the static Pod.
This change allows the scheduler to perform reviews on incoming
requests, such as:
- authentication.k8s.io/v1beta1 TokenReview
- authorization.k8s.io/v1 SubjectAccessReview
The authentication and authorization checks for "system:kube-scheduler"
users were previously enabled by PR 72491.
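For illustration, the extra scheduler arguments might be assembled like
this; the scheduler.conf path is the usual kubeadm kubeconfig location
and is an assumption of this sketch:

```go
package controlplanesketch

// schedulerSecureServingArgs sketches the flags passed to kube-scheduler to
// enable delegated authentication and authorization.
func schedulerSecureServingArgs() map[string]string {
	return map[string]string{
		"authentication-kubeconfig": "/etc/kubernetes/scheduler.conf",
		"authorization-kubeconfig":  "/etc/kubernetes/scheduler.conf",
	}
}
```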
Propagates the kubeadm FG to the individual Kubernetes components
on the control-plane node.
* Note: Users who want to join worker nodes to the cluster
will have to specify the dual-stack FG to kubelet using the
nodeRegistration.kubeletExtraArgs option as part of their
join config. Alternatively, they can use KUBELET_EXTRA_ARGS.
kubeadm FG: kubernetes/kubeadm#1612
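A sketch of the kubelet extraArgs a user could pass when joining a
worker node; IPv6DualStack is the upstream Kubernetes feature gate name
and treating it as the gate meant above is an assumption of this sketch:

```go
package joinsketch

// dualStackKubeletArgs sketches the extra kubelet args a user would set via
// nodeRegistration.kubeletExtraArgs in the join config (or via
// KUBELET_EXTRA_ARGS) to enable dual-stack on a worker node.
func dualStackKubeletArgs() map[string]string {
	return map[string]string{
		"feature-gates": "IPv6DualStack=true",
	}
}
```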
Map kubeadm's --service-cidr to --service-cluster-ip-range for
kube-controller-manager.
If a service CIDR that overlaps with the cluster CIDR is
specified to kube-controller-manager then kube-controller-
manager will incorrectly allocate node CIDRs that overlap
with the service CIDR. The fix ensures that kubeadm
maps the --service-cidr to --service-cluster-ip-range for use
by kube-controller-manager.
As per docs, --allocate-node-cidrs must be true for
--service-cluster-ip-range to be considered. It does not make
sense for --cluster-cidr to be unspecified but for
--service-cluster-ip-range and --allocate-node-cidrs to be
set, since the purpose of these options is to have the
controller-manager do the per node CIDR allocation. Also
note that --service-cluster-ip-range is passed to the
api-server, so the presence of *just*
--service-cluster-ip-range should not imply that
--allocate-node-cidrs should be true.
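A rough sketch of the flag mapping that follows from the reasoning
above (function and argument names are illustrative):

```go
package controllermanagersketch

// controllerManagerCIDRArgs maps the kubeadm subnets onto controller-manager
// flags, passing --service-cluster-ip-range only alongside node CIDR
// allocation, since it is only honoured when --allocate-node-cidrs is true.
func controllerManagerCIDRArgs(podSubnet, serviceSubnet string) map[string]string {
	args := map[string]string{}
	if podSubnet != "" {
		args["allocate-node-cidrs"] = "true"
		args["cluster-cidr"] = podSubnet
		if serviceSubnet != "" {
			// Keeps per-node CIDR allocation clear of the service range.
			args["service-cluster-ip-range"] = serviceSubnet
		}
	}
	return args
}
```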
Resolves: kubernetes/kubeadm/issues/1591