Secure serving was already enabled for kube-controller-manager.
Do the same for kube-scheduler, by passing the flags
"--authentication-kubeconfig" and "--authorization-kubeconfig"
to the binary in the static Pod.
This change allows the scheduler to perform reviews on incoming
requests, such as:
- authentication.k8s.io/v1beta1 TokenReview
- authorization.k8s.io/v1 SubjectAccessReview
The authentication and authorization checks for the
"system:kube-scheduler" user were previously enabled by PR 72491.
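For illustration, a minimal sketch of the flag wiring, assuming
the arguments are assembled as a flag -> value map before being
rendered into the static Pod command (the function name is
hypothetical; reusing scheduler.conf for both flags is how the
delegated authn/authz is pointed at the scheduler's own identity):

```go
package main

import "fmt"

// buildSchedulerArgs sketches the secure-serving flag set for the
// kube-scheduler static Pod; the scheduler's own kubeconfig is
// reused for the delegated authentication/authorization checks.
func buildSchedulerArgs() map[string]string {
	kubeconfig := "/etc/kubernetes/scheduler.conf"
	return map[string]string{
		"kubeconfig":                kubeconfig,
		"authentication-kubeconfig": kubeconfig,
		"authorization-kubeconfig":  kubeconfig,
	}
}

func main() {
	for flag, value := range buildSchedulerArgs() {
		fmt.Printf("--%s=%s\n", flag, value)
	}
}
```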
- common_test.go: use constants.CurrentKubernetesVersion.
- diff_test.go: write temporary files instead of using testdata.
  This means we no longer have to bump kubernetesVersions in the
  testdata files (now removed); see the sketch after this list.
- policy_test.go: fix tests that were previously passing but that
  a bump in constants.go would break. These tests now work for
  any version.
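A minimal sketch of the temporary-file approach, assuming the code
under test only needs a manifest path on disk (the test name and
manifest contents are illustrative):

```go
package diff

import (
	"io/ioutil"
	"os"
	"testing"
)

// TestDiffWithTempFile sketches the testdata-free approach: generate
// the manifest contents at test time, so a version bump in
// constants.go cannot break a static fixture, and write them to a
// temporary file.
func TestDiffWithTempFile(t *testing.T) {
	manifest := "apiVersion: v1\nkind: Pod\n" // illustrative contents

	f, err := ioutil.TempFile("", "kube-apiserver-manifest-*.yaml")
	if err != nil {
		t.Fatalf("failed to create temporary file: %v", err)
	}
	defer os.Remove(f.Name())

	if _, err := f.WriteString(manifest); err != nil {
		t.Fatalf("failed to write manifest: %v", err)
	}
	f.Close()

	// The code under test is now pointed at f.Name() instead of a
	// static testdata path.
}
```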
Propagates the kubeadm feature gate (FG) to the individual
Kubernetes components on the control-plane node.
* Note: Users who want to join worker nodes to the cluster
will have to pass the dual-stack FG to the kubelet using the
nodeRegistration.kubeletExtraArgs option as part of their
join config (see the sketch below). Alternatively, they can use
KUBELET_EXTRA_ARGS.
kubeadm FG: kubernetes/kubeadm#1612
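A sketch of what the extra kubelet arguments boil down to; the map
mirrors nodeRegistration.kubeletExtraArgs from a join configuration,
and the feature-gate string is an assumption based on the kubelet's
dual-stack gate of that era:

```go
package main

import "fmt"

// kubeletExtraArgs mirrors nodeRegistration.kubeletExtraArgs from a
// kubeadm join configuration; the gate name is an assumption.
var kubeletExtraArgs = map[string]string{
	"feature-gates": "IPv6DualStack=true",
}

func main() {
	for k, v := range kubeletExtraArgs {
		fmt.Printf("--%s=%s\n", k, v)
	}
}
```

The KUBELET_EXTRA_ARGS route would carry the same flag, e.g.
KUBELET_EXTRA_ARGS=--feature-gates=IPv6DualStack=true.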
If the cluster has a PSP that blocks Pods from running as root,
the DaemonSet that handles the upgrade prepull will fail to create
its Pods. Work around that by adding a PodSecurityContext with
RunAsUser=999.
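A minimal sketch of the workaround on the DaemonSet's Pod template,
using the public apps/v1 and core/v1 types (the package and function
names are illustrative):

```go
package upgrade

import (
	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
)

// applyNonRootWorkaround sets RunAsUser on the prepull DaemonSet's
// Pod template, so a PSP that forbids root does not block Pod
// creation.
func applyNonRootWorkaround(ds *appsv1.DaemonSet) {
	runAsUser := int64(999) // RunAsUser is a *int64 in the API
	ds.Spec.Template.Spec.SecurityContext = &v1.PodSecurityContext{
		RunAsUser: &runAsUser,
	}
}
```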
When a kubeconfig file is read from disk it may lack the
proper mapping between contexts and clusters.
In such a case the kubeconfig phase backend will panic,
without throwing a sensible error.
Add nil checks for a couple of map operations in
validateKubeConfig().
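A hedged sketch of what such checks could look like, using the
clientcmd API types (the function name and error messages are
illustrative, not the actual validateKubeConfig() code):

```go
package kubeconfig

import (
	"fmt"

	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// checkContextMapping guards the map lookups that would otherwise
// panic on a kubeconfig whose current context or referenced cluster
// is missing.
func checkContextMapping(config *clientcmdapi.Config) error {
	ctx, ok := config.Contexts[config.CurrentContext]
	if !ok || ctx == nil {
		return fmt.Errorf("kubeconfig has no context named %q", config.CurrentContext)
	}
	if cluster, ok := config.Clusters[ctx.Cluster]; !ok || cluster == nil {
		return fmt.Errorf("kubeconfig has no cluster named %q", ctx.Cluster)
	}
	return nil
}
```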
This helper is used in tests and pulls in an unnecessary
dependency, which should not be used if kubeadm is to move to
staging. Replace it with direct use of the GroupResource type.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
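For reference, a sketch of the direct construction via the
apimachinery schema package (the group/resource values are
illustrative):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// Construct the GroupResource directly instead of going through
	// the test helper and its extra dependency.
	gr := schema.GroupResource{Group: "apps", Resource: "daemonsets"}
	fmt.Println(gr.String()) // prints "daemonsets.apps"
}
```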
RBAC construction helpers are part of the Kubernetes internal APIs. As such,
we cannot use them once we move to staging.
Hence, replace their use with manual RBAC rule construction.
Signed-off-by: Rostislav M. Georgiev <rostislavg@vmware.com>
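A minimal sketch of the manual construction, using the public
rbac/v1 types instead of the internal helpers (the rule contents
are illustrative):

```go
package main

import (
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
)

func main() {
	// Build the PolicyRule by hand with the staged rbac/v1 API
	// types, rather than via the internal RBAC construction helpers.
	rule := rbacv1.PolicyRule{
		Verbs:         []string{"get"},
		APIGroups:     []string{""},
		Resources:     []string{"configmaps"},
		ResourceNames: []string{"kubeadm-config"},
	}
	fmt.Printf("%+v\n", rule)
}
```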
During control plane joins, the control plane sometimes returns an
unexpected error when trying to download the `kubeadm-config`
ConfigMap.
This is a workaround for the issue until the root cause is fully
identified and fixed.
Ideally, this commit should be reverted in the near future.
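The message does not spell the workaround out, but one plausible
shape is a short retry around the ConfigMap download; a sketch
using client-go of that era (names, timings, and the retry
approach itself are assumptions):

```go
package join

import (
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	clientset "k8s.io/client-go/kubernetes"
)

// getKubeadmConfigMap retries the download of the kubeadm-config
// ConfigMap to paper over the transient error described above.
func getKubeadmConfigMap(client clientset.Interface) (*v1.ConfigMap, error) {
	var cm *v1.ConfigMap
	err := wait.PollImmediate(5*time.Second, 30*time.Second, func() (bool, error) {
		var err error
		cm, err = client.CoreV1().ConfigMaps("kube-system").Get("kubeadm-config", metav1.GetOptions{})
		if err != nil {
			return false, nil // assumed transient; try again
		}
		return true, nil
	})
	return cm, err
}
```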
Map --service-cidr to --service-cluster-ip-range for
kube-controller-manager.

If a service CIDR that overlaps with the cluster CIDR is
specified to kube-controller-manager, then kube-controller-manager
will incorrectly allocate node CIDRs that overlap with the
service CIDR. The fix ensures that kubeadm maps --service-cidr
to --service-cluster-ip-range for use by kube-controller-manager.
As per the docs, --allocate-node-cidrs must be true for
--service-cluster-ip-range to be considered. It does not make
sense for --cluster-cidr to be unspecified but for
--service-cluster-ip-range and --allocate-node-cidrs to be
set, since the purpose of these options is to have the
controller-manager do the per-node CIDR allocation. Also
note that --service-cluster-ip-range is passed to
kube-apiserver as well, so the presence of *just*
--service-cluster-ip-range should not imply that
--allocate-node-cidrs should be true.
Resolves: kubernetes/kubeadm#1591
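A hedged sketch of the mapping in the controller-manager argument
assembly (the type and function names are illustrative; kubeadm's
real code builds a similar flag map from ClusterConfiguration):

```go
package main

import "fmt"

// ClusterConfig stands in for the relevant kubeadm
// ClusterConfiguration networking fields.
type ClusterConfig struct {
	PodSubnet     string // becomes --cluster-cidr
	ServiceSubnet string // kubeadm's --service-cidr
}

// getControllerManagerArgs maps kubeadm's service CIDR to the flag
// kube-controller-manager actually understands, and only when node
// CIDR allocation is in play.
func getControllerManagerArgs(cfg ClusterConfig) map[string]string {
	args := map[string]string{}
	if cfg.PodSubnet != "" {
		args["allocate-node-cidrs"] = "true"
		args["cluster-cidr"] = cfg.PodSubnet
		// Only meaningful when allocate-node-cidrs is true; keeps
		// node CIDR allocation clear of the service CIDR.
		if cfg.ServiceSubnet != "" {
			args["service-cluster-ip-range"] = cfg.ServiceSubnet
		}
	}
	return args
}

func main() {
	fmt.Println(getControllerManagerArgs(ClusterConfig{
		PodSubnet:     "10.244.0.0/16",
		ServiceSubnet: "10.96.0.0/12",
	}))
}
```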
The existing logic already creates a proper "tree",
where a CA is always generated before the certs that are signed
by that CA; however, the tree is not deterministic.
Always use the default list of certs when generating the
"kubeadm init phase certs" phases. Add a unit test that
makes sure a CA always precedes its signed certs in the default
list (see the sketch below).
This solves the problem where the help screen for the "kubeadm
init" cert sub-phases can have a random order.