Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

kubeadm: Fully implement --dry-run

**What this PR does / why we need it**:
Finishes the work begun in #50631

- Implements dry-run functionality for the certs/kubeconfig/controlplane/etcd phases as well, by making the outDir configurable.
- Prints the control plane manifests to stdout, but not the certs/kubeconfig files, due to their sensitive nature; kubeadm instead prints the directory to look in for those. (A rough sketch of this idea appears after the release note below.)
- Fixes a small YAML marshal error where `apiVersion` and `kind` weren't printed earlier.

**Which issue this PR fixes**: fixes https://github.com/kubernetes/kubeadm/issues/389

**Special notes for your reviewer**:

Full `kubeadm init --dry-run` output:

```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --allow-privileged=true
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
    - --experimental-bootstrap-token-auth=true
    - --client-ca-file=/etc/kubernetes/pki/ca.crt
    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
    - --secure-port=6443
    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
    - --insecure-port=0
    - --requestheader-username-headers=X-Remote-User
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-allowed-names=front-proxy-client
    - --advertise-address=192.168.200.101
    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
    - --authorization-mode=Node,RBAC
    - --etcd-servers=http://127.0.0.1:2379
    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 6443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    - --address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/controller-manager.conf
    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
    - --leader-elect=true
    - --use-service-account-credentials=true
    - --controllers=*,bootstrapsigner,tokencleaner
    - --root-ca-file=/etc/kubernetes/pki/ca.crt
    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10252
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-controller-manager
    resources:
      requests:
        cpu: 200m
    volumeMounts:
    - mountPath: /etc/kubernetes/pki
      name: k8s-certs
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/kubernetes/controller-manager.conf
      name: kubeconfig
      readOnly: true
    - mountPath: /etc/pki
      name: ca-certs-etc-pki
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/pki
    name: k8s-certs
  - hostPath:
      path: /etc/ssl/certs
    name: ca-certs
  - hostPath:
      path: /etc/kubernetes/controller-manager.conf
    name: kubeconfig
  - hostPath:
      path: /etc/pki
    name: ca-certs-etc-pki
status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    scheduler.alpha.kubernetes.io/critical-pod: ""
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --leader-elect=true
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --address=127.0.0.1
    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
    name: kubeconfig
status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch: {"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
  token-id: OTZlZmQ2
  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
  usage-bootstrap-authentication: dHJ1ZQ==
  usage-bootstrap-signing: dHJ1ZQ==
kind: Secret
metadata:
  creationTimestamp: null
  name: bootstrap-token-96efd6
type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:kubelet-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-bootstrapper
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  creationTimestamp: null
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
rules:
- apiGroups:
  - certificates.k8s.io
  resources:
  - certificatesigningrequests/nodeclient
  verbs:
  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-autoapprove-bootstrap
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
subjects:
- kind: Group
  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://192.168.200.101:6443
      name: ""
    contexts: []
    current-context: ""
    kind: Config
    preferences: {}
    users: []
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: cluster-info
  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
rules:
- apiGroups:
  - ""
  resourceNames:
  - cluster-info
  resources:
  - configmaps
  verbs:
  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:bootstrap-signer-clusterinfo
  namespace: kube-public
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubeadm:bootstrap-signer-clusterinfo
subjects:
- kind: User
  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  MasterConfiguration: |
    api:
      advertiseAddress: 192.168.200.101
      bindPort: 6443
    apiServerCertSANs: []
    apiServerExtraArgs: null
    authorizationModes:
    - Node
    - RBAC
    certificatesDir: /etc/kubernetes/pki
    cloudProvider: ""
    controllerManagerExtraArgs: null
    etcd:
      caFile: ""
      certFile: ""
      dataDir: /var/lib/etcd
      endpoints: []
      extraArgs: null
      image: ""
      keyFile: ""
    featureFlags: null
    imageRepository: gcr.io/google_containers
    kubernetesVersion: v1.7.4
    networking:
      dnsDomain: cluster.local
      podSubnet: ""
      serviceSubnet: 10.96.0.0/12
    nodeName: thegopher
    schedulerExtraArgs: null
    token: 96efd6.98bbb2f4603c026b
    tokenTTL: 86400000000000
    unifiedControlPlaneImage: ""
kind: ConfigMap
metadata:
  creationTimestamp: null
  name: kubeadm-config
  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-dns
  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: beta.kubernetes.io/arch
                operator: In
                values:
                - amd64
      containers:
      - args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: kubedns
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          initialDelaySeconds: 3
          timeoutSeconds: 5
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        volumeMounts:
        - mountPath: /kube-dns-config
          name: kube-dns-config
      - args:
        - -v=2
        - -logtostderr
        - -configDir=/etc/k8s/dns/dnsmasq-nanny
        - -restartDnsmasq=true
        - --
        - -k
        - --cache-size=1000
        - --log-facility=-
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: dnsmasq
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        resources:
          requests:
            cpu: 150m
            memory: 20Mi
        volumeMounts:
        - mountPath: /etc/k8s/dns/dnsmasq-nanny
          name: kube-dns-config
      - args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          successThreshold: 1
          timeoutSeconds: 5
        name: sidecar
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            cpu: 10m
            memory: 20Mi
      dnsPolicy: Default
      serviceAccountName: kube-dns
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      volumes:
      - configMap:
          name: kube-dns
          optional: true
        name: kube-dns-config
status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: KubeDNS
  name: kube-dns
  namespace: kube-system
  resourceVersion: "0"
spec:
  clusterIP: 10.96.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
    targetPort: 53
  - name: dns-tcp
    port: 53
    protocol: TCP
    targetPort: 53
  selector:
    k8s-app: kube-dns
status:
  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: null
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
apiVersion: v1
data:
  kubeconfig.conf: |
    apiVersion: v1
    kind: Config
    clusters:
    - cluster:
        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
        server: https://192.168.200.101:6443
      name: default
    contexts:
    - context:
        cluster: default
        namespace: default
        user: default
      name: default
    current-context: default
    users:
    - name: default
      user:
        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
kind: ConfigMap
metadata:
  creationTimestamp: null
  labels:
    app: kube-proxy
  name: kube-proxy
  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  creationTimestamp: null
  labels:
    k8s-app: kube-proxy
  name: kube-proxy
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: kube-proxy
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: kube-proxy
    spec:
      containers:
      - command:
        - /usr/local/bin/kube-proxy
        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
        imagePullPolicy: IfNotPresent
        name: kube-proxy
        resources: {}
        securityContext:
          privileged: true
        volumeMounts:
        - mountPath: /var/lib/kube-proxy
          name: kube-proxy
        - mountPath: /run/xtables.lock
          name: xtables-lock
      hostNetwork: true
      serviceAccountName: kube-proxy
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.cloudprovider.kubernetes.io/uninitialized
        value: "true"
      volumes:
      - configMap:
          name: kube-proxy
        name: kube-proxy
      - hostPath:
          path: /run/xtables.lock
        name: xtables-lock
  updateStrategy:
    type: RollingUpdate
status:
  currentNumberScheduled: 0
  desiredNumberScheduled: 0
  numberMisscheduled: 0
  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  creationTimestamp: null
  name: kubeadm:node-proxier
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:node-proxier
subjects:
- kind: ServiceAccount
  name: kube-proxy
  namespace: kube-system
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1
```

**Release note**:

```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```

@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
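The diff itself isn't shown here, but the behavior described in the bullets above (write everything to a configurable outDir, echo only non-sensitive manifests to stdout) can be illustrated with a minimal, hypothetical Go sketch. `writeOrPrint` and the other names below are invented for illustration and are not kubeadm's actual API:

```go
// Hypothetical sketch of the dry-run idea: redirect all writes to a
// temporary directory, and decide per file whether its content may be
// echoed to stdout.
package main

import (
	"fmt"
	"io/ioutil"
	"os"
	"path/filepath"
)

// writeOrPrint writes the file under outDir; manifests are also printed in
// full, while sensitive files (certs, kubeconfigs) are only referenced by path.
func writeOrPrint(outDir, relPath string, content []byte, sensitive bool) error {
	target := filepath.Join(outDir, relPath)
	if err := os.MkdirAll(filepath.Dir(target), 0700); err != nil {
		return err
	}
	if err := ioutil.WriteFile(target, content, 0600); err != nil {
		return err
	}
	if sensitive {
		fmt.Printf("[dryrun] Wrote sensitive file %q (content not printed)\n", target)
		return nil
	}
	fmt.Printf("[dryrun] Would write file %q with content:\n%s\n", relPath, content)
	return nil
}

func main() {
	// Dry-run mode targets a throwaway directory instead of /etc/kubernetes.
	outDir, err := ioutil.TempDir("", "kubeadm-init-dryrun")
	if err != nil {
		panic(err)
	}
	_ = writeOrPrint(outDir, "manifests/kube-scheduler.yaml", []byte("apiVersion: v1\nkind: Pod\n"), false)
	_ = writeOrPrint(outDir, "pki/ca.key", []byte("-----BEGIN RSA PRIVATE KEY-----\n"), true)
	fmt.Printf("[dryrun] Please go and examine the %q directory for details about what would be written\n", outDir)
}
```

Gating stdout on a per-file sensitivity flag mirrors the choice made in this PR: manifests are printed in full, while private keys and kubeconfig files are only referenced by the temporary directory path.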
# Kubernetes

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.
Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.
## To start using Kubernetes
See our documentation on kubernetes.io.
Try our interactive tutorial.
Take a free course on Scalable Microservices with Kubernetes.
## To start developing Kubernetes
The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.
If you want to build Kubernetes right away, there are two options:

##### You have a working Go environment.

```
$ go get -d k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ make
```

##### You have a working Docker environment.

```
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
```
If you are less impatient, head over to the developer's documentation.
## Support
If you need support, start with the troubleshooting guide and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another.