Commit Graph

53743 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
7579bc835c Merge pull request #51340 from yan234280533/patch-3
Automatic merge from submit-queue (batch tested with PRs 51391, 51338, 51340, 50773, 49599)

add a starting info log for the namespace controller.

**What this PR does / why we need it**:

add a starting info log for the namespace controller.

**Release note**:
NA
2017-08-26 08:49:23 -07:00
Kubernetes Submit Queue
4cbc4592ff Merge pull request #51338 from yan234280533/patch-2
Automatic merge from submit-queue (batch tested with PRs 51391, 51338, 51340, 50773, 49599)

fix a little grammar error.

**What this PR does / why we need it**:
I found that it used "Found" in the middle of a sentence. I think using "found" in the middle of a sentence is better than "Found", so I modified it.
2017-08-26 08:49:21 -07:00
Kubernetes Submit Queue
223227eb59 Merge pull request #51391 from alrs/fix-iscsi-swallowed-error
Automatic merge from submit-queue

Fix swallowed error in iscsi package

**What this PR does / why we need it**: Fixes a swallowed error in the iscsi package.


```release-note
NONE
```
2017-08-26 08:23:09 -07:00
Kubernetes Submit Queue
f7eb492f0d Merge pull request #51390 from alrs/fix-photon-pd-swallowed-errors
Automatic merge from submit-queue

Fix swallowed errors in tests of photon_pd package

**What this PR does / why we need it**: Fixes swallowed errors in the tests of the photon_pd package.

```release-note
NONE
```
2017-08-26 07:32:24 -07:00
Kubernetes Submit Queue
84d9778f22 Merge pull request #51388 from alrs/fix-scaleio-swallowed-error
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)

Fix swallowed error in scaleio package tests

**What this PR does / why we need it**: Fixes a dropped error in the tests of the scaleio package.

**Release note**:
```release-note
NONE
```
2017-08-26 06:43:36 -07:00
Kubernetes Submit Queue
4b7135513f Merge pull request #51382 from nicksardo/revert-51038-gce-netproj
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)

Revert "GCE: Consume new config value for network project id"

Reverts kubernetes/kubernetes#51038

Broke GKE tests
2017-08-26 06:43:33 -07:00
Kubernetes Submit Queue
27fbb68f18 Merge pull request #51087 from oracle/for/upstream/master/ccm-instance-exists
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)

Add InstanceExistsByProviderID to cloud provider interface for CCM

**What this PR does / why we need it**:

Currently, [`MonitorNode()`](02b520f0a4/pkg/controller/cloud/nodecontroller.go (L240)) in the node controller checks with the CCM whether a node still exists by calling `ExternalID(nodeName)`. `ExternalID` is supposed to return the provider ID of a node, which is not supported on every cloud. This means that any cloud that cannot infer the provider ID from the node name remotely will never remove nodes that no longer exist.
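A minimal Go sketch of the interface addition this PR describes; the method set and signatures below are illustrative assumptions, not the merged API:

```go
package cloudprovider

// Instances is an abbreviated stand-in for the cloud provider Instances
// interface; only the methods relevant to this PR are shown.
type Instances interface {
	// ExternalID returns the cloud provider ID of the node with the
	// specified name. Not every cloud can serve this lookup remotely.
	ExternalID(nodeName string) (string, error)

	// InstanceExistsByProviderID reports whether the instance backing the
	// given provider ID still exists, letting clouds answer the node
	// controller's liveness question directly. (Signature assumed.)
	InstanceExistsByProviderID(providerID string) (bool, error)
}
```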


**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50985

**Special notes for your reviewer**:

We'll want to create a subsequent issue to track the implementation of these two new methods in the cloud providers.

**Release note**:

```release-note
Adds `InstanceExists` and `InstanceExistsByProviderID` to cloud provider interface for the cloud controller manager
```

/cc @wlan0 @thockin @andrewsykim @luxas @jhorwit2

/area cloudprovider
/sig cluster-lifecycle
2017-08-26 06:43:30 -07:00
Kubernetes Submit Queue
41a06d1fbb Merge pull request #51363 from luxas/move_uploadconfig
Automatic merge from submit-queue (batch tested with PRs 51174, 51363, 51087, 51382, 51388)

kubeadm: Move the uploadconfig phase right in the beginning of cluster init

**What this PR does / why we need it**:

In order to be forwards-compatible, I'm moving uploadconfig to be the first phase in the chain, so that future releases can rely on the config being present once we have a beta or higher API to rely on.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
@kubernetes/sig-cluster-lifecycle-pr-reviews
2017-08-26 06:43:27 -07:00
Kubernetes Submit Queue
1e5d85a0bb Merge pull request #51174 from caesarxuchao/fix-resourcequota
Automatic merge from submit-queue

Let the quota evaluator handle mutating specs of pod & pvc

### Background
The final goal is to address https://github.com/kubernetes/kubernetes/issues/47837, which aims to allow more mutation for uninitialized objects.

To do that, we [decided](https://github.com/kubernetes/kubernetes/issues/47837#issuecomment-321462433) to let the admission controllers handle mutation of uninitialized objects.

### Issue
#50399 attempted to fix all admission controllers so that they can handle mutating uninitialized objects. It was incomplete: I didn't realize that although the resourcequota admission plugin handles the update operation, the underlying evaluator didn't. This PR updates the evaluators to handle updates of uninitialized pods/pvc.

### TODO
We are still missing another piece. The [quota replenish controller](https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/resourcequota/replenishment_controller.go) uses the shared informer, which doesn't observe the deletion of uninitialized pods at the moment. So there is a quota leak if a pod is deleted before it's initialized. This will be addressed with https://github.com/kubernetes/kubernetes/issues/48893.
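As a hedged sketch of the core check involved, using the alpha initializers field that existed at the time (`isInitializingUpdate` is a hypothetical helper, not code from this PR):

```go
package quota

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isInitializingUpdate reports whether an update transitions an object from
// uninitialized to initialized. An evaluator that handles updates can charge
// quota on this transition, treating it like the original creation.
func isInitializingUpdate(oldMeta, newMeta *metav1.ObjectMeta) bool {
	return oldMeta.Initializers != nil && newMeta.Initializers == nil
}
```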
2017-08-26 06:07:29 -07:00
Kubernetes Submit Queue
6368c1fc82 Merge pull request #51348 from rmmh/coreos-no-password
Automatic merge from submit-queue

Make coreos test images' sshd disallow password login.

This will prevent security scanners from triggering.

Configuration is verbatim from:
https://coreos.com/os/docs/latest/customizing-sshd.html

```release-note
NONE
```
2017-08-26 04:19:11 -07:00
Kubernetes Submit Queue
d27da4133d Merge pull request #49439 from zhangxiaoyu-zidif/fix-err-message-for-pdb
Automatic merge from submit-queue

fix error message for pdb.go

**What this PR does / why we need it**:
fix error message for pdb.go

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
NONE

**Special notes for your reviewer**:
NONE

**Release note**:

```release-note
NONE
```
2017-08-26 03:24:31 -07:00
Kubernetes Submit Queue
c241cbe44d Merge pull request #51173 from liggitt/role-printers
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Print multiple node roles, remove kubeadm-specific annotation from kubectl

related to #50010

Follow up to https://github.com/kubernetes/kubernetes/pull/50438 that removes the kubeadm-specific label, makes kubectl role-agnostic, and outputs multiple roles if present
2017-08-26 02:05:39 -07:00
Kubernetes Submit Queue
25a2177a95 Merge pull request #51296 from kokhang/kubeadm-flexvolume
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Add host mountpath to controller-manager for flexvolume dir

The controller manager needs access to the Flexvolume plugin when using the attach-detach controller interface.

This PR adds the host mount path for the default directory of Flexvolume plugins.

Fixes https://github.com/kubernetes/kubeadm/issues/410
2017-08-26 02:05:36 -07:00
Kubernetes Submit Queue
932e07af53 Merge pull request #50031 from verult/ConnectedProbe
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Dynamic Flexvolume plugin discovery, probing with filesystem watch.

**What this PR does / why we need it**: Enables dynamic Flexvolume plugin discovery. This model uses a filesystem watch (fsnotify library), which notifies the system that a probe is necessary only if something changes in the Flexvolume plugin directory.
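As a rough illustration of the watch model (not the PR's actual code), here is a self-contained fsnotify loop that marks a probe as necessary whenever the plugin directory changes; the directory path is the conventional Flexvolume default:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	// Conventional default Flexvolume plugin directory.
	const pluginDir = "/usr/libexec/kubernetes/kubelet-plugins/volume/exec"

	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add(pluginDir); err != nil {
		log.Fatal(err)
	}

	probeNeeded := false
	for {
		select {
		case ev := <-w.Events:
			// Any create/remove/rename/chmod in the directory means the
			// plugin set may have changed, so the next probe must re-scan.
			log.Printf("plugin dir event: %v", ev)
			probeNeeded = true
		case err := <-w.Errors:
			log.Printf("watch error: %v", err)
		}
		_ = probeNeeded // a real prober would consume this flag lazily
	}
}
```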

This PR uses the dependency injection model in https://github.com/kubernetes/kubernetes/pull/49668.

**Release Note**:
```release-note
Dynamic Flexvolume plugin discovery. Flexvolume plugins can now be discovered on the fly rather than only at system initialization time.
```

/sig-storage

/assign @jsafrane @saad-ali 
/cc @bassam @chakri-nelluri @kokhang @liggitt @thockin
2017-08-26 02:05:34 -07:00
Kubernetes Submit Queue
d660a41f36 Merge pull request #51101 from zhangxiaoyu-zidif/refactor-kubelet-kuberuntime-test
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

Refactor kuberuntime test case with sets.String

**What this PR does / why we need it**:
Change `got` and `want` to use sets.String instead, since that is both safer and more clearly shows the intent.

ref: https://github.com/kubernetes/kubernetes/pull/50554
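For context, a tiny runnable example of the sets.String comparison style this refactor moves to (the values here are made up):

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/sets"
)

func main() {
	// Comparing got and want as sets is order-independent and makes the
	// intent of the assertion explicit.
	want := sets.NewString("container-a", "container-b")
	got := sets.NewString("container-b", "container-a")

	fmt.Println(got.Equal(want))             // true
	fmt.Println(want.Difference(got).List()) // []
}
```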

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/51396

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-26 02:05:29 -07:00
Kubernetes Submit Queue
83f2ddea99 Merge pull request #51054 from derekwaynecarr/local-up-swap
Automatic merge from submit-queue (batch tested with PRs 51054, 51101, 50031, 51296, 51173)

hack/local-up-cluster.sh defaults to allow swap

**What this PR does / why we need it**:
Developers on Linux typically have swap enabled while developing.
This defaults the local-up-cluster experience to not fail the kubelet if swap is enabled.

**Release note**:
```release-note
NONE
```
2017-08-26 02:05:26 -07:00
Kubernetes Submit Queue
b65d665b99 Merge pull request #51264 from m1093782566/e2e-maxTries
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Fix e2e network util wrong output message

**What this PR does / why we need it**:

See https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L217

and 

https://github.com/kubernetes/kubernetes/blob/master/test/e2e/framework/networking_utils.go#L273

I assume it should be `minTries` -> `MaxTries`

This PR fixes the wrong output message.

**Which issue this PR fixes**: fixes #51265

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 22:43:37 -07:00
Kubernetes Submit Queue
2616513381 Merge pull request #51297 from ixdy/bazel-fast-docker_pull
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

bazel: use fast docker_pull

**What this PR does / why we need it**: takes advantage of https://github.com/bazelbuild/rules_docker/pull/71.

Faster builds = yay.

**Release note**:

```release-note
NONE
```

/assign @Q-Lee @spxtr @mikedanese
2017-08-25 22:43:34 -07:00
Kubernetes Submit Queue
6650bbe0dd Merge pull request #50582 from dixudx/support_fieldSelector_spec.schedulerName
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

support fieldSelector spec.schedulerName

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #49190

**Special notes for your reviewer**:
/assign @davidopp  @bsalamat
/cc @lavalamp

**Release note**:

```release-note
add fieldSelector spec.schedulerName
```
2017-08-25 22:43:32 -07:00
Kubernetes Submit Queue
ea206bbe29 Merge pull request #51347 from Random-Liu/fix-no-new-privs
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Fix NoNewPrivs and also allow remote runtime to provide the support.

Fixes https://github.com/kubernetes/kubernetes/issues/51319.

This PR:
1) Let the kubelet admit remote runtimes for `NoNewPrivs` support.
2) Fix a `NoNewPrivs` bug which checked the wrong runtime type.

/cc @kubernetes/sig-node-bugs @jessfraz
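A hedged sketch of the shape of the admission check being fixed (runtime names and the helper are illustrative, not the PR's code):

```go
package main

import "fmt"

// dockerSupportsNoNewPrivs stands in for a real docker API version gate
// (hypothetical helper for illustration).
func dockerSupportsNoNewPrivs() bool { return true }

// supportsNoNewPrivs admits no_new_privs pods per runtime type: docker needs
// a version check, while remote runtimes are trusted to provide the support.
func supportsNoNewPrivs(runtimeType string) bool {
	switch runtimeType {
	case "docker":
		return dockerSupportsNoNewPrivs()
	case "remote":
		return true
	default:
		return false
	}
}

func main() {
	fmt.Println(supportsNoNewPrivs("remote")) // true
}
```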
2017-08-25 22:43:28 -07:00
Kubernetes Submit Queue
76c520cea3 Merge pull request #50889 from NickrenREN/local-storage-eviction
Automatic merge from submit-queue (batch tested with PRs 50889, 51347, 50582, 51297, 51264)

Change eviction manager to manage one single local storage resource

**What this PR does / why we need it**:
We decided to manage one single resource name, so the eviction policy should be modified too.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  part of #50818

**Special notes for your reviewer**:

**Release note**:
```release-note
Change eviction manager to manage one single local ephemeral storage resource
```

/assign @jingxu97
2017-08-25 22:43:26 -07:00
Derek Carr
17c3b1ff56 hack/local-up-cluster.sh defaults to allow swap 2017-08-26 01:04:08 -04:00
Lars Lehtonen
47ee11437d
Fix swallowed error in iscsi package 2017-08-25 20:57:58 -07:00
Kubernetes Submit Queue
c112dbcab4 Merge pull request #51341 from mtaufen/fix-port-disable
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

fix ReadOnlyPort defaulting, CAdvisorPort documentation

The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.

The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.

The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.
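The underlying pitfall generalizes: with a plain integer field, a defaulting pass cannot tell "unset" from "explicitly 0". A sketch of the pointer-based way to keep 0 meaning "disabled" (the types here are illustrative, not the real KubeletConfiguration):

```go
package main

import "fmt"

// config is an illustrative stand-in for KubeletConfiguration.
type config struct {
	ReadOnlyPort *int32 // nil = unset (apply default); 0 = explicitly disabled
}

const defaultReadOnlyPort int32 = 10255

func applyDefaults(c *config) {
	// Defaulting on `*port == 0` would clobber an explicit 0; only default
	// when the field was never set.
	if c.ReadOnlyPort == nil {
		p := defaultReadOnlyPort
		c.ReadOnlyPort = &p
	}
}

func main() {
	zero := int32(0)
	c := config{ReadOnlyPort: &zero} // caller explicitly disables the port
	applyDefaults(&c)
	fmt.Println(*c.ReadOnlyPort) // prints 0, not 10255
}
```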


fixes #51345
2017-08-25 20:43:40 -07:00
Kubernetes Submit Queue
21aa8cacc5 Merge pull request #50730 from andrewsykim/49836
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Cloud Controller Manager now sets Node.Spec.ProviderID

**What this PR does / why we need it**:
Cloud Controller Manager now sets `Node.Spec.ProviderID`.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes https://github.com/kubernetes/kubernetes/issues/49836

**Special notes for your reviewer**:
* As part of an effort to move cloud controller manager into beta https://github.com/kubernetes/kubernetes/issues/48690.
2017-08-25 20:43:37 -07:00
Kubernetes Submit Queue
9e69d5b8f0 Merge pull request #50595 from k82cn/k8s_50594
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

NodeConditionPredicates should return NodeOutOfDisk error.

**What this PR does / why we need it**:
In https://github.com/kubernetes/kubernetes/pull/49932 , I moved the node condition check into a predicate, but it returned an incorrect error :(.

We also need to add more cases to `TestNodeShouldRunDaemonPod`, which is a key function of DaemonSet.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50594 

**Release note**:

```release-note
None
```
2017-08-25 20:43:35 -07:00
Kubernetes Submit Queue
21ca7f7eec Merge pull request #47782 from php-coder/fix_reverse_in_tests
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Fix benchmarks to really test reverse order of the keys

**What this PR does / why we need it**:
This PR modifies the code to do what the comment says -- reverse the order of the keys. It also fixes logic that was wrong and didn't allow stale data.

**Special notes for your reviewer**:
This change resolves the following review comments:
- https://github.com/kubernetes/kubernetes/pull/41939#discussion_r117068104
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122763350
- https://github.com/kubernetes/kubernetes/pull/46916#discussion_r122764000

**Release note**:
```release-note
NONE
```

PTAL @smarterclayton
2017-08-25 20:43:33 -07:00
Kubernetes Submit Queue
b65f3cc8dd Merge pull request #49850 from m1093782566/service-session-timeout
Automatic merge from submit-queue (batch tested with PRs 49850, 47782, 50595, 50730, 51341)

Parameterize `stickyMaxAgeMinutes` for service in API

**What this PR does / why we need it**:

Currently I find that `stickyMaxAgeMinutes` for a session-affinity-type service is hard-coded to 180 min. There is a TODO comment, see

https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/iptables/proxier.go#L205

I think the session sticky max time varies from service to service, and users may not be aware of it since it's hard-coded in all the proxier.go implementations - iptables, userspace and winuserspace.

Once we parameterize it in API, users can set/get the values for their different services.

Perhaps, we can introduce a new field `api.ClientIPAffinityConfig` in `api.ServiceSpec`.

There is an initial discussion about it in sig-network group. See,

https://groups.google.com/forum/#!topic/kubernetes-sig-network/i-LkeHrjs80
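A Go sketch of what the proposed field could look like, following the `api.ClientIPAffinityConfig` name suggested above (the field layout is an assumption, not the merged API):

```go
package api

// ServiceSpec shows only the affinity-related fields relevant here.
type ServiceSpec struct {
	// SessionAffinity is "None" or "ClientIP".
	SessionAffinity string

	// ClientIPAffinityConfig would replace the hard-coded 180-minute
	// sticky window with a per-service setting.
	ClientIPAffinityConfig *ClientIPAffinityConfig
}

// ClientIPAffinityConfig carries ClientIP session affinity settings.
type ClientIPAffinityConfig struct {
	// TimeoutSeconds is how long a proxier keeps a client pinned to the
	// same backend; nil means the existing default.
	TimeoutSeconds *int32
}
```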

**Which issue this PR fixes**: 

fixes #49831

**Special notes for your reviewer**:

**Release note**:

```release-note
Parameterize session affinity timeout seconds in service API for Client IP based session affinity.
```
2017-08-25 20:43:30 -07:00
Lars Lehtonen
f77dd0ebac
Fix swallowed errors in tests of photon_pd package 2017-08-25 20:37:05 -07:00
Lars Lehtonen
7ee91d6d54
Fix swallowed error in scaleio package tests
Test log improvement
2017-08-25 20:18:44 -07:00
Kubernetes Submit Queue
85f963310e Merge pull request #50504 from yastij/fcVolume-handleFailedMount
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

handle failed mounts for fc volumes

**What this PR does / why we need it**: handles failed mounts for FC volumes

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50502

**Special notes for your reviewer**: 

**Release note**:

```release-note
None
```
2017-08-25 19:40:38 -07:00
Kubernetes Submit Queue
c170f5bfa2 Merge pull request #50972 from FengyunPan/external-loadBalancerIP
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Support for specifying external LoadBalancerIP on openstack

1. Support ServiceAnnotationLoadBalancerFloatingNetworkId for LB v1

2. Support for specifying external LoadBalancerIP on openstack
    Add ServiceAnnotationLoadBalancerInternal annotation to distinguish
    between internal LoadBalancerIP and external LoadBalancerIP.


**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50851

**Release note**:
```release-note
NONE
```
2017-08-25 19:40:36 -07:00
Kubernetes Submit Queue
9d7bdb6a5f Merge pull request #51274 from yastij/clean-cinder-detachLogError
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Clean cinder detachlogerror

**What this PR does / why we need it**:

**Which issue this PR fixes** : fixes #50441

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-08-25 19:40:32 -07:00
Kubernetes Submit Queue
e923f2ba5c Merge pull request #50819 from NickrenREN/remove-overlay-scheduler
Automatic merge from submit-queue (batch tested with PRs 51235, 50819, 51274, 50972, 50504)

Changing scheduling part to manage one single local storage resource

**What this PR does / why we need it**:
We finally decided to manage a single local storage resource.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:  part of #50818

**Special notes for your reviewer**:
Since we finally decided to manage a single local storage resource, remove the overlay-related code in the scheduling part and rename scratch to ephemeral storage.

**Release note**:
```release-note
Changing scheduling part of the alpha feature 'LocalStorageCapacityIsolation' to manage one single local ephemeral storage resource
```

/assign @jingxu97 
cc @ddysher
2017-08-25 19:40:29 -07:00
Kubernetes Submit Queue
65da3ce246 Merge pull request #51235 from cheftako/aggregator
Automatic merge from submit-queue

Fixed gke auth update wait condition.

Look up whoami on GKE using `gcloud auth list`.
Make sure we do not run the test on any cluster older than 1.7.

**What this PR does / why we need it**: Fixes issue with aggregator e2e test on GKE

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50945 

**Special notes for your reviewer**: There is a TODO, follow up will be provided when the immediate problem is resolved.

**Release note**:

```release-note
NONE
```
2017-08-25 18:52:46 -07:00
Nick Sardo
0d55f6bdcb Revert "GCE: Consume new config value for network project id" 2017-08-25 18:02:10 -07:00
Lucas Käldström
b1fb289f0f
kubeadm: Move the uploadconfig phase right in the beginning of cluster init 2017-08-26 01:50:24 +03:00
Josh Horwitz
cf75c49883 change godoc based on feedback from luxas 2017-08-25 18:04:10 -04:00
Lantao Liu
b760fa95e5 Fix NoNewPrivs and also allow remote runtime to provide the support. 2017-08-25 21:32:33 +00:00
NickrenREN
9730e3d302 Change validation for local ephemeral storage 2017-08-26 05:15:16 +08:00
NickrenREN
27901ad5df Change eviction policy to manage one single local storage resource 2017-08-26 05:14:49 +08:00
Kubernetes Submit Queue
a235ba4e49 Merge pull request #51327 from wasylkowski/ensure-ca-is-on
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

Made the tests ensure that Cluster Autoscaler is on before running.

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-08-25 14:01:36 -07:00
Kubernetes Submit Queue
b5bb8099e7 Merge pull request #50971 from CaoShuFeng/audit_json
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

set --audit-log-format default to json

Updates: https://github.com/kubernetes/kubernetes/issues/48561

**Release note**:
```
set --audit-log-format default to json for kube-apiserver
```
2017-08-25 14:01:33 -07:00
Kubernetes Submit Queue
ccae631ff9 Merge pull request #50562 from atlassian/call-cleanup-properly
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

Call the right cleanup function

**What this PR does / why we need it**:
`defer cleanup()` will always call the function that was returned by the first call to `r.resyncChan()` but it should call the one returned by the last call.

**Special notes for your reviewer**:
This will print `c1`, not `c2`. See https://play.golang.org/p/FDjDbUxOvI
```go
package main

import "fmt"

func main() {
	var c func()
	c = c1
	defer c() // c is evaluated here, so this calls c1 even after the reassignment
	c = c2
}

func c1() {
	fmt.Println("c1")
}

func c2() {
	fmt.Println("c2")
}
```
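One way to get the intended behavior (shown here as a sketch, not necessarily the patch itself) is to defer a closure, so `c` is read when the deferred function actually runs; this variant prints `c2`:

```go
func mainFixed() {
	var c func()
	c = c1
	defer func() { c() }() // the closure reads c at call time
	c = c2
}
```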

**Release note**:
```release-note
NONE
```
/kind bug
/sig api-machinery
2017-08-25 14:01:30 -07:00
Kubernetes Submit Queue
39581ac9bf Merge pull request #51122 from luxas/kubeadm_impl_dryrun
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327)

kubeadm: Fully implement --dry-run

**What this PR does / why we need it**:

Finishes the work begun in #50631 
 - Implements dry-run functionality for the certs/kubeconfig/controlplane/etcd phases as well by making the output directory configurable
 - Prints the control plane manifests to stdout, but not the certs/kubeconfig files due to their sensitive nature. However, kubeadm outputs the directory to go and look in for those.
 - Fixes a small YAML marshaling error where `apiVersion` and `kind` weren't printed earlier.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes: https://github.com/kubernetes/kubeadm/issues/389

**Special notes for your reviewer**:

Full `kubeadm init --dry-run` output:

```
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization mode: [Node RBAC]
[preflight] Running pre-flight checks
[preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service'
[preflight] Starting the kubelet service
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930"
[dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them
[dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written
[dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-apiserver
	    tier: control-plane
	  name: kube-apiserver
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-apiserver
	    - --allow-privileged=true
	    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
	    - --requestheader-extra-headers-prefix=X-Remote-Extra-
	    - --service-cluster-ip-range=10.96.0.0/12
	    - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt
	    - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key
	    - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt
	    - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota
	    - --experimental-bootstrap-token-auth=true
	    - --client-ca-file=/etc/kubernetes/pki/ca.crt
	    - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key
	    - --secure-port=6443
	    - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key
	    - --insecure-port=0
	    - --requestheader-username-headers=X-Remote-User
	    - --requestheader-group-headers=X-Remote-Group
	    - --requestheader-allowed-names=front-proxy-client
	    - --advertise-address=192.168.200.101
	    - --service-account-key-file=/etc/kubernetes/pki/sa.pub
	    - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt
	    - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt
	    - --authorization-mode=Node,RBAC
	    - --etcd-servers=http://127.0.0.1:2379
	    image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 6443
	        scheme: HTTPS
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-apiserver
	    resources:
	      requests:
	        cpu: 250m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-controller-manager
	    tier: control-plane
	  name: kube-controller-manager
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-controller-manager
	    - --address=127.0.0.1
	    - --kubeconfig=/etc/kubernetes/controller-manager.conf
	    - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt
	    - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key
	    - --leader-elect=true
	    - --use-service-account-credentials=true
	    - --controllers=*,bootstrapsigner,tokencleaner
	    - --root-ca-file=/etc/kubernetes/pki/ca.crt
	    - --service-account-private-key-file=/etc/kubernetes/pki/sa.key
	    image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10252
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-controller-manager
	    resources:
	      requests:
	        cpu: 200m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/pki
	      name: k8s-certs
	      readOnly: true
	    - mountPath: /etc/ssl/certs
	      name: ca-certs
	      readOnly: true
	    - mountPath: /etc/kubernetes/controller-manager.conf
	      name: kubeconfig
	      readOnly: true
	    - mountPath: /etc/pki
	      name: ca-certs-etc-pki
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/pki
	    name: k8s-certs
	  - hostPath:
	      path: /etc/ssl/certs
	    name: ca-certs
	  - hostPath:
	      path: /etc/kubernetes/controller-manager.conf
	    name: kubeconfig
	  - hostPath:
	      path: /etc/pki
	    name: ca-certs-etc-pki
	status: {}
[dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content:
	apiVersion: v1
	kind: Pod
	metadata:
	  annotations:
	    scheduler.alpha.kubernetes.io/critical-pod: ""
	  creationTimestamp: null
	  labels:
	    component: kube-scheduler
	    tier: control-plane
	  name: kube-scheduler
	  namespace: kube-system
	spec:
	  containers:
	  - command:
	    - kube-scheduler
	    - --leader-elect=true
	    - --kubeconfig=/etc/kubernetes/scheduler.conf
	    - --address=127.0.0.1
	    image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4
	    livenessProbe:
	      failureThreshold: 8
	      httpGet:
	        host: 127.0.0.1
	        path: /healthz
	        port: 10251
	        scheme: HTTP
	      initialDelaySeconds: 15
	      timeoutSeconds: 15
	    name: kube-scheduler
	    resources:
	      requests:
	        cpu: 100m
	    volumeMounts:
	    - mountPath: /etc/kubernetes/scheduler.conf
	      name: kubeconfig
	      readOnly: true
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: /etc/kubernetes/scheduler.conf
	    name: kubeconfig
	status: {}
[markmaster] Will mark node thegopher as master by adding a label and a taint
[dryrun] Would perform action GET on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1"
[dryrun] Resource name: "thegopher"
[dryrun] Attached patch:
	{"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}}
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[token] Using token: 96efd6.98bbb2f4603c026b
[dryrun] Would perform action GET on resource "secrets" in API group "core/v1"
[dryrun] Resource name: "bootstrap-token-96efd6"
[dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4=
	  expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA==
	  token-id: OTZlZmQ2
	  token-secret: OThiYmIyZjQ2MDNjMDI2Yg==
	  usage-bootstrap-authentication: dHJ1ZQ==
	  usage-bootstrap-signing: dHJ1ZQ==
	kind: Secret
	metadata:
	  creationTimestamp: null
	  name: bootstrap-token-96efd6
	type: bootstrap.kubernetes.io/token
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:kubelet-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-bootstrapper
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRole
	metadata:
	  creationTimestamp: null
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	rules:
	- apiGroups:
	  - certificates.k8s.io
	  resources:
	  - certificatesigningrequests/nodeclient
	  verbs:
	  - create
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-autoapprove-bootstrap
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
	subjects:
	- kind: Group
	  name: system:bootstrappers
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig: |
	    apiVersion: v1
	    clusters:
	    - cluster:
	        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
	        server: https://192.168.200.101:6443
	      name: ""
	    contexts: []
	    current-context: ""
	    kind: Config
	    preferences: {}
	    users: []
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: cluster-info
	  namespace: kube-public
[dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: Role
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	rules:
	- apiGroups:
	  - ""
	  resourceNames:
	  - cluster-info
	  resources:
	  - configmaps
	  verbs:
	  - get
[dryrun] Would perform action CREATE on resource "rolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: RoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:bootstrap-signer-clusterinfo
	  namespace: kube-public
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: Role
	  name: kubeadm:bootstrap-signer-clusterinfo
	subjects:
	- kind: User
	  name: system:anonymous
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  MasterConfiguration: |
	    api:
	      advertiseAddress: 192.168.200.101
	      bindPort: 6443
	    apiServerCertSANs: []
	    apiServerExtraArgs: null
	    authorizationModes:
	    - Node
	    - RBAC
	    certificatesDir: /etc/kubernetes/pki
	    cloudProvider: ""
	    controllerManagerExtraArgs: null
	    etcd:
	      caFile: ""
	      certFile: ""
	      dataDir: /var/lib/etcd
	      endpoints: []
	      extraArgs: null
	      image: ""
	      keyFile: ""
	    featureFlags: null
	    imageRepository: gcr.io/google_containers
	    kubernetesVersion: v1.7.4
	    networking:
	      dnsDomain: cluster.local
	      podSubnet: ""
	      serviceSubnet: 10.96.0.0/12
	    nodeName: thegopher
	    schedulerExtraArgs: null
	    token: 96efd6.98bbb2f4603c026b
	    tokenTTL: 86400000000000
	    unifiedControlPlaneImage: ""
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  name: kubeadm-config
	  namespace: kube-system
[dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Resource name: "system:node"
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-dns
	  namespace: kube-system
[dryrun] Would perform action GET on resource "services" in API group "core/v1"
[dryrun] Resource name: "kubernetes"
[dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: Deployment
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	  name: kube-dns
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-dns
	  strategy:
	    rollingUpdate:
	      maxSurge: 10%
	      maxUnavailable: 0
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-dns
	    spec:
	      affinity:
	        nodeAffinity:
	          requiredDuringSchedulingIgnoredDuringExecution:
	            nodeSelectorTerms:
	            - matchExpressions:
	              - key: beta.kubernetes.io/arch
	                operator: In
	                values:
	                - amd64
	      containers:
	      - args:
	        - --domain=cluster.local.
	        - --dns-port=10053
	        - --config-dir=/kube-dns-config
	        - --v=2
	        env:
	        - name: PROMETHEUS_PORT
	          value: "10055"
	        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/kubedns
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: kubedns
	        ports:
	        - containerPort: 10053
	          name: dns-local
	          protocol: UDP
	        - containerPort: 10053
	          name: dns-tcp-local
	          protocol: TCP
	        - containerPort: 10055
	          name: metrics
	          protocol: TCP
	        readinessProbe:
	          httpGet:
	            path: /readiness
	            port: 8081
	            scheme: HTTP
	          initialDelaySeconds: 3
	          timeoutSeconds: 5
	        resources:
	          limits:
	            memory: 170Mi
	          requests:
	            cpu: 100m
	            memory: 70Mi
	        volumeMounts:
	        - mountPath: /kube-dns-config
	          name: kube-dns-config
	      - args:
	        - -v=2
	        - -logtostderr
	        - -configDir=/etc/k8s/dns/dnsmasq-nanny
	        - -restartDnsmasq=true
	        - --
	        - -k
	        - --cache-size=1000
	        - --log-facility=-
	        - --server=/cluster.local/127.0.0.1#10053
	        - --server=/in-addr.arpa/127.0.0.1#10053
	        - --server=/ip6.arpa/127.0.0.1#10053
	        image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /healthcheck/dnsmasq
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: dnsmasq
	        ports:
	        - containerPort: 53
	          name: dns
	          protocol: UDP
	        - containerPort: 53
	          name: dns-tcp
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 150m
	            memory: 20Mi
	        volumeMounts:
	        - mountPath: /etc/k8s/dns/dnsmasq-nanny
	          name: kube-dns-config
	      - args:
	        - --v=2
	        - --logtostderr
	        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
	        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
	        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4
	        imagePullPolicy: IfNotPresent
	        livenessProbe:
	          failureThreshold: 5
	          httpGet:
	            path: /metrics
	            port: 10054
	            scheme: HTTP
	          initialDelaySeconds: 60
	          successThreshold: 1
	          timeoutSeconds: 5
	        name: sidecar
	        ports:
	        - containerPort: 10054
	          name: metrics
	          protocol: TCP
	        resources:
	          requests:
	            cpu: 10m
	            memory: 20Mi
	      dnsPolicy: Default
	      serviceAccountName: kube-dns
	      tolerations:
	      - key: CriticalAddonsOnly
	        operator: Exists
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      volumes:
	      - configMap:
	          name: kube-dns
	          optional: true
	        name: kube-dns-config
	status: {}
[dryrun] Would perform action CREATE on resource "services" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: Service
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-dns
	    kubernetes.io/cluster-service: "true"
	    kubernetes.io/name: KubeDNS
	  name: kube-dns
	  namespace: kube-system
	  resourceVersion: "0"
	spec:
	  clusterIP: 10.96.0.10
	  ports:
	  - name: dns
	    port: 53
	    protocol: UDP
	    targetPort: 53
	  - name: dns-tcp
	    port: 53
	    protocol: TCP
	    targetPort: 53
	  selector:
	    k8s-app: kube-dns
	status:
	  loadBalancer: {}
[addons] Applied essential addon: kube-dns
[dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	kind: ServiceAccount
	metadata:
	  creationTimestamp: null
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1"
[dryrun] Attached object:
	apiVersion: v1
	data:
	  kubeconfig.conf: |
	    apiVersion: v1
	    kind: Config
	    clusters:
	    - cluster:
	        certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
	        server: https://192.168.200.101:6443
	      name: default
	    contexts:
	    - context:
	        cluster: default
	        namespace: default
	        user: default
	      name: default
	    current-context: default
	    users:
	    - name: default
	      user:
	        tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token
	kind: ConfigMap
	metadata:
	  creationTimestamp: null
	  labels:
	    app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
[dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1"
[dryrun] Attached object:
	apiVersion: extensions/v1beta1
	kind: DaemonSet
	metadata:
	  creationTimestamp: null
	  labels:
	    k8s-app: kube-proxy
	  name: kube-proxy
	  namespace: kube-system
	spec:
	  selector:
	    matchLabels:
	      k8s-app: kube-proxy
	  template:
	    metadata:
	      creationTimestamp: null
	      labels:
	        k8s-app: kube-proxy
	    spec:
	      containers:
	      - command:
	        - /usr/local/bin/kube-proxy
	        - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf
	        image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4
	        imagePullPolicy: IfNotPresent
	        name: kube-proxy
	        resources: {}
	        securityContext:
	          privileged: true
	        volumeMounts:
	        - mountPath: /var/lib/kube-proxy
	          name: kube-proxy
	        - mountPath: /run/xtables.lock
	          name: xtables-lock
	      hostNetwork: true
	      serviceAccountName: kube-proxy
	      tolerations:
	      - effect: NoSchedule
	        key: node-role.kubernetes.io/master
	      - effect: NoSchedule
	        key: node.cloudprovider.kubernetes.io/uninitialized
	        value: "true"
	      volumes:
	      - configMap:
	          name: kube-proxy
	        name: kube-proxy
	      - hostPath:
	          path: /run/xtables.lock
	        name: xtables-lock
	  updateStrategy:
	    type: RollingUpdate
	status:
	  currentNumberScheduled: 0
	  desiredNumberScheduled: 0
	  numberMisscheduled: 0
	  numberReady: 0
[dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1"
[dryrun] Attached object:
	apiVersion: rbac.authorization.k8s.io/v1beta1
	kind: ClusterRoleBinding
	metadata:
	  creationTimestamp: null
	  name: kubeadm:node-proxier
	roleRef:
	  apiGroup: rbac.authorization.k8s.io
	  kind: ClusterRole
	  name: system:node-proxier
	subjects:
	- kind: ServiceAccount
	  name: kube-proxy
	  namespace: kube-system
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1

```

**Release note**:

```release-note
kubeadm: Implement a `--dry-run` mode and flag for `kubeadm`
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
2017-08-25 14:01:27 -07:00
Kubernetes Submit Queue
cfbd97c4f9 Merge pull request #51134 from stevekuznetsov/skuznets/verbose-bazel-error
Automatic merge from submit-queue

Don't silence `go get` during verify scripts

When the verify scripts fail to install a component using `go get`
today, they silence output to `stderr` and cause the user to not get any
actionable feedback on failure.

Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>

```release-note
NONE
```

/assign @ixdy
2017-08-25 13:59:51 -07:00
Josh Horwitz
82a69b2815 refactor method name as per comments 2017-08-25 16:25:19 -04:00
Steve Leon
c682d7cd74 Add host mountpath to controller-manager for flexvolume dir
The controller manager needs access to the Flexvolume plugin when
using the attach-detach controller interface.

This PR adds the host mount path for the default directory of Flexvolume
plugins.

Fixes https://github.com/kubernetes/kubeadm/issues/410
2017-08-25 13:23:53 -07:00
Josh Horwitz
3528ceb27f address test & doc comments 2017-08-25 16:15:55 -04:00
Michael Taufen
6918ab1d70 fix ReadOnlyPort, HealthzPort, CAdvisorPort defaulting/documentation
The ReadOnlyPort defaulting prevented passing 0 to disable via
the KubeletConfiguration struct.

The HealthzPort defaulting prevented passing 0 to disable via the
KubeletConfiguration struct. The documentation also failed to mention
this, but the check is performed in code.

The CAdvisorPort documentation failed to mention that you can pass 0 to
disable.
2017-08-25 13:15:36 -07:00