Commit Graph

64572 Commits

Author SHA1 Message Date
Christoph Blecker
94d56ffbde
Bump minimum required go version to 1.10.1 2018-04-23 18:08:10 -07:00
Kubernetes Submit Queue
4692a6bf2e
Merge pull request #62784 from hanxiaoshuai/bugfix0418
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

we should use Infof when we are using a format string

**What this PR does / why we need it**:
we should use Infof when we are using a format string.
**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
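
The exact call sites aren't quoted in this entry, so as a generic illustration of the Info vs. Infof distinction the title refers to (a sketch using glog, which Kubernetes used at the time; not the PR's actual diff):

```go
package main

import "github.com/golang/glog"

func main() {
	defer glog.Flush()

	name, count := "foo", 3

	// Bug pattern: Info does not interpret format verbs, so the "%s"/"%d"
	// are printed literally alongside the arguments.
	glog.Info("synced pod %s after %d retries", name, count)

	// Fix: use Infof whenever a format string is passed.
	glog.Infof("synced pod %s after %d retries", name, count)
}
```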
2018-04-23 17:45:20 -07:00
Kubernetes Submit Queue
d23ad1f894
Merge pull request #62947 from fabriziopandini/kubeadm-ha-ControlPlaneEndpoint2
Automatic merge from submit-queue (batch tested with PRs 62464, 62947). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

make API.ControlPlaneEndpoint accept IP

**What this PR does / why we need it**:
This PR implements one of the actions defined by https://github.com/kubernetes/kubeadm/issues/751 (the checklist for implementing HA in kubeadm).

With this PR, the `API.ControlPlaneEndpoint` value in the kubeadm MasterConfiguration file now accepts both a DNS name and an IP address.

The `API.ControlPlaneEndpoint` should be used to set a stable IP address for the control plane; in an HA configuration, this should be the load balancer address (no matter if identified by a DNS name or by a stable IP).
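
A minimal sketch of accepting an endpoint as either an IP address or a DNS name (the validation helper and the simplified DNS regex below are illustrative, not kubeadm's actual code):

```go
package main

import (
	"fmt"
	"net"
	"regexp"
)

// dnsNameRE is a simplified RFC 1123 subdomain check; kubeadm's real
// validation is stricter, this is only an illustration.
var dnsNameRE = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?(\.[a-z0-9]([-a-z0-9]*[a-z0-9])?)*$`)

// validateControlPlaneEndpoint accepts either an IP address or a DNS name.
func validateControlPlaneEndpoint(endpoint string) error {
	if endpoint == "" {
		return nil // optional field
	}
	if net.ParseIP(endpoint) != nil {
		return nil // valid IP address
	}
	if dnsNameRE.MatchString(endpoint) {
		return nil // valid DNS name
	}
	return fmt.Errorf("ControlPlaneEndpoint %q is neither a valid IP address nor a valid DNS name", endpoint)
}

func main() {
	for _, ep := range []string{"10.0.0.10", "lb.example.com", "not valid!"} {
		fmt.Println(ep, "->", validateControlPlaneEndpoint(ep))
	}
}
```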

**Special notes for your reviewer**:
/CC @timothysc 
This PR is the same as https://github.com/kubernetes/kubernetes/pull/62667, which I closed by mistake 😥

**Release note**:
```release-note
NONE
```
N.B. the first PR, https://github.com/kubernetes/kubernetes/pull/62667, already has the release note.
2018-04-23 16:42:06 -07:00
Kubernetes Submit Queue
fca65dcd64
Merge pull request #62464 from choury/fix-double-rlock-in-cpumanger
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

avoid double RLock() in cpumanager

**What this PR does / why we need it**:

We hit a deadlock when removing a pod.
kubelet keeps logging:
```
Pod "xxxx" is terminated, but some containers are still running 
```
After debugging, we found it stuck in `SetDefaultCPUSet` [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/cpumanager/policy_static.go#L184) while another goroutine was calling `GetCPUSetOrDefault` [here](https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/cm/cpumanager/cpu_manager.go#L256).

According to golang/go#15418, it is not safe to double RLock a RWMutex.
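
A self-contained toy reproduction of why a second RLock on the same goroutine deadlocks once a writer queues up (per golang/go#15418; this is not the cpumanager code):

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Running this fails with "all goroutines are asleep - deadlock!",
// which is exactly the point.
func main() {
	var mu sync.RWMutex

	go func() {
		mu.RLock() // first read lock held by this goroutine
		defer mu.RUnlock()

		time.Sleep(100 * time.Millisecond) // give the writer time to queue up

		// Second RLock on the same goroutine: a pending Lock() blocks new
		// readers, and the writer itself waits for the first RLock, so
		// neither side can make progress.
		mu.RLock()
		defer mu.RUnlock()
		fmt.Println("never reached while the writer is waiting")
	}()

	time.Sleep(10 * time.Millisecond)
	mu.Lock() // writer queues behind the first RLock -> deadlock
	defer mu.Unlock()
	fmt.Println("writer acquired the lock")
}
```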

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:

none

**Special notes for your reviewer**:

**Release note**:

```release-note
removed unsafe double RLock in cpumanager
```
2018-04-23 16:31:57 -07:00
Kubernetes Submit Queue
316b76f093
Merge pull request #62860 from mikedanese/rawpprof
Automatic merge from submit-queue (batch tested with PRs 63007, 62919, 62669, 62860). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

e2e: save raw profiles too

#62808

```release-note
NONE
```
2018-04-23 15:45:20 -07:00
Kubernetes Submit Queue
eea406c108
Merge pull request #62669 from immutableT/deploy_helper_test
Automatic merge from submit-queue (batch tested with PRs 63007, 62919, 62669, 62860). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add unit test for configure-helper.sh.

**What this PR does / why we need it**:
Add a framework for unit-testing configure-helper.sh.
configure-helper.sh plays a critical role in initializing clusters on both GCE and GKE. It is currently over 2K lines of code, yet it has no unit test coverage.
This PR proposes a framework/approach on how to provide test coverage for this component.
Notes: 
1. Changes to configure-helper.sh itself were necessary to enable sourcing of this script for the purposes of testing.
2. As a POC, api_manifest_test.go covers the logic related to the initialization of the apiserver when integration with KMS was requested. The hope is that the same approach could be extended to the rest of the script (a generic sketch of the sourcing approach follows below).
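
A generic illustration of the sourcing approach described in note 2. The function name used here is a hypothetical stand-in, not necessarily an entry point of the real configure-helper.sh, and the real api_manifest_test.go differs:

```go
package configurehelper_test

import (
	"fmt"
	"os/exec"
	"strings"
	"testing"
)

// runShellFunction sources a bash script and invokes one of its functions,
// returning combined stdout/stderr. This mirrors the general "source the
// script, call a function, assert on the output" approach.
func runShellFunction(t *testing.T, script, fn string, args ...string) string {
	t.Helper()
	cmd := exec.Command("bash", "-c",
		fmt.Sprintf("source %q && %s %s", script, fn, strings.Join(args, " ")))
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("%s %v failed: %v\n%s", fn, args, err, out)
	}
	return string(out)
}

// TestApiserverManifest is a placeholder; "some-manifest-function" is a
// hypothetical stand-in for whichever script function is under test.
func TestApiserverManifest(t *testing.T) {
	out := runShellFunction(t, "./configure-helper.sh", "some-manifest-function")
	t.Logf("output: %s", out)
}
```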

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-04-23 15:45:17 -07:00
Kubernetes Submit Queue
5752d1faaa
Merge pull request #62919 from vmware/backward_comp
Automatic merge from submit-queue (batch tested with PRs 63007, 62919, 62669, 62860). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Fix vSphere Cloud Provider to handle upgrade from k8s version less than v1.9.4 to v1.9.4+

**What this PR does / why we need it**:
The vSphere Cloud Provider in Kubernetes master v1.9.4+ is not able to identify Kubernetes nodes running versions older than 1.9.4, so volume operations fail in this case. This PR fixes that.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #62435

**Special notes for your reviewer**:
Internally reviewed here: https://github.com/vmware/kubernetes/pull/477

**Release note**:

```release-note
None
```
2018-04-23 15:45:13 -07:00
Kubernetes Submit Queue
6726844cb2
Merge pull request #63007 from Cynerva/gkk/update-gcr-url
Automatic merge from submit-queue (batch tested with PRs 63007, 62919, 62669, 62860). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

juju: Use k8s.gcr.io url for arm64 ingress image

**What this PR does / why we need it**:

This updates the kubernetes-worker charm to point to k8s.gcr.io for the nginx-ingress-controller-arm64 image. This should have no impact on functionality today, but as I understand it, we're all standardizing on k8s.gcr.io to allow for future changes.

**Release note**:

```release-note
NONE
```
2018-04-23 15:45:10 -07:00
Mike Danese
9d9e588ced e2e: save raw profiles too 2018-04-23 13:06:37 -07:00
Kubernetes Submit Queue
a0f9412361
Merge pull request #62810 from liggitt/request-mapper
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Remove request context mapper

http.Request now allows setting/retrieving a per-request context, which removes the need for plumbing a request-context mapper throughout the stack (a minimal sketch of the pattern follows the list below).

In addition to being far simpler, this has the benefit of removing a potentially contentious lock object from the handling path.

This PR:
* removes RequestContextMapper
* converts context fetchers to use `req.Context()`
* converts context setters to use `req = req.WithContext(...)`
* updates filter plumbing in two places (audit and timeout) to properly return the request with modified context
* updates tests that used a fake context mapper to set the context in the request instead
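
A minimal sketch of the get/set pattern the bullets above describe, using only net/http (no Kubernetes plumbing; the filter and key names are illustrative):

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
)

type userKey struct{}

// withUser is a filter that attaches a value to the request's own context
// (replacing what a RequestContextMapper used to do).
func withUser(next http.Handler, user string) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		ctx := context.WithValue(req.Context(), userKey{}, user)
		// Important: pass on the request that carries the modified context.
		next.ServeHTTP(w, req.WithContext(ctx))
	})
}

func main() {
	handler := withUser(http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		// Context fetchers read straight off the request.
		fmt.Fprintf(w, "user=%v", req.Context().Value(userKey{}))
	}), "alice")

	rec := httptest.NewRecorder()
	handler.ServeHTTP(rec, httptest.NewRequest("GET", "/", nil))
	fmt.Println(rec.Body.String()) // user=alice
}
```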

Fixes https://github.com/kubernetes/kubernetes/issues/62796

```release-note
NONE
```
2018-04-23 13:01:14 -07:00
Kubernetes Submit Queue
e52e68a0eb
Merge pull request #61950 from grayluck/llb-test
Automatic merge from submit-queue (batch tested with PRs 63001, 62152, 61950). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Unit test for internal load balancer

**What this PR does / why we need it**:
Unit tests for internal load balancer. Coverage increases from 76.7% to 91.0%.

Fixes the volatile fakeApiService issue: tests now use GetApiService to get a copy of fakeApiService, which prevents test cases from interfering with each other.
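
The fixture-copy idea in miniature (types and names here are hypothetical stand-ins; the real tests use the GCE fakes):

```go
package main

import "fmt"

// Service is a tiny stand-in for the shared API-service fixture.
type Service struct {
	Name        string
	Annotations map[string]string
}

var fakeAPIService = Service{
	Name:        "internal-lb",
	Annotations: map[string]string{"cloud.google.com/load-balancer-type": "Internal"},
}

// getAPIService hands each test its own copy, so one test mutating
// annotations can no longer leak into the next test.
func getAPIService() Service {
	copied := fakeAPIService
	copied.Annotations = make(map[string]string, len(fakeAPIService.Annotations))
	for k, v := range fakeAPIService.Annotations {
		copied.Annotations[k] = v
	}
	return copied
}

func main() {
	a := getAPIService()
	a.Annotations["mutated"] = "true"
	fmt.Println(len(fakeAPIService.Annotations)) // still 1: the shared fixture is untouched
}
```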

**Release note**:

```release-note
NONE
```
2018-04-23 12:34:17 -07:00
Kubernetes Submit Queue
939c0783e1
Merge pull request #62152 from smarterclayton/use_client_store
Automatic merge from submit-queue (batch tested with PRs 63001, 62152, 61950). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

When bootstrapping a client cert, store it with other client certs

The kubelet uses two different locations to store certificates on
initial bootstrap and then on subsequent rotation:

* bootstrap: certDir/kubelet-client.(crt|key)
* rotation:  certDir/kubelet-client-(DATE|current).pem

Bootstrap also creates an initial node.kubeconfig that points to the
certs. Unfortunately, with short rotation the node.kubeconfig then
becomes out of date because it points to the initial cert/key, not the
rotated cert key.

Alter the bootstrap code to store client certs exactly as if they would
be rotated (using the same cert Store code), and reference the PEM file
containing cert/key from node.kubeconfig, which is supported by kubectl
and other Go tooling. This ensures that the node.kubeconfig continues to
be valid past the first expiration.

Example:

```
bootstrap:
  writes to certDir/kubelet-client-DATE.pem and symlinks to certDir/kubelet-client-current.pem
  writes node.kubeconfig pointing to certDir/kubelet-client-current.pem
rotation:
  writes to certDir/kubelet-client-DATE.pem and symlinks to certDir/kubelet-client-current.pem
```
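
A rough sketch of the "write a dated PEM, point a stable symlink at it" layout shown above (filenames simplified; not the kubelet's actual certificate store code):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// storeCurrentPEM writes the combined cert/key PEM under a dated name and
// repoints the stable "current" symlink at it, so a kubeconfig that
// references kubelet-client-current.pem stays valid across rotations.
func storeCurrentPEM(certDir string, pem []byte) (string, error) {
	dated := filepath.Join(certDir,
		fmt.Sprintf("kubelet-client-%s.pem", time.Now().Format("2006-01-02-15-04-05")))
	if err := os.WriteFile(dated, pem, 0600); err != nil {
		return "", err
	}

	current := filepath.Join(certDir, "kubelet-client-current.pem")
	tmp := current + ".tmp"
	// Create the new symlink under a temporary name, then rename over the
	// old one so readers never observe a missing "current" link.
	_ = os.Remove(tmp)
	if err := os.Symlink(dated, tmp); err != nil {
		return "", err
	}
	if err := os.Rename(tmp, current); err != nil {
		return "", err
	}
	return current, nil
}

func main() {
	dir, err := os.MkdirTemp("", "certs")
	if err != nil {
		panic(err)
	}
	link, err := storeCurrentPEM(dir, []byte("-----BEGIN ...-----\n"))
	fmt.Println(link, err)
}
```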

This will also allow us to remove the weird "init store with bootstrap cert" stuff, although I'd prefer to do that in a follow-up.

@mikedanese @liggitt as per discussion on Slack today

```release-note
The `--bootstrap-kubeconfig` argument to Kubelet previously created the first bootstrap client credentials in the certificates directory as `kubelet-client.key` and `kubelet-client.crt`.  Subsequent certificates created by cert rotation were created in a combined PEM file that was atomically rotated as `kubelet-client-DATE.pem` in that directory, which meant clients relying on the `node.kubeconfig` generated by bootstrapping would never use a rotated cert.  The initial bootstrap certificate is now generated into the cert directory as a PEM file and symlinked to `kubelet-client-current.pem` so that the generated kubeconfig remains valid after rotation.
```
2018-04-23 12:34:14 -07:00
Kubernetes Submit Queue
56fcfe3bd1
Merge pull request #63001 from chuckha/kube-dns
Automatic merge from submit-queue (batch tested with PRs 63001, 62152, 61950). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Bump kube-dns version for kubeadm upgrade

Signed-off-by: Chuck Ha <ha.chuck@gmail.com>

**What this PR does / why we need it**: This PR bumps the version of kube-dns to the correct version for the kubeadm upgrade path.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Related to kubernetes/kubeadm#739; although that is a different ticket, the bug was reported there.

**Special notes for your reviewer**:

This should also be cherry-picked to the 1.9 release, since the kube-dns bump was also cherry-picked.

related PR
https://github.com/kubernetes/kubernetes/pull/62678

**Release note**:

```release-note
NONE
```
/cc @kubernetes/sig-cluster-lifecycle-pr-reviews
2018-04-23 12:34:10 -07:00
immutablet
dc78d72f04 Add unit test for configure-helper. 2018-04-23 12:18:57 -07:00
Kubernetes Submit Queue
bbfa29921c
Merge pull request #63013 from rramkumar1/patch-10
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update upgrade/downgrade images for ingress-gce

/assign @MrHohn 

**Release note**:

```release-note
None
```
2018-04-23 11:22:32 -07:00
Kubernetes Submit Queue
5f1793e3dc
Merge pull request #62728 from php-coder/psp_update_addons_manifests
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Update addon manifests to use policy/v1beta1

**What this PR does / why we need it:**
This is a part of the PSP migration from extensions to policy API group. This PR updates addon manifests to use policy/v1beta1 and grant permissions in policy API group.

**Which issue(s) this PR fixes:**
Addressed to https://github.com/kubernetes/features/issues/5
2018-04-23 10:05:35 -07:00
Chuck Ha
3cbb283306
Bump kube-dns version for kubeadm upgrade
Signed-off-by: Chuck Ha <ha.chuck@gmail.com>
2018-04-23 17:24:34 +01:00
Kubernetes Submit Queue
5b77996433
Merge pull request #62543 from ingvagabund/timeout-on-cloud-provider-request
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Timeout on instances.NodeAddresses cloud provider request

**What this PR does / why we need it**:

This adds a timeout to the instances.NodeAddresses cloud provider request, for cases where the cloud provider does not respond before the node gets evicted.
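
A hedged sketch of bounding such a cloud provider call with a timeout (the `AddressGetter` interface below is a stand-in, not the real cloudprovider.Instances interface):

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// AddressGetter is a stand-in for the cloud provider call being guarded.
type AddressGetter interface {
	NodeAddresses(ctx context.Context, node string) ([]string, error)
}

// nodeAddressesWithTimeout bounds how long the kubelet sync loop can block
// waiting on the cloud provider.
func nodeAddressesWithTimeout(cloud AddressGetter, node string, timeout time.Duration) ([]string, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	return cloud.NodeAddresses(ctx, node)
}

// slowCloud simulates a provider that never answers.
type slowCloud struct{}

func (slowCloud) NodeAddresses(ctx context.Context, node string) ([]string, error) {
	select {
	case <-time.After(time.Hour):
		return []string{"10.0.0.1"}, nil
	case <-ctx.Done():
		return nil, ctx.Err()
	}
}

func main() {
	_, err := nodeAddressesWithTimeout(slowCloud{}, "node-1", 2*time.Second)
	fmt.Println(errors.Is(err, context.DeadlineExceeded)) // true
}
```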

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:


**Release note**:
```release-note
Stop the kubelet-to-cloud-provider integration from potentially wedging the kubelet sync loop
```
2018-04-23 09:12:42 -07:00
Rohit Ramkumar
f3cce76d3c
Update upgrade/downgrade images for ingress-gce 2018-04-23 08:41:45 -07:00
Clayton Coleman
368959346a
When bootstrapping a client cert, store it with other client certs
The kubelet uses two different locations to store certificates on
initial bootstrap and then on subsequent rotation:

* bootstrap: certDir/kubelet-client.(crt|key)
* rotation:  certDir/kubelet-client-(DATE|current).pem

Bootstrap also creates an initial node.kubeconfig that points to the
certs. Unfortunately, with short rotation the node.kubeconfig then
becomes out of date because it points to the initial cert/key, not the
rotated cert key.

Alter the bootstrap code to store client certs exactly as if they would
be rotated (using the same cert Store code), and reference the PEM file
containing cert/key from node.kubeconfig, which is supported by kubectl
and other Go tooling. This ensures that the node.kubeconfig continues to
be valid past the first expiration.
2018-04-23 10:23:01 -04:00
George Kraft
408c2c30fa juju: Use k8s.gcr.io url for arm64 ingress image 2018-04-23 08:39:21 -05:00
Jan Chaloupka
61efc29394 Timeout on instances.NodeAddresses cloud provider request 2018-04-23 13:28:43 +02:00
Kubernetes Submit Queue
9b7439d77d
Merge pull request #62909 from kawych/master
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Manage Metadata Agent Config with Addon Manager

**What this PR does / why we need it**:
Fixes error where config map for Metadata Agent was not created by addon manager.

**Release note**:
```release-note
Fix error where config map for Metadata Agent was not created by addon manager.
```
2018-04-23 02:52:06 -07:00
Kubernetes Submit Queue
f0b207df2d
Merge pull request #62856 from liggitt/node-authorizer-contention-benchmark
Automatic merge from submit-queue (batch tested with PRs 62409, 62856). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Add node authorizer contention benchmark

* Makes the node authorization benchmark run in parallel
* Runs the tests a second time with a background goroutine pushing graph modifications at a rate of 100x per second (to test authorization performance with contention on the graph lock).

Graph modifications come from the informers watching objects relevant to node authorization, and only fire when a relevant change is made (for example, most node updates do not trigger a graph modification, only ones which change the node's config source configmap reference; most pod updates do not trigger a graph modification, only ones that set the pod's nodeName or uid)

The results do not indicate bottlenecks in the authorizer, even under higher-than-expected write contention.
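
A generic sketch of the benchmark shape described above: run authorization checks under RunParallel while a background goroutine takes the write lock at a fixed rate (the "authorizer" here is a trivial stand-in, not the node authorizer's graph):

```go
package contention_test

import (
	"sync"
	"testing"
	"time"
)

// graph is a trivial stand-in for the node authorizer's shared graph.
type graph struct {
	mu    sync.RWMutex
	edges map[string]string
}

func (g *graph) authorize(node string) bool {
	g.mu.RLock()
	defer g.mu.RUnlock()
	_, ok := g.edges[node]
	return ok
}

func (g *graph) modify(node, obj string) {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.edges[node] = obj
}

func BenchmarkAuthorizeWithContention(b *testing.B) {
	g := &graph{edges: map[string]string{"node1": "configmap/a"}}

	// Background writer: push modifications roughly 100x/sec to contend on the lock.
	stop := make(chan struct{})
	defer close(stop)
	go func() {
		ticker := time.NewTicker(10 * time.Millisecond)
		defer ticker.Stop()
		for {
			select {
			case <-stop:
				return
			case <-ticker.C:
				g.modify("node1", "configmap/a")
			}
		}
	}()

	b.RunParallel(func(pb *testing.PB) {
		for pb.Next() {
			g.authorize("node1")
		}
	})
}
```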

```
$ go test ./plugin/pkg/auth/authorizer/node/ -run foo -bench 'Authorization' -benchmem -v
goos: darwin
goarch: amd64
pkg: k8s.io/kubernetes/plugin/pkg/auth/authorizer/node
BenchmarkAuthorization/allowed_node_configmap-8                                 596 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_configmap-8                                      609 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_secret_via_pod-8                                 586 ns/op   529 B/op   11 allocs/op    3000000
BenchmarkAuthorization/allowed_shared_secret_via_pod-8                        18202 ns/op   542 B/op   11 allocs/op     100000
BenchmarkAuthorization/disallowed_node_configmap-8                              900 ns/op   691 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_configmap-8                                   868 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_secret_via_pod-8                              875 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_shared_secret_via_pvc-8                      1215 ns/op   948 B/op   22 allocs/op    1000000
BenchmarkAuthorization/disallowed_pvc-8                                         912 ns/op   693 B/op   17 allocs/op    2000000
BenchmarkAuthorization/disallowed_pv-8                                         1137 ns/op   834 B/op   19 allocs/op    2000000
BenchmarkAuthorization/disallowed_attachment_-_no_relationship-8                892 ns/op   677 B/op   16 allocs/op    2000000
BenchmarkAuthorization/disallowed_attachment_-_feature_disabled-8               236 ns/op   208 B/op    2 allocs/op   10000000
BenchmarkAuthorization/allowed_attachment_-_feature_enabled-8                   723 ns/op   593 B/op   12 allocs/op    2000000

BenchmarkAuthorization/contentious_allowed_node_configmap-8                     726 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_configmap-8                          698 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_secret_via_pod-8                     778 ns/op   529 B/op   11 allocs/op    2000000
BenchmarkAuthorization/contentious_allowed_shared_secret_via_pod-8            21406 ns/op   638 B/op   13 allocs/op     100000
BenchmarkAuthorization/contentious_disallowed_node_configmap-8                 1135 ns/op   692 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_configmap-8                      1239 ns/op   691 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_secret_via_pod-8                 1043 ns/op   692 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_shared_secret_via_pvc-8          1404 ns/op   950 B/op   22 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_pvc-8                            1177 ns/op   693 B/op   17 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_pv-8                             1295 ns/op   834 B/op   19 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_attachment_-_no_relationship-8   1170 ns/op   676 B/op   16 allocs/op    1000000
BenchmarkAuthorization/contentious_disallowed_attachment_-_feature_disabled-8   262 ns/op   208 B/op    2 allocs/op   10000000
BenchmarkAuthorization/contentious_allowed_attachment_-_feature_enabled-8       790 ns/op   593 B/op   12 allocs/op    2000000

--- BENCH: BenchmarkAuthorization
   node_authorizer_test.go:592: graph modifications during non-contention test: 0
   node_authorizer_test.go:589: graph modifications during contention test: 6301
   node_authorizer_test.go:590: <1ms=5507, <10ms=128, <25ms=43, <50ms=65, <100ms=135, <250ms=328, <500ms=93, <1000ms=2, >1000ms=0
PASS
ok     k8s.io/kubernetes/plugin/pkg/auth/authorizer/node   112.616s
```

```release-note
NONE
```
2018-04-23 01:35:14 -07:00
Kubernetes Submit Queue
77f5324223
Merge pull request #62409 from rajansandeep/corednsscaler
Automatic merge from submit-queue (batch tested with PRs 62409, 62856). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

DNS-Autoscaler support for CoreDNS

**What this PR does / why we need it**:
This PR provides the dns-horizontal autoscaler for CoreDNS in kube-up, enabling the tests to pass once CoreDNS is the default. 

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #61176 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-04-23 01:35:07 -07:00
choury
c1b19fce90 avoid double RLock() in cpumanager 2018-04-23 10:33:40 +08:00
fabriziopandini
8f838d9e42 autogenerated files 2018-04-23 00:16:30 +02:00
fabriziopandini
8abc54d257 make API.ControlPlaneEndpoint accept IP 2018-04-23 00:16:13 +02:00
Kubernetes Submit Queue
a048ca888a
Merge pull request #61891 from zhangxiaoyu-zidif/patch-9
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix pr No. from 517326 to 57326

**What this PR does / why we need it**:

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-04-22 08:08:40 -07:00
Kubernetes Submit Queue
afa68cc287
Merge pull request #62886 from msau42/fix-localssd-fsgroup
Automatic merge from submit-queue (batch tested with PRs 62780, 62886). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Only count local mounts that are from other pods

**What this PR does / why we need it**:
In GCE, we mount the same local SSD at two different paths (for backwards compatibility). This makes the fsGroup conflict check fail because it thinks the second mount is from another pod. For the fsGroup check, we only want to detect whether other pods are mounting the same volume, so this PR filters the mount list to only those mounts under "/var/lib/kubelet".
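
A small sketch of the filtering idea (mount-point struct and helper simplified; not the actual volume util code):

```go
package main

import (
	"fmt"
	"strings"
)

// mountPoint is a simplified stand-in for the mount entries the kubelet inspects.
type mountPoint struct {
	Device string
	Path   string
}

// podMountsOnly keeps only mounts that live under the kubelet directory, so a
// second host-level mount of the same device (e.g. a GCE local SSD mounted at
// two paths) is not mistaken for another pod using the volume.
func podMountsOnly(mounts []mountPoint, kubeletDir string) []mountPoint {
	var out []mountPoint
	for _, m := range mounts {
		if strings.HasPrefix(m.Path, kubeletDir) {
			out = append(out, m)
		}
	}
	return out
}

func main() {
	mounts := []mountPoint{
		{"/dev/sdb", "/mnt/disks/ssd0"},                              // host compatibility mount
		{"/dev/sdb", "/var/lib/kubelet/pods/123/volumes/local/ssd0"}, // pod mount
	}
	fmt.Println(podMountsOnly(mounts, "/var/lib/kubelet"))
}
```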

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #62867

**Release note**:

```release-note
NONE
```
2018-04-20 20:06:13 -07:00
Kubernetes Submit Queue
48243a9c24
Merge pull request #62780 from RenaudWasTaken/master
Automatic merge from submit-queue (batch tested with PRs 62780, 62886). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Change Capacity log verbosity in status update

*What this PR does / why we need it:*

While running in production we noticed that the log verbosity for the Capacity field in the node status was too high.
This message is logged for every device plugin resource on every status update.

A proposed solution is to tune it down from V(2) to V(5). In a normal setting you'll be able to see the effect by looking at the node status.
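
For context, the V(2)→V(5) change boils down to the verbosity guard on the log call. A sketch using glog (the message text here is illustrative, not the exact kubelet log line):

```go
package main

import (
	"flag"

	"github.com/golang/glog"
)

func updateCapacity(resource string, capacity int64) {
	// Before: glog.V(2).Infof(...) emitted this for every device plugin
	// resource on every status update; V(5) keeps it out of normal logs.
	glog.V(5).Infof("Update capacity for %s to %d", resource, capacity)
}

func main() {
	// Opt in to verbose logs for the demo; at the default level this stays silent.
	_ = flag.Set("v", "5")
	_ = flag.Set("logtostderr", "true")
	flag.Parse()
	defer glog.Flush()

	updateCapacity("nvidia.com/gpu", 8)
}
```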

Release note:
```
NONE
```

/sig node
/area hw-accelerators
/assign @vikaschoudhary16 @jiayingz @vishh
2018-04-20 20:06:10 -07:00
Kubernetes Submit Queue
4d6a6ced8c
Merge pull request #56525 from tianshapjq/testcase-helpers_linux.go
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

new testcase to helpers_linux.go

Adds a new test case to helpers_linux.go, PTAL.

```release-note
NONE
```
2018-04-20 18:55:13 -07:00
Kubernetes Submit Queue
bdd6ff40db
Merge pull request #62765 from wgliang/master.pob-name-conflict
Automatic merge from submit-queue (batch tested with PRs 61324, 62880, 62765). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

-Fix the name could cause a conflict if an object with the same name …

…is created in a different namespace

**What this PR does / why we need it**:
/kind bug

Using the name could cause a conflict if an object with the same name is created in a different namespace
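
A hedged sketch of the underlying fix pattern: key cached objects by namespace/name instead of name alone (the cache and object names here are illustrative):

```go
package main

import "fmt"

// objectKey qualifies a name with its namespace so two objects with the same
// name in different namespaces no longer collide in the cache.
func objectKey(namespace, name string) string {
	return namespace + "/" + name
}

func main() {
	cache := map[string]string{}

	// Keyed by name alone, both of these would have used the key "web" and
	// the second insert would silently overwrite the first.
	cache[objectKey("team-a", "web")] = "object from team-a"
	cache[objectKey("team-b", "web")] = "object from team-b"

	fmt.Println(len(cache)) // 2
}
```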

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #
#62750

**Special notes for your reviewer**:
/assign @bsalamat 

**Release note**:
```
NONE
```
2018-04-20 17:23:23 -07:00
Kubernetes Submit Queue
467ce8d8f3
Merge pull request #62880 from deads2k/cli-35-io
Automatic merge from submit-queue (batch tested with PRs 61324, 62880, 62765). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

provide standard iostream struct for commands

Commands usually need some kind of iostream.  For consistency, delegation, and testability this pull introduces a standard struct to embed in every set of command options.  It also starts the plumbing so that the benefits of standardization for all three of those cases become clear.
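
A condensed sketch of the pattern (field names modeled on the usual In/Out/ErrOut trio; simplified relative to the real struct introduced by the PR):

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"os"
)

// IOStreams bundles the three streams every command needs; embedding it in
// command options makes both delegation and testing uniform.
type IOStreams struct {
	In     io.Reader
	Out    io.Writer
	ErrOut io.Writer
}

// VersionOptions is a toy command-options struct embedding IOStreams.
type VersionOptions struct {
	IOStreams
}

func (o *VersionOptions) Run() {
	fmt.Fprintln(o.Out, "v0.0.0-sketch")
}

func main() {
	// Production wiring.
	real := &VersionOptions{IOStreams: IOStreams{In: os.Stdin, Out: os.Stdout, ErrOut: os.Stderr}}
	real.Run()

	// Test wiring: capture output in buffers instead of hitting the terminal.
	out := &bytes.Buffer{}
	test := &VersionOptions{IOStreams: IOStreams{In: &bytes.Buffer{}, Out: out, ErrOut: &bytes.Buffer{}}}
	test.Run()
	fmt.Print(out.String())
}
```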

@kubernetes/sig-cli-maintainers 
@soltysh @juanvallejo 

```release-note
NONE
```
2018-04-20 17:23:20 -07:00
Kubernetes Submit Queue
663c6edc46
Merge pull request #61324 from pospispa/60764-K8s-1.10-StorageObjectInUseProtection-downgrade-issue
Automatic merge from submit-queue (batch tested with PRs 61324, 62880, 62765). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Always Start pvc-protection-controller and pv-protection-controller

**What this PR does / why we need it**:
After K8s 1.10 is upgraded to K8s 1.11, the `[kubernetes.io/pvc-protection]` finalizer is added to PVCs,
because the `StorageObjectInUseProtection` feature will be GA in K8s 1.11.
However, when K8s 1.11 is downgraded to K8s 1.10 and the `StorageObjectInUseProtection` feature is disabled, the finalizers remain on the PVCs. Since `pvc-protection-controller` is not started in K8s 1.10, the finalizers are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system and remain stuck in the `Terminating` phase.
The same applies to `pv-protection-controller` and the corresponding finalizer on PVs.

That's why `pvc-protection-controller` is now always started: it removes finalizers from PVCs automatically when a PVC is no longer in active use by a pod.
Likewise, `pv-protection-controller` is always started so it can remove finalizers from PVs automatically when a PV is not `Bound` to a PVC.

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes N/A
This issue https://github.com/kubernetes/kubernetes/issues/60764 is for downgrade from K8s 1.10 to K8s 1.9.
This PR fixes the same problem but for downgrade from K8s 1.11 to K8s 1.10.

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2018-04-20 17:23:17 -07:00
yankaiz
8aeb77f9c8 Add unit tests for gce loadbalancer internal.
Increase coverage for gce_loadbalancer_internal from 76.7% to 91.0%.
2018-04-20 16:14:02 -07:00
Kubernetes Submit Queue
9c25da64f0
Merge pull request #62649 from liggitt/loopback-routing
Automatic merge from submit-queue (batch tested with PRs 50899, 62649). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Ensure webhook service routing resolves kubernetes.default.svc correctly

Going through the normal endpoint resolution path isn't correct in multi-master scenarios.

The auth wrapper is pulling from LoopbackClientConfig; the service resolver should do the same.
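
A hedged sketch of the special-casing idea (the `ServiceResolver` interface and loopback wiring below are illustrative stand-ins, not the actual apiserver types):

```go
package main

import (
	"fmt"
	"net/url"
)

// ServiceResolver maps a service reference to a reachable URL; this mirrors
// the shape of the webhook service resolution step, simplified.
type ServiceResolver interface {
	ResolveEndpoint(namespace, name string) (*url.URL, error)
}

// loopbackResolver answers kubernetes.default.svc from the loopback client
// configuration instead of walking the normal endpoint resolution path,
// which is what breaks in multi-master setups.
type loopbackResolver struct {
	loopbackHost string          // e.g. the host from LoopbackClientConfig
	delegate     ServiceResolver // everything else resolves normally
}

func (r loopbackResolver) ResolveEndpoint(namespace, name string) (*url.URL, error) {
	if namespace == "default" && name == "kubernetes" {
		return url.Parse("https://" + r.loopbackHost)
	}
	return r.delegate.ResolveEndpoint(namespace, name)
}

type fakeDelegate struct{}

func (fakeDelegate) ResolveEndpoint(namespace, name string) (*url.URL, error) {
	return url.Parse(fmt.Sprintf("https://%s.%s.svc:443", name, namespace))
}

func main() {
	r := loopbackResolver{loopbackHost: "127.0.0.1:6443", delegate: fakeDelegate{}}
	u, _ := r.ResolveEndpoint("default", "kubernetes")
	fmt.Println(u) // https://127.0.0.1:6443
}
```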

```release-note
Fixes the kubernetes.default.svc loopback service resolution to use a loopback configuration.
```
2018-04-20 15:34:12 -07:00
Abrar Shivani
c15336e97a Fix upgrade to Kubernetes v1.9.3+ 2018-04-20 15:18:28 -07:00
Kubernetes Submit Queue
f2c8ef7571
Merge pull request #50899 from WanLinghao/rollout_pause
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

add test file for pkg/kubectl/cmd/rollout/rollout_pause.go file

new:   pkg/kubectl/cmd/rollout/rollout_pause_test.go
        modified: pkg/kubectl/cmd/rollout/BUILD


**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2018-04-20 15:05:37 -07:00
Kubernetes Submit Queue
d78ef491de
Merge pull request #62827 from linyouchong/data-reace-csi-20180418
Automatic merge from submit-queue (batch tested with PRs 62876, 62733, 62827). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

fix csi data race in csi_attacher_test.go

**What this PR does / why we need it**:
fix csi data race in csi_attacher_test.go#TestAttacherWaitForVolumeAttachment

**Which issue(s) this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close the issue(s) when PR gets merged)*:
Fixes #62630

**Special notes for your reviewer**:
Running `stress -p 500 ./csi.test -v 5 -alsologtostderr`, there is another failure.
I think we should fix it in another PR:
```
--- FAIL: TestAttacherMountDevice (0.07s)
        csi_attacher_test.go:495: Running test case: normal
        csi_attacher_test.go:534: test should not fail, but error occurred: mkdir path2: file exists
```

**Release note**:

```release-note
NONE
```

/sig storage
2018-04-20 13:39:14 -07:00
Kubernetes Submit Queue
6da4355ad5
Merge pull request #62733 from soltysh/discovery_timeout
Automatic merge from submit-queue (batch tested with PRs 62876, 62733, 62827). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Set a default request timeout for discovery client

Fixes https://bugzilla.redhat.com/show_bug.cgi?id=1546117

Adds a default request timeout to requests made by the discovery client.
This prevents a command from hanging indefinitely due to one or multiple calls
to the apiserver taking longer than usual when a --request-timeout flag value
has not been set.
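
Roughly, the effect is equivalent to defaulting the client-side request timeout when the flag is unset. A sketch using client-go (the 32s value and wiring are illustrative; the PR's actual default and plumbing may differ):

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}

	// If no --request-timeout was supplied, cap discovery requests so a slow
	// or unreachable aggregated API server cannot hang the command forever.
	if config.Timeout == 0 {
		config.Timeout = 32 * time.Second
	}

	dc, err := discovery.NewDiscoveryClientForConfig(config)
	if err != nil {
		panic(err)
	}
	groups, err := dc.ServerGroups()
	if err != nil {
		panic(err)
	}
	for _, g := range groups.Groups {
		fmt.Println(g.Name)
	}
}
```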

/assign @deads2k @juanvallejo 

**Release note**:
```release-note
NONE
```
2018-04-20 13:39:11 -07:00
Kubernetes Submit Queue
03160fde5a
Merge pull request #62876 from deads2k/cli-33-discovery
Automatic merge from submit-queue (batch tested with PRs 62876, 62733, 62827). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

remove discovery injection from factory

We added this shim to give ourselves flexibility back when cached discovery was contentious. It is no longer contentious, and this removes a layer of complexity we no longer need.

@kubernetes/sig-cli-maintainers 
@soltysh @juanvallejo 

```release-note
NONE
```
2018-04-20 13:39:07 -07:00
Pavel Pospisil
d3ddf7eb8b Always Start pvc-protection-controller and pv-protection-controller
After K8s 1.10 is upgraded to K8s 1.11, the [kubernetes.io/pvc-protection] finalizer is added to PVCs
because the StorageObjectInUseProtection feature will be GA in K8s 1.11.
However, when K8s 1.11 is downgraded to K8s 1.10 and the StorageObjectInUseProtection feature is disabled,
the finalizers remain on the PVCs. Since pvc-protection-controller is not started in K8s 1.10, the finalizers
are not removed automatically from deleted PVCs, so deleted PVCs are not removed from the system and remain
stuck in the Terminating phase.
The same applies to pv-protection-controller and the corresponding finalizer on PVs.

That's why pvc-protection-controller is now always started: it removes finalizers from PVCs automatically
when a PVC is no longer in active use by a pod.
Likewise, pv-protection-controller is always started so it can remove finalizers from PVs automatically
when a PV is not Bound to a PVC.

Related issue: https://github.com/kubernetes/kubernetes/issues/60764
2018-04-20 19:54:50 +02:00
David Eads
8ef56776b9 provide standard iostream struct for commands 2018-04-20 12:54:11 -04:00
Jordan Liggitt
d421affd2d
loopback webhook integration test 2018-04-20 12:30:27 -04:00
Jordan Liggitt
54c883f27b
Honor existing CA bundle and TLS server name in webhook client 2018-04-20 12:26:39 -04:00
Jordan Liggitt
6f65742474
ensure tls server name is used in transport 2018-04-20 12:26:38 -04:00
Jordan Liggitt
d45fbce379
distinguish custom dialers in transport cache 2018-04-20 12:26:38 -04:00
Jordan Liggitt
fe23fa3eee
Ensure service routing resolves kubernetes.default.svc correctly 2018-04-20 12:26:38 -04:00
Kubernetes Submit Queue
9c60fd5242
Merge pull request #62683 from juanvallejo/jvallejo/print-list-apply-cmd
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

aggregate objs before printing in apply cmd

**Release note**:
```release-note
NONE
```

Aggregates all objects into a list before printing 

Fixes https://github.com/kubernetes/kubernetes/issues/58834

cc @soltysh
2018-04-20 09:16:53 -07:00