Commit Graph

54329 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
0f8febf1b4 Merge pull request #51868 from sttts/sttts-fix-client-go-build
Automatic merge from submit-queue (batch tested with PRs 51845, 51868, 51864)

client-go: fix 'go build ./...'
2017-09-03 21:31:58 -07:00
Kubernetes Submit Queue
cdcccaab34 Merge pull request #51845 from Random-Liu/update-sysspec
Automatic merge from submit-queue (batch tested with PRs 51845, 51868, 51864)

Update sys spec to support docker 1.11-1.13 and overlay2.

Fixes https://github.com/kubernetes/kubernetes/issues/32536.

Update docker spec to:
1) Support overlay2;
2) Support docker version 1.11-1.13.

@dchen1107 @yguo0905 @luxas 
/cc @kubernetes/sig-node-pr-reviews 

```release-note
Kubernetes 1.8 supports docker versions 1.11.x, 1.12.x, and 1.13.x, and also supports overlay2.
```
2017-09-03 21:31:55 -07:00
Kubernetes Submit Queue
0dedd13ad7 Merge pull request #51734 from soltysh/cronjobs_beta
Automatic merge from submit-queue

Enable batch/v1beta1.CronJobs by default

This PR re-applies the cronjobs->beta promotion (https://github.com/kubernetes/kubernetes/pull/51720) with the fix from @shyamjvs.

Fixes #51692

@apelisse @dchen1107 @smarterclayton ptal
@janetkuo @erictune fyi
2017-09-03 18:22:27 -07:00
Kubernetes Submit Queue
6ec80eac1b Merge pull request #51816 from liggitt/xiangpengzhao-remove-initc-anno
Automatic merge from submit-queue

Remove deprecated init-container in annotations

fixes #50655
fixes #51816 
closes #41004

Builds on #50654 and drops the initContainer annotations on conversion, to prevent them from bypassing API server validation/security and from targeting version-skewed kubelets that still honor the annotations.
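
A minimal sketch of dropping such annotations during conversion; the annotation keys and helper name here are illustrative assumptions, not the exact ones used in the PR:

```go
package sketch

// dropInitContainerAnnotations removes the legacy alpha/beta init-container
// annotations so they can no longer bypass API server validation or target
// version-skewed kubelets. The key names below are illustrative assumptions.
func dropInitContainerAnnotations(annotations map[string]string) {
	for _, key := range []string{
		"pod.alpha.kubernetes.io/init-containers",
		"pod.beta.kubernetes.io/init-containers",
		"pod.alpha.kubernetes.io/init-container-statuses",
		"pod.beta.kubernetes.io/init-container-statuses",
	} {
		delete(annotations, key)
	}
}
```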

```release-note
The deprecated alpha and beta initContainer annotations are no longer supported. Init containers must be specified using the initContainers field in the pod spec.
```
2017-09-03 17:35:11 -07:00
Kubernetes Submit Queue
52b50fa82a Merge pull request #51828 from kow3ns/workloads-deprecations-1.8
Automatic merge from submit-queue

Workloads deprecation 1.8

**What this PR does / why we need it**: This PR deprecates the Deployment, ReplicaSet, and DaemonSet kinds in the extensions/v1beta1 group version and the StatefulSet, Deployment, and ControllerRevision kinds in the apps/v1beta1 group version. The Deployment, ReplicaSet, DaemonSet, StatefulSet, and ControllerRevision kinds in the apps/v1beta2 group version are now the current version.

xref kubernetes/features#353

```release-note
The Deployment, DaemonSet, and ReplicaSet kinds in the extensions/v1beta1 group version are now deprecated, as are the Deployment, StatefulSet, and ControllerRevision kinds in apps/v1beta1. As they will not be removed until after a GA version becomes available, you may continue to use these kinds in existing code. However, all new code should be developed against the apps/v1beta2 group version.
```
2017-09-03 16:44:46 -07:00
Kubernetes Submit Queue
53ee4397e7 Merge pull request #51827 from bowei/2ndary-range-name
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)

Add `secondary-range-name` to the gce.conf

```release-note
NONE
```
2017-09-03 15:54:25 -07:00
Kubernetes Submit Queue
f9a82dd3b7 Merge pull request #50924 from liggitt/alpha-fields
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)

Clear values for disabled alpha fields

Fixes #51831

Before persisting new or updated resources, alpha fields that are disabled by feature gate must be removed from the incoming objects.

This adds a helper for clearing these values for pod specs and calls it from the strategies of all in-tree resources containing pod specs.
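
A hedged sketch of the helper pattern described above; the type, field, and gate names are stand-ins, not the actual in-tree identifiers:

```go
package sketch

// PodSpec is a minimal stand-in for the real API type; the fields and the
// feature-gate name below are illustrative assumptions.
type PodSpec struct {
	Priority          *int32
	PriorityClassName string
}

// dropDisabledAlphaFields clears alpha-gated values before persistence so
// that fields belonging to disabled features never reach storage.
func dropDisabledAlphaFields(spec *PodSpec, enabled func(feature string) bool) {
	if !enabled("PodPriority") {
		spec.Priority = nil
		spec.PriorityClassName = ""
	}
}
```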

Addresses https://github.com/kubernetes/community/pull/869
2017-09-03 15:54:22 -07:00
Kubernetes Submit Queue
e528a6e785 Merge pull request #51369 from luxas/kubeadm_poll_kubelet
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)

kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy

**What this PR does / why we need it**:

In order to improve the UX when the kubelet is unhealthy or stopped, kubeadm now polls the kubelet's API after 40 and 60 seconds, and then performs an exponential backoff for a total of 155 seconds.

If the kubelet endpoint is not returning `ok` by then, kubeadm gives up and exits.

This will mitigate at least 60% of our "[apiclient] Created API client, waiting for control plane to come up" issues in the kubeadm issue tracker 🎉, as kubeadm now informs the user what's wrong and also doesn't deadlock like before.
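
A rough, stdlib-only sketch of this polling behavior, assuming simplified timings and backoff; the healthz URL matches the one shown in the demo output below:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitForKubelet polls the kubelet healthz endpoint with an increasing delay
// and gives up after the deadline instead of blocking forever. The initial
// delay, factor, and deadline here are simplified assumptions.
func waitForKubelet(url string, initialDelay, deadline time.Duration) error {
	start := time.Now()
	delay := initialDelay
	for {
		resp, err := http.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet answered "ok"
			}
		}
		if time.Since(start) > deadline {
			return fmt.Errorf("timed out waiting for the kubelet at %s", url)
		}
		time.Sleep(delay)
		delay *= 2 // exponential backoff between probes
	}
}

func main() {
	if err := waitForKubelet("http://localhost:10255/healthz", 5*time.Second, 155*time.Second); err != nil {
		fmt.Println(err)
	}
}
```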

Demo:
```
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 40.502199 seconds
[markmaster] Will mark node thegopher as master by adding a label and a taint
[markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: 5776d5.91e7ed14f9e274df
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join --token 5776d5.91e7ed14f9e274df 192.168.1.115:6443 --discovery-token-ca-cert-hash sha256:6f301ce8c3f5f6558090b2c3599d26d6fc94ffa3c3565ffac952f4f0c7a9b2a9

lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm reset
[preflight] Running pre-flight checks
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/dockershim /var/run/kubernetes /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
lucas@THEGOPHER:~/luxas/kubernetes$ sudo systemctl stop kubelet
lucas@THEGOPHER:~/luxas/kubernetes$ sudo ./kubeadm init --skip-preflight-checks
[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.
[init] Using Kubernetes version: v1.7.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Skipping pre-flight checks
[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.1.115]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] This often takes around a minute; or longer if the control plane images have to be pulled.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz/syncloop' failed with error: Get http://localhost:10255/healthz/syncloop: dial tcp 127.0.0.1:10255: getsockopt: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

Unfortunately, an error has occurred:
	timed out waiting for the condition

This error is likely caused by that:
	- The kubelet is not running
	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	- There is no internet connection; so the kubelet can't pull the following control plane images:
		- gcr.io/google_containers/kube-apiserver-amd64:v1.7.4
		- gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4
		- gcr.io/google_containers/kube-scheduler-amd64:v1.7.4

You can troubleshoot this for example with the following commands if you're on a systemd-powered system:
	- 'systemctl status kubelet'
	- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster
```

In this demo, I'm first starting kubeadm normally and everything works as usual.
In the second case, I'm explicitly stopping the kubelet so it doesn't run, and skipping preflight checks, so that kubeadm doesn't even try to exec `systemctl start kubelet` as it usually does.
That obviously results in a non-working system, but now kubeadm tells the user what the problem is instead of waiting forever.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

Fixes: https://github.com/kubernetes/kubeadm/issues/377

**Special notes for your reviewer**:

**Release note**:

```release-note
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy
```
@kubernetes/sig-cluster-lifecycle-pr-reviews @pipejakob 

cc @justinsb @kris-nova @lukemarsden as well as you wanted this feature :)
2017-09-03 15:54:19 -07:00
Kubernetes Submit Queue
0f2a72f9f5 Merge pull request #51546 from apelisse/remove-duplicate-fake-openapi
Automatic merge from submit-queue (batch tested with PRs 51682, 51546, 51369, 50924, 51827)

Remove duplicate fake and unused openapi

**What this PR does / why we need it**:
Follow-up on PR #50404

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:
```release-note
NONE
```
2017-09-03 15:54:16 -07:00
Kubernetes Submit Queue
7a219684a9 Merge pull request #51682 from m1093782566/ipvs-rsync-iptables
Automatic merge from submit-queue

rsync IPVS proxier to the HEAD of iptables

**What this PR does / why we need it**:

There was a significant performance improvement made to iptables. Since the IPVS proxier makes use of iptables in some use cases, I think we should resync the IPVS proxier with the HEAD of iptables.

**Which issue this PR fixes** : 

xref #51679 

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-09-03 15:48:31 -07:00
Kubernetes Submit Queue
a31bc44b38 Merge pull request #51500 from m1093782566/fix-kube-proxy-panic
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)

fix kube-proxy panic because of nil sessionAffinityConfig

**What this PR does / why we need it**:

fix kube-proxy panic because of nil sessionAffinityConfig

**Which issue this PR fixes**: closes #51499 

**Special notes for your reviewer**:

I apologize that this bug was introduced by #49850 :(

@thockin @smarterclayton @gnufied 

**Release note**:

```release-note
NONE
```
2017-09-03 15:00:15 -07:00
Kubernetes Submit Queue
a4ff702a13 Merge pull request #51818 from liggitt/controller-roles
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)

Build controller roles/bindings on demand

As we start to have alpha-gated features that involve policy changes, we need to conditionally include roles/bindings in policy based on feature enablement.

Examples:
 * https://github.com/kubernetes/kubernetes/pull/49727/files#diff-a066255fca075e2bdcfe045e7ca352f7
 * https://github.com/kubernetes/kubernetes/pull/51202/files#diff-eee450e334a11e0b683ce965f584c3c4R137

This moves the policy building from an init() func to on-demand construction, so that feature gates set by the time we set up the post-start reconcile take effect.
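
A hedged sketch of the on-demand pattern; the types, role names, and gate name are illustrative, not the actual bootstrap-policy code:

```go
package sketch

// ClusterRole is a minimal stand-in for the real RBAC type.
type ClusterRole struct {
	Name string
}

// controllerRoles builds the controller policy on demand, so feature gates
// evaluated at call time (rather than at package init() time) decide whether
// gated roles are included. Gate and role names are illustrative assumptions.
func controllerRoles(enabled func(feature string) bool) []ClusterRole {
	roles := []ClusterRole{
		{Name: "system:controller:deployment-controller"},
		{Name: "system:controller:job-controller"},
	}
	if enabled("SomeAlphaFeature") {
		roles = append(roles, ClusterRole{Name: "system:controller:alpha-feature-controller"})
	}
	return roles
}
```
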
2017-09-03 15:00:11 -07:00
Kubernetes Submit Queue
3c621d6ee6 Merge pull request #51761 from karataliu/ccmupdatenode
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)

Fix providerID update validation

**What this PR does / why we need it**:
The cloud controller manager supports updating providerID as of #50730, but the node update was blocked by a validation rule.

This proposes a fix that updates the validation rule to allow altering spec.providerID when it is not already set.

Please check #51596 for details.
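
A minimal sketch of the relaxed rule; the function name and error text are illustrative, not taken from the PR:

```go
package sketch

import "fmt"

// validateProviderIDUpdate allows spec.providerID to be set when it was
// previously empty, but still rejects changing a value once it is set.
func validateProviderIDUpdate(oldID, newID string) error {
	if oldID != "" && newID != oldID {
		return fmt.Errorf("spec.providerID is immutable once set (was %q, got %q)", oldID, newID)
	}
	return nil
}
```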

**Which issue this PR fixes**
fixes #51596

**Special notes for your reviewer**:

**Release note**:
```release-note
```
2017-09-03 15:00:07 -07:00
Kubernetes Submit Queue
765d9089d2 Merge pull request #51706 from zhangxiaoyu-zidif/fix-format-local-follow-go
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)

Fix local storage code to follow go style

**What this PR does / why we need it**:
Fix local storage code to follow go style
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-09-03 15:00:03 -07:00
Kubernetes Submit Queue
5735d80e53 Merge pull request #51819 from cblecker/godep-license-darwin
Automatic merge from submit-queue (batch tested with PRs 51819, 51706, 51761, 51818, 51500)

Update godep-licenses script to work on darwin

**What this PR does / why we need it**:
When reviewing #51766, I noticed that the godep licenses script has slightly different output on darwin (using the BSD md5sum) and linux (using the GNU md5sum).

This adds an awk trim to ensure that the output is the same on both platforms.

**Release note**:
```release-note
NONE
```
2017-09-03 15:00:00 -07:00
Kubernetes Submit Queue
47d0db0e87 Merge pull request #51237 from gunjan5/calico-2.5-rbac
Automatic merge from submit-queue

Add RBAC, healthchecks, autoscalers and update Calico to v2.5.1

**What this PR does / why we need it**:
- Updates Calico to `v2.5`
  - Calico/node to `v2.5.1`
  - Calico CNI to `v1.10.0`
  - Typha to `v0.4.1`
- Enable health check endpoints
  - Add Readiness probe for calico-node and Typha
  - Add Liveness probe for calico-node and Typha
- Add RBAC manifest
  - With calico ClusterRole, ServiceAccount and ClusterRoleBinding
- Add Calico CRDs in the Calico manifest (only works for k8s v1.7+)
- Add vertical autoscaler for calico-node and Typha
- Add horizontal autoscaler for Typha 

**Release note**:

```release-note
NONE
```
2017-09-03 14:01:04 -07:00
Kubernetes Submit Queue
b63abc9fdd Merge pull request #51153 from clamoriniere1A/feature/job_failure_policy_controller
Automatic merge from submit-queue

Job failure policy controller support

**What this PR does / why we need it**:
Start implementing support for the "Backoff policy and failed pod limit" in the `JobController`, as defined in https://github.com/kubernetes/community/pull/583.
This PR depends on a previous PR #48075  that updates the K8s API types.

TODO: 
* [X] Implement ```JobSpec.BackoffLimit``` support
* [x] Rebase when #48075 has been merged.
* [X] Implement end2end tests



implements https://github.com/kubernetes/community/pull/583

**Special notes for your reviewer**:

**Release note**:
```release-note
Add backoff policy and failed pod limit for a job
```
2017-09-03 13:13:17 -07:00
Kubernetes Submit Queue
bee221cca9 Merge pull request #51638 from mfojtik/client-gen-custom-methods
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)

Allow custom client verbs to be generated using client-gen

This change will allow defining custom verbs for resources using the following new tag:

```
// +genclient:method=Foo,verb=create,subresource=foo,input=Bar,output=k8s.io/pkg/api.Blah
```

This will generate the client method `Foo(bar *Bar) (*api.Blah, error)` (the exact signature depends on the particular verb type).

With this change we can add `UpdateScale()` and `GetScale()` to all scalable resources. Note that the intention of this PR is not to fix Scale(); it is only used as an example of this new capability.
Additionally, this will allow us to get rid of `// +genclient:noStatus` and stop guessing the presence of the "updateStatus" subresource based on the existence of a '.Status' field.
Basically, you will have to add the following to all types you want to generate `UpdateStatus()` for:

```
// +genclient:method=UpdateStatus,verb=update,subresource=status
```

This allows further extension of the client without writing an expansion (which proved to be a pain to maintain and copy...). It also allows customizing the native CRUD methods if needed (input/output types).

```release-note
NONE
```
2017-09-03 11:10:09 -07:00
Kubernetes Submit Queue
f07279ada2 Merge pull request #51474 from verult/ProberTest
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)

Flexvolume dynamic plugin discovery: Prober unit tests and basic e2e test.

**What this PR does / why we need it**: Tests for changes introduced in PR #50031.
As part of the prober unit test, I mocked the filesystem, the filesystem watch, and Flexvolume plugin initialization.
Moved the filesystem event goroutine to the watcher implementation.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #51147

**Special notes for your reviewer**:
The first commit contains the added functionality of the mock filesystem.
The second commit refactors the mock filesystem into a common util directory.
The third commit contains the unit and e2e tests.

**Release note**:

```release-note
NONE
```
/release-note-none
/sig storage
/assign @saad-ali @liggitt 
/cc @mtaufen @chakri-nelluri @wongma7
2017-09-03 11:10:05 -07:00
Kubernetes Submit Queue
4d42f80382 Merge pull request #50925 from staebler/server-event-rate-limiter
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)

Limit events accepted by API Server

**What this PR does / why we need it**:
This PR adds the ability to limit events processed by an API server. Limits can be set globally on a server, per-namespace, per-user, and per-source+object. This is needed to prevent badly-configured or misbehaving players from making a cluster unstable.

Please see https://github.com/kubernetes/community/pull/945.
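
A rough sketch of per-namespace event rate limiting with token buckets; the limiter granularity, QPS, and burst values are illustrative assumptions (see the linked proposal for the actual design):

```go
package sketch

import (
	"sync"

	"golang.org/x/time/rate"
)

// eventLimiter keeps one token bucket per namespace; events are rejected
// when the namespace's bucket is empty.
type eventLimiter struct {
	mu      sync.Mutex
	buckets map[string]*rate.Limiter
	qps     rate.Limit
	burst   int
}

func newEventLimiter(qps float64, burst int) *eventLimiter {
	return &eventLimiter{buckets: map[string]*rate.Limiter{}, qps: rate.Limit(qps), burst: burst}
}

// Admit reports whether an event for the given namespace is within limits.
func (l *eventLimiter) Admit(namespace string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	b, ok := l.buckets[namespace]
	if !ok {
		b = rate.NewLimiter(l.qps, l.burst)
		l.buckets[namespace] = b
	}
	return b.Allow()
}
```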

**Release Note:**
```release-note
Adds a new alpha EventRateLimit admission control that is used to limit the number of event queries that are accepted by the API Server.
```
2017-09-03 11:10:03 -07:00
Kubernetes Submit Queue
9637f46122 Merge pull request #51725 from nicksardo/gce-plumb-netvars
Automatic merge from submit-queue (batch tested with PRs 51805, 51725, 50925, 51474, 51638)

GCE: Plumb network & subnetwork to master

**Which issue this PR fixes**:
Fixes #51714

/assign @bowei 

**Release note**:
```release-note
NONE
```
2017-09-03 11:10:00 -07:00
Kubernetes Submit Queue
f12368a187 Merge pull request #51805 from yujuhong/net-tiers-static-ip-test
Automatic merge from submit-queue

e2e: test using reserved IP with network tiers
2017-09-03 10:33:12 -07:00
Kubernetes Submit Queue
d610b828ce Merge pull request #51336 from luxas/kubeadm_api_omitempty
Automatic merge from submit-queue

kubeadm: Add omitempty tags to nullable values and use metav1.Duration

**What this PR does / why we need it**:

From @sttts' review of https://github.com/kubernetes/kubernetes/pull/49959, we found some shortcomings of the current JSON tags in the kubeadm config file.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

Note that https://github.com/kubernetes/kubernetes/pull/49959 will not be merged for v1.8, but this at least improves the state in v1.8 without really changing anything.

**Release note**:

```release-note
NONE
```
@kubernetes/sig-api-machinery-pr-reviews
2017-09-03 09:46:28 -07:00
Kubernetes Submit Queue
f24eb1da7c Merge pull request #51803 from deads2k/server-01-authz-evaluation
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)

make url parsing in apiserver configurable

We have known cases where the attributes for a request are assigned differently.  The kubelet is one example.  This makes the value an interface, not a struct, and provides a hook for (non-default) users to override it.
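
A hedged sketch of turning the parsed value into an interface with a pluggable implementation; the names here are stand-ins, not necessarily the actual apiserver types:

```go
package sketch

import "net/http"

// RequestInfo is a minimal stand-in for the parsed request attributes.
type RequestInfo struct {
	Verb     string
	Resource string
}

// RequestInfoResolver makes URL parsing an interface rather than a fixed
// struct, so non-default users (such as the kubelet) can plug in their own
// mapping from an incoming request to attributes.
type RequestInfoResolver interface {
	NewRequestInfo(req *http.Request) (*RequestInfo, error)
}
```
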
2017-09-03 08:46:31 -07:00
Kubernetes Submit Queue
e6070b9632 Merge pull request #51807 from mml/sh-test-two
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)

Depend on //cluster/lib instead of :all-srcs.

Cleanup after #51649

Bug: #51642

```release-note
NONE
```

/assign @ixdy
/assign @roberthbailey
2017-09-03 08:46:28 -07:00
Kubernetes Submit Queue
3a987b0168 Merge pull request #51797 from CaoShuFeng/protobuf
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)

update generated protobuf for audit v1beta1 api

**Release note**:
```release-note
NONE
```
2017-09-03 08:46:26 -07:00
Kubernetes Submit Queue
d970eb8f94 Merge pull request #50875 from ericchiang/oidc-claims-prefix
Automatic merge from submit-queue (batch tested with PRs 50579, 50875, 51797, 51807, 51803)

oidc auth: make the OIDC claims prefix configurable

Add the following flags to control the prefixing of usernames and
groups authenticated using OpenID Connect tokens.

	--oidc-username-prefix
	--oidc-groups-prefix

```release-note
The OpenID Connect authenticator can now use a custom prefix, or omit the default prefix, for username and groups claims through the --oidc-username-prefix and --oidc-groups-prefix flags. For example, the authenticator can map a user with the username "jane" to "google:jane" by supplying the "google:" username prefix.
```

Closes https://github.com/kubernetes/kubernetes/issues/50408
Ref https://github.com/kubernetes/kubernetes/issues/31380

cc @grillz @kubernetes/sig-auth-pr-reviews @thomastaylor312 @gtaylor
2017-09-03 08:46:23 -07:00
Kubernetes Submit Queue
ab27bc9e6e Merge pull request #50579 from erhudy/bugfix/29271-accept-prefixed-namespaces
Automatic merge from submit-queue

Fixes kubernetes/kubernetes#29271: accept prefixed namespaces

**What this PR does / why we need it**: `kubectl get namespaces -o name` outputs the names of all namespaces, prefixed with `namespaces/`. This changeset allows these namespace names to be passed directly back into `kubectl` via the `-n` flag without reprocessing them to remove `namespaces/`.
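
A tiny sketch of the accepted-prefix handling; the helper name is illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// normalizeNamespace accepts either "default" or "namespaces/default", as
// produced by `kubectl get namespaces -o name`, and returns the bare name.
func normalizeNamespace(name string) string {
	return strings.TrimPrefix(name, "namespaces/")
}

func main() {
	fmt.Println(normalizeNamespace("namespaces/kube-system")) // "kube-system"
}
```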

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #29271

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-09-03 08:33:24 -07:00
Lucas Käldström
92c5997b8e
kubeadm: Detect kubelet readiness and error out if the kubelet is unhealthy 2017-09-03 18:02:46 +03:00
Lucas Käldström
d3081ee23d
autogenerated code 2017-09-03 17:26:02 +03:00
Lucas Käldström
b0a17d11e4
kubeadm: Add omitempty tags to nullable values and use metav1.Duration 2017-09-03 17:25:45 +03:00
Dr. Stefan Schimanski
48cba8a44f client-go: fix 'go build ./...'
Test-only directories seem to confuse `go build` and make it fail. We run this as
a smoke test in the GitHub publishing bot.
2017-09-03 16:19:09 +02:00
Kubernetes Submit Queue
75e111ad87 Merge pull request #50864 from mbohlool/update_openapi_aggr
Automatic merge from submit-queue

Improvements to OpenAPI aggregation

Fixes #50863
Fixes #50011
Related: #50896
2017-09-03 06:54:50 -07:00
Kubernetes Submit Queue
b3efdebeb6 Merge pull request #48899 from luxas/kubeadm_easy_upgrades_plan
Automatic merge from submit-queue

Implement the `kubeadm upgrade` command

**What this PR does / why we need it**:

Implements the kubeadm upgrades proposal: https://docs.google.com/document/d/1PRrC2tvB-p7sotIA5rnHy5WAOGdJJOIXPPv23hUFGrY/edit#

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

fixes: https://github.com/kubernetes/kubeadm/issues/14

**Special notes for your reviewer**:

I'm going to split out changes not directly related to the upgrade procedure into separate PRs as dependencies as we go.
**Now ready for review. Please look at `cmd/kubeadm/app/phases/upgrade` first and give feedback on that. The rest kind of follows.**

**Release note**:

```release-note
Implemented `kubeadm upgrade plan` for checking whether you can upgrade your cluster to a newer version
Implemented `kubeadm upgrade apply` for upgrading your cluster from one version to another
```

cc @fabriziopandini @kubernetes/sig-cluster-lifecycle-pr-reviews @craigtracey @mattmoyer
2017-09-03 05:48:23 -07:00
Kubernetes Submit Queue
ea1d10543f Merge pull request #51719 from soltysh/audit_switch_beta
Automatic merge from submit-queue

Switch audit output to v1beta1

This PR adds two switches to pick the preferred version for the webhook and log backends, and it switches to `audit.k8s.io/v1beta1` as the default for both.

@sttts @crassirostris ptal

**Release note**:
```release-note
Switch to audit.k8s.io/v1beta1 in audit.
```
2017-09-03 04:14:09 -07:00
Kubernetes Submit Queue
6b9ce5ba11 Merge pull request #50597 from dixudx/qemu_upgrade_2.9.1
Automatic merge from submit-queue

bump QEMU version to v2.9.1

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
xref #38067

**Special notes for your reviewer**:
/assign @luxas 

**Release note**:

```release-note
update QEMU version to v2.9.1
```
2017-09-03 03:24:53 -07:00
cedric lamoriniere
1dbef2f113
Job failure policy support in JobController
Job failure policy integration in the JobController. From
JobSpec.BackoffLimit the JobController derives the backoff
duration between Job retries.

It uses the `workqueue.RateLimitingInterface` to store the number of
"retries" as "requeues", and the default Job backoff initial duration is set
during the initialization of the `workqueue.RateLimiter`.

Since the number of retries for each job is stored in a local structure
("JobController.queue"), if the JobController restarts the number of retries
will be lost and the backoff duration will be reset to 0.
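
A hedged sketch of the requeue-counting pattern described above, using the client-go workqueue package; the delays and backoff-limit check are illustrative, not the JobController's actual defaults:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/util/workqueue"
)

func main() {
	// Exponential per-item backoff: 10s initial delay, capped at 6 minutes.
	// These values are illustrative assumptions.
	limiter := workqueue.NewItemExponentialFailureRateLimiter(10*time.Second, 6*time.Minute)
	queue := workqueue.NewRateLimitingQueue(limiter)

	key := "default/my-job"
	backoffLimit := 6 // would come from JobSpec.BackoffLimit

	// On a failed sync: either retry with backoff or give up.
	if queue.NumRequeues(key) < backoffLimit {
		queue.AddRateLimited(key) // each requeue increases the delay
	} else {
		queue.Forget(key) // reset the counter; the Job is marked failed elsewhere
		fmt.Println("backoff limit exceeded for", key)
	}
}
```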

Add e2e test for Job backoff failure policy
2017-09-03 12:07:12 +02:00
Lucas Käldström
c575626988
autogenerated bazel 2017-09-03 12:29:03 +03:00
Lucas Käldström
94983530d4
Add unit tests for kubeadm upgrade 2017-09-03 12:26:10 +03:00
Lucas Käldström
c237ff5bc0
Fully implement the kubeadm upgrade functionality 2017-09-03 12:25:47 +03:00
mbohlool
b9eacd0bf5 update bazel
update OpenAPI spec

update staging godeps
2017-09-03 02:18:14 -07:00
Shyam Jeedigunta
ba9e93cb27
Correct CronJob group version at remaining places 2017-09-03 11:17:33 +02:00
Maciej Szulik
6962427b35
Enable batch/v1beta1.CronJobs by default 2017-09-03 11:17:33 +02:00
mbohlool
72ce8773a4 Update Godep for kube-openapi 2017-09-03 02:16:08 -07:00
mbohlool
76e24f216f Consolidate local OpenAPI specs and APIServices' spec into one data structure
Remove APIService OpenAPI spec when it is deleted

Add eTag support and HTTP status returning to the OpenAPI spec downloader

Update aggregated OpenAPI spec periodically

Use delegate chain

Refactor OpenAPI aggregator to have separate controller and aggregation function

Enable OpenAPI spec for extensions api server

Do not filter paths; higher-priority specs win for conflicting paths

Move OpenAPI aggregation controller to pkg/controller/openapi
2017-09-03 02:16:08 -07:00
mbohlool
7cbdb90890 Provide whole delegate chain to kube aggregator 2017-09-03 02:16:08 -07:00
Kubernetes Submit Queue
28857a2f02 Merge pull request #49142 from joelsmith/slowstart
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)

Slow-start batch pod creation of rs, rc, ds, jobs

Prevent overly large replica counts from generating enormous numbers
of events by creating only a few pods at a time, then increasing
the batch size when pod creations succeed. Stop creating batches
of pods when any pod creation errors are encountered.
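
A simplified, stdlib-only sketch of this slow-start batching; the batch sizes, function name, and sequential (non-concurrent) creation are assumptions made for brevity:

```go
package main

import "fmt"

// slowStartBatch creates count pods in batches that double in size
// (1, 2, 4, ...) as long as creations succeed, and stops on the first error.
func slowStartBatch(count, initialBatch int, create func() error) (int, error) {
	created := 0
	for batch := initialBatch; created < count; batch *= 2 {
		if remaining := count - created; batch > remaining {
			batch = remaining
		}
		for i := 0; i < batch; i++ {
			if err := create(); err != nil {
				return created, err // stop creating batches on any error
			}
			created++
		}
	}
	return created, nil
}

func main() {
	n, err := slowStartBatch(10, 1, func() error { return nil })
	fmt.Println(n, err) // 10 <nil>
}
```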

Todo:

- [x] Add automated tests
- [x] Test ds

Fixes https://github.com/kubernetes/kubernetes/issues/49145

**Release note**:
```release-note
controllers back off better in the face of quota denial
```
2017-09-03 01:12:14 -07:00
Kubernetes Submit Queue
fc87bba2dd Merge pull request #51748 from smarterclayton/events_inline
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)

Simplify describe events table

The describe table for events is not easy to read and violates other
output guidelines. Change to use spaces (we don't use tabs in formal
output for tables). Remove columns that are not normally needed or
available on events.

Example for pods:

```
...
QoS Class:       BestEffort
Node-Selectors:  role=app
Tolerations:     <none>
Events:
  Type     Reason      Age                 From                         Message
  ----     ------      ----                ----                         -------
  Normal   Pulling     1h (x51 over 5h)    kubelet, origin-ci-ig-n-gj0x pulling image "registry.svc.ci.openshift.org/experiment/commenter:latest"
  Normal   BackOff     8m (x1274 over 5h)  kubelet, origin-ci-ig-n-gj0x Back-off pulling image "registry.svc.ci.openshift.org/experiment/commenter:latest"
  Warning  FailedSync  3m (x1359 over 5h)  kubelet, origin-ci-ig-n-gj0x Error syncing pod
```

Puts the type first (to separate important from unimportant), then the reason (which is
the most useful field to scan). Collapses first seen, last seen, and
count into a single field, since most of the time you care about the
last time the event happened, not the first time.

@kubernetes/sig-cli-pr-reviews sorry for the last minute drop, but the usability of this is driving me up the wall and I can't take it anymore. Would like to slip this into 1.8 so that I can debug things without dying a little inside.

Fixes #47715

```release-note
The event table output under `kubectl describe` has been simplified to show only the most essential info.
```
2017-09-03 01:12:12 -07:00
Kubernetes Submit Queue
1d43050372 Merge pull request #51703 from deads2k/discovery-02-scale
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)

expose discovery information on scalable resources

Builds on https://github.com/kubernetes/kubernetes/pull/49971 and provides the GroupVersion information that can be used by a dynamic scale client.

@kubernetes/sig-api-machinery-pr-reviews 
@foxish @DirectXMan12 since you both asked for it.
2017-09-03 01:12:09 -07:00
Kubernetes Submit Queue
9ad2bd0f7f Merge pull request #51561 from cheftako/getzone
Automatic merge from submit-queue (batch tested with PRs 50602, 51561, 51703, 51748, 49142)

Implement GetZoneByProviderID & GetZoneByNodeName

Adding an implementation of GetZoneByProviderID & GetZoneByNodeName for
GCE.
This is related to ticket 50926.
This was tested as part of the ongoing separate GCE cloud provider work.

**What this PR does / why we need it**: It implements GCE methods needed by the cloud provider work.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #50926 

**Special notes for your reviewer**: Tested with pull/50811

**Release note**:
<!--  Steps to write your release note:
```release-note NONE
```
2017-09-03 01:12:07 -07:00