Commit Graph

48737 Commits

Author SHA1 Message Date
Kubernetes Submit Queue
26d7ee0447 Merge pull request #44774 from kargakis/uniquifier
Automatic merge from submit-queue

Switch Deployments to new hashing algo w/ collision avoidance mechanism

Implements https://github.com/kubernetes/community/pull/477

@kubernetes/sig-apps-api-reviews @kubernetes/sig-apps-pr-reviews 

Fixes https://github.com/kubernetes/kubernetes/issues/29735
Fixes https://github.com/kubernetes/kubernetes/issues/43948

```release-note
Deployments now use (1) a more stable hashing algorithm (fnv, replacing adler) and (2) a hash collision avoidance mechanism that ensures new rollouts no longer block on hash collisions.
```
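
As an illustration of the mechanism, here is a minimal Go sketch (hypothetical helper, not the controller's actual code): the pod-template hash is computed with fnv and salted with the Deployment's `status.collisionCount`, so bumping the count after a detected collision produces a new, non-conflicting hash on the next sync.

```go
package main

import (
	"encoding/binary"
	"fmt"
	"hash/fnv"
)

// computeHash is a hypothetical helper; the real controller hashes the
// serialized PodTemplateSpec rather than a plain string.
func computeHash(template string, collisionCount *int32) string {
	hasher := fnv.New32a()
	hasher.Write([]byte(template))

	// Salt the hash with the collision count so that, after a collision is
	// recorded in DeploymentStatus, the next sync produces a different hash.
	if collisionCount != nil {
		buf := make([]byte, 8)
		n := binary.PutVarint(buf, int64(*collisionCount))
		hasher.Write(buf[:n])
	}
	return fmt.Sprintf("%x", hasher.Sum32())
}

func main() {
	count := int32(1)
	fmt.Println(computeHash("nginx:1.13", nil))    // hash without a collision salt
	fmt.Println(computeHash("nginx:1.13", &count)) // different hash after one collision
}
```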
2017-05-25 06:09:58 -07:00
Michail Kargakis
9190a47c37
Generated changes for collision count
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 12:23:17 +02:00
Kubernetes Submit Queue
9c1480bb61 Merge pull request #46366 from nicksardo/gce-subnetwork-url
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

GCE - Retrieve subnetwork name/url from gce.conf 

**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is of type manual (custom subnet mode).

**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not available through that metadata. Users must specify the subnetwork name/URL in gce.conf.

Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnet url for creating LBs. 


**Release note**:
```release-note
NONE
```
2017-05-25 03:14:05 -07:00
Kubernetes Submit Queue
8f9f412d2f Merge pull request #46162 from lixiaobing10051267/masterFound
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

break the loop when found true

Break out of the loop as soon as the target is found, rather than continuing to iterate.
2017-05-25 03:14:03 -07:00
Kubernetes Submit Queue
0cd76e1716 Merge pull request #46376 from perotinus/sacrb
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

[Federation] Uniquify the ClusterRole and ClusterRoleBinding names created by `kubefed join`

This allows a cluster to be in multiple federations.

**Release note**:
```release-note
NONE
```
2017-05-25 03:14:01 -07:00
Kubernetes Submit Queue
23348ceedc Merge pull request #46354 from smarterclayton/metrics_subresource
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

Subresources are not included in apiserver prometheus metrics

Subresources are very often completely different code paths and errors
generated on those code paths are important to distinguish.

@kubernetes/sig-api-machinery-pr-reviews

```release-note
The kube-apiserver Prometheus metrics for tracking incoming API requests and latencies now include the `subresource` label, so the type of API call is attributed correctly.
```
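
A minimal Go sketch of the idea (the metric name and label set here are illustrative, not the apiserver's exact definitions): with a `subresource` label, requests to e.g. `pods/status` are counted separately from requests to `pods`.

```go
package main

import (
	"fmt"

	"github.com/prometheus/client_golang/prometheus"
)

var requestCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "apiserver_request_count", // illustrative name
		Help: "Counter of apiserver requests broken out by verb, resource and subresource.",
	},
	[]string{"verb", "resource", "subresource"},
)

func main() {
	prometheus.MustRegister(requestCounter)

	// A GET on pods and an update of the pods/status subresource now land in
	// different series instead of being lumped together under "pods".
	requestCounter.WithLabelValues("GET", "pods", "").Inc()
	requestCounter.WithLabelValues("PUT", "pods", "status").Inc()

	fmt.Println("metrics recorded")
}
```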
2017-05-25 03:13:59 -07:00
Kubernetes Submit Queue
4234d79aca Merge pull request #45573 from shiywang/zh
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)

Add Simplified Chinese translation for kubectl

What this PR does / why we need it:
This PR is a first attempt at translating kubectl into Simplified Chinese.

Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes #
No issues

Special notes for your reviewer:
Although I'm a native Mandarin Chinese speaker, translation is a whole different skill that I'm not particularly good at, so this PR definitely needs to be polished.
@adohe @mengqiy @resouer @k82cn @caesarxuchao @wanghaoran1988 sorry, there are many other folks who are good at Chinese whom I haven't mentioned; feel free to leave a comment : )
also cc @brendandburns
2017-05-25 03:13:57 -07:00
Michail Kargakis
4a2c5eae92
Implement hash collision avoidance mechanism
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:45 +02:00
Michail Kargakis
aeb2d9b9b4
Deep equality helper should not mutate state
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:45 +02:00
Michail Kargakis
fcf68ba7a7
Remove obsolete deployment helpers
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:44 +02:00
Michail Kargakis
4aa8b1a66a
Add collisionCount api field in DeploymentStatus
Signed-off-by: Michail Kargakis <mkargaki@redhat.com>
2017-05-25 11:17:44 +02:00
Kubernetes Submit Queue
4def5add11 Merge pull request #46373 from deads2k/controller-06-queue
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

don't queue namespaces for deletion if the namespace isn't deleted

Most namespaces aren't deleted most of the time.  No need to queue them for cleanup if they aren't deleted.
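
A minimal Go sketch of the check (handler and queue names are assumed): only namespaces whose `deletionTimestamp` is set get enqueued for cleanup.

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// enqueue is a stand-in for adding the namespace to the controller workqueue.
func enqueue(ns *v1.Namespace) {
	fmt.Printf("queued %s for content deletion\n", ns.Name)
}

func handleNamespaceUpdate(ns *v1.Namespace) {
	// Most namespaces are not being deleted most of the time; skip them
	// instead of queuing no-op cleanup work.
	if ns.DeletionTimestamp == nil {
		return
	}
	enqueue(ns)
}

func main() {
	now := metav1.Now()
	handleNamespaceUpdate(&v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "live"}})
	handleNamespaceUpdate(&v1.Namespace{ObjectMeta: metav1.ObjectMeta{Name: "dying", DeletionTimestamp: &now}})
}
```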
2017-05-25 00:11:07 -07:00
Kubernetes Submit Queue
d84f3f4b7e Merge pull request #46363 from MrHohn/fix-CheckPodsCondition
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Fix CheckPodsCondition to print out the correct podName

A couple of CI runs (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/1114, https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gci-qa-serial-master/2246, https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-pre-release/2187) all indicate that we print the wrong pod name in CheckPodsCondition for _"Pod XXX failed to be running and ready, or succeeded."_:
```
I0524 02:09:50.173] May 24 02:09:50.173: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m55.033881993s elapsed)
I0524 02:09:52.178] May 24 02:09:52.178: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m57.03848264s elapsed)
I0524 02:09:54.183] May 24 02:09:54.182: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m59.043463323s elapsed)
I0524 02:09:56.183] May 24 02:09:56.183: INFO: Pod fluentd-gcp-v2.0-6wf67 failed to be running and ready, or succeeded.
I0524 02:09:56.184] May 24 02:09:56.183: INFO: Wanted all 23 pods to be running and ready, or succeeded. Result: false. Pods: [heapster-v1.3.0-3806988011-kzkg6 kube-proxy-bootstrap-e2e-minion-group-bbwn rescheduler-v0.3.0-bootstrap-e2e-master monitoring-influxdb-grafana-v4-1q59k l7-default-backend-1044750973-zgxsc etcd-server-events-bootstrap-e2e-master kube-apiserver-bootstrap-e2e-master kube-proxy-bootstrap-e2e-minion-group-6nqb kube-proxy-bootstrap-e2e-minion-group-mzbz fluentd-gcp-v2.0-chd2x kube-dns-806549836-f8p46 fluentd-gcp-v2.0-44x97 kube-dns-autoscaler-2528518105-vlg8t fluentd-gcp-v2.0-p1h4b kube-controller-manager-bootstrap-e2e-master l7-lb-controller-v0.9.3-bootstrap-e2e-master kubernetes-dashboard-2917854236-tn3nx kube-dns-806549836-fq2fp kube-scheduler-bootstrap-e2e-master etcd-empty-dir-cleanup-bootstrap-e2e-master kube-addon-manager-bootstrap-e2e-master etcd-server-bootstrap-e2e-master fluentd-gcp-v2.0-6wf67]
I0524 02:09:56.184] May 24 02:09:56.183: INFO: At least one pod wasn't running and ready or succeeded at test start.
I0524 02:09:56.184] [AfterEach] [k8s.io] Restart [Disruptive]
```

Checked the code and found that we always print the last pod name, which is effectively random. Fixed by passing the pod name through the channel along with its result, as sketched below.
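
A minimal Go sketch of the bug and the fix (simplified, with assumed names, not the e2e helper itself): the log line used to read the loop variable, which by the time results arrived held the last pod's name; sending the name through the channel with its result ties the message to the right pod.

```go
package main

import "fmt"

type podResult struct {
	name    string
	success bool
}

func checkPodsCondition(podNames []string) {
	results := make(chan podResult, len(podNames))
	for _, name := range podNames {
		go func(name string) { // pass the name in; don't capture the loop variable
			results <- podResult{name: name, success: name != "fluentd-gcp-v2.0-6wf67"}
		}(name)
	}
	for range podNames {
		r := <-results
		if !r.success {
			// Correct name: it arrived with the result, not from the loop variable.
			fmt.Printf("Pod %s failed to be running and ready, or succeeded.\n", r.name)
		}
	}
}

func main() {
	checkPodsCondition([]string{"heapster-v1.3.0-3806988011-kzkg6", "fluentd-gcp-v2.0-6wf67"})
}
```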

**Release note**:

```release-note
NONE
```
2017-05-25 00:11:05 -07:00
Kubernetes Submit Queue
f5bdd61b12 Merge pull request #46352 from humblec/gluster-mount-4
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Don't exit if 'mount.glusterfs -V' results in an error.
2017-05-25 00:11:03 -07:00
Kubernetes Submit Queue
74f501935b Merge pull request #46065 from timstclair/audit-api
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Update audit API with missing pieces

Follow-up to https://github.com/kubernetes/kubernetes/pull/45315 to resolve pending decisions & issues, including:

- Audit ID format
- Identifying audit event "stage"
- Request/Response object format (resolve conversion issue)
- Add a subresource field to the `ObjectReference`

For https://github.com/kubernetes/features/issues/22

~~TODO: Add generated code once we've reached consensus on the types.~~

/cc @deads2k @ihmccreery @sttts @soltysh @ericchiang
2017-05-25 00:11:01 -07:00
Kubernetes Submit Queue
fe5b303365 Merge pull request #45913 from enj/enj/t/etcd_cohabitating_resources
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)

Detect cohabitating resources in etcd storage test

**What this PR does / why we need it**:

This change updates the etcd storage path test to detect cohabitating resources by looking at their expected location in etcd.  This was not detected in the past because the GVK check did not span across groups.

To limit noise from failures caused by multiple objects at the same location in etcd, the test now fails when different GVRs share the same expected path.  Thus every object is expected to have a unique path.
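
A minimal Go sketch of the uniqueness check (assumed names, simplified): group GVRs by their expected etcd path and flag any path claimed by more than one GVR.

```go
package main

import "fmt"

// findCohabitants returns every etcd path that more than one GVR maps to.
func findCohabitants(pathsByGVR map[string]string) map[string][]string {
	gvrsByPath := make(map[string][]string)
	for gvr, path := range pathsByGVR {
		gvrsByPath[path] = append(gvrsByPath[path], gvr)
	}
	cohabitants := make(map[string][]string)
	for path, gvrs := range gvrsByPath {
		if len(gvrs) > 1 {
			cohabitants[path] = gvrs // more than one GVR stored at this path
		}
	}
	return cohabitants
}

func main() {
	paths := map[string]string{
		"extensions/v1beta1/deployments": "/registry/deployments/testnamespace/deployment1",
		"apps/v1beta1/deployments":       "/registry/deployments/testnamespace/deployment1",
		"v1/pods":                        "/registry/pods/testnamespace/pod1",
	}
	fmt.Println(findCohabitants(paths)) // the two deployment GVRs cohabitate
}
```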

@liggitt PTAL

Signed-off-by: Monis Khan <mkhan@redhat.com>

**Release note**:

```
NONE
```
2017-05-25 00:10:59 -07:00
Kubernetes Submit Queue
80171e5106 Merge pull request #46150 from bowei/ip-alias-service
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Create a subnet for reserving the service cluster IP range

This will be done if IP aliases is enabled on GCP.

```release-note
NONE
```
2017-05-24 23:19:11 -07:00
Kubernetes Submit Queue
ed8843406e Merge pull request #46303 from Random-Liu/fix-cos-image-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Fix cos image project to cos-cloud.

Addressed https://github.com/kubernetes/kubernetes/pull/45136#discussion_r118092211.

@vishh @yujuhong @dchen1107
2017-05-24 23:19:09 -07:00
Kubernetes Submit Queue
8d88c55231 Merge pull request #46311 from dashpole/disable_ubuntu_gpu_test
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Don't attach a GPU to Ubuntu test machines for node e2e serial tests

This should fix flakes in the e2e_node serial suite.

@vishh I think this is what you were asking for...

/assign @vishh
2017-05-24 23:19:07 -07:00
Kubernetes Submit Queue
b71ca6691b Merge pull request #46309 from Random-Liu/move-docker-validation-to-separate-project
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Move docker validation test to separate project.

Docker validation test is leaking VMs because the new docker version `DOCKER_VERSION=17.05.0-c` completely breaks the new gci image `GCE_IMAGES=gci-test-60-9579-0-0` when the `gci-docker-version` metadata is specified.

The test successfully created the instance, but timed out when checking VM aliveness, and leaked the VM.

I've cleaned up all leaked VMs. This PR moves the docker validation node e2e test into a separate project so that it does not affect other node e2e tests.

@kewu1992 We should fix the docker automated validation test.

/cc @dchen1107 @yujuhong @abgworrall
2017-05-24 23:19:05 -07:00
Kubernetes Submit Queue
3c2e6a9f4d Merge pull request #46299 from ncdc/fix-DirectClientConfig-Namespace-override
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)

Fix in-cluster kubectl --namespace override

**What this PR does / why we need it**:
Before this change, if the config was empty, ConfirmUsable() would
return an "invalid configuration" error instead of examining and
honoring the value of the --namespace flag. This change looks at the
overrides first, and returns the overridden value if it exists before
attempting to check if the config is usable. This is most applicable to
in-cluster clients, which don't have a kubeconfig but do have a token and
can use KUBERNETES_SERVICE_HOST/_PORT.
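
A minimal Go sketch of the ordering change (assumed names, simplified from client-go's clientcmd): the override is consulted before the config is validated, so an otherwise-empty in-cluster config can still honor `--namespace`.

```go
package main

import (
	"errors"
	"fmt"
)

type clientConfig struct {
	overrideNamespace string // value of --namespace, if any
	kubeconfigValid   bool
}

func (c *clientConfig) confirmUsable() error {
	if !c.kubeconfigValid {
		return errors.New("invalid configuration: no configuration has been provided")
	}
	return nil
}

// Namespace returns the namespace to use and whether it was overridden.
func (c *clientConfig) Namespace() (string, bool, error) {
	// Check the override first; previously this ran after confirmUsable(),
	// so an empty config failed before the flag was ever considered.
	if c.overrideNamespace != "" {
		return c.overrideNamespace, true, nil
	}
	if err := c.confirmUsable(); err != nil {
		return "", false, err
	}
	return "default", false, nil
}

func main() {
	inCluster := &clientConfig{overrideNamespace: "kube-system"}
	ns, overridden, err := inCluster.Namespace()
	fmt.Println(ns, overridden, err) // kube-system true <nil>
}
```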

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
The --namespace flag is now honored for in-cluster clients that have an empty configuration.
```

@kubernetes/sig-api-machinery-pr-reviews @fabianofranz @liggitt @deads2k @smarterclayton @caesarxuchao @soltysh
2017-05-24 23:18:59 -07:00
Kubernetes Submit Queue
cbd6b25c1c Merge pull request #46207 from zjj2wry/spea-space
Automatic merge from submit-queue

/pkg/client/listers: fix some typos

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-24 20:39:00 -07:00
Kubernetes Submit Queue
9812856088 Merge pull request #45317 from ericchiang/oidc-client-update
Automatic merge from submit-queue

oidc client plugin: reduce round trips and fix scopes requested

This PR attempts to simplify the OpenID Connect client plugin to
reduce round trips. The steps taken by the client are now:

* If ID Token isn't expired:
   * Do nothing.
* If ID Token is expired:
   * Query /.well-known discovery URL to find token_endpoint.
   * Use an OAuth2 client and refresh token to request new ID token.

This avoids the previous pattern of always initializing a client,
which would hit the /.well-known endpoint several times.

The client no longer does token validation since the server already
does this. As a result, this code no longer imports
github.com/coreos/go-oidc, instead just using golang.org/x/oauth2
for refreshing.
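
A minimal Go sketch of the refresh path (assumed names, not the plugin's actual code): discovery is queried only when a refresh is actually needed, and golang.org/x/oauth2 is used to exchange the refresh token for a new ID token.

```go
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

func refreshIDToken(issuer, clientID, clientSecret, refreshToken string) (string, error) {
	// Hit the discovery document only when the cached ID token has expired.
	resp, err := http.Get(issuer + "/.well-known/openid-configuration")
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	var discovery struct {
		TokenEndpoint string `json:"token_endpoint"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&discovery); err != nil {
		return "", err
	}

	conf := oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		Endpoint:     oauth2.Endpoint{TokenURL: discovery.TokenEndpoint},
	}
	// Exchange the refresh token; the resulting ID token is validated server-side.
	tok, err := conf.TokenSource(context.Background(), &oauth2.Token{RefreshToken: refreshToken}).Token()
	if err != nil {
		return "", err
	}
	idToken, _ := tok.Extra("id_token").(string)
	return idToken, nil
}

func main() {
	// Illustrative values only; this will fail unless pointed at a real issuer.
	fmt.Println(refreshIDToken("https://example-issuer", "client", "secret", "refresh-token"))
}
```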

Overall reduction in tests because we're not verifying as many things
on the client side. For example, we're no longer validating the
id_token signature (again, because it's being done on the server
side).

This has been manually tested against dex, and I hope to continue
to test this over the 1.7 release cycle.

cc @mlbiam @frodenas @curtisallen @jsloyer @rithujohn191 @philips @kubernetes/sig-auth-pr-reviews 

```release-note
NONE
```

Updates https://github.com/kubernetes/kubernetes/issues/42654
Closes https://github.com/kubernetes/kubernetes/issues/37875
Closes https://github.com/kubernetes/kubernetes/issues/37874
2017-05-24 19:49:26 -07:00
Kubernetes Submit Queue
ee0de5f376 Merge pull request #46268 from jianglingxia/jlx523
Automatic merge from submit-queue

fix the invalid link

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
```
2017-05-24 16:17:23 -07:00
Kubernetes Submit Queue
aeeadb0c03 Merge pull request #46329 from zjj2wry/DeamonSet-DaemonSet
Automatic merge from submit-queue

DeamonSet-DaemonSet

**What this PR does / why we need it**:

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #

**Special notes for your reviewer**:

**Release note**:

```release-note
NONE
```
2017-05-24 16:17:15 -07:00
Kubernetes Submit Queue
de1ebf8118 Merge pull request #44443 from jamiehannaford/kubelet-tc
Automatic merge from submit-queue

Bump kubelet/networks test coverage

**What this PR does / why we need it**:

Bumps test coverage

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/40780
https://github.com/kubernetes/kubernetes/issues/39559

**Special notes for your reviewer**:

Writing positive test cases for these lines:
https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/networks.go#L38 https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/networks.go#L69 
is quite difficult, so the former has a negative case and the latter has no test coverage.

**Release note**:
```release-note
New tests for kubelet/networks
```
2017-05-24 16:17:08 -07:00
Kubernetes Submit Queue
89a76b8c8b Merge pull request #46128 from jagosan/master
Automatic merge from submit-queue

Added deprecation notice and guidance for cloud providers.

**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers. 

**Which issue this PR fixes** 

**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
2017-05-24 14:19:01 -07:00
Kubernetes Submit Queue
c1d6439fe3 Merge pull request #46262 from xilabao/fix-message-in-storage-extensions
Automatic merge from submit-queue

fix err message in storage extensions

**Release note**:

```release-note
`NONE`
```
2017-05-24 14:18:53 -07:00
Kubernetes Submit Queue
b3181ec2f3 Merge pull request #46305 from sjenning/init-container-status
Automatic merge from submit-queue

clear init container status annotations when cleared in status

When a pod with an init container is terminated because it exceeded its active deadline, the pod status is phase `Failed` with reason `DeadlineExceeded`, and all container statuses are cleared from the pod status.

With init containers, however, the status is regenerated from the status annotations. This causes kubectl to report the pod state as `Init:0/1` instead of `DeadlineExceeded`, because the kubectl printer observes an init container that appears to be running but in reality is not.

This PR clears out the init container status annotations when they have been removed from the pod status so they are not regenerated on the apiserver.
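
A minimal Go sketch of the idea (the annotation key and helper are illustrative, not the kubelet's actual code): when the pod status carries no init container statuses, the corresponding annotation is dropped instead of being left around to be regenerated.

```go
package main

import "fmt"

// Hypothetical annotation key, used here only for illustration.
const initContainerStatusAnnotation = "pod.beta.kubernetes.io/init-container-statuses"

// syncStatusAnnotations drops the init container status annotation when the
// pod status no longer carries init container statuses, so stale data cannot
// be regenerated on the apiserver.
func syncStatusAnnotations(annotations map[string]string, initContainerStatuses []string) {
	if len(initContainerStatuses) == 0 {
		delete(annotations, initContainerStatusAnnotation)
	}
}

func main() {
	ann := map[string]string{initContainerStatusAnnotation: `[{"name":"init","state":{"running":{}}}]`}
	syncStatusAnnotations(ann, nil) // status was cleared, e.g. DeadlineExceeded
	fmt.Println(ann)                // map[]
}
```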

xref https://bugzilla.redhat.com/show_bug.cgi?id=1453180

@derekwaynecarr 

```release-note
Fix init container status reporting when active deadline is exceeded.
```
2017-05-24 14:18:45 -07:00
Clayton Coleman
ad431c454c
Subresources are not included in apiserver prometheus metrics
Subresources are very often completely different code paths and errors
generated on those code paths are important to distinguish.
2017-05-24 16:23:50 -04:00
Jonathan MacMillan
748ea1109d [Federation] Uniquify the ClusterRole and ClusterRoleBinding names created by `kubefed join`. 2017-05-24 12:04:16 -07:00
deads2k
ba5a1113e6 don't queue namespaces for deletion if the namespace isn't deleted 2017-05-24 14:47:53 -04:00
Nick Sardo
e7ee3913d7 Add subnetworkUrl param to e2e 2017-05-24 10:54:51 -07:00
Nick Sardo
68e7e18698 Set NODE_SUBNETWORK env var in gce.conf 2017-05-24 10:23:08 -07:00
Zihong Zheng
03d08623e8 Fix CheckPodsCondition to print out the correct podName 2017-05-24 10:20:57 -07:00
Nick Sardo
435303c647 Add subnetworkURL to GCE provider 2017-05-24 09:35:51 -07:00
Humble Chirammal
55808add37 Don't exit if 'mount.glusterfs -V' results in an error.
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
2017-05-24 21:07:58 +05:30
Tim St. Clair
4c54970d31
Update existing code for audit API changes 2017-05-24 07:45:19 -07:00
Kubernetes Submit Queue
6f7eac63c2 Merge pull request #46315 from wongma7/gcepdalready
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Fix provisioned GCE PD not being reused if already exists

@jsafrane PTAL 

This is another attempt at https://github.com/kubernetes/kubernetes/pull/38702 . We have observed that `gce.service.Disks.Insert(gce.projectID, zone, diskToCreate).Do()` instantly gets an error response of alreadyExists, so we must check for it.

I am not sure if we still need to check for the error after `waitForZoneOp`; I think that if there is an alreadyExists error, the `Do()` above will always respond with it instantly. But because I'm not sure, and to be safe, I will leave it.
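
A minimal Go sketch of the check (helper names are assumed): an `alreadyExists` error from the disk Insert call is treated as success, so the existing PD is reused.

```go
package main

import (
	"errors"
	"fmt"

	"google.golang.org/api/googleapi"
)

// isGCEErrorReason reports whether err is a googleapi.Error carrying the
// given reason, e.g. "alreadyExists".
func isGCEErrorReason(err error, reason string) bool {
	var apiErr *googleapi.Error
	if !errors.As(err, &apiErr) {
		return false
	}
	for _, e := range apiErr.Errors {
		if e.Reason == reason {
			return true
		}
	}
	return false
}

// createDisk wraps an Insert call and treats "alreadyExists" as success.
func createDisk(insert func() error) error {
	if err := insert(); err != nil {
		if isGCEErrorReason(err, "alreadyExists") {
			return nil // the disk already exists; reuse it
		}
		return err
	}
	return nil
}

func main() {
	already := &googleapi.Error{Code: 409, Errors: []googleapi.ErrorItem{{Reason: "alreadyExists"}}}
	fmt.Println(createDisk(func() error { return already })) // <nil>
}
```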
2017-05-24 06:47:03 -07:00
Kubernetes Submit Queue
70dd10cc50 Merge pull request #41785 from jamiehannaford/cinder-performance
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

Only retrieve relevant volumes

**What this PR does / why we need it**:

Improves performance for Cinder volume attach/detach calls. 

Currently when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some functions only have the volume name, not its UUID, they use the list function in gophercloud to iterate over all volumes to find a match. This incurs severe performance problems in OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way of doing this is to use the `?name=XXX` query parameter to refine the results.
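
A minimal Go sketch of the change (assumed names, simplified; building the authenticated client is omitted): Cinder is asked to filter by name server-side instead of the caller paging through every volume.

```go
package main

import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
)

// getVolumeByName asks Cinder to filter by name, rather than listing every
// volume in the project and matching client-side.
func getVolumeByName(client *gophercloud.ServiceClient, name string) (*volumes.Volume, error) {
	pages, err := volumes.List(client, volumes.ListOpts{Name: name}).AllPages()
	if err != nil {
		return nil, err
	}
	vols, err := volumes.ExtractVolumes(pages)
	if err != nil {
		return nil, err
	}
	if len(vols) == 0 {
		return nil, fmt.Errorf("volume %q not found", name)
	}
	return &vols[0], nil
}

func main() {
	// Constructing an authenticated block storage ServiceClient is omitted here.
	fmt.Println("see getVolumeByName above")
}
```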

**Which issue this PR fixes**:

https://github.com/kubernetes/kubernetes/issues/26404

**Special notes for your reviewer**:

There were 2 ways of addressing this problem:

1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.

Since option 1 effectively achieves the same result as option 2, I went with it because it ensures backwards compatibility.

One assumption that is made is that the `volumeName` being retrieved matches exactly the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily. 

**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
2017-05-24 06:46:59 -07:00
Kubernetes Submit Queue
2bc097b066 Merge pull request #38505 from pospispa/260-finish-aws-provisioner-parse-pvc-selector-dynamic-provision-first-part-including-GCE-changes-StorageClass-zones-part-back-in-history-to-test-it-on-AWS
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)

GCE and AWS provisioners, dynamic provisioning: admins can configure zone(s) where PVs shall be created

Zone configuration capabilities for GCE and AWS dynamic provisioners are extended.
Admins can configure in a storage class a comma separated list of allowed zone(s).

Partly fixes Trello cards:
- [GCE provisioner, parse pvc.Selector](https://trello.com/c/CyemTzsK/259-finish-gce-provisioner-parse-pvc-selector-dynamic-provision)
- [AWS provisioner, parse pvc.Selector](https://trello.com/c/2XjouSWw/260-finish-aws-provisioner-parse-pvc-selector-dynamic-provision)

```release-note
GCE and AWS dynamic provisioners extension: admins can configure zone(s) in which a persistent volume shall be created.
```

cc: @jsafrane
2017-05-24 06:46:58 -07:00
Kubernetes Submit Queue
54f6688174 Merge pull request #46213 from xiao-zhou/extention-api
Automatic merge from submit-queue

Add test for cross namespace watch and list

**What this PR does / why we need it**: Add more integration tests for kube-apiextensions-server

**Which issue this PR fixes** : fixes https://github.com/kubernetes/kubernetes/issues/45511

**Special notes for your reviewer**: A client with cluster scope also works, but that case seems trivial

@deads2k
2017-05-24 05:29:41 -07:00
Jamie Hannaford
4bd71a3b77 Refactor to use Volume IDs and remove ambiguity 2017-05-24 12:59:16 +02:00
Kubernetes Submit Queue
7c76e3994c Merge pull request #46101 from sttts/sttts-crd-core-names
Automatic merge from submit-queue

apiextensions: add Established condition

This introduces an `Established` condition on `CustomResourceDefinition`s. `Established` means that the resource has become active. A resource is established when all names are accepted initially without a conflict. A resource stays established until deleted, even during a later NameConflict due to changed names. Note that not all names can be changed.

This change is necessary to allow deletion of once-active CRDs which might still have instances but now have NameConflicts. Before this PR, the REST endpoint was no longer active in this case, making deletion of the instances impossible.
2017-05-24 02:13:32 -07:00
pospispa
9eb912e62f Admin Can Specify in Which AWS Availability Zone(s) a PV Shall Be Created
An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.

That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically provisioned PVs for PVCs that use the StorageClass are created in one of the configured zones.
2017-05-24 10:48:11 +02:00
pospispa
d73c0d649d Admin Can Specify in Which GCE Availability Zone(s) a PV Shall Be Created
An admin wants to specify in which GCE availability zone(s) users may create persistent volumes using dynamic provisioning.

That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically provisioned PVs for PVCs that use the StorageClass are created in one of the configured zones.
2017-05-24 10:48:10 +02:00
pospispa
dd17d620d7 Added func ValidateZone
The zone parameter provided in a StorageClass may erroneously be an empty string or contain only spaces and tab characters. Such a situation shall be detected and reported as an error.

That's why the func ValidateZone was added.
2017-05-24 10:48:10 +02:00
pospispa
0f3a9cfc5f Added func ZonesToSet
An admin shall be able to configure a comma-separated list of zones for a StorageClass.

That's why the func ZonesToSet (string) (set.String, error) is added. ZonesToSet converts a string containing a comma-separated list of zones into a set. If the list contains an empty zone, an error is returned.
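
A minimal Go sketch of such a helper (simplified; the real func returns `set.String` from apimachinery's sets package rather than a plain map):

```go
package main

import (
	"fmt"
	"strings"
)

// zonesToSet splits a comma-separated zone list, trims whitespace, and
// rejects empty entries.
func zonesToSet(zones string) (map[string]struct{}, error) {
	set := make(map[string]struct{})
	for _, zone := range strings.Split(zones, ",") {
		trimmed := strings.TrimSpace(zone)
		if trimmed == "" {
			return nil, fmt.Errorf("comma-separated list of zones %q contains an empty zone", zones)
		}
		set[trimmed] = struct{}{}
	}
	return set, nil
}

func main() {
	fmt.Println(zonesToSet("us-east-1a, us-east-1b")) // two zones, no error
	fmt.Println(zonesToSet("us-east-1a,,us-east-1c")) // error: empty zone
}
```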
2017-05-24 10:48:10 +02:00
zhengjiajin
550a834bf1 DeamonSet-DaemonSet 2017-05-24 16:06:34 +08:00
Kubernetes Submit Queue
c1c7365e7c Merge pull request #46147 from nicksardo/gce-cluster-id
Automatic merge from submit-queue (batch tested with PRs 45891, 46147)

Watching ClusterId from within GCE cloud provider

**What this PR does / why we need it**:
Adds the ability for the GCE cloud provider to watch a config map for `clusterId` and `providerId`.

WIP - still needs more testing

cc @MrHohn @csbell @madhusudancs @thockin @bowei @nikhiljindal 

**Release note**:
```release-note
NONE
```
2017-05-24 00:42:58 -07:00