Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
GCE - Retrieve subnetwork name/url from gce.conf
**What this PR does / why we need it**:
Features like ILB require specifying the subnetwork if the network is in manual (custom) subnet mode.
**Notes:**
The network URL can be [constructed](68e7e18698/pkg/cloudprovider/providers/gce/gce.go (L211-L217)) by fetching instance metadata; however, the subnetwork is not exposed through instance metadata. Users must specify the subnetwork name/URL through the gce.conf.
Although multiple subnets can exist in the same region for a network, the cloud provider will only use one subnetwork URL for creating LBs.
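For illustration, a minimal gce.conf sketch (the `[Global]` section follows the existing config format; treat the exact `subnetwork-name` field name as an assumption):
```
[Global]
project-id = my-project
network-name = my-network
; assumed field name; a URL variant (e.g. subnetwork-url) would be analogous
subnetwork-name = my-subnetwork
```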
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
Break out of the loop once the result is found to be true
Stop iterating as soon as the value is found to be true; scanning the remaining items is unnecessary work.
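A minimal Go sketch of the pattern (illustrative only, not the actual call site):
```go
// contains sets the flag and stops iterating immediately; examining the
// remaining items after a match is found is wasted work.
func contains(items []string, target string) bool {
	found := false
	for _, item := range items {
		if item == target {
			found = true
			break
		}
	}
	return found
}
```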
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
[Federation] Uniquify the ClusterRole and ClusterRoleBinding names created by `kubefed join`
This allows a cluster to be in multiple federations.
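One plausible naming scheme, sketched below; the exact format string is an assumption, not necessarily what `kubefed` emits:
```go
import "fmt"

// clusterRoleName scopes the RBAC name by federation as well as joining
// cluster, so a second federation's `kubefed join` cannot collide.
// Hypothetical helper for illustration only.
func clusterRoleName(federationName, joiningClusterName string) string {
	return fmt.Sprintf("federation-controller-manager:%s-%s", federationName, joiningClusterName)
}
```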
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
Subresources are not included in apiserver prometheus metrics
Subresources very often take completely different code paths, and the errors
generated on those code paths are important to distinguish.
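A sketch of what the labeled metric looks like with client_golang (the metric name and label set here are assumptions for illustration):
```go
import "github.com/prometheus/client_golang/prometheus"

var requestCounter = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "apiserver_request_count",
		Help: "Counter of apiserver requests broken out by verb, resource, subresource and HTTP response code.",
	},
	[]string{"verb", "resource", "subresource", "code"},
)

// recordRequest makes "pods" with subresource "status" distinguishable
// from plain "pods" in the counter.
func recordRequest(verb, resource, subresource, code string) {
	requestCounter.WithLabelValues(verb, resource, subresource, code).Inc()
}
```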
@kubernetes/sig-api-machinery-pr-reviews
```release-note
The Prometheus metrics for the kube-apiserver for tracking incoming API requests and latencies now return the `subresource` label for correctly attributing the type of API call.
```
Automatic merge from submit-queue (batch tested with PRs 45573, 46354, 46376, 46162, 46366)
Add Simplified Chinese translation for kubectl
What this PR does / why we need it:
This PR provides a first attempt at translating kubectl into Simplified Chinese.
Which issue this PR fixes: none.
Special notes for your reviewer:
Although I'm a native Mandarin Chinese speaker, translation is a whole different skill that I'm not good at, so this PR will absolutely need to be polished.
@adohe @mengqiy @resouer @k82cn @caesarxuchao @wanghaoran1988 sorry, there are many folks who are good at Chinese that I haven't mentioned; feel free to leave a comment : )
also cc @brendandburns
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)
don't queue namespaces for deletion if the namespace isn't deleted
Most namespaces aren't being deleted at any given time, so there is no need to queue them for cleanup unless they actually are being deleted.
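A sketch of the guard (the handler shape here is assumed for illustration):
```go
import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/util/workqueue"
)

// enqueueIfDeleted only queues a namespace for cleanup when its
// deletion timestamp is actually set; live namespaces are skipped.
func enqueueIfDeleted(queue workqueue.Interface, obj interface{}) {
	ns, ok := obj.(*v1.Namespace)
	if !ok || ns.DeletionTimestamp == nil {
		return
	}
	queue.Add(ns.Name)
}
```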
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)
Fix CheckPodsCondition to print out the correct podName
Several CI runs (https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-serial/1114, https://k8s-gubernator.appspot.com/build/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-gci-qa-serial-master/2246, https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-e2e-gci-gke-pre-release/2187) indicate that we print the wrong pod name in CheckPodsCondition for _"Pod XXX failed to be running and ready, or succeeded."_:
```
I0524 02:09:50.173] May 24 02:09:50.173: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m55.033881993s elapsed)
I0524 02:09:52.178] May 24 02:09:52.178: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m57.03848264s elapsed)
I0524 02:09:54.183] May 24 02:09:54.182: INFO: Waiting for pod heapster-v1.3.0-3806988011-kzkg6 in namespace 'kube-system' status to be 'running and ready, or succeeded'(found phase: "Running", readiness: false) (4m59.043463323s elapsed)
I0524 02:09:56.183] May 24 02:09:56.183: INFO: Pod fluentd-gcp-v2.0-6wf67 failed to be running and ready, or succeeded.
I0524 02:09:56.184] May 24 02:09:56.183: INFO: Wanted all 23 pods to be running and ready, or succeeded. Result: false. Pods: [heapster-v1.3.0-3806988011-kzkg6 kube-proxy-bootstrap-e2e-minion-group-bbwn rescheduler-v0.3.0-bootstrap-e2e-master monitoring-influxdb-grafana-v4-1q59k l7-default-backend-1044750973-zgxsc etcd-server-events-bootstrap-e2e-master kube-apiserver-bootstrap-e2e-master kube-proxy-bootstrap-e2e-minion-group-6nqb kube-proxy-bootstrap-e2e-minion-group-mzbz fluentd-gcp-v2.0-chd2x kube-dns-806549836-f8p46 fluentd-gcp-v2.0-44x97 kube-dns-autoscaler-2528518105-vlg8t fluentd-gcp-v2.0-p1h4b kube-controller-manager-bootstrap-e2e-master l7-lb-controller-v0.9.3-bootstrap-e2e-master kubernetes-dashboard-2917854236-tn3nx kube-dns-806549836-fq2fp kube-scheduler-bootstrap-e2e-master etcd-empty-dir-cleanup-bootstrap-e2e-master kube-addon-manager-bootstrap-e2e-master etcd-server-bootstrap-e2e-master fluentd-gcp-v2.0-6wf67]
I0524 02:09:56.184] May 24 02:09:56.183: INFO: At least one pod wasn't running and ready or succeeded at test start.
I0524 02:09:56.184] [AfterEach] [k8s.io] Restart [Disruptive]
```
Checked the code and found that we always print the last pod name, which is effectively random: the goroutines capture the shared loop variable. Passing the pod name through the channel fixes this.
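A sketch of the bug class (helper and type names are illustrative, not the e2e framework's):
```go
type podResult struct {
	name    string
	success bool
}

func checkPods(podNames []string, results chan<- podResult, waitForPod func(string) bool) {
	// Buggy version: every goroutine reads the shared loop variable, so
	// the name reported on failure is whatever podName held last:
	//
	//   for _, podName := range podNames {
	//       go func() {
	//           results <- podResult{name: podName, success: waitForPod(podName)}
	//       }()
	//   }

	// Fixed version: pass the name into the goroutine and carry it
	// through the channel, so a failure names the pod that failed.
	for _, podName := range podNames {
		go func(name string) {
			results <- podResult{name: name, success: waitForPod(name)}
		}(podName)
	}
}
```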
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)
Update audit API with missing pieces
Follow-up to https://github.com/kubernetes/kubernetes/pull/45315 to resolve pending decisions & issues, including:
- Audit ID format
- Identifying audit event "stage"
- Request/Response object format (resolve conversion issue)
- Add a subresource field to the `ObjectReference`
For https://github.com/kubernetes/features/issues/22
~~TODO: Add generated code once we've reached consensus on the types.~~
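Roughly, the items above imply an event shape like the following (a sketch; field names and stage values are assumptions, not the final API):
```go
// Stage identifies where in the request lifecycle an event was emitted.
type Stage string

const (
	StageRequestReceived  Stage = "RequestReceived"
	StageResponseComplete Stage = "ResponseComplete"
)

// ObjectReference gains a Subresource field so e.g. pods/status is
// distinguishable from pods.
type ObjectReference struct {
	Namespace   string
	Name        string
	Resource    string
	Subresource string
}

// Event carries a unique AuditID shared by all stages of one request.
type Event struct {
	AuditID        string
	Stage          Stage
	ObjectRef      *ObjectReference
	RequestObject  []byte // serialized, versioned payload
	ResponseObject []byte
}
```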
/cc @deads2k @ihmccreery @sttts @soltysh @ericchiang
Automatic merge from submit-queue (batch tested with PRs 45913, 46065, 46352, 46363, 46373)
Detect cohabitating resources in etcd storage test
**What this PR does / why we need it**:
This change updates the etcd storage path test to detect cohabitating resources by looking at their expected location in etcd. This was not detected in the past because the GVK check did not span across groups.
To limit noise from failures caused by multiple objects at the same location in etcd, the test now fails when different GVRs share the same expected path. Thus every object is expected to have a unique path.
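The check boils down to something like this (a sketch; the fixture shape is an assumption about the test's internals):
```go
import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// assertUniqueEtcdPaths fails if two GVRs expect the same etcd path,
// which is exactly how cohabitating resources reveal themselves.
func assertUniqueEtcdPaths(t *testing.T, expectedPaths map[schema.GroupVersionResource]string) {
	pathToGVRs := map[string][]schema.GroupVersionResource{}
	for gvr, path := range expectedPaths {
		pathToGVRs[path] = append(pathToGVRs[path], gvr)
	}
	for path, gvrs := range pathToGVRs {
		if len(gvrs) > 1 {
			t.Errorf("multiple GVRs share the etcd path %q: %v", path, gvrs)
		}
	}
}
```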
@liggitt PTAL
Signed-off-by: Monis Khan <mkhan@redhat.com>
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Create a subnet for reserving the service cluster IP range
This is done when IP aliases are enabled on GCP.
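Conceptually the reservation is similar to creating such a subnet by hand (illustrative names and range):
```
gcloud compute networks subnets create reserved-service-range \
  --network my-network \
  --region us-central1 \
  --range 10.96.0.0/12
```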
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Don't attach a GPU to Ubuntu test machines for node e2e serial tests
This should fix flakes in the e2e_node serial suite.
@vishh I think this is what you were asking for...
/assign @vishh
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Move docker validation test to separate project.
Docker validation test is leaking VMs because new docker version `DOCKER_VERSION=17.05.0-c` totally breaks the new gci image `GCE_IMAGES=gci-test-60-9579-0-0` with the `gci-docker-version` metadata specified.
The test successfully created the instance, but timed out when checking VM aliveness, and leaked the VM.
I've cleaned up all leaked VMs. This PR moves the docker validation node e2e test into a separate project so that it does not influence other node e2e tests.
@kewu1992 We should fix the docker automated validation test.
/cc @dchen1107 @yujuhong @abgworrall
Automatic merge from submit-queue (batch tested with PRs 46299, 46309, 46311, 46303, 46150)
Fix in-cluster kubectl --namespace override
**What this PR does / why we need it**:
Before this change, if the config was empty, ConfirmUsable() would
return an "invalid configuration" error instead of examining and
honoring the value of the --namespace flag. This change looks at the
overrides first, and returns the overridden value if it exists before
attempting to check if the config is usable. This is most applicable to
in-cluster clients, which don't have a kubeconfig but do have a token
and can use KUBERNETES_SERVICE_HOST/_PORT.
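A simplified sketch of the ordering change (types reduced for illustration; not the actual clientcmd code):
```go
type contextOverrides struct{ Namespace string }

// resolveNamespace consults the --namespace override before validating
// the config, so an explicit override wins even when the kubeconfig is
// empty, which is exactly the in-cluster situation described above.
func resolveNamespace(ov *contextOverrides, confirmUsable func() error) (string, bool, error) {
	if ov != nil && ov.Namespace != "" {
		return ov.Namespace, true, nil
	}
	// Previously this ran first and returned "invalid configuration".
	if err := confirmUsable(); err != nil {
		return "", false, err
	}
	return "default", false, nil
}
```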
**Release note**:
```release-note
The --namespace flag is now honored for in-cluster clients that have an empty configuration.
```
@kubernetes/sig-api-machinery-pr-reviews @fabianofranz @liggitt @deads2k @smarterclayton @caesarxuchao @soltysh
Automatic merge from submit-queue
/pkg/client/listers: fix some typos
Automatic merge from submit-queue
oidc client plugin: reduce round trips and fix scopes requested
This PR attempts to simplify the OpenID Connect client plugin to
reduce round trips. The steps taken by the client are now:
* If the ID Token isn't expired:
  * Do nothing.
* If the ID Token is expired:
  * Query the /.well-known discovery URL to find the token_endpoint.
  * Use an OAuth2 client and the refresh token to request a new ID Token.
This avoids the previous pattern of always initializing a client,
which would hit the /.well-known endpoint several times.
The client no longer does token validation since the server already
does this. As a result, this code no longer imports
github.com/coreos/go-oidc, instead just using golang.org/x/oauth2
for refreshing.
There's an overall reduction in tests because we're no longer verifying
as many things on the client side. For example, we're no longer
validating the id_token signature (again, because it's being done on
the server side).
This has been manually tested against dex, and I hope to continue
to test this over the 1.7 release cycle.
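The refresh path now amounts to something like this (a sketch; the discovery step that yields tokenURL is omitted):
```go
import (
	"context"
	"fmt"

	"golang.org/x/oauth2"
)

// refreshIDToken performs a plain OAuth2 refresh grant against the
// token endpoint found via the /.well-known document. No go-oidc
// import and no client-side signature validation.
func refreshIDToken(ctx context.Context, clientID, clientSecret, tokenURL, refreshToken string) (string, error) {
	conf := &oauth2.Config{
		ClientID:     clientID,
		ClientSecret: clientSecret,
		Endpoint:     oauth2.Endpoint{TokenURL: tokenURL},
	}
	ts := conf.TokenSource(ctx, &oauth2.Token{RefreshToken: refreshToken})
	tok, err := ts.Token() // the refresh round trip
	if err != nil {
		return "", err
	}
	idToken, ok := tok.Extra("id_token").(string)
	if !ok {
		return "", fmt.Errorf("token response did not include an id_token")
	}
	return idToken, nil
}
```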
cc @mlbiam @frodenas @curtisallen @jsloyer @rithujohn191 @philips @kubernetes/sig-auth-pr-reviews
```release-note
NONE
```
Updates https://github.com/kubernetes/kubernetes/issues/42654
Closes https://github.com/kubernetes/kubernetes/issues/37875
Closes https://github.com/kubernetes/kubernetes/issues/37874
Automatic merge from submit-queue
fix the invalid link
Automatic merge from submit-queue
Fix misspelling: DeamonSet -> DaemonSet
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue
Added deprecation notice and guidance for cloud providers.
**What this PR does / why we need it**:
Adding context/background and general guidance for incoming cloud providers.
**Special notes for your reviewer**:
Generalized message per discussion with @bgrant0607
Automatic merge from submit-queue
clear init container status annotations when cleared in status
When a pod with an init container is terminated for exceeding its active deadline, the pod status phase is `Failed` with reason `DeadlineExceeded`, and all container statuses are cleared from the pod status.
With init containers, however, the status is being regenerated from the status annotations. This is causing kubectl to report the pod state as `Init:0/1` instead of `DeadlineExceeded` because the kubectl printer observes a running init container, which in reality is not running.
This PR clears out the init container status annotations when they have been removed from the pod status so they are not regenerated on the apiserver.
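The clearing logic amounts to something like this (a sketch; treat the exact annotation key as an assumption):
```go
import v1 "k8s.io/api/core/v1"

const initContainerStatusAnnotation = "pod.beta.kubernetes.io/init-container-statuses"

// clearInitContainerAnnotations drops the status annotation once the
// pod status no longer carries init container statuses, so the stale
// annotation can't be used to regenerate them on the apiserver.
func clearInitContainerAnnotations(pod *v1.Pod) {
	if len(pod.Status.InitContainerStatuses) == 0 {
		delete(pod.Annotations, initContainerStatusAnnotation)
	}
}
```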
xref https://bugzilla.redhat.com/show_bug.cgi?id=1453180
@derekwaynecarr
```release-note
Fix init container status reporting when active deadline is exceeded.
```
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Fix provisioned GCE PD not being reused if already exists
@jsafrane PTAL
This is another attempt at https://github.com/kubernetes/kubernetes/pull/38702 . We have observed that `gce.service.Disks.Insert(gce.projectID, zone, diskToCreate).Do()` instantly gets an error response of alreadyExists, so we must check for it.
I am not sure if we still need to check for the error after `waitForZoneOp`; I think that if there is an alreadyExists error, the `Do()` above will always respond with it instantly. But because I'm not sure, and to be safe, I will leave it.
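The check itself looks roughly like this (a sketch; the upstream helper may differ):
```go
import "google.golang.org/api/googleapi"

// isGCEError reports whether err is a googleapi.Error carrying the
// given reason, e.g. "alreadyExists" when the PD was provisioned before.
func isGCEError(err error, reason string) bool {
	apiErr, ok := err.(*googleapi.Error)
	if !ok {
		return false
	}
	for _, e := range apiErr.Errors {
		if e.Reason == reason {
			return true
		}
	}
	return false
}
```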
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
Only retrieve relevant volumes
**What this PR does / why we need it**:
Improves performance for Cinder volume attach/detach calls.
Currently, when Cinder volumes are attached or detached, functions try to retrieve details about the volume from the Nova API. Because some callers only have the volume name, not its UUID, they use the list function in gophercloud and iterate over all volumes to find a match. This incurs severe performance problems on OpenStack projects with lots of volumes (sometimes thousands), since a new request must be sent whenever the current page does not contain a match. A better way is to use the `?name=XXX` query parameter to refine the results.
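With gophercloud, the refinement looks roughly like this (a sketch, assuming the v2 blockstorage API):
```go
import (
	"fmt"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack/blockstorage/v2/volumes"
	"github.com/gophercloud/gophercloud/pagination"
)

// findVolumeByName asks Cinder to filter server-side via the ?name=
// query parameter instead of paging through every volume client-side.
func findVolumeByName(client *gophercloud.ServiceClient, name string) (*volumes.Volume, error) {
	var found *volumes.Volume
	err := volumes.List(client, volumes.ListOpts{Name: name}).EachPage(
		func(page pagination.Page) (bool, error) {
			vols, err := volumes.ExtractVolumes(page)
			if err != nil {
				return false, err
			}
			if len(vols) > 0 {
				found = &vols[0]
				return false, nil // stop paging once we have a match
			}
			return true, nil
		})
	if err != nil {
		return nil, err
	}
	if found == nil {
		return nil, fmt.Errorf("volume %q not found", name)
	}
	return found, nil
}
```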
**Which issue this PR fixes**:
https://github.com/kubernetes/kubernetes/issues/26404
**Special notes for your reviewer**:
There were 2 ways of addressing this problem:
1. Use the `name` query parameter
2. Instead of using the list function, switch to using volume UUIDs and use the GET function instead. You'd need to change the signature of a few functions though, such as [`DeleteVolume`](https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/cinder/cinder.go#L49), so I'm not sure how backwards compatible that is.
Since option 1 effectively achieves the same as option 2, I went with it because it ensures backwards compatibility.
One assumption that is made is that the `volumeName` being retrieved matches exactly the name of the volume in Cinder. I'm not sure how accurate that is, but I see no reason why cloud providers would want to append/prefix things arbitrarily.
**Release note**:
```release-note
Improves performance of Cinder volume attach/detach operations
```
Automatic merge from submit-queue (batch tested with PRs 38505, 41785, 46315)
GCE and AWS provisioners, dynamic provisioning: admins can configure zone(s) where PVs shall be created
Zone configuration capabilities for the GCE and AWS dynamic provisioners are extended:
admins can now configure a comma-separated list of allowed zones in a storage class (see the example below).
Partly fixes Trello cards:
- [GCE provisioner, parse pvc.Selector](https://trello.com/c/CyemTzsK/259-finish-gce-provisioner-parse-pvc-selector-dynamic-provision)
- [AWS provisioner, parse pvc.Selector](https://trello.com/c/2XjouSWw/260-finish-aws-provisioner-parse-pvc-selector-dynamic-provision)
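For example, a class restricted to two zones might look like this (illustrative names; the `zones` parameter takes the comma-separated list described above):
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow-multizone
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  zones: us-central1-a, us-central1-b
```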
```release-note
GCE and AWS dynamic provisioners extension: admins can configure zone(s) in which a persistent volume shall be created.
```
cc: @jsafrane
Automatic merge from submit-queue
Add test for cross namespace watch and list
**What this PR does / why we need it**: Add more integration test for kube-apiextensions-server
**Which issue this PR fixes** : fixes https://github.com/kubernetes/kubernetes/issues/45511
**Special notes for your reviewer**: A client with cluster scope also works, but that case seems trivial.
@deads2k
Automatic merge from submit-queue
apiextensions: add Established condition
This introduces an `Established` condition on `CustomResourceDefinition`s. `Established` means that the resource has become active: a resource is established when all names are initially accepted without a conflict, and it stays established until deleted, even if a later name change causes a `NameConflict`. Note that not all names can be changed.
This change is necessary to allow deletion of once-active CRDs which might still have instances but now have name conflicts. Before this PR the REST endpoint was no longer active in that case, making deletion of the instances impossible.
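The bookkeeping amounts to appending a condition along these lines (a fragment; treat the exact constant and reason strings as assumptions):
```go
crd.Status.Conditions = append(crd.Status.Conditions, apiextensions.CustomResourceDefinitionCondition{
	Type:    apiextensions.Established,
	Status:  apiextensions.ConditionTrue,
	Reason:  "InitialNamesAccepted",
	Message: "the initial names have been accepted",
})
```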
An admin wants to specify in which AWS availability zone(s) users may create persistent volumes using dynamic provisioning.
That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
An admin wants to specify in which GCE availability zone(s) users may create persistent volumes using dynamic provisioning.
That's why the admin can now configure a comma-separated list of zones in the StorageClass object. Dynamically created PVs for PVCs that use the StorageClass are created in one of the configured zones.
The zone parameter provided in a StorageClass may erroneously be an empty string or contain only spaces and tab characters. Such a situation shall be detected and reported as an error.
That's why the func ValidateZone was added.
An admin shall be able to configure a comma-separated list of zones for a StorageClass.
That's why the func ZonesToSet(string) (sets.String, error) is added. It converts a string containing a comma-separated list of zones to a set; if the list contains an empty zone, an error is returned.
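A sketch of the helper (assuming k8s.io/apimachinery's sets package):
```go
import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/util/sets"
)

// ZonesToSet converts a comma-separated list of zones into a set,
// rejecting lists that contain an empty zone.
func ZonesToSet(zones string) (sets.String, error) {
	set := sets.NewString()
	for _, zone := range strings.Split(zones, ",") {
		trimmed := strings.TrimSpace(zone)
		if trimmed == "" {
			return nil, fmt.Errorf("comma-separated list of zones (%q) must not contain an empty zone", zones)
		}
		set.Insert(trimmed)
	}
	return set, nil
}
```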
Automatic merge from submit-queue (batch tested with PRs 45891, 46147)
Watching ClusterId from within GCE cloud provider
**What this PR does / why we need it**:
Adds the ability for the GCE cloud provider to watch a config map for `clusterId` and `providerId`.
WIP - still needs more testing
cc @MrHohn @csbell @madhusudancs @thockin @bowei @nikhiljindal
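The watch amounts to something like this (a sketch; the ConfigMap name "ingress-uid" and the handler shape are assumptions):
```go
import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/fields"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// watchClusterID watches a single kube-system ConfigMap that carries
// the cluster ids and invokes onUpdate whenever it is added or changed.
func watchClusterID(client kubernetes.Interface, stopCh <-chan struct{}, onUpdate func(cm *v1.ConfigMap)) {
	lw := cache.NewListWatchFromClient(
		client.CoreV1().RESTClient(), "configmaps", metav1.NamespaceSystem,
		fields.OneTermEqualSelector("metadata.name", "ingress-uid"))
	informer := cache.NewSharedInformer(lw, &v1.ConfigMap{}, 0)
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { onUpdate(obj.(*v1.ConfigMap)) },
		UpdateFunc: func(_, cur interface{}) { onUpdate(cur.(*v1.ConfigMap)) },
	})
	informer.Run(stopCh)
}
```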
**Release note**:
```release-note
NONE
```