Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Fix panic when using `kubeadm init` with vsphere cloud-provider
**What this PR does / why we need it**:
Check if the reference is nil when finding machine reference by UUID.
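For illustration, a minimal sketch of the shape of such a guard; `findVMByUUID`, `vmRef`, and `getNodeName` are hypothetical stand-ins, not the actual vsphere cloud-provider identifiers:
```go
package main

import "fmt"

// vmRef is an illustrative stand-in for the vsphere machine reference type.
type vmRef struct{ Name string }

// findVMByUUID is a hypothetical lookup; like the real search call, it can
// return a nil reference without an error when no machine matches.
func findVMByUUID(uuid string) (*vmRef, error) { return nil, nil }

func getNodeName(uuid string) (string, error) {
	ref, err := findVMByUUID(uuid)
	if err != nil {
		return "", err
	}
	if ref == nil {
		// Without this guard, dereferencing ref below panics.
		return "", fmt.Errorf("no VM found with UUID %q", uuid)
	}
	return ref.Name, nil
}
```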
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44603
**Special notes for your reviewer**:
This is just a quick fix for the panic.
**Release note**:
```release-note
NONE
```
Automatic merge from submit-queue (batch tested with PRs 44687, 44689, 44661)
Retry in get-kube.sh to avoid download flakes.
GCS has up to 2% 5xx rates, so retrying is critical.
This is currently failing about 8 times per day [according to the dashboard](https://storage.googleapis.com/k8s-gubernator/triage/index.html?test=Extract#be2f33fb1e6dd2389d12). It could be backported to reduce the flake rate.
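The actual change is a shell-level retry in get-kube.sh; purely as an illustration of the idea, here is a Go sketch of retrying 5xx responses with backoff (names are hypothetical):
```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// fetchWithRetry retries transient server errors. With a ~2% 5xx rate, three
// independent attempts drop the failure probability to roughly 0.0008%.
func fetchWithRetry(url string, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := http.Get(url)
		if err == nil && resp.StatusCode < 500 {
			return resp, nil // success, or a non-retryable client error
		}
		if err == nil {
			resp.Body.Close()
			lastErr = fmt.Errorf("server error: %s", resp.Status)
		} else {
			lastErr = err
		}
		time.Sleep(time.Duration(i+1) * time.Second) // simple linear backoff
	}
	return nil, fmt.Errorf("all %d attempts failed, last error: %v", attempts, lastErr)
}
```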
Release note:
```release-note
NONE
```
Automatic merge from submit-queue
Add hack/lib to kubernetes release tarball
**What this PR does / why we need it**:
Add hack/lib to the kubernetes release tarball, to fix an issue with the https://get.k8s.io/ script introduced in #42748.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*:
Fixes https://github.com/kubernetes/kubernetes/pull/42748#issuecomment-295412268
**Special notes for your reviewer**:
I'm new to bazel, so hopefully I'm not off-base here :)
**Release note**:
```release-note
NONE
```
cc: @ixdy @dcbw @smarterclayton
Automatic merge from submit-queue
Implement LRU for AWS device allocator
On failure to attach, do not reuse the device from the pool.
In an AWS environment, when an attach fails on the node,
let's not hand that device name out from the pool again right away.
This makes sure that a bigger pool of devices remains available.
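A rough sketch of the LRU idea, with illustrative names (`deviceAllocator`, `lastUsed` are not the actual AWS provider types): hand out the least-recently-used name, and mark a name as used on failure too, so it moves to the back of the queue.
```go
package main

// deviceAllocator sketches the LRU allocation order: device names are handed
// out least-recently-used first, and a name is marked used on a failed attach
// as well, so it is not retried immediately.
type deviceAllocator struct {
	lastUsed map[string]uint64 // logical timestamp per device name; 0 = never used
	clock    uint64
}

// allocate returns the least-recently-used device name (pool assumed non-empty).
func (d *deviceAllocator) allocate() string {
	var best string
	bestTime := ^uint64(0)
	for name, t := range d.lastUsed {
		if t < bestTime {
			best, bestTime = name, t
		}
	}
	d.markUsed(best)
	return best
}

// markUsed is called on success and on failure alike, so a device involved
// in a failed attach is not the next one picked from the pool.
func (d *deviceAllocator) markUsed(name string) {
	d.clock++
	d.lastUsed[name] = d.clock
}
```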
Automatic merge from submit-queue
select one api endpoint at random when deploying kubernetes-core charm
**What this PR does / why we need it**: Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**: https://github.com/juju-solutions/bundle-canonical-kubernetes/issues/255
**Release note**:
```release-note
Fixes a bug in the kubernetes-worker Juju charm code that attempted to give kube-proxy more than one api endpoint.
```
Automatic merge from submit-queue
Update gcr.io/google_containers/porter image to 4524579c0e
**What this PR does / why we need it**: updates the porter image to one built at 4524579c0e using go1.8.1.
This incorporates #44638, which has a new dummy certificate that is compliant with go1.8+.
Image has already been pushed.
**Release note**:
```release-note
NONE
```
/assign @liggitt
/cc @luxas @lavalamp
The Job Listers still use selectors, because this is the
behavior expected by callers. This clarifies the meaning of the
returned list. Some callers may need to switch to using
GetControllerOf() instead, but that is a separate, case-by-case issue.
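As an aside, a sketch of the ownership-based alternative, assuming current upstream import paths; `isControlledBy` is an illustrative helper performing the same ownerReferences walk as the shared GetControllerOf() utility:
```go
package main

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
)

// isControlledBy reports whether the pod's managing controller is the Job
// with the given UID, by walking ownerReferences instead of matching the
// Job's label selector. (Illustrative; in-tree callers would use the shared
// GetControllerOf() helper mentioned above.)
func isControlledBy(jobUID types.UID, pod *v1.Pod) bool {
	for _, ref := range pod.OwnerReferences {
		if ref.Controller != nil && *ref.Controller && ref.UID == jobUID {
			return true
		}
	}
	return false
}
```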
Automatic merge from submit-queue
Fixes a missing comma in a list of strings.
**What this PR does / why we need it**: Fixes a missing comma in a list of strings in the kubernetes-worker Juju charm.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fixes a missing comma in a list of strings.
```
Automatic merge from submit-queue (batch tested with PRs 44499, 44674)
Strip "pods/" prefix from the pod names returned by kubectl get pods.
Note that the results returned by `kubectl get -o name` carry the plural prefix "pods/". We were already trying to strip the singular prefix "pod/", but unfortunately that's not how the results are returned.
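A minimal sketch of the corrected trimming (the helper is illustrative, not the federation code itself):
```go
package main

import (
	"fmt"
	"strings"
)

// kubectl get pods -o name returns names with the plural prefix, e.g.
// "pods/nginx-1234", so trimming the singular "pod/" leaves it intact.
func stripPodPrefix(name string) string {
	return strings.TrimPrefix(name, "pods/")
}

func main() {
	fmt.Println(stripPodPrefix("pods/nginx-1234")) // prints "nginx-1234"
}
```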
**Release note**:
```release-note
NONE
```
cc @perotinus @kubernetes/sig-federation-pr-reviews
Automatic merge from submit-queue
Edge based services in proxy
This is a sibling effort to what I did for endpoints in kube-proxy.
This PR is the first one (changing config & iptables); userspace will follow.
Automatic merge from submit-queue (batch tested with PRs 44667, 44673)
Allow summaries to be printed out to ReportDir instead of stdout
Fix #44003
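Hypothetically, the shape of the change (names are illustrative, not the actual test-framework API):
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeSummary writes a summary to a file under reportDir when one is
// configured, and falls back to stdout otherwise. Illustrative sketch only.
func writeSummary(reportDir, name, summary string) error {
	if reportDir == "" {
		fmt.Println(summary) // no ReportDir: keep the old stdout behavior
		return nil
	}
	return os.WriteFile(filepath.Join(reportDir, name+".txt"), []byte(summary), 0644)
}
```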
cc @shyamjvs
Automatic merge from submit-queue
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
**What this PR does / why we need it**:
This fixes the type of the ceph-secret secret that's created by the kubernetes-master charm.
Without the `kubernetes.io/rbd` type, automatic provisioning of PVCs doesn't work.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
**Special notes for your reviewer**:
**Release note**:
```release-note
Fix ceph-secret type to kubernetes.io/rbd in kubernetes-master charm
```
Automatic merge from submit-queue
Fixes an issue in cidr_set.go
Function getBeginingAndEndIndices() may return an end index that is too big.
**What this PR does / why we need it**:
Fixes getBeginingAndEndIndices() in cidr_set.go.
The end index is off by one when s.clusterMaskSize >= maskSize.
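A simplified sketch of the boundary case, with stand-in parameters (the real method hangs off the CIDR-set type and takes a *net.IPNet):
```go
// Indices into the allocation bitmap are zero-based, so when the requested
// CIDR covers the whole cluster range the last valid index is maxCIDRs-1;
// returning maxCIDRs walks one slot past the end.
func getBeginingAndEndIndices(clusterMaskSize, maskSize, maxCIDRs int) (begin, end int) {
	if clusterMaskSize >= maskSize {
		return 0, maxCIDRs - 1 // was maxCIDRs: off by one
	}
	// ... narrower CIDRs map to a proper sub-range of indices ...
	return begin, end
}
```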
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #44558
**Special notes for your reviewer**:
**Release note**:
```release-note
```
Automatic merge from submit-queue
issue_43986: fix docs with non-functional proxy
The documentation defines a couple of replication controllers and a service
to provision a docker-registry somewhere on the cluster and make it
available under the DNS A record
kube-registry.default.svc.<clustername>.
On each node, HTTP proxies are placed as a DaemonSet with the
kube-registry DNS name set as upstream, so that the registry is
available on each host under the endpoint localhost:5000.
Because the selector identifiers in the documentation are the same for the
"upstream" registry and the proxies, the proxies themselves register under
the service intended for the upstream and end up with themselves as
upstream under a different port, where connection attempts result in
"connection refused".
Adapting the selectors to be unique, as in this patch, fixes the problem.
**What this PR does / why we need it**:
The patch fixes the erroneous documentation described above.
**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #
Fixes #43986
**Special notes for your reviewer**:
Thank you for your consideration.
**Release note**:
```release-note
```
Automatic merge from submit-queue
Fix ensureDnsRecords comments for federated services
I went to look at the source comments, because the documentation is not exhaustive about what kind of DNS records are created for federated services (and http://blog.kubernetes.io/2016/07/cross-cluster-services.html is wrong...).
It turns out that even the comment is not in sync with the code: two out of three records listed use `.federation`, while the author probably meant `.mydomain.com` (which has less chance of getting mixed up with `myfed`). I fixed those, as well as a few spelling and parenthesis errors. Hopefully this will help others save time and not scratch their heads.
cc @quinton-hoole
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)
Add support for Azure internal load balancer
**Which issue this PR fixes**
Fixes https://github.com/kubernetes/kubernetes/issues/38901
**What this PR does / why we need it**:
This PR is to add support for Azure internal load balancer
Currently, when exposing a service with the LoadBalancer type, the Azure provider assumes that it requires a public load balancer.
Thus it requests a public IP address resource and exposes the service via that public IP.
In this case we're not able to use private IP addresses (within the cluster virtual network) for the service.
**Special notes for your reviewer**:
1. Clarification:
a. 'LoadBalancer' refers to an option for the 'type' field under ServiceSpec. See https://kubernetes.io/docs/resources-reference/v1.5/#servicespec-v1
b. 'Azure LoadBalancer' refers to a type of Azure resource. See https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-overview
2. For a single Azure LoadBalancer, all frontend IPs should reference either a subnet or a publicIpAddress, which means it can be either an Internet-facing load balancer or an internal one, but not both.
The current provider creates an Azure LoadBalancer with the generated name '${loadBalancerName}' for all services of the 'LoadBalancer' type.
This PR introduces the name '${loadBalancerName}-internal' for a separate Azure LoadBalancer resource, used by all services that require an internal load balancer.
3. This PR introduces a new annotation for the internal load balancer behaviour (a sketch of the check follows after these notes):
a. When the annotation value is set to 'false' or not set, it falls back to the original behaviour, assuming the user is requesting a public load balancer;
b. When the annotation value is set to 'true', the following rules apply, depending on the 'loadBalancerIP' field on ServiceSpec:
- If 'loadBalancerIP' is not set, it will create a load balancer rule with a dynamically assigned frontend IP under the cluster subnet;
- If 'loadBalancerIP' is set, it will create a load balancer rule with the frontend IP set to the given value. If the given value is not valid, that is, it does not fall within the cluster subnet range, the creation will fail.
4. Users may change the load balancer type by applying the annotation to the service at runtime.
In that case, the load balancer rules need to be 'switched' between the internal and the external Azure LoadBalancer.
For example, if we have a service with an internal load balancer and the user removes the annotation, making it public, then before creating rules in the public Azure LoadBalancer we'll need to clean up the rules in the internal one.
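As an illustration of notes 2 and 3, a minimal sketch, assuming the annotation key `service.beta.kubernetes.io/azure-load-balancer-internal` proposed by this PR; the helper functions are hypothetical:
```go
package main

import v1 "k8s.io/api/core/v1"

// Annotation key for requesting an internal Azure load balancer; the helper
// names below are illustrative, not the provider's actual API.
const internalLBAnnotation = "service.beta.kubernetes.io/azure-load-balancer-internal"

// requiresInternalLoadBalancer mirrors note 3: only the exact value "true"
// selects the internal path; absent or "false" keeps the public behaviour.
func requiresInternalLoadBalancer(svc *v1.Service) bool {
	return svc.Annotations[internalLBAnnotation] == "true"
}

// loadBalancerResourceName mirrors note 2: the public Azure LoadBalancer is
// '${loadBalancerName}', the internal one '${loadBalancerName}-internal'.
func loadBalancerResourceName(base string, internal bool) string {
	if internal {
		return base + "-internal"
	}
	return base
}
```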
**Release note**:
Automatic merge from submit-queue (batch tested with PRs 44645, 44639, 43510)
[Federation][kubefed]: Set apiserver to bind securely to 8443 instead of 443
On platforms like OpenShift that don't run containers as root by default, binding to ports < 1000 is not permitted. Having the apiserver bind to a high port means it can run with reduced privileges. The service will still expose the apiserver on 443, so this change shouldn't impact clients of the federation api.
cc: @kubernetes/sig-federation-pr-reviews @perotinus