Automatic merge from submit-queue
add jenkins project for kubenet
Added a Jenkins project for GCE using kubenet as the network provider.
`k8s-jkns-e2e-gce-kubenet` has been created and configured
Automatic merge from submit-queue
Up to golang 1.6
A second attempt to upgrade the Go version beyond `go1.4`.
Merge ASAP after you've cut the `release-1.2` branch and feel ready.
`go1.6` should perform slightly better than `go1.5`, so this time it might work.
@gmarek @wojtek-t @zmerlynn @mikedanese @brendandburns @ixdy @thockin
pause is built into containervm; if it's not on the machine, we should just pull it. Nobody I'm aware of uses kube-registry-proxy, and it makes build/deployment more complicated and slower.
Automatic merge from submit-queue
Cross-build hyperkube and debian-iptables for ARM. Also add a flannel image
We have to be able to build complex Docker images on `amd64` hosts too.
Right now we can't build images from Dockerfiles that contain `RUN` commands when targeting other architectures, e.g. ARM.
Resin has a tutorial about this here: https://resin.io/blog/building-arm-containers-on-any-x86-machine-even-dockerhub/
But the syntax is a bit clumsy.
The other alternative would be running this command in a Makefile:
```
# This registers in the kernel that ARM binaries should be run by /usr/bin/qemu-{ARCH}-static
docker run --rm --privileged multiarch/qemu-user-static:register --reset
```
and adding this line to the Dockerfile:
```
ADD https://github.com/multiarch/qemu-user-static/releases/download/v2.5.0/x86_64_qemu-arm-static.tar.xz /usr/bin
```
Then the kernel will be able to distinguish ARM binaries from amd64 ones. When it finds an ARM binary, it will invoke `/usr/bin/qemu-arm-static` first and let `qemu` translate the ARM syscalls to amd64 ones.
Some code here: https://github.com/multiarch
WDYT is the best approach? If registering `binfmt_misc` in the kernels of the machines is OK, then I think we should go with that.
Otherwise, we'll have to wait for resin's patch to be merged into mainline qemu before we can use the code I have here now.
@fgrzadkowski @david-mcmahon @brendandburns @zmerlynn @ixdy @ihmccreery @thockin
Automatic merge from submit-queue
Add a timeout to the sshDialer to prevent indefinite hangs.
Prevents the SSH dialer from hanging forever. Fixes a problem where SSH tunnels get stuck trying to open.
Addresses #23835.
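A minimal sketch of the approach, assuming the standard `golang.org/x/crypto/ssh` package (the helper and package names are illustrative, not the exact patch): bound the TCP connect with `net.DialTimeout`, then run the SSH handshake over the resulting connection.

```go
package sshutil

import (
	"net"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithTimeout bounds connection setup so a dead or unreachable host
// returns an error promptly instead of hanging the caller indefinitely.
func dialWithTimeout(network, addr string, config *ssh.ClientConfig, timeout time.Duration) (*ssh.Client, error) {
	// Bound the TCP connect with an explicit timeout.
	conn, err := net.DialTimeout(network, addr, timeout)
	if err != nil {
		return nil, err
	}
	// Run the SSH handshake over the established connection.
	c, chans, reqs, err := ssh.NewClientConn(conn, addr, config)
	if err != nil {
		conn.Close()
		return nil, err
	}
	return ssh.NewClient(c, chans, reqs), nil
}
```

This way a tunnel that can't connect fails fast with an error rather than wedging the dialer.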
Automatic merge from submit-queue
Ensure object returned by volume getCloudProvider incorporates cloud config
This PR addresses https://github.com/kubernetes/kubernetes/issues/23517.
**Problem**
The existing GCE PD and AWS EBS volume plugin code was fetching the cloud provider without specifying a cloud config: `cloudprovider.GetCloudProvider("gce", nil)`
This caused the cloud provider to use the default auth mechanism, which is not acceptable for the provisioning controller running on the GKE master.
**Fix**
This PR does the following:
* Modifies the GCE PD and AWS EBS volume plugin code to use the cloud provider object pre-constructed by the binary with a cloud config (see the sketch below).
* Enables the provisioning E2E test for GKE (to catch future issues).
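Roughly what the plugin-side change looks like, as a hedged sketch with simplified names (`gcecloud` is assumed to alias the GCE provider package, and `host` is the plugin's `volume.VolumeHost`):

```go
import (
	"fmt"

	gcecloud "k8s.io/kubernetes/pkg/cloudprovider/providers/gce"
	"k8s.io/kubernetes/pkg/volume"
)

// getGCECloudProvider returns the cloud provider object that the binary
// constructed at startup with its cloud config, instead of building a new
// one via cloudprovider.GetCloudProvider("gce", nil) with default auth.
func getGCECloudProvider(host volume.VolumeHost) (*gcecloud.GCECloud, error) {
	cloud := host.GetCloudProvider() // pre-constructed with the cloud config
	gceCloud, ok := cloud.(*gcecloud.GCECloud)
	if !ok || gceCloud == nil {
		return nil, fmt.Errorf("volume host has no GCE cloud provider")
	}
	return gceCloud, nil
}
```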
Thanks to @cjcullen for debugging and finding the root cause! 👍
This should be cherry-picked into the v1.2 branch for the next release.
In order to allow a Go type to have a different internal representation,
provide the comment `// +protobuf.as=MESSAGENAME` which substitutes the
current message's members with the named message's members. Used by the
unversioned.Time type to have the same fields as unversioned.Timestamp
and to have an alternate serialization.
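For context, a minimal sketch of how the marker is applied, modeled on the `unversioned.Time` type (surrounding details abridged):

```go
// Time is a wrapper around time.Time with an alternate wire format.
//
// The marker below tells the protobuf generator to emit this message with
// the members of the Timestamp message instead of Time's own fields.
//
// +protobuf.as=Timestamp
type Time struct {
	time.Time `protobuf:"-"`
}
```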
Automatic merge from submit-queue
support NETWORK_PROVIDER=cni for KUBERNETES_PROVIDER=vagrant
While trying to develop CNI plugins for K8s, I found the docs referenced support for `--network-plugin=cni` in the kubelet, but this wasn't surfaced via Salt to support `NETWORK_PROVIDER=cni` before a kube-up deployment.
This PR is my attempt at adding CNI support to the kube-up happy path, following the similar work that already exists for `NETWORK_PROVIDER=kubenet`.
Also, I've added the ability to copy CNI plugins (binaries) and configuration files from the local `cluster/network-plugins` directory into the necessary locations, as referenced here for CNI:
http://kubernetes.io/docs/admin/network-plugins
This allows a local developer to easily work on CNI plugin development while following the existing kube-up.sh docs and process.
In general, I've struggled to find authoritative information or answers to my questions in Slack regarding CNI progress and correct integration, so comments are encouraged here!