This was a dumb mistake during a refactor of configure-vm. I tested
this early, refactored the tail of this file, spot-checked kube-push,
and failed to test kube-push properly. My bad.
Fixes #5361. Fixes #5408.
This feature adds Juju provisioning to the kube-up script. It currently
parses out the prerequisites on Debian/Ubuntu-based systems and installs
them if they are missing.
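Roughly, the prerequisite handling amounts to something like the sketch
below (the apt-get call and the juju-core/juju-quickstart package names
are assumptions here, not a quote of the script):

    # Install the Juju tooling on Debian/Ubuntu if it is missing.
    if ! command -v juju >/dev/null 2>&1; then
      sudo apt-get update
      sudo apt-get install -y juju-core juju-quickstart  # assumed package names
    fi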
From there we followed the integration path found in the libvirt-coreos
provider, implementing the methods from the boilerplate and backing them
with Juju service calls. There are a few "arbitrary sleeps" in the code
to allow the cloud provider to settle and properly deploy. These work
around cases where the script executes faster than Juju can communicate
from the state server to subsequent nodes. I left comments inline at
these points.
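For illustration, the settle logic is along these lines (a sketch only;
the unit name kubernetes-master and the status polling are assumptions,
not the exact code):

    # Give the cloud provider time to settle before talking to the state server.
    sleep 120  # arbitrary; juju can lag behind the script here

    # A gentler variant: poll status until the master unit reports started.
    until juju status kubernetes-master | grep -q "agent-state: started"; do
      sleep 10
    done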
To exercise this:
    export KUBERNETES_PROVIDER=juju
    cluster/kube-up.sh
It will spin up a reference architecture with 1 Kubernetes master and 2
minions, and run the cluster validation checks against the deployment.
This bridges the gap between the Juju-specific bits and the upstream
recommended guides for getting started with Juju.
To note, if you do not have a "current environment" set in Juju, it will
launch the quickstart integration wizard in interactive mode, allowing
you to configure Juju and add and use the proper provider. Otherwise it
assumes you're in the provider you wish to use and will deploy there.
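Conceptually the environment handling looks like this (hedged sketch;
the exact check and the quickstart invocation are assumptions):

    # Fall back to the interactive quickstart wizard if no environment is set.
    if ! juju switch >/dev/null 2>&1; then
      juju quickstart -i  # assumed invocation of the juju-quickstart plugin
    fi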
The better solution is some kind of fence with Salt, but the actual logs
provided in the bug don't support any race condition here, and the
ordering in the Salt configuration seems correct.
We haven't seen this again in a while, but given the results of the
situation (a borked cluster), I'm proposing a relatively simple
workaround.
Fixes #4357 (dubiously)
I had some trouble with the Kubernetes Docker image for SkyDNS being outdated. In my experience the version in `kubernetes/skydns:2014-12-23-001` will not behave correctly if it manages to start up before etcd; for details, see skynetservices/skydns#142.
Updating to the latest SkyDNS image fixes this.
IP address. The configure-vm script can resolve this relatively easily
on the node. This is less painful for GKE, which creates all the
resources in parallel.
Change provisioning to pass all variables to both master and node. Run
Salt in a masterless setup on all nodes, a la
http://docs.saltstack.com/en/latest/topics/tutorials/quickstart.html,
which involves ensuring the Salt daemon is NOT running after install.
Kill the Salt master install. And fix push to actually work in this new
flow.
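Concretely, the masterless setup on each node boils down to something
like this (a sketch, not the verbatim configure-vm.sh logic; the
service/update-rc.d handling is an assumption):

    # Install salt-minion but make sure the daemon is NOT left running;
    # Salt is only ever invoked locally in masterless mode.
    apt-get install -y salt-minion
    service salt-minion stop
    update-rc.d salt-minion disable  # assumed; keep it from starting on boot

    # Apply the local state tree directly, no master involved.
    salt-call --local state.highstate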
As part of this, the GCE Salt config no longer has access to the Salt
mine, which is primarily obnoxious for two reasons:
- The minions can't use Salt to see the master: this is easily fixed by
  static config (see the sketch after this list).
- The master can't see the list of all the minions: this is fixed
  temporarily by static config in util.sh, but later by other means (see
  https://github.com/GoogleCloudPlatform/kubernetes/issues/156, which
  should eventually remove this direction).
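The static config mentioned above is along these lines (a sketch; the
grain name, the file path, and the KUBERNETES_MASTER_NAME variable are
assumptions, not the actual configure-vm.sh contents):

    # On each minion: record the master statically instead of asking the Salt mine.
    mkdir -p /etc/salt/minion.d
    {
      echo "grains:"
      echo "  api_servers: ${KUBERNETES_MASTER_NAME}"  # assumed variable holding the master name
    } > /etc/salt/minion.d/grains.conf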
As part of it, flatten all of cluster/gce/templates/* into
configure-vm.sh, using a single, separate piece of YAML to drive the
environment variables, rather than constantly rewriting the startup
script.
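The YAML-driven piece is roughly as follows (hedged sketch; the
kube-env.yaml name and the flat "KEY: 'value'" format are assumptions):

    # kube-env.yaml holds flat KEY: 'value' pairs, e.g.:
    #   KUBERNETES_MASTER: 'true'
    #   MASTER_NAME: 'kubernetes-master'
    # configure-vm.sh turns them into exported environment variables.
    while IFS=: read -r key value; do
      key="${key// /}"                        # trim whitespace from the key
      value="$(echo "${value}" | tr -d " '")" # strip quotes and spaces from the value
      [[ -n "${key}" ]] && export "${key}=${value}"
    done < kube-env.yaml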