Merge pull request #7195 from mbruzek/add-charms

Add the Juju charms to Kubernetes
This commit is contained in:
Eric Tune 2015-04-30 16:26:58 -07:00
commit dddf414cf5
58 changed files with 2573 additions and 63 deletions

View File

@ -0,0 +1,197 @@
# kubernetes-bundle
The kubernetes-bundle allows you to deploy the many services of
Kubernetes to a cloud environment and get started using the Kubernetes
technology quickly.
## Kubernetes
Kubernetes is an open source system for managing containerized
applications. Kubernetes uses [Docker](http://docker.com) to run
containerized applications.
## Juju TL;DR
The [Juju](https://juju.ubuntu.com) system provides provisioning and
orchestration across a variety of clouds and bare metal. A Juju bundle
describes a collection of services and how they interrelate. `juju
quickstart` allows you to bootstrap a deployment environment and
deploy a bundle.
## Dive in!
#### Install Juju Quickstart
You will need to
[install the Juju client](https://juju.ubuntu.com/install/) and
`juju-quickstart` as prerequisites. To deploy the bundle, use
`juju-quickstart`, which runs on Mac OS (`brew install
juju-quickstart`) or Ubuntu (`apt-get install juju-quickstart`).
### Deploy Kubernetes Bundle
Deploy Kubernetes onto any cloud, orchestrated directly in the Juju
Graphical User Interface, using `juju quickstart`:
juju quickstart -i https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
The command above does a few things for you:
- Starts a curses based gui for managing your cloud or MAAS credentials
- Looks for a bootstrapped deployment environment, and bootstraps if
required. This will launch a bootstrap node in your chosen
deployment environment (machine 0).
- Deploys the Juju GUI to your environment onto the bootstrap node.
- Provisions 4 machines, and deploys the Kubernetes services on top of
them (Kubernetes-master, two Kubernetes minions using flannel, and etcd).
- Orchestrates the relations among the services, and exits.
Now you should have a running Kubernetes cluster. Run `juju status
--format=oneline` to see the address of your kubernetes-master.
For further reading, see the [Juju Quickstart documentation](https://pypi.python.org/pypi/juju-quickstart).
### Using the Kubernetes Client
You'll need the Kubernetes command line client,
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
to interact with the created cluster. The kubectl command is
installed on the kubernetes-master charm. If you want to work with
the cluster from your computer you will need to install the binary
locally (see instructions below).
You can access kubectl in a number of ways using Juju:
via juju run:
juju run --service kubernetes-master/0 "sudo kubectl get mi"
via juju ssh:
juju ssh kubernetes-master/0 -t "sudo kubectl get mi"
You may also `juju ssh kubernetes-master/0` and call kubectl from that
machine.
See the
[kubectl documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
for more details of what can be done with the command line tool.
### Scaling up the cluster
You can add capacity by adding more Docker units:
juju add-unit docker
### Known Limitations
Kubernetes currently has some platform-specific functionality. For
example, load balancers and persistent volumes only work with the
Google Compute Engine provider at this time.
The Juju integration uses the Kubernetes null provider. This means
external load balancers and storage can't be directly driven through
Kubernetes config files at this time. We look forward to adding these
capabilities to the charms.
## More about the components the bundle deploys
### Kubernetes master
The master controls the Kubernetes cluster. It manages the worker
nodes and provides the primary interface for control by the user.
### Kubernetes minion
The minions are the servers that perform the work. Minions must
communicate with the master and run the workloads that are assigned to
them.
### Flannel-docker
Flannel provides individual subnets for each machine in the cluster by
creating a
[software-defined network](http://en.wikipedia.org/wiki/Software-defined_networking).
### Docker
An open platform for distributed applications for developers and sysadmins.
### Etcd
Etcd persists state for Flannel and Kubernetes. It is a distributed
key-value store with an http interface.
## For further information on getting started with Juju
Juju has complete documentation with regard to setup and cloud
configuration on its own
[documentation site](https://juju.ubuntu.com/docs/).
- [Getting Started](https://juju.ubuntu.com/docs/getting-started.html)
- [Using Juju](https://juju.ubuntu.com/docs/charms.html)
## Installing kubectl outside of the kubernetes-master machine
Download the Kubernetes release from
https://github.com/GoogleCloudPlatform/kubernetes/releases and extract
it. You can then use the CLI binary directly at
./kubernetes/platforms/linux/amd64/kubectl
You'll need the address of the kubernetes-master as an environment variable:
juju status kubernetes-master/0
Grab the public-address there and export it as the KUBERNETES_MASTER
environment variable:
export KUBERNETES_MASTER=$(juju status --format=oneline kubernetes-master | cut -d' ' -f3):8080
And now you can run kubectl on the command line:
kubectl get mi
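As a sketch (not part of the charm) of what the shell pipeline above does: the address is simply the third space-separated field of each `juju status --format=oneline` entry, with the API port appended. The sample status line in the comment is illustrative, not a guaranteed format.

```python
def master_endpoint(oneline_status, port=8080):
    """Mimic `cut -d' ' -f3` on a `juju status --format=oneline` entry.

    The status line format assumed here is illustrative only.
    """
    address = oneline_status.split(' ')[2]
    return '%s:%s' % (address, port)

# Hypothetical status line:
# master_endpoint('- kubernetes-master/0: 104.131.108.99 (started)')
# returns '104.131.108.99:8080'
```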
See the
[kubectl documentation](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
for more details of what can be done with the command line tool.
## Hacking on the kubernetes-bundle and associated charms
The kubernetes-bundle is open source and available on github.com. If
you want to get started developing on the bundle you can clone it from
github. Often you will need the related charms which are also on
github.
mkdir ~/bundles
git clone https://github.com/whitmo/kubernetes-bundle.git ~/bundles/kubernetes-bundle
mkdir -p ~/charms/trusty
git clone https://github.com/whitmo/kubernetes-charm.git ~/charms/trusty/kubernetes
git clone https://github.com/whitmo/kubernetes-master-charm.git ~/charms/trusty/kubernetes-master
juju quickstart specs/develop.yaml
## How to contribute
Send us pull requests! We'll send you a cookie if they include tests and docs.
## Current and Most Complete Information
- [kubernetes-master charm on Github](https://github.com/whitmo/charm-kubernetes-master)
- [kubernetes charm on GitHub](https://github.com/whitmo/charm-kubernetes)
- [etcd charm on GitHub](https://github.com/whitmo/etcd-charm)
- [Flannel charm on GitHub](https://github.com/chuckbutler/docker-flannel-charm)
- [Docker charm on GitHub](https://github.com/chuckbutler/docker-charm)
For more information, visit the
[Kubernetes project](https://github.com/GoogleCloudPlatform/kubernetes)
or check out the
[Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs)
for more details about Kubernetes concepts and terminology.
Having a problem? Check the [Kubernetes issues database](https://github.com/GoogleCloudPlatform/kubernetes/issues)
for related issues.

View File

@ -0,0 +1,50 @@
kubernetes-local:
services:
kubernetes-master:
charm: local:trusty/kubernetes-master
annotations:
"gui-x": "600"
"gui-y": "0"
expose: true
options:
version: "v0.15.0"
docker:
charm: docker
branch: https://github.com/chuckbutler/docker-charm.git
num_units: 2
options:
latest: true
annotations:
"gui-x": "0"
"gui-y": "0"
flannel-docker:
charm: cs:trusty/flannel-docker
annotations:
"gui-x": "0"
"gui-y": "300"
kubernetes:
charm: local:trusty/kubernetes
annotations:
"gui-x": "300"
"gui-y": "300"
etcd:
charm: cs:~kubernetes/trusty/etcd
annotations:
"gui-x": "300"
"gui-y": "0"
relations:
- - "flannel-docker:network"
- "docker:network"
- - "flannel-docker:docker-host"
- "docker:juju-info"
- - "flannel-docker:db"
- "etcd:client"
- - "kubernetes:docker-host"
- "docker:juju-info"
- - "etcd:client"
- "kubernetes:etcd"
- - "etcd:client"
- "kubernetes-master:etcd"
- - "kubernetes-master:minions-api"
- "kubernetes:api"
series: trusty

cluster/juju/charms/trusty/.gitignore vendored Normal file
View File

@ -0,0 +1 @@
/docker

View File

@ -0,0 +1 @@
.git

View File

@ -0,0 +1,5 @@
*~
.bzr
.venv
unit_tests/__pycache__
*.pyc

View File

@ -0,0 +1,5 @@
omit:
- .git
- .gitignore
- .gitmodules
- revision

View File

@ -0,0 +1,29 @@
build: virtualenv lint test
virtualenv:
virtualenv .venv
.venv/bin/pip install -q -r requirements.txt
lint: virtualenv
@.venv/bin/flake8 hooks unit_tests --exclude=charmhelpers
@.venv/bin/charm proof
test: virtualenv
@CHARM_DIR=. PYTHONPATH=./hooks .venv/bin/py.test -v unit_tests/*
functional-test:
@bundletester
release: check-path virtualenv
@.venv/bin/pip install git-vendor
@.venv/bin/git-vendor sync -d ${KUBERNETES_MASTER_BZR}
check-path:
ifndef KUBERNETES_MASTER_BZR
$(error KUBERNETES_MASTER_BZR is undefined)
endif
clean:
rm -rf .venv
find -name *.pyc -delete

View File

@ -0,0 +1,101 @@
# Kubernetes Master Charm
[Kubernetes](https://github.com/googlecloudplatform/kubernetes) is an open
source system for managing containerized applications across multiple hosts.
Kubernetes uses [Docker](http://www.docker.io/) to package, instantiate and run
containerized applications.
The Kubernetes Juju charms enable you to run Kubernetes on all the cloud
platforms that Juju supports.
A Kubernetes deployment consists of several independent charms that can be
scaled to meet your needs.
### Etcd
Etcd is a key value store for Kubernetes. All persistent master state
is stored in `etcd`.
### Flannel-docker
Flannel is a
[software defined networking](http://en.wikipedia.org/wiki/Software-defined_networking)
component that provides individual subnets for each machine in the cluster.
### Docker
Docker is an open platform for distributed applications for developers and system administrators.
### Kubernetes master
The controlling unit in a Kubernetes cluster is called the master. It is the
main management contact point providing many management services for the worker
nodes.
### Kubernetes minion
The servers that perform the work are known as minions. Minions must be able to
communicate with the master and run the workloads that are assigned to them.
## Usage
#### Deploying the Development Focus
To deploy a Kubernetes environment in Juju :
juju deploy cs:~kubernetes/trusty/etcd
juju deploy cs:trusty/flannel-docker
juju deploy cs:trusty/docker
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel-docker
juju add-relation flannel-docker:network docker:network
juju add-relation flannel-docker:docker-host docker
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the recommended configuration
A bundle can be used to deploy Kubernetes onto any cloud, and it can
be orchestrated directly in the Juju Graphical User Interface when
using `juju quickstart`:
juju quickstart https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
For more information on the recommended bundle deployment, see the
[Kubernetes bundle documentation](https://github.com/whitmo/bundle-kubernetes).
#### Post Deployment
To interact with the kubernetes environment, either build or
[download](https://github.com/GoogleCloudPlatform/kubernetes/releases) the
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
binary (available in the releases binary tarball) and point it to the master with:
$ juju status kubernetes-master | grep public
public-address: 104.131.108.99
$ export KUBERNETES_MASTER="104.131.108.99"
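A small sketch (not part of the charm) of what the `grep public` step extracts: scan the status output for the first `public-address:` line and take its value.

```python
def public_address(status_text):
    # Emulate `juju status kubernetes-master | grep public`:
    # return the value of the first public-address line found.
    for line in status_text.splitlines():
        line = line.strip()
        if line.startswith('public-address:'):
            return line.split(':', 1)[1].strip()
    return None
```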
# Configuration
For your convenience this charm supports changing the version of the kubernetes binaries.
This can be done through the Juju GUI or on the command line:
juju set kubernetes version="v0.10.0"
If the charm does not already contain the tar file with the desired architecture
and version it will attempt to download the kubernetes binaries using the gsutil
command.
Congratulations, you have now deployed a Kubernetes environment! Use the
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
to interact with the environment.
# Kubernetes information
- [Kubernetes github project](https://github.com/GoogleCloudPlatform/kubernetes)
- [Kubernetes issue tracker](https://github.com/GoogleCloudPlatform/kubernetes/issues)
- [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs)
- [Kubernetes releases](https://github.com/GoogleCloudPlatform/kubernetes/releases)

View File

@ -0,0 +1,9 @@
options:
version:
type: string
default: "v0.15.0"
description: |
The kubernetes release to use in this charm. The binary files are
compiled from the source identified by this tag in github. Using the
value of "source" will use the master kubernetes branch when compiling
the binaries.
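The mapping the description above implies can be sketched as follows: the special value "source" (the charm's hooks also accept "head" and "master") builds from the master branch, while any other value is treated as a release tag.

```python
def version_to_branch(version):
    # "source"/"head"/"master" build from the master branch;
    # any other value is treated as a git release tag.
    if version in ('source', 'head', 'master'):
        return 'master'
    return 'tags/%s' % version
```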

View File

@ -0,0 +1,13 @@
Copyright 2015 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,20 @@
description "Kubernetes API Server"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/apiserver \
--address=%(api_bind_address)s \
--etcd_servers=%(etcd_servers)s \
--logtostderr=true \
--portal_net=10.244.240.0/20
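The `%(...)s` placeholders in the upstart template above are standard Python named string-formatting fields; the charm's hook code fills them from a plain dict when rendering. A minimal sketch (the address values below are illustrative, not real defaults):

```python
# Render a template like the one above using %-style named placeholders,
# as the charm's render_file hook does with `tmpl % data`.
template = (
    "exec /usr/local/bin/apiserver \\\n"
    "  --address=%(api_bind_address)s \\\n"
    "  --etcd_servers=%(etcd_servers)s \\\n"
    "  --logtostderr=true"
)
data = {
    'api_bind_address': '10.0.0.4',          # illustrative value
    'etcd_servers': 'http://10.0.0.5:4001',  # illustrative value
}
rendered = template % data
```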

View File

@ -0,0 +1,20 @@
description "Kubernetes Controller"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/controller-manager \
--address=%(bind_address)s \
--logtostderr=true \
--master=%(api_server_address)s

View File

@ -0,0 +1,6 @@
server {
listen %(api_bind_address)s:80;
location %(web_uri)s {
alias /opt/kubernetes/_output/local/bin/linux/amd64/;
}
}

View File

@ -0,0 +1,39 @@
# HTTP/HTTPS server
#
server {
listen 80;
server_name localhost;
root html;
index index.html index.htm;
# ssl on;
# ssl_certificate /usr/share/nginx/server.cert;
# ssl_certificate_key /usr/share/nginx/server.key;
# ssl_session_timeout 5m;
# ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
# ssl_ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+3DES:!aNULL:!MD5:!DSS;
# ssl_prefer_server_ciphers on;
location / {
# auth_basic "Restricted";
# auth_basic_user_file /usr/share/nginx/htpasswd;
# Proxy settings
# disable buffering so that watch works
proxy_buffering off;
proxy_pass %(api_server_address)s;
proxy_connect_timeout 159s;
proxy_send_timeout 600s;
proxy_read_timeout 600s;
# Disable retry
proxy_next_upstream off;
# Support web sockets
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}

View File

@ -0,0 +1,20 @@
description "Kubernetes Scheduler"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 30 # wait 30s between SIGTERM and SIGKILL.
exec /usr/local/bin/scheduler \
--address=%(bind_address)s \
--logtostderr=true \
--master=%(api_server_address)s

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,211 @@
#!/usr/bin/python
"""
The main hook file is called by Juju.
"""
import contextlib
import os
import socket
import subprocess
import sys
from charmhelpers.core import hookenv, host
from kubernetes_installer import KubernetesInstaller
from path import path
hooks = hookenv.Hooks()
@contextlib.contextmanager
def check_sentinel(filepath):
"""
A context manager method to write a file while the code block is doing
something and remove the file when done.
"""
fail = False
try:
yield filepath.exists()
except:
fail = True
filepath.touch()
raise
finally:
if fail is False and filepath.exists():
filepath.remove()
@hooks.hook('config-changed')
def config_changed():
"""
On the execution of the juju event 'config-changed' this function
determines the appropriate architecture and the configured version to
create kubernetes binary files.
"""
hookenv.log('Starting config-changed')
charm_dir = path(hookenv.charm_dir())
config = hookenv.config()
# Get the version of kubernetes to install.
version = config['version']
# Get the package architecture, rather than the one from the kernel (uname -m).
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
kubernetes_dir = path('/opt/kubernetes')
if not kubernetes_dir.exists():
print('The source directory {0} does not exist'.format(kubernetes_dir))
print('Was the kubernetes code cloned during install?')
exit(1)
if version in ['source', 'head', 'master']:
branch = 'master'
else:
# Create a branch to a tag.
branch = 'tags/{0}'.format(version)
# Construct the path to the binaries using the arch.
output_path = kubernetes_dir / '_output/local/bin/linux' / arch
installer = KubernetesInstaller(arch, version, output_path)
# Change to the kubernetes directory (git repository).
with kubernetes_dir:
# Create a command to get the current branch.
git_branch = 'git branch | grep "\*" | cut -d" " -f2'
current_branch = subprocess.check_output(git_branch, shell=True).strip()
print('Current branch: ', current_branch)
# Create the path to a file to indicate if the build was broken.
broken_build = charm_dir / '.broken_build'
# write out the .broken_build file while this block is executing.
with check_sentinel(broken_build) as last_build_failed:
print('Last build failed: ', last_build_failed)
# Rebuild if the current version is different or last build failed.
if current_branch != version or last_build_failed:
installer.build(branch)
if not output_path.exists():
broken_build.touch()
else:
print('Notifying minions of version ' + version)
# Notify the minions of a version change.
for r in hookenv.relation_ids('minions-api'):
hookenv.relation_set(r, version=version)
print('Done notifying minions of version ' + version)
# Create the symbolic links to the right directories.
installer.install()
relation_changed()
hookenv.log('The config-changed hook completed successfully.')
@hooks.hook('etcd-relation-changed', 'minions-api-relation-changed')
def relation_changed():
template_data = get_template_data()
# Check required keys
for k in ('etcd_servers',):
if not template_data.get(k):
print "Missing data for", k, template_data
return
print "Running with\n", template_data
# Render and restart as needed
for n in ('apiserver', 'controller-manager', 'scheduler'):
if render_file(n, template_data) or not host.service_running(n):
host.service_restart(n)
# Render the file that makes the kubernetes binaries available to minions.
if render_file(
'distribution', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/distribution') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Render the default nginx template.
if render_file(
'nginx', template_data,
'conf.tmpl', '/etc/nginx/sites-enabled/default') or \
not host.service_running('nginx'):
host.service_reload('nginx')
# Send api endpoint to minions
notify_minions()
def notify_minions():
print("Notify minions.")
config = hookenv.config()
for r in hookenv.relation_ids('minions-api'):
hookenv.relation_set(
r,
hostname=hookenv.unit_private_ip(),
port=8080,
version=config['version'])
def get_template_data():
rels = hookenv.relations()
config = hookenv.config()
template_data = {}
template_data['etcd_servers'] = ",".join([
"http://%s:%s" % (s[0], s[1]) for s in sorted(
get_rel_hosts('etcd', rels, ('hostname', 'port')))])
template_data['minions'] = ",".join(get_rel_hosts('minions-api', rels))
template_data['api_bind_address'] = _bind_addr(hookenv.unit_private_ip())
template_data['bind_address'] = "127.0.0.1"
template_data['api_server_address'] = "http://%s:%s" % (
hookenv.unit_private_ip(), 8080)
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
template_data['web_uri'] = "/kubernetes/%s/local/bin/linux/%s/" % (
config['version'], arch)
_encode(template_data)
return template_data
def _bind_addr(addr):
if addr.replace('.', '').isdigit():
return addr
try:
return socket.gethostbyname(addr)
except socket.error:
raise ValueError("Could not resolve private address")
def _encode(d):
for k, v in d.items():
if isinstance(v, unicode):
d[k] = v.encode('utf8')
def get_rel_hosts(rel_name, rels, keys=('private-address',)):
hosts = []
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_id == hookenv.local_unit():
continue
values = [unit_data.get(k) for k in keys]
if not all(values):
continue
hosts.append(len(values) == 1 and values[0] or values)
return hosts
def render_file(name, data, src_suffix="upstart.tmpl", tgt_path=None):
tmpl_path = os.path.join(
os.environ.get('CHARM_DIR'), 'files', '%s.%s' % (name, src_suffix))
with open(tmpl_path) as fh:
tmpl = fh.read()
rendered = tmpl % data
if tgt_path is None:
tgt_path = '/etc/init/%s.conf' % name
if os.path.exists(tgt_path):
with open(tgt_path) as fh:
contents = fh.read()
if contents == rendered:
return False
with open(tgt_path, 'w') as fh:
fh.write(rendered)
return True
if __name__ == '__main__':
hooks.execute(sys.argv)

View File

@ -0,0 +1 @@
install.py

View File

@ -0,0 +1,90 @@
#!/usr/bin/python
import setup
setup.pre_install()
import subprocess
from charmhelpers.core import hookenv
from charmhelpers import fetch
from charmhelpers.fetch import archiveurl
from path import path
def install():
install_packages()
hookenv.log('Installing go')
download_go()
hookenv.log('Adding kubernetes and go to the path')
strings = [
'export GOROOT=/usr/local/go\n',
'export PATH=$PATH:$GOROOT/bin\n',
'export KUBE_MASTER_IP=0.0.0.0\n',
'export KUBERNETES_MASTER=http://$KUBE_MASTER_IP\n',
]
update_rc_files(strings)
hookenv.log('Downloading kubernetes code')
clone_repository()
hookenv.open_port(8080)
hookenv.log('Install complete')
def download_go():
"""
The Kubernetes charm strives to support upstream. Part of this is installing a
fairly recent edition of Go. This fetches the golang archive and installs
it in /usr/local.
"""
go_url = 'https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz'
go_sha1 = '5020af94b52b65cc9b6f11d50a67e4bae07b0aff'
handler = archiveurl.ArchiveUrlFetchHandler()
handler.install(go_url, '/usr/local', go_sha1, 'sha1')
def clone_repository():
"""
Clone the upstream repository into /opt/kubernetes for deployment compilation
of kubernetes. Subsequently used during upgrades.
"""
repository = 'https://github.com/GoogleCloudPlatform/kubernetes.git'
kubernetes_directory = '/opt/kubernetes'
command = ['git', 'clone', repository, kubernetes_directory]
print(command)
output = subprocess.check_output(command)
print(output)
def install_packages():
"""
Install required packages to build the k8s source, and syndicate between
minion nodes. In addition, fetch pip to handle python dependencies
"""
hookenv.log('Installing Debian packages')
# Create the list of packages to install.
apt_packages = ['build-essential', 'git', 'make', 'nginx', 'python-pip']
fetch.apt_install(fetch.filter_installed_packages(apt_packages))
def update_rc_files(strings):
"""
Preseed the bash environment for ubuntu and root with K8s env vars to
make interfacing with the api easier. (see: kubectl docs)
"""
rc_files = [path('/home/ubuntu/.bashrc'), path('/root/.bashrc')]
for rc_file in rc_files:
lines = rc_file.lines()
for string in strings:
if string not in lines:
lines.append(string)
rc_file.write_lines(lines)
if __name__ == "__main__":
install()

View File

@ -0,0 +1,91 @@
import os
import shlex
import subprocess
from path import path
def run(command, shell=False):
""" A convience method for executing all the commands. """
print(command)
if shell is False:
command = shlex.split(command)
output = subprocess.check_output(command, shell=shell)
print(output)
return output
class KubernetesInstaller():
"""
This class contains the logic needed to install kubernetes binary files.
"""
def __init__(self, arch, version, output_dir):
""" Gather the required variables for the install. """
# The kubernetes-master charm needs certain commands to be aliased.
self.aliases = {'kube-apiserver': 'apiserver',
'kube-controller-manager': 'controller-manager',
'kube-proxy': 'kube-proxy',
'kube-scheduler': 'scheduler',
'kubectl': 'kubectl',
'kubelet': 'kubelet'}
self.arch = arch
self.version = version
self.output_dir = path(output_dir)
def build(self, branch):
""" Build kubernetes from a github repository using the Makefile. """
# Remove any old build artifacts.
make_clean = 'make clean'
run(make_clean)
# Always checkout the master to get the latest repository information.
git_checkout_cmd = 'git checkout master'
run(git_checkout_cmd)
# When checking out a tag, delete the old branch (not master).
if branch != 'master':
git_drop_branch = 'git branch -D {0}'.format(self.version)
print(git_drop_branch)
rc = subprocess.call(git_drop_branch.split())
if rc != 0:
print('returned: %d' % rc)
# Make sure the git repository is up-to-date.
git_fetch = 'git fetch origin {0}'.format(branch)
run(git_fetch)
if branch == 'master':
git_reset = 'git reset --hard origin/master'
run(git_reset)
else:
# Checkout a branch of kubernetes so the repo is correct.
checkout = 'git checkout -b {0} {1}'.format(self.version, branch)
run(checkout)
# Create an environment with the path to the GO binaries included.
go_path = ('/usr/local/go/bin', os.environ.get('PATH', ''))
go_env = os.environ.copy()
go_env['PATH'] = ':'.join(go_path)
print(go_env['PATH'])
# Compile the binaries with the make command using the WHAT variable.
make_what = "make all WHAT='cmd/kube-apiserver cmd/kubectl "\
"cmd/kube-controller-manager plugin/cmd/kube-scheduler "\
"cmd/kubelet cmd/kube-proxy'"
print(make_what)
rc = subprocess.call(shlex.split(make_what), env=go_env)
def install(self, install_dir=path('/usr/local/bin')):
""" Install kubernetes binary files from the output directory. """
if not install_dir.isdir():
install_dir.makedirs_p()
# Create the symbolic links to the real kubernetes binaries.
for key, value in self.aliases.iteritems():
target = self.output_dir / key
if target.exists():
link = install_dir / value
if link.exists():
link.remove()
target.symlink(link)
else:
print('Error target file {0} does not exist.'.format(target))
exit(1)

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,30 @@
def pre_install():
"""
Do any setup required before the install hook.
"""
install_charmhelpers()
install_path()
def install_charmhelpers():
"""
Install the charmhelpers library, if not present.
"""
try:
import charmhelpers # noqa
except ImportError:
import subprocess
subprocess.check_call(['apt-get', 'install', '-y', 'python-pip'])
subprocess.check_call(['pip', 'install', 'charmhelpers'])
def install_path():
"""
Install the path.py library, when not present.
"""
try:
import path # noqa
except ImportError:
import subprocess
subprocess.check_call(['apt-get', 'install', '-y', 'python-pip'])
subprocess.check_call(['pip', 'install', 'path.py'])

File diff suppressed because one or more lines are too long


View File

@ -0,0 +1,19 @@
name: kubernetes-master
summary: Container Cluster Management Master
description: |
Provides a kubernetes api endpoint and a scheduler for managing containers.
maintainers:
- Matt Bruzek <matt.bruzek@canonical.com>
- Whit Morriss <whit.morriss@canonical.com>
- Charles Butler <charles.butler@canonical.com>
tags:
- ops
- network
provides:
client-api:
interface: kubernetes-client
minions-api:
interface: kubernetes-api
requires:
etcd:
interface: etcd

View File

@ -0,0 +1,75 @@
kubernetes-master
-----------------
notes on src
------------
current provider responsibilities
- instances
- load balancers
- zones (not useful as its only for apiserver).
provider functionality currently hardcoded to gce across codebase
- persistent storage
ideas
-----
- juju provider impl
- file provider for machines/minions
- openvpn as overlay per extant salt config.
cloud
-----
todo
----
- token auth file
- format csv -> token, user, uid
- config privileged
- config log-level
- config / check logs collection endpoint
- config / version and binary location via url
Q/A
----
https://botbot.me/freenode/google-containers/2014-10-17/?msg=23696683&page=6
Q. The new volumes/storage provider api appears to be hardcoded to
gce.. Is there a plan to abstract that anytime soon?
A. effectively it is abstract enough for the moment, no plans to
change, but willing subject to suitable abstraction.
Q.The zone provider api appears to return the address only of the api
server afaics. How is that useful? afaics the better semantic would be
an attribute on the minions to instantiate multiple templates across
zones?
A. apparently not considered, current solution for ha is multiple k8s
per zone with external lb. pointed out this was inane.
Q. Several previous platforms supported have been moved to the icebox,
just curious what was subject to bitrot. the salt/shell script for
those platforms or something more api intrinsic?
A. apparently the change to ship binaries instead of build from src
broke them.. somehow.
Q. i'm mostly interested in flannel due to its portability. Does the
inter pod networking setup need to include the other components of the
system, ie does api talk directly to containers, or only via kubelet.
A. api server only talks to kubelet
Q. Status of HA?
A. not done yet, election package merged, nothing using it.
Afaics design discussion doesn't take place on the list.
Q. Is minion registration supported, ie. bypassing cloud provider
filter all instances via regex match?
A. not done yet, pull request in review for minions in etcd (not
found, perhaps merged)
-------------
cadvisor usage helper
https://github.com/GoogleCloudPlatform/heapster

View File

@ -0,0 +1,5 @@
flake8
pytest
bundletester
path.py
charmhelpers

View File

@ -0,0 +1,105 @@
from mock import patch
from path import path
from path import Path
import pytest
import subprocess
import sys
# Add the hooks directory to the python path.
hooks_dir = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, hooks_dir.abspath())
# Import the module to be tested.
import kubernetes_installer
def test_run():
""" Test the run method both with valid commands and invalid commands. """
ls = 'ls -l {0}/kubernetes_installer.py'.format(hooks_dir)
output = kubernetes_installer.run(ls, False)
assert output
assert 'kubernetes_installer.py' in output
output = kubernetes_installer.run(ls, True)
assert output
assert 'kubernetes_installer.py' in output
invalid_directory = path('/not/a/real/directory')
assert not invalid_directory.exists()
invalid_command = 'ls {0}'.format(invalid_directory)
with pytest.raises(subprocess.CalledProcessError) as error:
kubernetes_installer.run(invalid_command)
print(error)
with pytest.raises(subprocess.CalledProcessError) as error:
kubernetes_installer.run(invalid_command, shell=True)
print(error)
class TestKubernetesInstaller():
def makeone(self, *args, **kw):
""" Create the KubernetesInstaller object and return it. """
from kubernetes_installer import KubernetesInstaller
return KubernetesInstaller(*args, **kw)
def test_init(self):
""" Test that the init method correctly assigns the variables. """
ki = self.makeone('i386', '3.0.1', '/tmp/does_not_exist')
assert ki.aliases
assert 'kube-apiserver' in ki.aliases
assert 'kube-controller-manager' in ki.aliases
assert 'kube-scheduler' in ki.aliases
assert 'kubectl' in ki.aliases
assert 'kubelet' in ki.aliases
assert ki.arch == 'i386'
assert ki.version == '3.0.1'
assert ki.output_dir == path('/tmp/does_not_exist')
@patch('kubernetes_installer.run')
@patch('kubernetes_installer.subprocess.call')
def test_build(self, cmock, rmock):
""" Test the build method with master and non-master branches. """
directory = path('/tmp/kubernetes_installer_test/build')
ki = self.makeone('amd64', 'v99.00.11', directory)
assert not directory.exists(), 'The %s directory exists!' % directory
# Call the build method with "master" branch.
ki.build("master")
# TODO: run is called many times but mock only remembers last one.
rmock.assert_called_with('git reset --hard origin/master')
# TODO: call is complex and hard to verify with mock, fix that.
cmock.assert_called_once()
# Call the build method with something other than "master" branch.
ki.build("branch")
# TODO: run is called many times, but mock only remembers last one.
rmock.assert_called_with('git checkout -b v99.00.11 branch')
# TODO: call is complex and hard to verify with mock, fix that.
cmock.assert_called_once()
directory.rmtree_p()
def test_install(self):
""" Test the install method that it creates the correct links. """
directory = path('/tmp/kubernetes_installer_test/install')
ki = self.makeone('ppc64le', '1.2.3', directory)
assert not directory.exists(), 'The %s directory exists!' % directory
directory.makedirs_p()
# Create the files for the install method to link to.
(directory / 'kube-apiserver').touch()
(directory / 'kube-controller-manager').touch()
(directory / 'kube-proxy').touch()
(directory / 'kube-scheduler').touch()
(directory / 'kubectl').touch()
(directory / 'kubelet').touch()
results = directory / 'install/results/go/here'
assert not results.exists()
ki.install(results)
assert results.isdir()
# Check that all the files were correctly aliased and are links.
assert (results / 'apiserver').islink()
assert (results / 'controller-manager').islink()
assert (results / 'kube-proxy').islink()
assert (results / 'scheduler').islink()
assert (results / 'kubectl').islink()
assert (results / 'kubelet').islink()
directory.rmtree_p()

View File

@ -0,0 +1,92 @@
from mock import patch, Mock, MagicMock
from path import Path
import pytest
import sys
# Munge the python path so we can find our hook code
d = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, d.abspath())
# Import the modules from the hook
import install
class TestInstallHook():
@patch('install.path')
def test_update_rc_files(self, pmock):
"""
Test happy path on updating env files. Assuming everything
exists and is in place.
"""
pmock.return_value.lines.return_value = ['line1', 'line2']
install.update_rc_files(['test1', 'test2'])
pmock.return_value.write_lines.assert_called_with(['line1', 'line2',
'test1', 'test2'])
def test_update_rc_files_with_nonexistent_path(self):
"""
Test an unhappy path if the bashrc/users do not exist.
"""
with pytest.raises(OSError) as exinfo:
install.update_rc_files(['test1','test2'])
@patch('install.fetch')
@patch('install.hookenv')
def test_package_installation(self, hemock, ftmock):
"""
Verify we are calling the known essentials to build and syndicate
kubes.
"""
pkgs = ['build-essential', 'git',
'make', 'nginx', 'python-pip']
install.install_packages()
hemock.log.assert_called_with('Installing Debian packages')
ftmock.filter_installed_packages.assert_called_with(pkgs)
@patch('install.archiveurl.ArchiveUrlFetchHandler')
def test_go_download(self, aumock):
"""
Test that we are actually handing off to charm-helpers to
download a specific archive of Go. This is non-configurable so
it's reasonably safe to assume we're going to always do this,
and when it changes we shall curse the brittleness of this test.
"""
ins_mock = aumock.return_value.install
install.download_go()
url = 'https://storage.googleapis.com/golang/go1.4.2.linux-amd64.tar.gz'
sha1 = '5020af94b52b65cc9b6f11d50a67e4bae07b0aff'
ins_mock.assert_called_with(url, '/usr/local', sha1, 'sha1')
@patch('install.subprocess')
def test_clone_repository(self, spmock):
"""
We're not using a unit-tested git library - so ensure our subprocess
call is consistent. If we change this, we want to know we've broken it.
"""
install.clone_repository()
repo = 'https://github.com/GoogleCloudPlatform/kubernetes.git'
direct = '/opt/kubernetes'
spmock.check_output.assert_called_with(['git', 'clone', repo, direct])
@patch('install.install_packages')
@patch('install.download_go')
@patch('install.clone_repository')
@patch('install.update_rc_files')
@patch('install.hookenv')
def test_install_main(self, hemock, urmock, crmock, dgmock, ipmock):
"""
Ensure the driver/main method is calling all the supporting methods.
"""
strings = [
'export GOROOT=/usr/local/go\n',
'export PATH=$PATH:$GOROOT/bin\n',
'export KUBE_MASTER_IP=0.0.0.0\n',
'export KUBERNETES_MASTER=http://$KUBE_MASTER_IP\n',
]
install.install()
crmock.assert_called_once()
dgmock.assert_called_once()
ipmock.assert_called_once()
urmock.assert_called_with(strings)
hemock.open_port.assert_called_with(8080)

View File

@ -0,0 +1 @@
.git

View File

@ -0,0 +1,6 @@
.bzr
*.pyc
*~
*\#*
/files/.kubernetes-*
.venv

View File

@ -0,0 +1,5 @@
omit:
- .git
- .gitignore
- .gitmodules
- revision

View File

@ -0,0 +1,29 @@
build: virtualenv lint test
virtualenv:
virtualenv .venv
.venv/bin/pip install -q -r requirements.txt
lint: virtualenv
@.venv/bin/flake8 hooks unit_tests --exclude=charmhelpers
@.venv/bin/charm proof
test: virtualenv
@CHARM_DIR=. PYTHONPATH=./hooks .venv/bin/py.test unit_tests/*
functional-test:
@bundletester
release: check-path virtualenv
@.venv/bin/pip install git-vendor
@.venv/bin/git-vendor sync -d ${KUBERNETES_BZR}
check-path:
ifndef KUBERNETES_BZR
$(error KUBERNETES_BZR is undefined)
endif
clean:
rm -rf .venv
find . -name '*.pyc' -delete

View File

@ -0,0 +1,100 @@
# Kubernetes Minion Charm
[Kubernetes](https://github.com/googlecloudplatform/kubernetes) is an open
source system for managing containerized applications across multiple hosts.
Kubernetes uses [Docker](http://www.docker.io/) to package, instantiate and run
containerized applications.
The Kubernetes Juju charms enable you to run Kubernetes on all the cloud
platforms that Juju supports.
A Kubernetes deployment consists of several independent charms that can be
scaled to meet your needs.
### Etcd
Etcd is a key value store for Kubernetes. All persistent master state
is stored in `etcd`.
### Flannel-docker
Flannel is a
[software defined networking](http://en.wikipedia.org/wiki/Software-defined_networking)
component that provides individual subnets for each machine in the cluster.
### Docker
Docker is an open platform for building, shipping, and running distributed applications.
### Kubernetes master
The controlling unit in a Kubernetes cluster is called the master. It is the
main management contact point providing many management services for the worker
nodes.
### Kubernetes minion
The servers that perform the work are known as minions. Minions must be able to
communicate with the master and run the workloads that are assigned to them.
## Usage
#### Deploying the Development Focus
To deploy a Kubernetes environment in Juju:
juju deploy cs:~kubernetes/trusty/etcd
juju deploy cs:trusty/flannel-docker
juju deploy cs:trusty/docker
juju deploy local:trusty/kubernetes-master
juju deploy local:trusty/kubernetes
juju add-relation etcd flannel-docker
juju add-relation flannel-docker:network docker:network
juju add-relation flannel-docker:docker-host docker
juju add-relation etcd kubernetes
juju add-relation etcd kubernetes-master
juju add-relation kubernetes kubernetes-master
#### Deploying the recommended configuration
A bundle can be used to deploy Kubernetes onto any cloud, and it can be
orchestrated directly in the Juju Graphical User Interface when using
`juju quickstart`:
juju quickstart https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
For more information on the recommended bundle deployment, see the
[Kubernetes bundle documentation](https://github.com/whitmo/bundle-kubernetes)
#### Post Deployment
To interact with the kubernetes environment, either build or
[download](https://github.com/GoogleCloudPlatform/kubernetes/releases) the
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
binary (available in the releases binary tarball) and point it to the master with:
$ juju status kubernetes-master | grep public
public-address: 104.131.108.99
$ export KUBERNETES_MASTER="104.131.108.99"
# Configuration
For your convenience, this charm supports changing the version of the Kubernetes binaries.
This can be done through the Juju GUI or on the command line:
juju set kubernetes version="v0.10.0"
If the charm does not already contain the tar file with the desired architecture
and version, it will attempt to download the Kubernetes binaries using the
`gsutil` command.
Congratulations, you now have a Kubernetes environment deployed! Use
[kubectl](https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/kubectl.md)
to interact with the environment.
# Kubernetes information
- [Kubernetes github project](https://github.com/GoogleCloudPlatform/kubernetes)
- [Kubernetes issue tracker](https://github.com/GoogleCloudPlatform/kubernetes/issues)
- [Kubernetes Documentation](https://github.com/GoogleCloudPlatform/kubernetes/tree/master/docs)
- [Kubernetes releases](https://github.com/GoogleCloudPlatform/kubernetes/releases)

View File

@ -0,0 +1,13 @@
Copyright 2015 Google Inc. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,16 @@
description "cadvisor container metrics"
start on started docker
stop on stopping docker
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec docker run \
--volume=/var/run:/var/run:rw \
--volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
--volume=/var/lib/docker/:/var/lib/docker:ro \
--publish=127.0.0.1:4193:8080 \
--name=cadvisor \
google/cadvisor:latest

View File

@ -0,0 +1,15 @@
description "kubernetes kubelet"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec /usr/local/bin/kubelet \
--address=%(kubelet_bind_addr)s \
--api_servers=%(kubeapi_server)s \
--hostname_override=%(kubelet_bind_addr)s \
--cadvisor_port=4193 \
--logtostderr=true

View File

@ -0,0 +1,12 @@
description "kubernetes proxy"
start on runlevel [2345]
stop on runlevel [!2345]
limit nofile 20000 20000
kill timeout 60 # wait 60s between SIGTERM and SIGKILL.
exec /usr/local/bin/proxy \
--master=%(kubeapi_server)s \
--logtostderr=true

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,225 @@
#!/usr/bin/python
"""
The main hook file that is called by Juju.
"""
import json
import httplib
import os
import time
import socket
import subprocess
import sys
import urlparse
from charmhelpers.core import hookenv, host
from kubernetes_installer import KubernetesInstaller
from path import path
from lib.registrator import Registrator
hooks = hookenv.Hooks()
@hooks.hook('api-relation-changed')
def api_relation_changed():
"""
On the relation to the api server, this function determines the appropriate
architecture and the configured version to copy the kubernetes binary files
from the kubernetes-master charm and installs it locally on this machine.
"""
hookenv.log('Starting api-relation-changed')
charm_dir = path(hookenv.charm_dir())
# Get the package architecture, rather than from the kernel (uname -m).
arch = subprocess.check_output(['dpkg', '--print-architecture']).strip()
kubernetes_bin_dir = path('/opt/kubernetes/bin')
# Get the version of kubernetes to install.
version = subprocess.check_output(['relation-get', 'version']).strip()
print('Relation version: ', version)
if not version:
print('No version present in the relation.')
exit(0)
version_file = charm_dir / '.version'
if version_file.exists():
previous_version = version_file.text()
print('Previous version: ', previous_version)
if version == previous_version:
exit(0)
# Cannot download binaries while the service is running, so stop it.
# TODO: Figure out a better way to handle upgraded kubernetes binaries.
for service in ('kubelet', 'proxy'):
if host.service_running(service):
host.service_stop(service)
command = ['relation-get', 'private-address']
# Get the kubernetes-master address.
server = subprocess.check_output(command).strip()
print('Kubernetes master private address: ', server)
installer = KubernetesInstaller(arch, version, server, kubernetes_bin_dir)
installer.download()
installer.install()
# Write the most recently installed version number to the file.
version_file.write_text(version)
relation_changed()
@hooks.hook('etcd-relation-changed',
'network-relation-changed')
def relation_changed():
"""Connect the parts and go :-)
"""
template_data = get_template_data()
# Check required keys
for k in ('etcd_servers', 'kubeapi_server'):
if not template_data.get(k):
print('Missing data for %s %s' % (k, template_data))
return
print('Running with\n%s' % template_data)
# Setup kubernetes supplemental group
setup_kubernetes_group()
# Register upstart managed services
for n in ('kubelet', 'proxy'):
if render_upstart(n, template_data) or not host.service_running(n):
print('Starting %s' % n)
host.service_restart(n)
# Register machine via api
print('Registering machine')
register_machine(template_data['kubeapi_server'])
# Save the marker (for restarts to detect prev install)
template_data.save()
def get_template_data():
rels = hookenv.relations()
template_data = hookenv.Config()
template_data.CONFIG_FILE_NAME = '.unit-state'
overlay_type = get_scoped_rel_attr('network', rels, 'overlay_type')
etcd_servers = get_rel_hosts('etcd', rels, ('hostname', 'port'))
api_servers = get_rel_hosts('api', rels, ('hostname', 'port'))
# kubernetes master isn't ha yet.
if api_servers:
api_info = api_servers.pop()
api_servers = 'http://%s:%s' % (api_info[0], api_info[1])
template_data['overlay_type'] = overlay_type
template_data['kubelet_bind_addr'] = _bind_addr(
hookenv.unit_private_ip())
template_data['proxy_bind_addr'] = _bind_addr(
hookenv.unit_get('public-address'))
template_data['kubeapi_server'] = api_servers
template_data['etcd_servers'] = ','.join([
'http://%s:%s' % (s[0], s[1]) for s in sorted(etcd_servers)])
template_data['identifier'] = os.environ['JUJU_UNIT_NAME'].replace(
'/', '-')
return _encode(template_data)
def _bind_addr(addr):
if addr.replace('.', '').isdigit():
return addr
try:
return socket.gethostbyname(addr)
except socket.error:
raise ValueError('Could not resolve private address')
def _encode(d):
for k, v in d.items():
if isinstance(v, unicode):
d[k] = v.encode('utf8')
return d
def get_scoped_rel_attr(rel_name, rels, attr):
private_ip = hookenv.unit_private_ip()
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_data.get('private-address') != private_ip:
continue
if unit_data.get(attr):
return unit_data.get(attr)
def get_rel_hosts(rel_name, rels, keys=('private-address',)):
hosts = []
for r, data in rels.get(rel_name, {}).items():
for unit_id, unit_data in data.items():
if unit_id == hookenv.local_unit():
continue
values = [unit_data.get(k) for k in keys]
if not all(values):
continue
hosts.append(len(values) == 1 and values[0] or values)
return hosts
def render_upstart(name, data):
tmpl_path = os.path.join(
os.environ.get('CHARM_DIR'), 'files', '%s.upstart.tmpl' % name)
with open(tmpl_path) as fh:
tmpl = fh.read()
rendered = tmpl % data
tgt_path = '/etc/init/%s.conf' % name
if os.path.exists(tgt_path):
with open(tgt_path) as fh:
contents = fh.read()
if contents == rendered:
return False
with open(tgt_path, 'w') as fh:
fh.write(rendered)
return True
def register_machine(apiserver, retry=False):
parsed = urlparse.urlparse(apiserver)
# identity = hookenv.local_unit().replace('/', '-')
private_address = hookenv.unit_private_ip()
with open('/proc/meminfo') as fh:
info = fh.readline()
mem = info.strip().split(':')[1].strip().split()[0]
cpus = os.sysconf('SC_NPROCESSORS_ONLN')
registration_request = Registrator()
registration_request.data['Kind'] = 'Minion'
registration_request.data['id'] = private_address
registration_request.data['name'] = private_address
registration_request.data['metadata']['name'] = private_address
registration_request.data['spec']['capacity']['mem'] = mem + ' K'
registration_request.data['spec']['capacity']['cpu'] = cpus
registration_request.data['spec']['externalID'] = private_address
registration_request.data['status']['hostIP'] = private_address
response, result = registration_request.register(parsed.hostname,
parsed.port,
'/api/v1beta3/nodes')
print(response)
try:
registration_request.command_succeeded(response, result)
except ValueError:
# This happens when we have already registered
# for now this is OK
pass
def setup_kubernetes_group():
output = subprocess.check_output(['groups', 'kubernetes'])
# TODO: check group exists
if 'docker' not in output:
subprocess.check_output(
['usermod', '-a', '-G', 'docker', 'kubernetes'])
if __name__ == '__main__':
hooks.execute(sys.argv)

View File

@ -0,0 +1,32 @@
#!/bin/bash
set -ex
# Install is guaranteed to run once per rootfs
echo "Installing kubernetes-node on $JUJU_UNIT_NAME"
apt-get update -qq
apt-get install -q -y \
bridge-utils \
python-dev \
python-pip \
wget
pip install path.py
# Create the necessary kubernetes group.
groupadd kubernetes
useradd -d /var/lib/kubernetes \
-g kubernetes \
-s /sbin/nologin \
--system \
kubernetes
install -d -m 0744 -o kubernetes -g kubernetes /var/lib/kubernetes
install -d -m 0744 -o kubernetes -g kubernetes /etc/kubernetes/manifests
# wait for the world, depends on where we installed it from distro
#sudo service docker.io stop
# or upstream archive
#sudo service docker stop

View File

@ -0,0 +1,52 @@
import subprocess
from path import path
class KubernetesInstaller():
"""
This class contains the logic needed to install kubernetes binary files.
"""
def __init__(self, arch, version, master, output_dir):
""" Gather the required variables for the install. """
# The kubernetes charm needs certain commands to be aliased.
self.aliases = {'kube-proxy': 'proxy',
'kubelet': 'kubelet'}
self.arch = arch
self.version = version
self.master = master
self.output_dir = output_dir
def download(self):
""" Download the kuberentes binaries from the kubernetes master. """
url = 'http://{0}/kubernetes/{1}/local/bin/linux/{2}'.format(
self.master, self.version, self.arch)
if not self.output_dir.isdir():
self.output_dir.makedirs_p()
for key in self.aliases:
uri = '{0}/{1}'.format(url, key)
destination = self.output_dir / key
wget = 'wget -nv {0} -O {1}'.format(uri, destination)
print(wget)
output = subprocess.check_output(wget.split())
print(output)
destination.chmod(0o755)
def install(self, install_dir=path('/usr/local/bin')):
""" Create links to the binary files to the install directory. """
if not install_dir.isdir():
install_dir.makedirs_p()
# Create the symbolic links to the real kubernetes binaries.
for key, value in self.aliases.iteritems():
target = self.output_dir / key
if target.exists():
link = install_dir / value
if link.exists():
link.remove()
target.symlink(link)
else:
print('Error: target file {0} does not exist.'.format(target))
exit(1)

View File

@ -0,0 +1,82 @@
import httplib
import json
import time
class Registrator:
def __init__(self):
self.ds = {
"creationTimestamp": "",
"kind": "Minion",
"name": "", # private_address
"metadata": {
"name": "", #private_address,
},
"spec": {
"externalID": "", #private_address
"capacity": {
"mem": "", # mem + ' K',
"cpu": "", # cpus
}
},
"status": {
"conditions": [],
"hostIP": "", #private_address
}
}
@property
def data(self):
''' Returns a data-structure for population to make a request. '''
return self.ds
def register(self, hostname, port, api_path):
''' Contact the API Server for a new registration '''
headers = {"Content-type": "application/json",
"Accept": "application/json"}
connection = httplib.HTTPConnection(hostname, port)
print('CONN {}'.format(connection))
connection.request("POST", api_path, json.dumps(self.data), headers)
response = connection.getresponse()
body = response.read()
print(body)
result = json.loads(body)
print("Response status:%s reason:%s body:%s" % \
(response.status, response.reason, result))
return response, result
def update(self):
''' Contact the API Server to update a registration '''
# do a get on the API for the node
# repost to the API with any modified data
pass
def save(self):
''' Marshall the registration data '''
# TODO
pass
def command_succeeded(self, response, result):
''' Evaluate response data to determine if the command was successful '''
if response.status in [200, 201]:
print("Registered")
return True
elif response.status in [409,]:
print("Status Conflict")
# Suggested return a PUT instead of a POST with this response
# code, this predicates use of the UPDATE method
# TODO
elif response.status in (500,) and result.get(
'message', '').startswith('The requested resource does not exist'):
# There's something fishy in the kube api here (0.4 dev), first time we
# go to register a new minion, we always seem to get this error.
# https://github.com/GoogleCloudPlatform/kubernetes/issues/1995
time.sleep(1)
print("Retrying registration...")
raise ValueError("Registration returned 500, retry")
# return register_machine(apiserver, retry=True)
else:
print("Registration error")
# TODO - get request data
raise RuntimeError("Unable to register machine with")

View File

@ -0,0 +1 @@
hooks.py

View File

@ -0,0 +1,15 @@
#!/bin/bash
set -ex
# Start is guaranteed to be called once after the unit is installed
# *AND* once every time a machine is rebooted.
if [ ! -f "$CHARM_DIR/.unit-state" ]
then
exit 0;
fi
service docker restart
service proxy restart
service kubelet restart

File diff suppressed because one or more lines are too long

After

Width:  |  Height:  |  Size: 76 KiB

View File

@ -0,0 +1,23 @@
name: kubernetes
summary: Container Cluster Management Node
maintainers:
- Matt Bruzek <matthew.bruzek@canonical.com>
- Whit Morriss <whit.morriss@canonical.com>
- Charles Butler <charles.butler@canonical.com>
description: |
Provides a kubernetes node for running containers
See http://goo.gl/CSggxE
tags:
- ops
- network
subordinate: true
requires:
etcd:
interface: etcd
api:
interface: kubernetes-api
network:
interface: overlay-network
docker-host:
interface: juju-info
scope: container

View File

@ -0,0 +1,4 @@
flake8
pytest
bundletester
path.py

View File

@ -0,0 +1,45 @@
import json
from mock import MagicMock, patch, call
from path import Path
import pytest
import sys
d = Path('__file__').parent.abspath() / 'hooks'
sys.path.insert(0, d.abspath())
from lib.registrator import Registrator
class TestRegistrator():
def setup_method(self, method):
self.r = Registrator()
def test_data_type(self):
if type(self.r.data) is not dict:
pytest.fail("Invalid type")
@patch('json.loads')
@patch('httplib.HTTPConnection')
def test_register(self, httplibmock, jsonmock):
result = self.r.register('foo', 80, '/v1beta1/test')
httplibmock.assert_called_with('foo', 80)
requestmock = httplibmock().request
requestmock.assert_called_with(
"POST", "/v1beta1/test",
json.dumps(self.r.data),
{"Content-type": "application/json",
"Accept": "application/json"})
def test_command_succeeded(self):
response = MagicMock()
result = json.loads('{"status": "Failure", "kind": "Status", "code": 409, "apiVersion": "v1beta2", "reason": "AlreadyExists", "details": {"kind": "minion", "id": "10.200.147.200"}, "message": "minion \\"10.200.147.200\\" already exists", "creationTimestamp": null}')
response.status = 200
self.r.command_succeeded(response, result)
response.status = 500
with pytest.raises(RuntimeError):
self.r.command_succeeded(response, result)
response.status = 409
with pytest.raises(ValueError):
self.r.command_succeeded(response, result)

View File

@ -0,0 +1,8 @@
# import pytest
class TestHooks():
# TODO: Actually write tests.
def test_fake(self):
pass

View File

@ -20,12 +20,12 @@ set -o nounset
set -o pipefail
function check_for_ppa(){
function check_for_ppa() {
local repo="$1"
grep -qsw $repo /etcc/apt/sources.list /etc/apt/sources.list.d/*
grep -qsw $repo /etc/apt/sources.list /etc/apt/sources.list.d/*
}
function package_status(){
function package_status() {
local pkgname=$1
local pkgstatus
pkgstatus=$(dpkg-query -W --showformat='${Status}\n' "${pkgname}")
@ -33,10 +33,9 @@ function package_status(){
echo "Missing package ${pkgname}"
sudo apt-get --force-yes --yes install ${pkgname}
fi
}
function gather_installation_reqs(){
function gather_installation_reqs() {
if ! check_for_ppa "juju"; then
echo "... Detected missing dependencies.. running"
echo "... add-apt-repository ppa:juju/stable"
@ -45,5 +44,5 @@ function gather_installation_reqs(){
fi
package_status 'juju-quickstart'
package_status 'juju-deployer'
}

14
cluster/juju/return-node-ips.py Executable file
View File

@ -0,0 +1,14 @@
#!/usr/bin/env python
import json
import sys
# This script helps parse out the private IP addresses from the
# `juju run` command's JSON object, see cluster/juju/util.sh
if len(sys.argv) > 1:
# It takes the JSON output as the first argument.
nodes = json.loads(sys.argv[1])
# There can be multiple nodes to print the Stdout.
for num in nodes:
print num['Stdout'].rstrip()
else:
exit(1)

View File

@ -19,8 +19,13 @@ set -o errexit
set -o nounset
set -o pipefail
source $KUBE_ROOT/cluster/juju/prereqs/ubuntu-juju.sh
KUBE_BUNDLE_URL='https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml'
UTIL_SCRIPT=$(readlink -m "${BASH_SOURCE}")
JUJU_PATH=$(dirname ${UTIL_SCRIPT})
source ${JUJU_PATH}/prereqs/ubuntu-juju.sh
export JUJU_REPOSITORY=${JUJU_PATH}/charms
#KUBE_BUNDLE_URL='https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml'
KUBE_BUNDLE_PATH=${JUJU_PATH}/bundles/local.yaml
function verify-prereqs() {
gather_installation_reqs
}
@ -30,66 +35,67 @@ function get-password() {
}
function kube-up() {
# If something were to happen that I'm not accounting for, do not
# punish the user by making them tear things down. In a perfect world
# quickstart should handle this situation, so be nice in the meantime
local envstatus
envstatus=$(juju status kubernetes-master --format=oneline)
if [[ "" == "${envstatus}" ]]; then
if [[ -d "~/.juju/current-env" ]]; then
juju quickstart -i --no-browser -i $KUBE_BUNDLE_URL
else
juju quickstart --no-browser ${KUBE_BUNDLE_URL}
fi
sleep 60
if [[ -d "~/.juju/current-env" ]]; then
juju quickstart -i --no-browser
else
juju quickstart --no-browser
fi
# The juju-deployer command will deploy the bundle and can be run
# multiple times to continue deploying the parts that fail.
juju deployer -c ${KUBE_BUNDLE_PATH}
# Sleep due to juju bug http://pad.lv/1432759
sleep-status
detect-master
detect-minions
}
function kube-down() {
local jujuenv
jujuenv=$(cat ~/.juju/current-environment)
juju destroy-environment $jujuenv
}
function detect-master() {
local kubestatus
# Capturing a newline, and my awk-fu was weak - pipe through tr -d
kubestatus=$(juju status --format=oneline kubernetes-master | awk '{print $3}' | tr -d "\n")
export KUBE_MASTER_IP=${kubestatus}
export KUBE_MASTER=$KUBE_MASTER_IP:8080
export KUBERNETES_MASTER=$KUBE_MASTER
export KUBE_MASTER=${KUBE_MASTER_IP}
export KUBERNETES_MASTER=http://${KUBE_MASTER}:8080
echo "Kubernetes master: " ${KUBERNETES_MASTER}
}
}
function detect-minions(){
# Strip out the components except for STDOUT return
# and trim out the single quotes to build an array of minions
function detect-minions() {
# Run the Juju command that gets the minion private IP addresses.
local ipoutput
ipoutput=$(juju run --service kubernetes "unit-get private-address" --format=json)
echo $ipoutput
# Strip out the IP addresses
#
# Example Output:
#- MachineId: "10"
# Stdout: '10.197.55.232
#'
# Stdout: |
# 10.197.55.232
# UnitId: kubernetes/0
# - MachineId: "11"
# Stdout: '10.202.146.124
# '
# Stdout: |
# 10.202.146.124
# UnitId: kubernetes/1
KUBE_MINION_IP_ADDRESSES=($(juju run --service kubernetes \
"unit-get private-address" --format=yaml \
| awk '/Stdout/ {gsub(/'\''/,""); print $2}'))
NUM_MINIONS=${#KUBE_MINION_IP_ADDRESSES[@]}
MINION_NAMES=$KUBE_MINION_IP_ADDRESSES
export KUBE_MINION_IP_ADDRESSES=($(${JUJU_PATH}/return-node-ips.py "${ipoutput}"))
echo "Kubernetes minions: " ${KUBE_MINION_IP_ADDRESSES[@]}
export NUM_MINIONS=${#KUBE_MINION_IP_ADDRESSES[@]}
export MINION_NAMES=$KUBE_MINION_IP_ADDRESSES
}
function setup-logging-firewall(){
function setup-logging-firewall() {
echo "TODO: setup logging and firewall rules"
}
function teardown-logging-firewall(){
function teardown-logging-firewall() {
echo "TODO: teardown logging and firewall rules"
}
function sleep-status(){
function sleep-status() {
local i
local maxtime
local jujustatus
@ -97,10 +103,17 @@ function sleep-status(){
maxtime=900
jujustatus=''
echo "Waiting up to 15 minutes to allow the cluster to come online... wait for it..."
jujustatus=$(juju status kubernetes-master --format=oneline)
if [[ $jujustatus == *"started"* ]];
then
return
fi
while [[ $i < $maxtime && $jujustatus != *"started"* ]]; do
sleep 15
i+=15
jujustatus=$(juju status kubernetes-master --format=oneline)
sleep 30
i+=30
done
# sleep because we cannot get the status back of where the minions are in the deploy phase
@ -109,4 +122,3 @@ function sleep-status(){
echo "Sleeping an additional minute to allow the cluster to settle"
sleep 60
}

View File

@ -1,12 +1,17 @@
## Getting start with Juju
## Getting started with Juju
Juju handles provisioning machines and deploying complex systems to a
wide number of clouds.
wide number of clouds, supporting service orchestration once the bundle of
services has been deployed.
### Prerequisites
> Note: If you're running kube-up on Ubuntu, all of the dependencies
> will be handled for you. You may safely skip to the section:
> [Launch Kubernetes Cluster](#launch-kubernetes-cluster)
#### On Ubuntu
[Install the Juju client](https://juju.ubuntu.com/install) on your
@ -39,13 +44,19 @@ interface.
## Launch Kubernetes cluster
juju quickstart https://raw.githubusercontent.com/whitmo/bundle-kubernetes/master/bundles.yaml
You will need to have the Kubernetes tools compiled before launching the cluster.
First this command will start a curses based gui allowing you to set
up credentials and other environmental settings for several different
providers including Azure and AWS.
make all WHAT=cmd/kubectl
export KUBERNETES_PROVIDER=juju
cluster/kube-up.sh
Next it will deploy the kubernetes master, etcd, 2 minions with flannel networking.
If this is your first time running the `kube-up.sh` script, it will install
the required prerequisites to get started with Juju; additionally, it will
launch a curses based configuration utility allowing you to select your cloud
provider and enter the proper access credentials.
Next it will deploy the kubernetes master, etcd, 2 minions with flannel based
Software Defined Networking.
## Exploring the cluster
@ -53,14 +64,15 @@ Next it will deploy the kubernetes master, etcd, 2 minions with flannel networki
Juju status provides information about each unit in the cluster:
juju status --format=oneline
- etcd/0: 52.0.74.109 (started)
- flannel/0: 52.0.149.150 (started)
- flannel/1: 52.0.185.81 (started)
- juju-gui/0: 52.1.150.81 (started)
- kubernetes/0: 52.0.149.150 (started)
- kubernetes/1: 52.0.185.81 (started)
- kubernetes-master/0: 52.1.120.142 (started)
- docker/0: 52.4.92.78 (started)
- flannel-docker/0: 52.4.92.78 (started)
- kubernetes/0: 52.4.92.78 (started)
- docker/1: 52.6.104.142 (started)
- flannel-docker/1: 52.6.104.142 (started)
- kubernetes/1: 52.6.104.142 (started)
- etcd/0: 52.5.216.210 (started) 4001/tcp
- juju-gui/0: 52.5.205.174 (started) 80/tcp, 443/tcp
- kubernetes-master/0: 52.6.19.238 (started) 8080/tcp
You can use `juju ssh` to access any of the units:
@ -150,8 +162,7 @@ Finally delete the pod:
We can add minion units like so:
juju add-unit flannel # creates unit flannel/2
juju add-unit kubernetes --to flannel/2
juju add-unit docker # creates unit docker/2, kubernetes/2, docker-flannel/2
## Tear down cluster
@ -175,6 +186,16 @@ Kubernetes Bundle on Github
Juju runs natively against a variety of cloud providers and can be
made to work against many more using a generic manual provider.
Provider | v0.15.0
-------------- | -------
AWS | TBD
HPCloud | TBD
OpenStack | TBD
Joyent | TBD
Azure | TBD
Digital Ocean | TBD
MAAS (bare metal) | TBD
GCE | TBD
Provider | v0.8.1
@ -187,4 +208,3 @@ Azure | TBD
Digital Ocean | TBD
MAAS (bare metal) | TBD
GCE | TBD