diff --git a/cluster/juju/layers/kubernetes-master/README.md b/cluster/juju/layers/kubernetes-master/README.md
index c1cc84a6cf0..6279a54cfc9 100644
--- a/cluster/juju/layers/kubernetes-master/README.md
+++ b/cluster/juju/layers/kubernetes-master/README.md
@@ -1,19 +1,19 @@
 # Kubernetes-master
 
-[Kubernetes](http://kubernetes.io/) is an open source system for managing 
+[Kubernetes](http://kubernetes.io/) is an open source system for managing
 application containers across a cluster of hosts. The Kubernetes project was
-started by Google in 2014, combining the experience of running production 
+started by Google in 2014, combining the experience of running production
 workloads combined with best practices from the community.
 
 The Kubernetes project defines some new terms that may be unfamiliar to users
-or operators. For more information please refer to the concept guide in the 
+or operators. For more information please refer to the concept guide in the
 [getting started guide](https://kubernetes.io/docs/home/).
 
-This charm is an encapsulation of the Kubernetes master processes and the 
+This charm is an encapsulation of the Kubernetes master processes and the
 operations to run on any cloud for the entire lifecycle of the cluster.
 
 This charm is built from other charm layers using the Juju reactive framework.
-The other layers focus on specific subset of operations making this layer 
+The other layers focus on specific subset of operations making this layer
 specific to operations of Kubernetes master processes.
 
 # Deployment
@@ -23,15 +23,15 @@ charms to model a complete Kubernetes cluster. A Kubernetes cluster needs a
 distributed key value store such as [Etcd](https://coreos.com/etcd/) and the
 kubernetes-worker charm which delivers the Kubernetes node services. A cluster
 requires a Software Defined Network (SDN) and Transport Layer Security (TLS) so
-the components in a cluster communicate securely. 
+the components in a cluster communicate securely.
 
-Please take a look at the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/) 
-or the [Kubernetes core](https://jujucharms.com/kubernetes-core/) bundles for 
+Please take a look at the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/)
+or the [Kubernetes core](https://jujucharms.com/kubernetes-core/) bundles for
 examples of complete models of Kubernetes clusters.
 
 # Resources
 
-The kubernetes-master charm takes advantage of the [Juju Resources](https://jujucharms.com/docs/2.0/developer-resources) 
+The kubernetes-master charm takes advantage of the [Juju Resources](https://jujucharms.com/docs/2.0/developer-resources)
 feature to deliver the Kubernetes software.
 
 In deployments on public clouds the Charm Store provides the resource to the
@@ -40,9 +40,41 @@
 firewall rules may not be able to contact the Charm Store. In these network
 restricted environments the resource can be uploaded to the model by the Juju
 operator.
 
+#### Snap Refresh
+
+The kubernetes resources used by this charm are snap packages. When not
+specified during deployment, these resources come from the public store. By
+default, the `snapd` daemon will refresh all snaps installed from the store
+four (4) times per day. A charm configuration option is provided for operators
+to control this refresh frequency.
+
+>NOTE: this is a global configuration option and will affect the refresh
+time for all snaps installed on a system.
+
+Examples:
+
+```sh
+## refresh kubernetes-master snaps every tuesday
+juju config kubernetes-master snapd_refresh="tue"
+
+## refresh snaps at 11pm on the last (5th) friday of the month
+juju config kubernetes-master snapd_refresh="fri5,23:00"
+
+## delay the refresh as long as possible
+juju config kubernetes-master snapd_refresh="max"
+
+## use the system default refresh timer
+juju config kubernetes-master snapd_refresh=""
+```
+
+For more information on the possible values for `snapd_refresh`, see the
+*refresh.timer* section in the [system options][] documentation.
+
+[system options]: https://forum.snapcraft.io/t/system-options/87
+
 # Configuration
 
-This charm supports some configuration options to set up a Kubernetes cluster 
+This charm supports some configuration options to set up a Kubernetes cluster
 that works in your environment:
 
 #### dns_domain
@@ -61,14 +93,14 @@ Enable RBAC and Node authorisation.
 
 # DNS for the cluster
 
 The DNS add-on allows the pods to have a DNS names in addition to IP addresses.
-The Kubernetes cluster DNS server (based off the SkyDNS library) supports 
-forward lookups (A records), service lookups (SRV records) and reverse IP 
+The Kubernetes cluster DNS server (based off the SkyDNS library) supports
+forward lookups (A records), service lookups (SRV records) and reverse IP
 address lookups (PTR records). More information about the DNS can be obtained
 from the [Kubernetes DNS admin guide](http://kubernetes.io/docs/admin/dns/).
 
 # Actions
 
-The kubernetes-master charm models a few one time operations called 
+The kubernetes-master charm models a few one time operations called
 [Juju actions](https://jujucharms.com/docs/stable/actions) that can be run by
 Juju users.
 
@@ -80,7 +112,7 @@ requires a relation to the ceph-mon charm before it can create the volume.
 
 #### restart
 
-This action restarts the master processes `kube-apiserver`, 
+This action restarts the master processes `kube-apiserver`,
 `kube-controller-manager`, and `kube-scheduler` when the user needs a restart.
 
 # More information
@@ -93,7 +125,7 @@ This action restarts the master processes `kube-apiserver`,
 # Contact
 
 The kubernetes-master charm is free and open source operations created
-by the containers team at Canonical. 
+by the containers team at Canonical.
 
 Canonical also offers enterprise support and customization services. Please
 refer to the [Kubernetes product page](https://www.ubuntu.com/cloud/kubernetes)
diff --git a/cluster/juju/layers/kubernetes-master/config.yaml b/cluster/juju/layers/kubernetes-master/config.yaml
index 01171c058fd..e6acd542526 100644
--- a/cluster/juju/layers/kubernetes-master/config.yaml
+++ b/cluster/juju/layers/kubernetes-master/config.yaml
@@ -147,6 +147,15 @@ options:
     default: true
     description: |
       If true the metrics server for Kubernetes will be deployed onto the cluster.
+  snapd_refresh:
+    default: "max"
+    type: string
+    description: |
+      How often snapd handles updates for installed snaps. Setting an empty
+      string will check 4x per day. Set to "max" to delay the refresh as long
+      as possible.
+      You may also set a custom string as described in the
+      'refresh.timer' section here:
+      https://forum.snapcraft.io/t/system-options/87
   default-storage:
     type: string
     default: "auto"
diff --git a/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py b/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py
index fadb163a89d..2f9a34fa7a4 100644
--- a/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py
+++ b/cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py
@@ -427,6 +427,38 @@ def set_app_version():
     hookenv.application_version_set(version.split(b' v')[-1].rstrip())
 
 
+@when('kubernetes-master.snaps.installed')
+@when('snap.refresh.set')
+@when('leadership.is_leader')
+def process_snapd_timer():
+    ''' Set the snapd refresh timer on the leader so all cluster members
+    (present and future) will refresh near the same time. '''
+    # Get the current snapd refresh timer; we know layer-snap has set this
+    # when the 'snap.refresh.set' flag is present.
+    timer = snap.get(snapname='core', key='refresh.timer').decode('utf-8')
+
+    # The first time through, data_changed will be true. Subsequent calls
+    # should only update leader data if something changed.
+    if data_changed('master_snapd_refresh', timer):
+        hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+        leader_set({'snapd_refresh': timer})
+
+
+@when('kubernetes-master.snaps.installed')
+@when('snap.refresh.set')
+@when('leadership.changed.snapd_refresh')
+@when_not('leadership.is_leader')
+def set_snapd_timer():
+    ''' Set the snapd refresh.timer on non-leader cluster members. '''
+    # NB: This method should only be run when 'snap.refresh.set' is present.
+    # Layer-snap will always set a core refresh.timer, which may not be the
+    # same as our leader. Gating with 'snap.refresh.set' ensures layer-snap
+    # has finished and we are free to set our config to the leader's timer.
+    timer = leader_get('snapd_refresh')
+    hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+    snap.set_refresh_timer(timer)
+
+
 @hookenv.atexit
 def set_final_status():
     ''' Set the final status of the charm as we leave hook execution '''
diff --git a/cluster/juju/layers/kubernetes-worker/README.md b/cluster/juju/layers/kubernetes-worker/README.md
index 965199eaf75..a661696f653 100644
--- a/cluster/juju/layers/kubernetes-worker/README.md
+++ b/cluster/juju/layers/kubernetes-worker/README.md
@@ -27,6 +27,38 @@ To add additional compute capacity to your Kubernetes workers, you may
 join any related kubernetes-master, and enlist themselves as ready once the
 deployment is complete.
 
+## Snap Configuration
+
+The kubernetes resources used by this charm are snap packages. When not
+specified during deployment, these resources come from the public store. By
+default, the `snapd` daemon will refresh all snaps installed from the store
+four (4) times per day. A charm configuration option is provided for operators
+to control this refresh frequency.
+
+>NOTE: this is a global configuration option and will affect the refresh
+time for all snaps installed on a system.
+
+Examples:
+
+```sh
+## refresh kubernetes-worker snaps every tuesday
+juju config kubernetes-worker snapd_refresh="tue"
+
+## refresh snaps at 11pm on the last (5th) friday of the month
+juju config kubernetes-worker snapd_refresh="fri5,23:00"
+
+## delay the refresh as long as possible
+juju config kubernetes-worker snapd_refresh="max"
+
+## use the system default refresh timer
+juju config kubernetes-worker snapd_refresh=""
+```
+
+For more information on the possible values for `snapd_refresh`, see the
+*refresh.timer* section in the [system options][] documentation.
+
+[system options]: https://forum.snapcraft.io/t/system-options/87
+
 ## Operational actions
 
 The kubernetes-worker charm supports the following Operational Actions:
@@ -89,7 +121,7 @@ service is not reachable.
 Note: When debugging connection issues with NodePort services, its important
 to first check the kube-proxy service on the worker units. If kube-proxy is
 not running, the associated port-mapping will not be configured in the iptables
-rulechains. 
+rulechains.
 
 If you need to close the NodePort once a workload has been terminated, you can
 follow the same steps inversely.
 
@@ -97,4 +129,3 @@ follow the same steps inversely.
 ```
 juju run --application kubernetes-worker close-port 30510
 ```
-
diff --git a/cluster/juju/layers/kubernetes-worker/config.yaml b/cluster/juju/layers/kubernetes-worker/config.yaml
index 7123fc21b7b..048cf9eb816 100644
--- a/cluster/juju/layers/kubernetes-worker/config.yaml
+++ b/cluster/juju/layers/kubernetes-worker/config.yaml
@@ -80,3 +80,12 @@ options:
     description: |
       Docker image to use for the default backend. Auto will select an image
       based on architecture.
+  snapd_refresh:
+    default: "max"
+    type: string
+    description: |
+      How often snapd handles updates for installed snaps. Setting an empty
+      string will check 4x per day. Set to "max" to delay the refresh as long
+      as possible.
+      You may also set a custom string as described in the
+      'refresh.timer' section here:
+      https://forum.snapcraft.io/t/system-options/87
diff --git a/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py b/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py
index 215fa7a3fea..d2af6f58e40 100644
--- a/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py
+++ b/cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py
@@ -22,6 +22,8 @@ import shutil
 import subprocess
 import time
 
+from charms.leadership import leader_get, leader_set
+
 from pathlib import Path
 from shlex import split
 from subprocess import check_call, check_output
@@ -289,6 +291,38 @@ def set_app_version():
     hookenv.application_version_set(version.split(b' v')[-1].rstrip())
 
 
+@when('kubernetes-worker.snaps.installed')
+@when('snap.refresh.set')
+@when('leadership.is_leader')
+def process_snapd_timer():
+    ''' Set the snapd refresh timer on the leader so all cluster members
+    (present and future) will refresh near the same time. '''
+    # Get the current snapd refresh timer; we know layer-snap has set this
+    # when the 'snap.refresh.set' flag is present.
+    timer = snap.get(snapname='core', key='refresh.timer').decode('utf-8')
+
+    # The first time through, data_changed will be true. Subsequent calls
+    # should only update leader data if something changed.
+    if data_changed('worker_snapd_refresh', timer):
+        hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+        leader_set({'snapd_refresh': timer})
+
+
+@when('kubernetes-worker.snaps.installed')
+@when('snap.refresh.set')
+@when('leadership.changed.snapd_refresh')
+@when_not('leadership.is_leader')
+def set_snapd_timer():
+    ''' Set the snapd refresh.timer on non-leader cluster members. '''
+    # NB: This method should only be run when 'snap.refresh.set' is present.
+    # Layer-snap will always set a core refresh.timer, which may not be the
+    # same as our leader.
+    # Gating with 'snap.refresh.set' ensures layer-snap
+    # has finished and we are free to set our config to the leader's timer.
+    timer = leader_get('snapd_refresh')
+    hookenv.log('setting snapd_refresh timer to: {}'.format(timer))
+    snap.set_refresh_timer(timer)
+
+
 @when('kubernetes-worker.snaps.installed')
 @when_not('kube-control.dns.available')
 def notify_user_transient_status():
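The `snapd_refresh` examples in both READMEs use timer strings such as `tue`, `fri5,23:00`, `max`, and the empty string. As a rough illustration only, here is a sketch of a validator covering just those forms; the helper name and the regular expression are assumptions, and snapd's full `refresh.timer` grammar (documented on the snapcraft system options page) accepts more than this subset.

```python
import re

# Hypothetical helper (not part of the charm): accepts only the forms shown
# in the README examples: '', 'max', or weekday[index][,HH:MM].
_DAYS = ('mon', 'tue', 'wed', 'thu', 'fri', 'sat', 'sun')
_TIMER_RE = re.compile(
    r'^(%s)([1-5])?(,([01]\d|2[0-3]):[0-5]\d)?$' % '|'.join(_DAYS))


def looks_like_refresh_timer(value):
    """Return True for '', 'max', or a weekday[index][,HH:MM] string."""
    if value in ('', 'max'):
        # '' -> system default schedule; 'max' -> charm keyword meaning
        # "delay the refresh as long as possible".
        return True
    return bool(_TIMER_RE.match(value))
```

A check like this could run before `juju config` is invoked; real validation is ultimately done by snapd itself when the timer is applied.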
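The pair of reactive handlers added to each charm implements a publish/subscribe pattern over Juju leader data: `process_snapd_timer` runs on the leader and records its own core `refresh.timer`, while `set_snapd_timer` runs on followers when that leader data changes and adopts the published value. A minimal standalone simulation of that flow, with a plain dict standing in for Juju leader storage and a list standing in for calls into layer-snap (both assumptions for illustration, not the charm's real API):

```python
leader_data = {}     # stands in for Juju leader storage (leader_set/leader_get)
applied_timers = []  # records what each unit would hand to snapd

def process_snapd_timer(current_timer):
    """Leader: publish its snapd refresh.timer so peers can converge."""
    # Mirrors the data_changed() gate: only write leader data on change.
    if leader_data.get('snapd_refresh') != current_timer:
        leader_data['snapd_refresh'] = current_timer

def set_snapd_timer():
    """Non-leader: adopt whatever timer the leader published."""
    timer = leader_data.get('snapd_refresh', '')
    applied_timers.append(timer)  # real code: snap.set_refresh_timer(timer)

process_snapd_timer('fri5,23:00')  # leader reads its own timer and publishes
set_snapd_timer()                  # follower converges on the leader's value
```

The ordering matters just as in the real handlers: followers only act after the leader has published, which is what the `leadership.changed.snapd_refresh` flag guarantees in the charm.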
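Both `process_snapd_timer` handlers gate `leader_set` behind `data_changed` so leader data is only rewritten when the timer actually differs from the last observed value. A standalone analog of that helper (the real one comes from the charms.reactive framework and persists its cache in unit storage; this sketch keeps it in memory, purely to show the first-call-is-True behavior the diff's comments describe):

```python
import hashlib
import json

_cache = {}  # real helper persists this across hook invocations

def data_changed(key, value):
    """Return True on first call for key, or when value differs from last call."""
    digest = hashlib.sha1(
        json.dumps(value, sort_keys=True).encode('utf-8')).hexdigest()
    changed = _cache.get(key) != digest
    _cache[key] = digest
    return changed
```

Hashing rather than storing the raw value keeps the cached state small and comparison cheap, at the cost of not being able to read back the previous value.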