Merge pull request #40324 from chuckbutler/upstream-rebase-forreal

Automatic merge from submit-queue (batch tested with PRs 40335, 40320, 40324, 39103, 40315)

Splitting master/node services into separate charm layers

**What this PR does / why we need it**:

This branch includes a roll-up series of commits from a fork of the
Kubernetes repository made before the 1.5 release, because we did not make the
code freeze. This additional effort has been fully tested, with results
submitted to Gubernator to build confidence in the quality of this code
compared with the single layer that posed as both master and node.

To reference the gubernator results, please see:
https://k8s-gubernator.appspot.com/builds/canonical-kubernetes-tests/logs/kubernetes-gce-e2e-node/

Apologies in advance for the large commit; however, we did not want to
submit without successful upstream automated testing results.

This commit includes:

 - Support for CNI networking plugins
 - Support for durable storage provided by Ceph
 - Building from upstream templates (read: kubedns - no more template
 drift!)
 - An e2e charm-layer to make running validation tests much simpler/repeatable
 - Changes to support the 1.5.x series of Kubernetes



**Special notes for your reviewer**:

Additional note: we will be targeting *all* future work against upstream,
so pull requests of this magnitude will not occur again.

**Release note**:




```release-note
- Splits Juju Charm layers into master/worker roles
- Adds support for 1.5.x series of Kubernetes
- Introduces a tactic for keeping templates in sync with upstream, eliminating template drift
- Adds CNI support to the Juju Charms
- Adds durable storage support to the Juju Charms
- Introduces an e2e Charm layer for repeatable testing efforts and validation of clusters

```
This commit is contained in:
Kubernetes Submit Queue 2017-01-24 17:30:06 -08:00 committed by GitHub
commit e3ba25714f
79 changed files with 4977 additions and 1371 deletions

View File

@ -0,0 +1,5 @@
# kubeapi-load-balancer
A simple NGINX reverse proxy to assist with HA kubernetes-master deployments.

View File

@ -0,0 +1,5 @@
options:
  port:
    type: int
    default: 443
    description: The port on which to run the load balancer.
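
For example, the listen port can be set after deployment; a hedged sketch assuming a Juju 2.x client (older clients use `juju set-config`):

```shell
juju config kubeapi-load-balancer port=8443
```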

View File

@ -0,0 +1,13 @@
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,412 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
width="96"
height="96"
id="svg6517"
version="1.1"
inkscape:version="0.91 r13725"
sodipodi:docname="kubapi-load-balancer_circle.svg"
viewBox="0 0 96 96">
<defs
id="defs6519">
<linearGradient
id="Background">
<stop
id="stop4178"
offset="0"
style="stop-color:#22779e;stop-opacity:1" />
<stop
id="stop4180"
offset="1"
style="stop-color:#2991c0;stop-opacity:1" />
</linearGradient>
<filter
style="color-interpolation-filters:sRGB"
inkscape:label="Inner Shadow"
id="filter1121">
<feFlood
flood-opacity="0.59999999999999998"
flood-color="rgb(0,0,0)"
result="flood"
id="feFlood1123" />
<feComposite
in="flood"
in2="SourceGraphic"
operator="out"
result="composite1"
id="feComposite1125" />
<feGaussianBlur
in="composite1"
stdDeviation="1"
result="blur"
id="feGaussianBlur1127" />
<feOffset
dx="0"
dy="2"
result="offset"
id="feOffset1129" />
<feComposite
in="offset"
in2="SourceGraphic"
operator="atop"
result="composite2"
id="feComposite1131" />
</filter>
<filter
style="color-interpolation-filters:sRGB"
inkscape:label="Drop Shadow"
id="filter950">
<feFlood
flood-opacity="0.25"
flood-color="rgb(0,0,0)"
result="flood"
id="feFlood952" />
<feComposite
in="flood"
in2="SourceGraphic"
operator="in"
result="composite1"
id="feComposite954" />
<feGaussianBlur
in="composite1"
stdDeviation="1"
result="blur"
id="feGaussianBlur956" />
<feOffset
dx="0"
dy="1"
result="offset"
id="feOffset958" />
<feComposite
in="SourceGraphic"
in2="offset"
operator="over"
result="composite2"
id="feComposite960" />
</filter>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath873">
<g
transform="matrix(0,-0.66666667,0.66604479,0,-258.25992,677.00001)"
id="g875"
inkscape:label="Layer 1"
style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none">
<path
style="display:inline;fill:#ff00ff;fill-opacity:1;stroke:none"
d="M 46.702703,898.22775 H 97.297297 C 138.16216,898.22775 144,904.06497 144,944.92583 v 50.73846 c 0,40.86071 -5.83784,46.69791 -46.702703,46.69791 H 46.702703 C 5.8378378,1042.3622 0,1036.525 0,995.66429 v -50.73846 c 0,-40.86086 5.8378378,-46.69808 46.702703,-46.69808 z"
id="path877"
inkscape:connector-curvature="0"
sodipodi:nodetypes="sssssssss" />
</g>
</clipPath>
<style
id="style867"
type="text/css"><![CDATA[
.fil0 {fill:#1F1A17}
]]></style>
<clipPath
id="clipPath16">
<path
id="path18"
d="M -9,-9 H 605 V 222 H -9 Z"
inkscape:connector-curvature="0" />
</clipPath>
<clipPath
id="clipPath116">
<path
id="path118"
d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
inkscape:connector-curvature="0" />
</clipPath>
<clipPath
id="clipPath128">
<path
id="path130"
d="m 91.7368,146.3253 -9.7039,-1.577 -8.8548,-3.8814 -7.5206,-4.7308 -7.1566,-8.7335 -4.0431,-4.282 -3.9093,-1.4409 -1.034,2.5271 1.8079,2.6096 0.4062,3.6802 1.211,-0.0488 1.3232,-1.2069 -0.3569,3.7488 -1.4667,0.9839 0.0445,1.4286 -3.4744,-1.9655 -3.1462,-3.712 -0.6559,-3.3176 1.3453,-2.6567 1.2549,-4.5133 2.5521,-1.2084 2.6847,0.1318 2.5455,1.4791 -1.698,-8.6122 1.698,-9.5825 -1.8692,-4.4246 -6.1223,-6.5965 1.0885,-3.941 2.9002,-4.5669 5.4688,-3.8486 2.9007,-0.3969 3.225,-0.1094 -2.012,-8.2601 7.3993,-3.0326 9.2188,-1.2129 3.1535,2.0619 0.2427,5.5797 3.5178,5.8224 0.2426,4.6094 8.4909,-0.6066 7.8843,0.7279 -7.8843,-4.7307 1.3343,-5.701 4.9731,-7.763 4.8521,-2.0622 3.8814,1.5769 1.577,3.1538 8.1269,6.1861 1.5769,-1.3343 12.7363,-0.485 2.5473,2.0619 0.2426,3.6391 -0.849,1.5767 -0.6066,9.8251 -4.2454,8.4909 0.7276,3.7605 2.5475,-1.3343 7.1566,-6.6716 3.5175,-0.2424 3.8815,1.5769 3.8818,2.9109 1.9406,6.3077 11.4021,-0.7277 6.914,2.6686 5.5797,5.2157 4.0028,7.5206 0.9706,8.8546 -0.8493,10.3105 -2.1832,9.2185 -2.1836,2.9112 -3.0322,0.9706 -5.3373,-5.8224 -4.8518,-1.6982 -4.2455,7.0353 -4.2454,3.8815 -2.3049,1.4556 -9.2185,7.6419 -7.3993,4.0028 -7.3993,0.6066 -8.6119,-1.4556 -7.5206,-2.7899 -5.2158,-4.2454 -4.1241,-4.9734 -4.2454,-1.2129"
inkscape:connector-curvature="0" />
</clipPath>
<linearGradient
id="linearGradient3850"
inkscape:collect="always">
<stop
id="stop3852"
offset="0"
style="stop-color:#000000;stop-opacity:1;" />
<stop
id="stop3854"
offset="1"
style="stop-color:#000000;stop-opacity:0;" />
</linearGradient>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath3095">
<path
d="M 976.648,389.551 H 134.246 V 1229.55 H 976.648 V 389.551"
id="path3097"
inkscape:connector-curvature="0" />
</clipPath>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath3195">
<path
d="m 611.836,756.738 -106.34,105.207 c -8.473,8.289 -13.617,20.102 -13.598,33.379 L 598.301,790.207 c -0.031,-13.418 5.094,-25.031 13.535,-33.469"
id="path3197"
inkscape:connector-curvature="0" />
</clipPath>
<clipPath
clipPathUnits="userSpaceOnUse"
id="clipPath3235">
<path
d="m 1095.64,1501.81 c 35.46,-35.07 70.89,-70.11 106.35,-105.17 4.4,-4.38 7.11,-10.53 7.11,-17.55 l -106.37,105.21 c 0,7 -2.71,13.11 -7.09,17.51"
id="path3237"
inkscape:connector-curvature="0" />
</clipPath>
<clipPath
id="clipPath4591"
clipPathUnits="userSpaceOnUse">
<path
inkscape:connector-curvature="0"
d="m 1106.6009,730.43734 -0.036,21.648 c -0.01,3.50825 -2.8675,6.61375 -6.4037,6.92525 l -83.6503,7.33162 c -3.5205,0.30763 -6.3812,-2.29987 -6.3671,-5.8145 l 0.036,-21.6475 20.1171,-1.76662 -0.011,4.63775 c 0,1.83937 1.4844,3.19925 3.3262,3.0395 l 49.5274,-4.33975 c 1.8425,-0.166 3.3425,-1.78125 3.3538,-3.626 l 0.01,-4.63025 20.1,-1.7575"
style="fill:#ff00ff;fill-opacity:1;fill-rule:nonzero;stroke:none"
id="path4593" />
</clipPath>
<radialGradient
gradientUnits="userSpaceOnUse"
gradientTransform="matrix(-1.4333926,-2.2742838,1.1731823,-0.73941125,-174.08025,98.374394)"
r="20.40658"
fy="93.399292"
fx="-26.508606"
cy="93.399292"
cx="-26.508606"
id="radialGradient3856"
xlink:href="#linearGradient3850"
inkscape:collect="always" />
<linearGradient
gradientTransform="translate(-318.48033,212.32022)"
gradientUnits="userSpaceOnUse"
y2="993.19702"
x2="-51.879555"
y1="593.11615"
x1="348.20132"
id="linearGradient3895"
xlink:href="#linearGradient3850"
inkscape:collect="always" />
<clipPath
id="clipPath3906"
clipPathUnits="userSpaceOnUse">
<rect
transform="scale(1,-1)"
style="color:#000000;display:inline;overflow:visible;visibility:visible;opacity:0.8;fill:#ff00ff;stroke:none;stroke-width:4;marker:none;enable-background:accumulate"
id="rect3908"
width="1019.1371"
height="1019.1371"
x="357.9816"
y="-1725.8152" />
</clipPath>
</defs>
<sodipodi:namedview
id="base"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageopacity="0.0"
inkscape:pageshadow="2"
inkscape:zoom="7.9580781"
inkscape:cx="-61.002332"
inkscape:cy="48.450019"
inkscape:document-units="px"
inkscape:current-layer="layer1"
showgrid="false"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0"
inkscape:window-width="1920"
inkscape:window-height="1029"
inkscape:window-x="0"
inkscape:window-y="24"
inkscape:window-maximized="1"
showborder="true"
showguides="false"
inkscape:guide-bbox="true"
inkscape:showpageshadow="false"
inkscape:snap-global="false"
inkscape:snap-bbox="true"
inkscape:bbox-paths="true"
inkscape:bbox-nodes="true"
inkscape:snap-bbox-edge-midpoints="true"
inkscape:snap-bbox-midpoints="true"
inkscape:object-paths="true"
inkscape:snap-intersection-paths="true"
inkscape:object-nodes="true"
inkscape:snap-smooth-nodes="true"
inkscape:snap-midpoints="true"
inkscape:snap-object-midpoints="true"
inkscape:snap-center="true"
inkscape:snap-nodes="true"
inkscape:snap-others="true"
inkscape:snap-page="true">
<inkscape:grid
type="xygrid"
id="grid821" />
<sodipodi:guide
orientation="1,0"
position="16,48"
id="guide823"
inkscape:locked="false" />
<sodipodi:guide
orientation="0,1"
position="64,80"
id="guide825"
inkscape:locked="false" />
<sodipodi:guide
orientation="1,0"
position="80,40"
id="guide827"
inkscape:locked="false" />
<sodipodi:guide
orientation="0,1"
position="64,16"
id="guide829"
inkscape:locked="false" />
</sodipodi:namedview>
<metadata
id="metadata6522">
<rdf:RDF>
<cc:Work
rdf:about="">
<dc:format>image/svg+xml</dc:format>
<dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" />
<dc:title />
</cc:Work>
</rdf:RDF>
</metadata>
<g
inkscape:label="BACKGROUND"
inkscape:groupmode="layer"
id="layer1"
transform="translate(268,-635.29076)"
style="display:inline">
<path
style="display:inline;fill:#ffffff;fill-opacity:1;stroke:none"
d="M 48 0 A 48 48 0 0 0 0 48 A 48 48 0 0 0 48 96 A 48 48 0 0 0 96 48 A 48 48 0 0 0 48 0 z "
id="path6455"
transform="translate(-268,635.29076)" />
<path
inkscape:connector-curvature="0"
style="display:inline;fill:#326de6;fill-opacity:1;stroke:none"
d="m -220,635.29076 a 48,48 0 0 0 -48,48 48,48 0 0 0 48,48 48,48 0 0 0 48,-48 48,48 0 0 0 -48,-48 z"
id="path6455-3" />
<path
inkscape:connector-curvature="0"
style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#326de6;fill-opacity:1;fill-rule:nonzero;stroke:#ffffff;stroke-width:2;stroke-linecap:butt;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
d="m -257.13275,693.64544 a 5.0524169,5.01107 0 0 0 0.28787,0.39638 l 18.28736,22.73877 a 5.0524169,5.01107 0 0 0 3.95007,1.88616 l 29.32654,-0.003 a 5.0524169,5.01107 0 0 0 3.94943,-1.88675 l 18.28255,-22.74294 a 5.0524169,5.01107 0 0 0 0.97485,-4.2391 l -6.52857,-28.3566 a 5.0524169,5.01107 0 0 0 -2.73381,-3.39906 l -26.4238,-12.61752 a 5.0524169,5.01107 0 0 0 -4.38381,4.3e-4 l -26.42114,12.62305 a 5.0524169,5.01107 0 0 0 -2.73296,3.39983 l -6.52262,28.35798 a 5.0524169,5.01107 0 0 0 0.68804,3.84268 z"
id="path4809" />
<path
style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
d="M 47.976562,17.478516 C 47.148902,17.491446 46.488127,18.172324 46.5,19 l -1,27.669922 -10.861328,24.701172 c -0.91351,1.842103 1.912154,3.147502 2.722656,1.257812 L 48,53.08 58.638672,72.628906 c 0.810502,1.88969 3.636166,0.584291 2.722656,-1.257812 L 50.5,46.671875 49.5,19 c 0.01214,-0.846036 -0.677418,-1.534706 -1.523438,-1.521484 z"
transform="translate(-268,635.29076)"
id="path4207"
inkscape:connector-curvature="0"
sodipodi:nodetypes="scccccccccs" />
<path
sodipodi:type="star"
style="color:#000000;display:inline;overflow:visible;visibility:visible;opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate"
id="path4218"
sodipodi:sides="3"
sodipodi:cx="-232"
sodipodi:cy="706.29077"
sodipodi:r1="5.8309517"
sodipodi:r2="2.9154758"
sodipodi:arg1="0.52359878"
sodipodi:arg2="1.5707963"
inkscape:flatsided="true"
inkscape:rounded="0"
inkscape:randomized="0"
d="m -226.95025,709.20625 -10.0995,0 5.04975,-8.74643 z"
inkscape:transform-center-y="0.28435141"
transform="matrix(-1.1408434,-0.54609465,0.54609465,-1.1408434,-881.23628,1383.8624)"
inkscape:transform-center-x="-1.5920962" />
<path
inkscape:transform-center-x="1.5920987"
transform="matrix(1.1408434,-0.54609465,-0.54609465,-1.1408434,441.40253,1383.8624)"
inkscape:transform-center-y="0.28435141"
d="m -226.95025,709.20625 -10.0995,0 5.04975,-8.74643 z"
inkscape:randomized="0"
inkscape:rounded="0"
inkscape:flatsided="true"
sodipodi:arg2="1.5707963"
sodipodi:arg1="0.52359878"
sodipodi:r2="2.9154758"
sodipodi:r1="5.8309517"
sodipodi:cy="706.29077"
sodipodi:cx="-232"
sodipodi:sides="3"
id="path4225"
style="color:#000000;display:inline;overflow:visible;visibility:visible;opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;enable-background:accumulate"
sodipodi:type="star" />
<path
style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate"
d="m -220.61945,681.04476 -19.90428,4.99988 -1.00407,-4.6236 -9.45532,8.58951 12.16601,3.8932 -0.95421,-4.40019 15.62258,-2.55254 6.59459,-1.95609 z"
id="path4227"
inkscape:connector-curvature="0"
sodipodi:nodetypes="ccccccccc" />
<path
sodipodi:nodetypes="ccccccccc"
inkscape:connector-curvature="0"
id="path4234"
d="m -219.4808,681.04476 19.90428,4.99988 1.00407,-4.6236 9.45532,8.58951 -12.16601,3.8932 0.95421,-4.40019 -15.62258,-2.55254 -6.59459,-1.95609 z"
style="color:#000000;font-style:normal;font-variant:normal;font-weight:normal;font-stretch:normal;font-size:medium;line-height:normal;font-family:sans-serif;text-indent:0;text-align:start;text-decoration:none;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;letter-spacing:normal;word-spacing:normal;text-transform:none;direction:ltr;block-progression:tb;writing-mode:lr-tb;baseline-shift:baseline;text-anchor:start;white-space:normal;clip-rule:nonzero;display:inline;overflow:visible;visibility:visible;opacity:1;isolation:auto;mix-blend-mode:normal;color-interpolation:sRGB;color-interpolation-filters:linearRGB;solid-color:#000000;solid-opacity:1;fill:#ffffff;fill-opacity:1;fill-rule:nonzero;stroke:none;stroke-width:3;stroke-linecap:round;stroke-linejoin:miter;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;marker:none;color-rendering:auto;image-rendering:auto;shape-rendering:auto;text-rendering:auto;enable-background:accumulate" />
</g>
<g
inkscape:groupmode="layer"
id="layer3"
inkscape:label="PLACE YOUR PICTOGRAM HERE"
style="display:inline">
<g
id="g4185" />
</g>
<style
id="style4217"
type="text/css">
.st0{fill:#419EDA;}
</style>
<style
id="style4285"
type="text/css">
.st0{clip-path:url(#SVGID_2_);fill:#EFBF1B;}
.st1{clip-path:url(#SVGID_2_);fill:#40BEB0;}
.st2{clip-path:url(#SVGID_2_);fill:#0AA5DE;}
.st3{clip-path:url(#SVGID_2_);fill:#231F20;}
.st4{fill:#D7A229;}
.st5{fill:#009B8F;}
</style>
<style
id="style4240"
type="text/css">
.st0{fill:#E8478B;}
.st1{fill:#40BEB0;}
.st2{fill:#37A595;}
.st3{fill:#231F20;}
</style>
<style
id="style4812"
type="text/css">
.st0{fill:#0AA5DE;}
.st1{fill:#40BEB0;}
.st2{opacity:0.26;fill:#353535;}
.st3{fill:#231F20;}
</style>
</svg>


View File

@ -0,0 +1,12 @@
repo: https://github.com/kubernetes/kubernetes.git
includes:
  - 'layer:nginx'
  - 'layer:tls-client'
  - 'interface:public-address'
options:
  tls-client:
    ca_certificate_path: '/srv/kubernetes/ca.crt'
    server_certificate_path: '/srv/kubernetes/server.crt'
    server_key_path: '/srv/kubernetes/server.key'
    client_certificate_path: '/srv/kubernetes/client.crt'
    client_key_path: '/srv/kubernetes/client.key'

View File

@ -0,0 +1,17 @@
name: kubeapi-load-balancer
summary: Nginx Load Balancer
maintainers:
  - Charles Butler <charles.butler@canonical.com>
description: |
  A round-robin Nginx load balancer to distribute traffic for Kubernetes apiservers.
tags:
  - misc
subordinate: false
series:
  - xenial
requires:
  apiserver:
    interface: http
provides:
  loadbalancer:
    interface: public-address
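
For context, a minimal sketch of wiring this charm into a cluster using the endpoints declared above; the endpoint names on the kubernetes-master side are assumptions and may differ in your bundle:

```shell
juju deploy kubeapi-load-balancer
juju add-relation kubernetes-master:kube-api-endpoint kubeapi-load-balancer:apiserver
juju add-relation kubernetes-master:loadbalancer kubeapi-load-balancer:loadbalancer
```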

View File

@ -0,0 +1,113 @@
#!/usr/bin/env python

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os
import socket
import subprocess

from charms import layer
from charms.reactive import when
from charmhelpers.core import hookenv
from charms.layer import nginx

from subprocess import Popen
from subprocess import PIPE
from subprocess import STDOUT


@when('certificates.available')
def request_server_certificates(tls):
    '''Send the data that is required to create a server certificate for
    this server.'''
    # Use the public ip of this unit as the Common Name for the certificate.
    common_name = hookenv.unit_public_ip()
    # Create SANs that the tls layer will add to the server cert.
    sans = [
        hookenv.unit_public_ip(),
        hookenv.unit_private_ip(),
        socket.gethostname(),
    ]
    # Create a path safe name by removing path characters from the unit name.
    certificate_name = hookenv.local_unit().replace('/', '_')
    # Request a server cert with this information.
    tls.request_server_cert(common_name, sans, certificate_name)


@when('nginx.available', 'apiserver.available',
      'certificates.server.cert.available')
def install_load_balancer(apiserver, tls):
    ''' Create the default vhost template for load balancing '''
    # Get the tls paths from the layer data.
    layer_options = layer.options('tls-client')
    server_cert_path = layer_options.get('server_certificate_path')
    cert_exists = server_cert_path and os.path.isfile(server_cert_path)
    server_key_path = layer_options.get('server_key_path')
    key_exists = server_key_path and os.path.isfile(server_key_path)
    # Do both the key and certificate exist?
    if cert_exists and key_exists:
        # At this point the cert and key exist, and they are owned by root.
        chown = ['chown', 'www-data:www-data', server_cert_path]
        # Change the owner to www-data so the nginx process can read the cert.
        subprocess.call(chown)
        chown = ['chown', 'www-data:www-data', server_key_path]
        # Change the owner to www-data so the nginx process can read the key.
        subprocess.call(chown)

        hookenv.open_port(hookenv.config('port'))
        services = apiserver.services()
        nginx.configure_site(
            'apilb',
            'apilb.conf',
            server_name='_',
            services=services,
            port=hookenv.config('port'),
            server_certificate=server_cert_path,
            server_key=server_key_path,
        )
        hookenv.status_set('active', 'Loadbalancer ready.')


@when('nginx.available')
def set_nginx_version():
    ''' Surface the currently deployed version of nginx to Juju '''
    cmd = 'nginx -v'
    p = Popen(cmd, shell=True,
              stdin=PIPE,
              stdout=PIPE,
              stderr=STDOUT,
              close_fds=True)
    raw = p.stdout.read()
    # The version comes back as:
    # nginx version: nginx/1.10.0 (Ubuntu)
    version = raw.split(b'/')[-1].split(b' ')[0]
    hookenv.application_version_set(version.rstrip())


@when('website.available')
def provide_application_details(website):
    ''' re-use the nginx layer website relation to relay the hostname/port
    to any consuming kubernetes-workers, or other units that require the
    kubernetes API '''
    website.configure(port=hookenv.config('port'))


@when('loadbalancer.available')
def provide_loadbalancing(loadbalancer):
    '''Send the public address and port to the public-address interface, so
    the subordinates can get the public address of this loadbalancer.'''
    loadbalancer.set_address_port(hookenv.unit_get('public-address'),
                                  hookenv.config('port'))

View File

@ -0,0 +1,36 @@
{% for app in services -%}
upstream target_service {
  {% for host in app['hosts'] -%}
  server {{ host['hostname'] }}:{{ host['port'] }};
  {% endfor %}
}
{% endfor %}

server {
    listen 443;
    server_name {{ server_name }};

    access_log /var/log/nginx.access.log;
    error_log /var/log/nginx.error.log;

    ssl on;
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_certificate {{ server_certificate }};
    ssl_certificate_key {{ server_key }};
    ssl_ciphers HIGH:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!MD5:!PSK:!RC4;
    ssl_prefer_server_ciphers on;

    location / {
      proxy_buffering off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header X-Forwarded-Proto $scheme;
      proxy_ssl_certificate {{ server_certificate }};
      proxy_ssl_certificate_key {{ server_key }};
      proxy_pass https://target_service;
      proxy_read_timeout 90;
    }
}

View File

@ -0,0 +1,141 @@
# Kubernetes end to end
End-to-end (e2e) tests for Kubernetes provide a mechanism to test the end-to-end
behavior of the system, and are the last signal to ensure end user operations
match developer specifications. Although unit and integration tests provide a
good signal, in a distributed system like Kubernetes it is not uncommon for a
minor change to pass all unit and integration tests yet cause unforeseen
changes at the system level.
The primary objectives of the e2e tests are to ensure consistent and reliable
behavior of the Kubernetes code base, and to catch hard-to-test bugs before
users do, when unit and integration tests are insufficient.
## Usage
To deploy the end-to-end test suite, it is best to deploy the
[kubernetes-core bundle](https://github.com/juju-solutions/bundle-kubernetes-core)
and then relate the `kubernetes-e2e` charm.
```shell
juju deploy kubernetes-core
juju deploy kubernetes-e2e
juju add-relation kubernetes-e2e kubernetes-master
juju add-relation kubernetes-e2e easyrsa
```
Once the relations have settled and the `kubernetes-e2e` charm reports
`Ready to test.`, you may kick off an end-to-end validation test.
### Running the e2e test
The e2e test is encapsulated as an action to ensure consistent runs of the
end-to-end test. The defaults are sensible for most deployments.
```shell
juju run-action kubernetes-e2e/0 test
```
### Tuning the e2e test
The e2e test is configurable. By default it focuses on the declared
conformance tests and skips known-flaky tests, in a cloud-agnostic way. These
default behaviors are configurable, which allows the operator to test only a
subset of the conformance tests, or to test more behaviors not enabled by
default. You can see all tunable options on the charm by inspecting the schema
output of the actions:
```shell
$ juju actions kubernetes-e2e --format=yaml --schema
test:
  description: Run end-to-end validation test suite
  properties:
    focus:
      default: \[Conformance\]
      description: Regex focus for executing the test
      type: string
    skip:
      default: \[Flaky\]
      description: Regex of tests to skip
      type: string
    timeout:
      default: 30000
      description: Timeout in nanoseconds
      type: integer
  title: test
  type: object
```
As an example, you can run a more limited set of tests for rapid validation of
a deployed cluster. The following example will skip the `Flaky`, `Slow`, and
`Feature` labeled tests:
```shell
juju run-action kubernetes-e2e/0 test skip='\[(Flaky|Slow|Feature:.*)\]'
```
> Note: the regex is escaped due to how bash handles brackets.
To see the different types of tests the Kubernetes end-to-end charm has access
to, we encourage you to read the upstream documentation on the different kinds
of tests, and to make sure you understand which subset of the tests you are running.
[Kinds of tests](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md#kinds-of-tests)
### More information on end-to-end testing
Along with the above descriptions, end-to-end testing is a much larger subject
than this readme can encapsulate. There is far more information in the
[end-to-end testing guide](https://github.com/kubernetes/kubernetes/blob/master/docs/devel/e2e-tests.md).
### Evaluating end-to-end results
It is not enough to simply run the test; the results also need to be
evaluated. Result output is stored in two places: the raw output of the e2e run
is available via the `juju show-action-output` command, as well as in a flat
file on disk on the `kubernetes-e2e` unit that executed the test.
> Note: The results will only be available once the action has
completed the test run. End-to-end testing can be quite time intensive, often
taking **greater than 1 hour**, depending on configuration.
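
While waiting, you can poll the progress of a queued action; a hedged example assuming a Juju 2.x client (the UUID is illustrative):

```shell
juju show-action-status 4ceed33a-d96d-465a-8f31-20d63442e51b
```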
##### Flat file
```shell
$ juju run-action kubernetes-e2e/0 test
Action queued with id: 4ceed33a-d96d-465a-8f31-20d63442e51b
$ juju scp kubernetes-e2e/0:4ceed33a-d96d-465a-8f31-20d63442e51b.log .
```
##### Action result output
```shell
$ juju run-action kubernetes-e2e/0 test
Action queued with id: 4ceed33a-d96d-465a-8f31-20d63442e51b
$ juju show-action-output 4ceed33a-d96d-465a-8f31-20d63442e51b
```
## Known issues
The e2e test suite assumes egress network access. It will pull container
images from `gcr.io`. You will need to have this registry unblocked in your
firewall to run the e2e tests successfully. Alternatively, you may use the exposed
proxy settings [properly configured](https://github.com/juju-solutions/bundle-canonical-kubernetes#proxy-configuration)
on the kubernetes-worker units.
## Contact information
Primary Authors: The ~containers team at Canonical
- [Matt Bruzek &lt;matthew.bruzek@canonical.com&gt;](mailto:matthew.bruzek@canonical.com)
- [Charles Butler &lt;charles.butler@canonical.com&gt;](mailto:charles.butler@canonical.com)
More resources for help:
- [Bug Tracker](https://github.com/juju-solutions/bundle-canonical-kubernetes/issues)
- [Github Repository](https://github.com/kubernetes/kubernetes/)
- [Mailing List](mailto:juju@lists.ubuntu.com)

View File

@ -0,0 +1,19 @@
test:
  description: "Execute an end-to-end test."
  params:
    focus:
      default: "\\[Conformance\\]"
      description: Run tests matching the focus regex pattern.
      type: string
    parallelism:
      default: 25
      description: The number of test nodes to run in parallel.
      type: integer
    skip:
      default: "\\[Flaky\\]|\\[Serial\\]"
      description: Skip tests matching the skip regex pattern.
      type: string
    timeout:
      default: 30000
      description: Timeout in nanoseconds
      type: integer
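
For reference, a hedged example of invoking the action with these parameters (the values shown simply spell out the defaults):

```shell
juju run-action kubernetes-e2e/0 test focus='\[Conformance\]' skip='\[Flaky\]|\[Serial\]' parallelism=25
```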

View File

@ -0,0 +1,47 @@
#!/bin/bash
set -ex
# Grab the action parameter values
FOCUS=$(action-get focus)
SKIP=$(action-get skip)
PARALLELISM=$(action-get parallelism)
if [ ! -f /home/ubuntu/.kube/config ]
then
  action-fail "Missing Kubernetes configuration."
  action-set suggestion="Relate to the certificate authority, and kubernetes-master"
  exit 0
fi
# get the host from the config file
SERVER=$(cat /home/ubuntu/.kube/config | grep server | sed 's/ server: //')
ACTION_HOME=/home/ubuntu
ACTION_LOG=$ACTION_HOME/${JUJU_ACTION_UUID}.log
ACTION_LOG_TGZ=$ACTION_LOG.tar.gz
ACTION_JUNIT=$ACTION_HOME/${JUJU_ACTION_UUID}-junit
ACTION_JUNIT_TGZ=$ACTION_JUNIT.tar.gz
# This initializes an e2e build log with the START TIMESTAMP.
echo "JUJU_E2E_START=$(date -u +%s)" | tee $ACTION_LOG
echo "JUJU_E2E_VERSION=$(kubectl version | grep Server | cut -d " " -f 5 | cut -d ":" -f 2 | sed s/\"// | sed s/\",//)" | tee -a $ACTION_LOG
ginkgo -nodes=$PARALLELISM $(which e2e.test) -- \
-kubeconfig /home/ubuntu/.kube/config \
-host $SERVER \
-ginkgo.focus $FOCUS \
-ginkgo.skip "$SKIP" \
-report-dir $ACTION_JUNIT 2>&1 | tee -a $ACTION_LOG
# This appends the END TIMESTAMP to the e2e build log
echo "JUJU_E2E_END=$(date -u +%s)" | tee -a $ACTION_LOG
# set cwd to /home/ubuntu and tar the artifacts using a minimal directory
# path. Extracting "home/ubuntu/1412341234/foobar.log" is cumbersome in ci
cd $ACTION_HOME/${JUJU_ACTION_UUID}-junit
tar -czf $ACTION_JUNIT_TGZ *
cd ..
tar -czf $ACTION_LOG_TGZ ${JUJU_ACTION_UUID}.log
action-set log="$ACTION_LOG_TGZ"
action-set junit="$ACTION_JUNIT_TGZ"
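
A hedged sketch of retrieving the artifacts this action publishes via `action-set` (the UUID is illustrative):

```shell
juju show-action-output 4ceed33a-d96d-465a-8f31-20d63442e51b    # reports the 'log' and 'junit' tarball paths
juju scp kubernetes-e2e/0:/home/ubuntu/4ceed33a-d96d-465a-8f31-20d63442e51b.log.tar.gz .
```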

File diff suppressed because one or more lines are too long


View File

@ -0,0 +1,10 @@
repo: https://github.com/juju-solutions/layer-kubernetes-e2e
includes:
  - layer:basic
  - layer:tls-client
  - interface:http
options:
  tls-client:
    ca_certificate_path: '/srv/kubernetes/ca.crt'
    client_certificate_path: '/srv/kubernetes/client.crt'
    client_key_path: '/srv/kubernetes/client.key'

View File

@ -0,0 +1,29 @@
name: kubernetes-e2e
summary: Run end-to-end validation of a cluster's conformance
maintainers:
  - Matthew Bruzek <matthew.bruzek@canonical.com>
  - Charles Butler <charles.butler@canonical.com>
description: |
  Deploy the Kubernetes e2e framework and validate the conformance of a
  deployed Kubernetes cluster.
tags:
  - validation
  - conformance
series:
  - xenial
requires:
  kubernetes-master:
    interface: http
resources:
  e2e_amd64:
    type: file
    filename: e2e_amd64.tar.gz
    description: Tarball of the e2e binary, and kubectl binary for amd64
  e2e_ppc64el:
    type: file
    filename: e2e_ppc64le.tar.gz
    description: Tarball of the e2e binary, and kubectl binary for ppc64le
  e2e_s390x:
    type: file
    filename: e2e_s390x.tar.gz
    description: Tarball of the e2e binary, and kubectl binary for s390x
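
In network-restricted environments these resources can be attached manually; a hedged sketch using the resource names declared above (the local file path is illustrative):

```shell
juju attach kubernetes-e2e e2e_amd64=./e2e_amd64.tar.gz
```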

View File

@ -0,0 +1,202 @@
#!/usr/bin/env python

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import os

from charms import layer
from charms.reactive import hook
from charms.reactive import is_state
from charms.reactive import remove_state
from charms.reactive import set_state
from charms.reactive import when
from charms.reactive import when_not

from charmhelpers.core import hookenv

from shlex import split

from subprocess import call
from subprocess import check_call
from subprocess import check_output


@hook('upgrade-charm')
def reset_delivery_states():
    ''' Remove the state set when resources are unpacked. '''
    remove_state('kubernetes-e2e.installed')


@when('kubernetes-e2e.installed')
def messaging():
    ''' Probe our relations to determine the proper messaging to the
    end user '''
    missing_services = []
    if not is_state('kubernetes-master.available'):
        missing_services.append('kubernetes-master')
    if not is_state('certificates.available'):
        missing_services.append('certificates')

    if missing_services:
        if len(missing_services) > 1:
            subject = 'relations'
        else:
            subject = 'relation'
        services = ','.join(missing_services)
        message = 'Missing {0}: {1}'.format(subject, services)
        hookenv.status_set('blocked', message)
        return

    hookenv.status_set('active', 'Ready to test.')


@when_not('kubernetes-e2e.installed')
def install_kubernetes_e2e():
    ''' Deliver the e2e and kubectl components from the binary resource stream
    packages declared in the charm '''
    charm_dir = os.getenv('CHARM_DIR')
    arch = determine_arch()

    # Get the resource via resource_get
    resource = 'e2e_{}'.format(arch)
    try:
        archive = hookenv.resource_get(resource)
    except Exception:
        message = 'Error fetching the {} resource.'.format(resource)
        hookenv.log(message)
        hookenv.status_set('blocked', message)
        return

    if not archive:
        hookenv.log('Missing {} resource.'.format(resource))
        hookenv.status_set('blocked', 'Missing {} resource.'.format(resource))
        return

    # Handle null resource publication, we check if filesize < 1mb
    filesize = os.stat(archive).st_size
    if filesize < 1000000:
        hookenv.status_set('blocked',
                           'Incomplete {} resource.'.format(resource))
        return

    hookenv.status_set('maintenance',
                       'Unpacking {} resource.'.format(resource))

    unpack_path = '{}/files/kubernetes'.format(charm_dir)
    os.makedirs(unpack_path, exist_ok=True)
    cmd = ['tar', 'xfvz', archive, '-C', unpack_path]
    hookenv.log(cmd)
    check_call(cmd)

    services = ['e2e.test', 'ginkgo', 'kubectl']
    for service in services:
        unpacked = '{}/{}'.format(unpack_path, service)
        app_path = '/usr/local/bin/{}'.format(service)
        install = ['install', '-v', unpacked, app_path]
        call(install)

    set_state('kubernetes-e2e.installed')


@when('tls_client.ca.saved', 'tls_client.client.certificate.saved',
      'tls_client.client.key.saved', 'kubernetes-master.available',
      'kubernetes-e2e.installed')
@when_not('kubeconfig.ready')
def prepare_kubeconfig_certificates(master):
    ''' Prepare the data to feed to create the kubeconfig file. '''
    layer_options = layer.options('tls-client')
    # Get all the paths to the tls information required for kubeconfig.
    ca = layer_options.get('ca_certificate_path')
    key = layer_options.get('client_key_path')
    cert = layer_options.get('client_certificate_path')

    servers = get_kube_api_servers(master)

    # pedantry
    kubeconfig_path = '/home/ubuntu/.kube/config'

    # Create kubernetes configuration in the default location for ubuntu.
    create_kubeconfig('/root/.kube/config', servers[0], ca, key, cert,
                      user='root')
    create_kubeconfig(kubeconfig_path, servers[0], ca, key, cert,
                      user='ubuntu')
    # Set permissions on the ubuntu users kubeconfig to ensure a consistent UX
    cmd = ['chown', 'ubuntu:ubuntu', kubeconfig_path]
    check_call(cmd)
    set_state('kubeconfig.ready')


@when('kubernetes-e2e.installed', 'kubeconfig.ready')
def set_app_version():
    ''' Declare the application version to juju '''
    cmd = ['kubectl', 'version', '--client']
    from subprocess import CalledProcessError
    try:
        version = check_output(cmd).decode('utf-8')
    except CalledProcessError:
        message = "Missing kubeconfig causes errors. Skipping version set."
        hookenv.log(message)
        return
    git_version = version.split('GitVersion:"v')[-1]
    version_from = git_version.split('",')[0]
    hookenv.application_version_set(version_from.rstrip())


def create_kubeconfig(kubeconfig, server, ca, key, certificate, user='ubuntu',
                      context='juju-context', cluster='juju-cluster'):
    '''Create a configuration for Kubernetes based on path using the supplied
    arguments for values of the Kubernetes server, CA, key, certificate, user
    context and cluster.'''
    # Create the config file with the address of the master server.
    cmd = 'kubectl config --kubeconfig={0} set-cluster {1} ' \
          '--server={2} --certificate-authority={3} --embed-certs=true'
    check_call(split(cmd.format(kubeconfig, cluster, server, ca)))
    # Create the credentials using the client flags.
    cmd = 'kubectl config --kubeconfig={0} set-credentials {1} ' \
          '--client-key={2} --client-certificate={3} --embed-certs=true'
    check_call(split(cmd.format(kubeconfig, user, key, certificate)))
    # Create a default context with the cluster.
    cmd = 'kubectl config --kubeconfig={0} set-context {1} ' \
          '--cluster={2} --user={3}'
    check_call(split(cmd.format(kubeconfig, context, cluster, user)))
    # Make the config use this new context.
    cmd = 'kubectl config --kubeconfig={0} use-context {1}'
    check_call(split(cmd.format(kubeconfig, context)))


def get_kube_api_servers(master):
    '''Return the kubernetes api server address and port for this
    relationship.'''
    hosts = []
    # Iterate over every service from the relation object.
    for service in master.services():
        for unit in service['hosts']:
            hosts.append('https://{0}:{1}'.format(unit['hostname'],
                                                  unit['port']))
    return hosts


def determine_arch():
    ''' dpkg wrapper to surface the architecture we are tied to'''
    cmd = ['dpkg', '--print-architecture']
    output = check_output(cmd).decode('utf-8')
    return output.rstrip()

View File

@ -0,0 +1,96 @@
# Kubernetes-master
[Kubernetes](http://kubernetes.io/) is an open source system for managing
application containers across a cluster of hosts. The Kubernetes project was
started by Google in 2014, combining the experience of running production
workloads with best practices from the community.
The Kubernetes project defines some new terms that may be unfamiliar to users
or operators. For more information please refer to the concept guide in the
[getting started guide](http://kubernetes.io/docs/user-guide/#concept-guide).
This charm is an encapsulation of the Kubernetes master processes and the
operations to run on any cloud for the entire lifecycle of the cluster.
This charm is built from other charm layers using the Juju reactive framework.
The other layers focus on a specific subset of operations, making this layer
specific to the operation of the Kubernetes master processes.
# Deployment
This charm is not fully functional when deployed by itself. It requires other
charms to model a complete Kubernetes cluster. A Kubernetes cluster needs a
distributed key-value store such as [Etcd](https://coreos.com/etcd/) and the
kubernetes-worker charm, which delivers the Kubernetes node services. A cluster
also requires a Software Defined Network (SDN) and Transport Layer Security (TLS)
so that the components in the cluster communicate securely.
Please take a look at the [Canonical Distribution of Kubernetes](https://jujucharms.com/canonical-kubernetes/)
or the [Kubernetes core](https://jujucharms.com/kubernetes-core/) bundles for
examples of complete models of Kubernetes clusters.
# Resources
The kubernetes-master charm takes advantage of the [Juju Resources](https://jujucharms.com/docs/2.0/developer-resources)
feature to deliver the Kubernetes software.
In deployments on public clouds the Charm Store provides the resource to the
charm automatically with no user intervention. Some environments with strict
firewall rules may not be able to contact the Charm Store. In these network
restricted environments the resource can be uploaded to the model by the Juju
operator.
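
A hedged sketch of such an upload; the resource name (`kubernetes`) is an assumption, so check `juju list-resources kubernetes-master` for the names your charm revision actually declares:

```shell
juju attach kubernetes-master kubernetes=./kubernetes.tar.gz
```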
# Configuration
This charm supports some configuration options to set up a Kubernetes cluster
that works in your environment:
#### dns_domain
The domain name to use for the Kubernetes cluster for DNS.
#### enable-dashboard-addons
Enables the installation of Kubernetes dashboard, Heapster, Grafana, and
InfluxDB.
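
A hedged example of adjusting these options on a deployed model, assuming a Juju 2.x client (older clients use `juju set-config`):

```shell
juju config kubernetes-master dns_domain=cluster.local enable-dashboard-addons=false
```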
# DNS for the cluster
The DNS add-on allows pods to have DNS names in addition to IP addresses.
The Kubernetes cluster DNS server (based off the SkyDNS library) supports
forward lookups (A records), service lookups (SRV records) and reverse IP
address lookups (PTR records). More information about the DNS can be obtained
from the [Kubernetes DNS admin guide](http://kubernetes.io/docs/admin/dns/).
# Actions
The kubernetes-master charm models a few one time operations called
[Juju actions](https://jujucharms.com/docs/stable/actions) that can be run by
Juju users.
#### create-rbd-pv
This action creates RADOS Block Device (RBD) in Ceph and defines a Persistent
Volume in Kubernetes so the containers can use durable storage. This action
requires a relation to the ceph-mon charm before it can create the volume.
#### restart
This action restarts the master processes `kube-apiserver`,
`kube-controller-manager`, and `kube-scheduler` when the user needs a restart.
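
A hedged example of invoking these actions (the parameter values are illustrative):

```shell
juju run-action kubernetes-master/0 restart
juju run-action kubernetes-master/0 create-rbd-pv name=vol0 size=50
```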
# More information
- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes documentation](http://kubernetes.io/docs/)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)
# Contact
The kubernetes-master charm is free and open source operations code created
by the containers team at Canonical.
Canonical also offers enterprise support and customization services. Please
refer to the [Kubernetes product page](https://www.ubuntu.com/cloud/kubernetes)
for more details.

View File

@ -0,0 +1,28 @@
restart:
  description: Restart the Kubernetes master services on demand.
create-rbd-pv:
  description: Create a RADOS Block Device (RBD) volume in Ceph and create a PersistentVolume in Kubernetes.
  params:
    name:
      type: string
      description: Name of the persistent volume.
      minLength: 1
    size:
      type: integer
      description: Size in MB of the RBD volume.
      minimum: 1
    mode:
      type: string
      default: ReadWriteOnce
      description: Access mode for the persistent volume.
    filesystem:
      type: string
      default: xfs
      description: File system type to format the volume.
    skip-size-check:
      type: boolean
      default: false
      description: Allow creation of an overprovisioned RBD volume.
  required:
    - name
    - size

View File

@ -0,0 +1,297 @@
#!/usr/bin/env python

# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from charms.templating.jinja2 import render
from charms.reactive import is_state
from charmhelpers.core.hookenv import action_get
from charmhelpers.core.hookenv import action_set
from charmhelpers.core.hookenv import action_fail
from subprocess import check_call
from subprocess import check_output
from subprocess import CalledProcessError
from tempfile import TemporaryDirectory

import re
import os
import sys


def main():
    ''' Control logic to enlist Ceph RBD volumes as PersistentVolumes in
    Kubernetes. This will invoke the validation steps, and only execute if
    this script thinks the environment is 'sane' enough to provision volumes.
    '''
    # validate relationship pre-reqs before additional steps can be taken
    if not validate_relation():
        print('Failed ceph relationship check')
        action_fail('Failed ceph relationship check')
        return

    if not is_ceph_healthy():
        print('Ceph was not healthy.')
        action_fail('Ceph was not healthy.')
        return

    context = {}
    context['RBD_NAME'] = action_get_or_default('name').strip()
    context['RBD_SIZE'] = action_get_or_default('size')
    context['RBD_FS'] = action_get_or_default('filesystem').strip()
    context['PV_MODE'] = action_get_or_default('mode').strip()

    # Ensure we're not exceeding available space in the pool
    if not validate_space(context['RBD_SIZE']):
        return

    # Ensure our parameters match
    param_validation = validate_parameters(context['RBD_NAME'],
                                           context['RBD_FS'],
                                           context['PV_MODE'])
    if not param_validation == 0:
        return

    if not validate_unique_volume_name(context['RBD_NAME']):
        action_fail('Volume name collision detected. Volume creation aborted.')
        return

    context['monitors'] = get_monitors()

    # Invoke creation and format the mount device
    create_rbd_volume(context['RBD_NAME'],
                      context['RBD_SIZE'],
                      context['RBD_FS'])

    # Create a temporary workspace to render our persistentVolume template,
    # and enlist the RBD based PV we've just created
    with TemporaryDirectory() as active_working_path:
        temp_template = '{}/pv.yaml'.format(active_working_path)
        render('rbd-persistent-volume.yaml', temp_template, context)

        cmd = ['kubectl', 'create', '-f', temp_template]
        debug_command(cmd)
        check_call(cmd)


def action_get_or_default(key):
    ''' Convenience method to manage defaults since actions dont appear to
    properly support defaults '''
    value = action_get(key)
    if value:
        return value
    elif key == 'filesystem':
        return 'xfs'
    elif key == 'size':
        return 0
    elif key == 'mode':
        return "ReadWriteOnce"
    elif key == 'skip-size-check':
        return False
    else:
        return ''


def create_rbd_volume(name, size, filesystem):
    ''' Create the RBD volume in Ceph. Then mount it locally to format it for
    the requested filesystem.

    :param name - The name of the RBD volume
    :param size - The size in MB of the volume
    :param filesystem - The type of filesystem to format the block device
    '''
    # Create the rbd volume
    # $ rbd create foo --size 50 --image-feature layering
    command = ['rbd', 'create', '--size', '{}'.format(size), '--image-feature',
               'layering', name]
    debug_command(command)
    check_call(command)

    # Lift the validation sequence to determine if we actually created the
    # rbd volume
    if validate_unique_volume_name(name):
        # we failed to create the RBD volume. whoops
        action_fail('RBD Volume not listed after creation.')
        print('Ceph RBD volume {} not found in rbd list'.format(name))
        # hack, needs love if we're killing the process thread this deep in
        # the call stack.
        sys.exit(0)

    mount = ['rbd', 'map', name]
    debug_command(mount)
    device_path = check_output(mount).strip()

    try:
        format_command = ['mkfs.{}'.format(filesystem), device_path]
        debug_command(format_command)
        check_call(format_command)
        unmount = ['rbd', 'unmap', name]
        debug_command(unmount)
        check_call(unmount)
    except CalledProcessError:
        print('Failed to format filesystem and unmount. RBD created but not'
              ' enlisted.')
        action_fail('Failed to format filesystem and unmount.'
                    ' RBD created but not enlisted.')


def is_ceph_healthy():
    ''' Probe the remote ceph cluster for health status '''
    command = ['ceph', 'health']
    debug_command(command)
    health_output = check_output(command)
    if b'HEALTH_OK' in health_output:
        return True
    else:
        return False


def get_monitors():
    ''' Parse the monitors out of /etc/ceph/ceph.conf '''
    found_hosts = []
    # This is kind of hacky. We should be piping this in from juju relations
    with open('/etc/ceph/ceph.conf', 'r') as ceph_conf:
        for line in ceph_conf.readlines():
            if 'mon host' in line:
                # strip out the key definition
                hosts = line.lstrip('mon host = ').split(' ')
                for host in hosts:
                    found_hosts.append(host)

    return found_hosts


def get_available_space():
    ''' Determine the space available in the RBD pool. Throw an exception if
    the RBD pool ('rbd') isn't found. '''
    command = ['ceph', 'df']
    debug_command(command)
    out = check_output(command).decode('utf-8')
    for line in out.splitlines():
        stripped = line.strip()
        if stripped.startswith('rbd'):
            M = stripped.split()[-2].replace('M', '')
            return int(M)
    raise UnknownAvailableSpaceException('Unable to determine available space.')  # noqa


def validate_unique_volume_name(name):
    ''' Poll the CEPH-MON services to determine if we have a unique rbd volume
    name to use. If there is a naming collision, block the request for volume
    provisioning.

    :param name - The name of the RBD volume
    '''
    command = ['rbd', 'list']
    debug_command(command)
    raw_out = check_output(command)

    # Split the output on newlines
    # output spec:
    # $ rbd list
    # foo
    # foobar
    volume_list = raw_out.decode('utf-8').splitlines()

    for volume in volume_list:
        if volume.strip() == name:
            return False

    return True


def validate_relation():
    ''' Determine if we are related to ceph. If we are not, we should
    note this in the action output and fail this action run. We are relying
    on specific files in specific paths to be placed in order for this
    function to work. This method verifies those files are placed. '''
    # TODO: Validate that the ceph-common package is installed
    if not is_state('ceph-storage.available'):
        message = 'Failed to detect connected ceph-mon'
        print(message)
        action_set({'pre-req.ceph-relation': message})
        return False

    if not os.path.isfile('/etc/ceph/ceph.conf'):
        message = 'No Ceph configuration found in /etc/ceph/ceph.conf'
        print(message)
        action_set({'pre-req.ceph-configuration': message})
        return False

    # TODO: Validate ceph key
    return True


def validate_space(size):
    if action_get_or_default('skip-size-check'):
        return True
    available_space = get_available_space()
    if available_space < size:
        msg = 'Unable to allocate RBD of size {}MB, only {}MB are available.'
        action_fail(msg.format(size, available_space))
        return False
    return True


def validate_parameters(name, fs, mode):
    ''' Validate the user inputs to ensure they conform to what the
    action expects. This method will check the naming characters used
    for the rbd volume, ensure they have selected a fstype we are expecting
    and the mode against our whitelist '''
    name_regex = '^[a-zA-z0-9][a-zA-Z0-9|-]'

    fs_whitelist = ['xfs', 'ext4']

    # see http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes
    # for supported operations on RBD volumes.
    mode_whitelist = ['ReadWriteOnce', 'ReadOnlyMany']

    fails = 0

    if not re.match(name_regex, name):
        message = 'Validation failed for RBD volume-name'
        action_fail(message)
        fails = fails + 1
        action_set({'validation.name': message})

    if fs not in fs_whitelist:
        message = 'Validation failed for file system'
        action_fail(message)
        fails = fails + 1
        action_set({'validation.filesystem': message})

    if mode not in mode_whitelist:
        message = "Validation failed for mode"
        action_fail(message)
        fails = fails + 1
        action_set({'validation.mode': message})

    return fails


def debug_command(cmd):
    ''' Print a debug statement of the command invoked '''
    print("Invoking {}".format(cmd))


class UnknownAvailableSpaceException(Exception):
    pass


if __name__ == '__main__':
    main()

View File

@ -0,0 +1,17 @@
#!/bin/bash
set +ex
# Restart the apiserver, controller-manager, and scheduler
systemctl restart kube-apiserver
action-set 'apiserver.status' 'restarted'
systemctl restart kube-controller-manager
action-set 'controller-manager.status' 'restarted'
systemctl restart kube-scheduler
action-set 'kube-scheduler.status' 'restarted'

View File

@ -0,0 +1,13 @@
options:
  enable-dashboard-addons:
    type: boolean
    default: True
    description: Deploy the Kubernetes Dashboard and Heapster addons
  dns_domain:
    type: string
    default: cluster.local
    description: The local domain for cluster DNS
  service-cidr:
    type: string
    default: 10.152.183.0/24
    description: CIDR to use for Kubernetes services. Cannot be changed after deployment.
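
Because `service-cidr` cannot be changed later, it should be set at deploy time if the default does not suit your network; a hedged sketch:

```shell
juju deploy kubernetes-master --config service-cidr=10.152.183.0/24
```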

View File

@ -0,0 +1,13 @@
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,13 @@
#!/bin/sh
set -ux
alias kubectl="kubectl --kubeconfig=/home/ubuntu/config"
kubectl cluster-info > $DEBUG_SCRIPT_DIR/cluster-info
kubectl cluster-info dump > $DEBUG_SCRIPT_DIR/cluster-info-dump
for obj in pods svc ingress secrets pv pvc rc; do
  kubectl describe $obj --all-namespaces > $DEBUG_SCRIPT_DIR/describe-$obj
done

for obj in nodes; do
  kubectl describe $obj > $DEBUG_SCRIPT_DIR/describe-$obj
done

View File

@ -0,0 +1,13 @@
#!/bin/sh
set -ux
for service in kube-apiserver kube-controller-manager kube-scheduler; do
  systemctl status $service > $DEBUG_SCRIPT_DIR/$service-systemctl-status
  journalctl -u $service > $DEBUG_SCRIPT_DIR/$service-journal
done
mkdir -p $DEBUG_SCRIPT_DIR/etc-default
cp -v /etc/default/kube* $DEBUG_SCRIPT_DIR/etc-default
mkdir -p $DEBUG_SCRIPT_DIR/lib-systemd-system
cp -v /lib/systemd/system/kube* $DEBUG_SCRIPT_DIR/lib-systemd-system

File diff suppressed because one or more lines are too long


View File

@ -0,0 +1,23 @@
repo: https://github.com/kubernetes/kubernetes.git
includes:
  - 'layer:basic'
  - 'layer:tls-client'
  - 'layer:debug'
  - 'interface:etcd'
  - 'interface:http'
  - 'interface:kubernetes-cni'
  - 'interface:kube-dns'
  - 'interface:ceph-admin'
  - 'interface:public-address'
options:
  basic:
    packages:
      - socat
  tls-client:
    ca_certificate_path: '/srv/kubernetes/ca.crt'
    server_certificate_path: '/srv/kubernetes/server.crt'
    server_key_path: '/srv/kubernetes/server.key'
    client_certificate_path: '/srv/kubernetes/client.crt'
    client_key_path: '/srv/kubernetes/client.key'
tactics:
  - 'tactics.update_addons.UpdateAddonsTactic'

View File

@ -0,0 +1,135 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from charmhelpers.core import unitdata
class FlagManager:
'''
FlagManager - A Python class for managing the flags to pass to an
application without remembering what's been set previously.
This is a blind class assuming the operator knows what they are doing.
Each instance of this class should be initialized with the intended
application to manage flags. Flags are then appended to a data-structure
and cached in unitdata for later recall.
The underlying data provider is backed by a SQLite database on each unit,
tracking the dictionary, provided by the 'charmhelpers' python package.
Summary:
opts = FlagManager('docker')
opts.add('bip', '192.168.22.2')
opts.to_s()
'''
def __init__(self, daemon, opts_path=None):
self.db = unitdata.kv()
self.daemon = daemon
if not self.db.get(daemon):
self.data = {}
else:
self.data = self.db.get(daemon)
def __save(self):
self.db.set(self.daemon, self.data)
def add(self, key, value, strict=False):
'''
Adds data to the map of values for the DockerOpts file.
Supports single values, or "multiopt variables". If you
have a flag only option, like --tlsverify, set the value
to None. To preserve the exact value, pass strict
eg:
opts.add('label', 'foo')
opts.add('label', 'foo, bar, baz')
opts.add('flagonly', None)
opts.add('cluster-store', 'consul://a:4001,b:4001,c:4001/swarm',
strict=True)
'''
if strict:
self.data['{}-strict'.format(key)] = value
self.__save()
return
if value:
values = [x.strip() for x in value.split(',')]
# handle updates
if key in self.data and self.data[key] is not None:
item_data = self.data[key]
for c in values:
c = c.strip()
if c not in item_data:
item_data.append(c)
self.data[key] = item_data
else:
# handle new
self.data[key] = values
else:
# handle flagonly
self.data[key] = None
self.__save()
def remove(self, key, value):
'''
Remove a flag value from the DockerOpts manager
Assuming the data is currently {'foo': ['bar', 'baz']}
d.remove('foo', 'bar')
> {'foo': ['baz']}
:params key:
:params value:
'''
self.data[key].remove(value)
self.__save()
def destroy(self, key, strict=False):
'''
Destructively remove all values and key from the FlagManager
Assuming the data is currently {'foo': ['bar', 'baz']}
d.destroy('foo')
>{}
:params key:
:params strict:
'''
try:
if strict:
self.data.pop('{}-strict'.format(key))
else:
self.data.pop(key)
except KeyError:
pass
def to_s(self):
'''
Render the flags to a single string, prepared for the Docker
Defaults file. Typically in /etc/default/docker
d.to_s()
> "--foo=bar --foo=baz"
'''
flags = []
for key in self.data:
if self.data[key] is None:
# handle flagonly
flags.append("{}".format(key))
elif '-strict' in key:
# handle strict values, and do it in 2 steps.
# If we rstrip -strict it strips a trailing s
proper_key = key.rstrip('strict').rstrip('-')
flags.append("{}={}".format(proper_key, self.data[key]))
else:
# handle multiopt and typical flags
for item in self.data[key]:
flags.append("{}={}".format(key, item))
return ' '.join(flags)
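A short illustrative sketch of the rendering behaviour documented above. It is not part of the charm code: FlagManager persists to the unit's unitdata store, so the snippet assumes a Juju hook context, and the flag names and values below are examples only.

```python
from charms.kubernetes.flagmanager import FlagManager

opts = FlagManager('kube-apiserver')
opts.add('--v', '4')                      # simple key/value flag
opts.add('--allow-privileged', None)      # flag-only option, rendered without a value
opts.add('--etcd-servers', 'https://10.1.1.1:2379,https://10.1.1.2:2379',
         strict=True)                     # strict: value kept verbatim, not split on commas
print(opts.to_s())
# e.g. "--v=4 --allow-privileged --etcd-servers=https://10.1.1.1:2379,https://10.1.1.2:2379"
# (ordering may vary because flags are stored in a dict)
```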

View File

@ -1,5 +1,5 @@
name: kubernetes
summary: Kubernetes is an application container orchestration platform.
name: kubernetes-master
summary: The Kubernetes control plane.
maintainers:
- Matthew Bruzek <matthew.bruzek@canonical.com>
- Charles Butler <charles.butler@canonical.com>
@ -11,9 +11,28 @@ description: |
restart and place containers on healthy nodes if a node ever goes away.
tags:
- infrastructure
- kubernetes
- master
subordinate: false
requires:
etcd:
interface: etcd
series:
series:
- xenial
provides:
kube-api-endpoint:
interface: http
cluster-dns:
interface: kube-dns
cni:
interface: kubernetes-cni
scope: container
requires:
etcd:
interface: etcd
loadbalancer:
interface: public-address
ceph-storage:
interface: ceph-admin
resources:
kubernetes:
type: file
filename: kubernetes.tar.gz
description: "A tarball packaged release of the kubernetes bins."

View File

@ -0,0 +1,668 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import base64
import os
import random
import socket
import string
from shlex import split
from subprocess import call
from subprocess import check_call
from subprocess import check_output
from subprocess import CalledProcessError
from charms import layer
from charms.reactive import hook
from charms.reactive import remove_state
from charms.reactive import set_state
from charms.reactive import when
from charms.reactive import when_not
from charms.reactive.helpers import data_changed
from charms.kubernetes.flagmanager import FlagManager
from charmhelpers.core import hookenv
from charmhelpers.core import host
from charmhelpers.core import unitdata
from charmhelpers.core.templating import render
from charmhelpers.fetch import apt_install
dashboard_templates = [
'dashboard-controller.yaml',
'dashboard-service.yaml',
'influxdb-grafana-controller.yaml',
'influxdb-service.yaml',
'grafana-service.yaml',
'heapster-controller.yaml',
'heapster-service.yaml'
]
def service_cidr():
''' Return the charm's service-cidr config '''
db = unitdata.kv()
frozen_cidr = db.get('kubernetes-master.service-cidr')
return frozen_cidr or hookenv.config('service-cidr')
def freeze_service_cidr():
''' Freeze the service CIDR. Once the apiserver has started, we can no
longer safely change this value. '''
db = unitdata.kv()
db.set('kubernetes-master.service-cidr', service_cidr())
@hook('upgrade-charm')
def reset_states_for_delivery():
'''An upgrade charm event was triggered by Juju, react to that here.'''
services = ['kube-apiserver',
'kube-controller-manager',
'kube-scheduler']
for service in services:
hookenv.log('Stopping {0} service.'.format(service))
host.service_stop(service)
remove_state('kubernetes-master.components.started')
remove_state('kubernetes-master.components.installed')
remove_state('kube-dns.available')
remove_state('kubernetes.dashboard.available')
@when_not('kubernetes-master.components.installed')
def install():
'''Unpack and put the Kubernetes master files on the path.'''
# Get the resource via resource_get
try:
archive = hookenv.resource_get('kubernetes')
except Exception:
message = 'Error fetching the kubernetes resource.'
hookenv.log(message)
hookenv.status_set('blocked', message)
return
if not archive:
hookenv.log('Missing kubernetes resource.')
hookenv.status_set('blocked', 'Missing kubernetes resource.')
return
# Handle null resource publication; we check whether the filesize is < 1 MB
filesize = os.stat(archive).st_size
if filesize < 1000000:
hookenv.status_set('blocked', 'Incomplete kubernetes resource.')
return
hookenv.status_set('maintenance', 'Unpacking kubernetes resource.')
files_dir = os.path.join(hookenv.charm_dir(), 'files')
os.makedirs(files_dir, exist_ok=True)
command = 'tar -xvzf {0} -C {1}'.format(archive, files_dir)
hookenv.log(command)
check_call(split(command))
apps = [
{'name': 'kube-apiserver', 'path': '/usr/local/bin'},
{'name': 'kube-controller-manager', 'path': '/usr/local/bin'},
{'name': 'kube-scheduler', 'path': '/usr/local/bin'},
{'name': 'kubectl', 'path': '/usr/local/bin'},
]
for app in apps:
unpacked = '{}/{}'.format(files_dir, app['name'])
app_path = os.path.join(app['path'], app['name'])
install = ['install', '-v', '-D', unpacked, app_path]
hookenv.log(install)
check_call(install)
set_state('kubernetes-master.components.installed')
@when('cni.connected')
@when_not('cni.configured')
def configure_cni(cni):
''' Set master configuration on the CNI relation. This lets the CNI
subordinate know that we're the master so it can respond accordingly. '''
cni.set_config(is_master=True, kubeconfig_path='')
@when('kubernetes-master.components.installed')
@when_not('authentication.setup')
def setup_authentication():
'''Setup basic authentication and token access for the cluster.'''
api_opts = FlagManager('kube-apiserver')
controller_opts = FlagManager('kube-controller-manager')
api_opts.add('--basic-auth-file', '/srv/kubernetes/basic_auth.csv')
api_opts.add('--token-auth-file', '/srv/kubernetes/known_tokens.csv')
api_opts.add('--service-cluster-ip-range', service_cidr())
hookenv.status_set('maintenance', 'Rendering authentication templates.')
htaccess = '/srv/kubernetes/basic_auth.csv'
if not os.path.isfile(htaccess):
setup_basic_auth('admin', 'admin', 'admin')
known_tokens = '/srv/kubernetes/known_tokens.csv'
if not os.path.isfile(known_tokens):
setup_tokens(None, 'admin', 'admin')
setup_tokens(None, 'kubelet', 'kubelet')
setup_tokens(None, 'kube_proxy', 'kube_proxy')
# Generate the default service account token key
os.makedirs('/etc/kubernetes', exist_ok=True)
cmd = ['openssl', 'genrsa', '-out', '/etc/kubernetes/serviceaccount.key',
'2048']
check_call(cmd)
api_opts.add('--service-account-key-file',
'/etc/kubernetes/serviceaccount.key')
controller_opts.add('--service-account-private-key-file',
'/etc/kubernetes/serviceaccount.key')
set_state('authentication.setup')
@when('kubernetes-master.components.installed')
def set_app_version():
''' Declare the application version to juju '''
version = check_output(['kube-apiserver', '--version'])
hookenv.application_version_set(version.split(b' v')[-1].rstrip())
@when('kube-dns.available', 'kubernetes-master.components.installed')
def idle_status():
''' Signal that the master components are up and running. '''
if hookenv.config('service-cidr') != service_cidr():
hookenv.status_set('active', 'WARN: cannot change service-cidr, still using ' + service_cidr())
else:
hookenv.status_set('active', 'Kubernetes master running.')
@when('etcd.available', 'kubernetes-master.components.installed',
'certificates.server.cert.available')
@when_not('kubernetes-master.components.started')
def start_master(etcd, tls):
'''Run the Kubernetes master components.'''
hookenv.status_set('maintenance',
'Rendering the Kubernetes master systemd files.')
freeze_service_cidr()
handle_etcd_relation(etcd)
# Use the etcd relation object to render files with etcd information.
render_files()
hookenv.status_set('maintenance',
'Starting the Kubernetes master services.')
services = ['kube-apiserver',
'kube-controller-manager',
'kube-scheduler']
for service in services:
hookenv.log('Starting {0} service.'.format(service))
host.service_start(service)
hookenv.open_port(6443)
hookenv.status_set('active', 'Kubernetes master services ready.')
set_state('kubernetes-master.components.started')
@when('cluster-dns.connected')
def send_cluster_dns_detail(cluster_dns):
''' Send cluster DNS info '''
# Note that the DNS server doesn't necessarily exist at this point. We know
# where we're going to put it, though, so let's send the info anyway.
dns_ip = get_dns_ip()
cluster_dns.set_dns_info(53, hookenv.config('dns_domain'), dns_ip)
@when('kube-api-endpoint.available')
def push_service_data(kube_api):
''' Send configuration to the load balancer, and close access to the
public interface '''
kube_api.configure(port=6443)
@when('certificates.available')
def send_data(tls):
'''Send the data that is required to create a server certificate for
this server.'''
# Use the public ip of this unit as the Common Name for the certificate.
common_name = hookenv.unit_public_ip()
# Get the SDN gateway based on the cidr address.
kubernetes_service_ip = get_kubernetes_service_ip()
domain = hookenv.config('dns_domain')
# Create SANs that the tls layer will add to the server cert.
sans = [
hookenv.unit_public_ip(),
hookenv.unit_private_ip(),
socket.gethostname(),
kubernetes_service_ip,
'kubernetes',
'kubernetes.{0}'.format(domain),
'kubernetes.default',
'kubernetes.default.svc',
'kubernetes.default.svc.{0}'.format(domain)
]
# Create a path safe name by removing path characters from the unit name.
certificate_name = hookenv.local_unit().replace('/', '_')
# Request a server cert with this information.
tls.request_server_cert(common_name, sans, certificate_name)
@when('kube-api.connected')
def push_api_data(kube_api):
''' Send configuration to remote consumer.'''
# Since all relations already have the private ip address, only
# send the port on the relation object to all consumers.
# The kubernetes api-server uses 6443 for the default secure port.
kube_api.set_api_port('6443')
@when('kubernetes-master.components.started', 'kube-dns.available')
@when_not('kubernetes.dashboard.available')
def install_dashboard_addons():
''' Launch dashboard addons if they are enabled in config '''
if hookenv.config('enable-dashboard-addons'):
hookenv.log('Launching kubernetes dashboard.')
context = {}
context['arch'] = arch()
try:
context['pillar'] = {'num_nodes': get_node_count()}
for template in dashboard_templates:
create_addon(template, context)
set_state('kubernetes.dashboard.available')
except CalledProcessError:
hookenv.log('Kubernetes dashboard waiting on kubeapi')
@when('kubernetes-master.components.started', 'kubernetes.dashboard.available')
def remove_dashboard_addons():
''' Removes dashboard addons if they are disabled in config '''
if not hookenv.config('enable-dashboard-addons'):
hookenv.log('Removing kubernetes dashboard.')
for template in dashboard_templates:
delete_addon(template)
remove_state('kubernetes.dashboard.available')
@when('kubernetes-master.components.installed')
@when_not('kube-dns.available')
def start_kube_dns():
''' State guard to starting DNS '''
# Interrogate the cluster to find out if we have at least one worker
# that is capable of running the workload.
cmd = ['kubectl', 'get', 'nodes']
try:
out = check_output(cmd)
if b'NAME' not in out:
hookenv.log('Unable to determine node count, waiting '
'until nodes are ready')
return
except CalledProcessError:
hookenv.log('kube-apiserver not ready, not requesting dns deployment')
return
message = 'Rendering the Kubernetes DNS files.'
hookenv.log(message)
hookenv.status_set('maintenance', message)
context = {
'arch': arch(),
# The dictionary named 'pillar' is a construct of the k8s template files.
'pillar': {
'dns_server': get_dns_ip(),
'dns_replicas': 1,
'dns_domain': hookenv.config('dns_domain')
}
}
create_addon('kubedns-controller.yaml', context)
create_addon('kubedns-svc.yaml', context)
set_state('kube-dns.available')
@when('kubernetes-master.components.installed', 'loadbalancer.available',
'certificates.ca.available', 'certificates.client.cert.available')
def loadbalancer_kubeconfig(loadbalancer, ca, client):
# Get the potential list of loadbalancers from the relation object.
hosts = loadbalancer.get_addresses_ports()
# Get the public address of loadbalancers so users can access the cluster.
address = hosts[0].get('public-address')
# Get the port of the loadbalancer so users can access the cluster.
port = hosts[0].get('port')
server = 'https://{0}:{1}'.format(address, port)
build_kubeconfig(server)
@when('kubernetes-master.components.installed',
'certificates.ca.available', 'certificates.client.cert.available')
@when_not('loadbalancer.available')
def create_self_config(ca, client):
'''Create a kubernetes configuration for the master unit.'''
server = 'https://{0}:{1}'.format(hookenv.unit_get('public-address'), 6443)
build_kubeconfig(server)
@when('ceph-storage.available')
def ceph_state_control(ceph_admin):
''' Determine if we should remove the state that controls the re-render
and execution of the ceph-relation-changed event because there
are changes in the relationship data, and we should re-render any
configs, keys, and/or service pre-reqs '''
ceph_relation_data = {
'mon_hosts': ceph_admin.mon_hosts(),
'fsid': ceph_admin.fsid(),
'auth_supported': ceph_admin.auth(),
'hostname': socket.gethostname(),
'key': ceph_admin.key()
}
# Re-execute the rendering if the data has changed.
if data_changed('ceph-config', ceph_relation_data):
remove_state('ceph-storage.configured')
@when('ceph-storage.available')
@when_not('ceph-storage.configured')
def ceph_storage(ceph_admin):
'''Ceph on kubernetes will require a few things - namely a ceph
configuration, and the ceph secret key file used for authentication.
This method will install the client package, and render the requisite files
in order to consume the ceph-storage relation.'''
ceph_context = {
'mon_hosts': ceph_admin.mon_hosts(),
'fsid': ceph_admin.fsid(),
'auth_supported': ceph_admin.auth(),
'use_syslog': "true",
'ceph_public_network': '',
'ceph_cluster_network': '',
'loglevel': 1,
'hostname': socket.gethostname(),
}
# Install the ceph common utilities.
apt_install(['ceph-common'], fatal=True)
etc_ceph_directory = '/etc/ceph'
if not os.path.isdir(etc_ceph_directory):
os.makedirs(etc_ceph_directory)
charm_ceph_conf = os.path.join(etc_ceph_directory, 'ceph.conf')
# Render the ceph configuration from the ceph conf template
render('ceph.conf', charm_ceph_conf, ceph_context)
# The key can rotate independently of other ceph config, so validate it
admin_key = os.path.join(etc_ceph_directory,
'ceph.client.admin.keyring')
try:
with open(admin_key, 'w') as key_file:
key_file.write("[client.admin]\n\tkey = {}\n".format(
ceph_admin.key()))
except IOError as err:
hookenv.log("IOError writing admin.keyring: {}".format(err))
# Enlist the ceph-admin key as a kubernetes secret
if ceph_admin.key():
encoded_key = base64.b64encode(ceph_admin.key().encode('utf-8'))
else:
# We didn't have a key, and cannot proceed. Do not set state and
# allow this method to re-execute
return
context = {'secret': encoded_key.decode('ascii')}
render('ceph-secret.yaml', '/tmp/ceph-secret.yaml', context)
try:
# At first glance this is deceptive. The apply stanza will create if
# it doesn't exist, otherwise it will update the entry, ensuring our
# ceph-secret is always reflective of what we have in /etc/ceph
# assuming we have invoked this anytime that file would change.
cmd = ['kubectl', 'apply', '-f', '/tmp/ceph-secret.yaml']
check_call(cmd)
os.remove('/tmp/ceph-secret.yaml')
except CalledProcessError:
# the enlistment in kubernetes failed, return and prepare for re-exec
return
# when complete, set a state relating to configuration of the storage
# backend that will allow other modules to hook into this and verify we
# have performed the necessary pre-req steps to interface with a ceph
# deployment.
set_state('ceph-storage.configured')
def create_addon(template, context):
'''Create an addon from a template'''
source = 'addons/' + template
target = '/etc/kubernetes/addons/' + template
render(source, target, context)
cmd = ['kubectl', 'apply', '-f', target]
check_call(cmd)
def delete_addon(template):
'''Delete an addon from a template'''
target = '/etc/kubernetes/addons/' + template
cmd = ['kubectl', 'delete', '-f', target]
call(cmd)
def get_node_count():
'''Return the number of Kubernetes nodes in the cluster'''
cmd = ['kubectl', 'get', 'nodes', '-o', 'name']
output = check_output(cmd)
node_count = len(output.splitlines())
return node_count
def arch():
'''Return the package architecture as a string. Raise an exception if the
architecture is not supported by kubernetes.'''
# Get the package architecture for this system.
architecture = check_output(['dpkg', '--print-architecture']).rstrip()
# Convert the binary result into a string.
architecture = architecture.decode('utf-8')
return architecture
def build_kubeconfig(server):
'''Gather the relevant data for Kubernetes configuration objects and create
a config object with that information.'''
# Get the options from the tls-client layer.
layer_options = layer.options('tls-client')
# Get all the paths to the tls information required for kubeconfig.
ca = layer_options.get('ca_certificate_path')
ca_exists = ca and os.path.isfile(ca)
key = layer_options.get('client_key_path')
key_exists = key and os.path.isfile(key)
cert = layer_options.get('client_certificate_path')
cert_exists = cert and os.path.isfile(cert)
# Do we have everything we need?
if ca_exists and key_exists and cert_exists:
# Cache last server string to know if we need to regenerate the config.
if not data_changed('kubeconfig.server', server):
return
# The final destination of the kubeconfig and kubectl.
destination_directory = '/home/ubuntu'
# Create an absolute path for the kubeconfig file.
kubeconfig_path = os.path.join(destination_directory, 'config')
# Create the kubeconfig on this system so users can access the cluster.
create_kubeconfig(kubeconfig_path, server, ca, key, cert)
# Copy the kubectl binary to the destination directory.
cmd = ['install', '-v', '-o', 'ubuntu', '-g', 'ubuntu',
'/usr/local/bin/kubectl', destination_directory]
check_call(cmd)
# Make the config file readable by the ubuntu user so juju scp works.
cmd = ['chown', 'ubuntu:ubuntu', kubeconfig_path]
check_call(cmd)
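# Illustrative check, not executed here: the ubuntu user should now be able to
# run 'kubectl --kubeconfig=/home/ubuntu/config get nodes' against the cluster.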
def create_kubeconfig(kubeconfig, server, ca, key, certificate, user='ubuntu',
context='juju-context', cluster='juju-cluster'):
'''Create a configuration for Kubernetes based on path using the supplied
arguments for values of the Kubernetes server, CA, key, certificate, user
context and cluster.'''
# Create the config file with the address of the master server.
cmd = 'kubectl config --kubeconfig={0} set-cluster {1} ' \
'--server={2} --certificate-authority={3} --embed-certs=true'
check_call(split(cmd.format(kubeconfig, cluster, server, ca)))
# Create the credentials using the client flags.
cmd = 'kubectl config --kubeconfig={0} set-credentials {1} ' \
'--client-key={2} --client-certificate={3} --embed-certs=true'
check_call(split(cmd.format(kubeconfig, user, key, certificate)))
# Create a default context with the cluster.
cmd = 'kubectl config --kubeconfig={0} set-context {1} ' \
'--cluster={2} --user={3}'
check_call(split(cmd.format(kubeconfig, context, cluster, user)))
# Make the config use this new context.
cmd = 'kubectl config --kubeconfig={0} use-context {1}'
check_call(split(cmd.format(kubeconfig, context)))
def get_dns_ip():
'''Get an IP address for the DNS server on the provided cidr.'''
# Remove the range from the cidr.
ip = service_cidr().split('/')[0]
# Take the last octet off the IP address and replace it with 10.
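# e.g. the default service-cidr of 10.152.183.0/24 yields 10.152.183.10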
return '.'.join(ip.split('.')[0:-1]) + '.10'
def get_kubernetes_service_ip():
'''Get the IP address for the kubernetes service based on the cidr.'''
# Remove the range from the cidr.
ip = service_cidr().split('/')[0]
# Remove the last octet and replace it with 1.
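# e.g. the default service-cidr of 10.152.183.0/24 yields 10.152.183.1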
return '.'.join(ip.split('.')[0:-1]) + '.1'
def handle_etcd_relation(reldata):
''' Save the client credentials and set appropriate daemon flags when
etcd declares itself as available'''
connection_string = reldata.get_connection_string()
# Define where the etcd tls files will be kept.
etcd_dir = '/etc/ssl/etcd'
# Create paths to the etcd client ca, key, and cert file locations.
ca = os.path.join(etcd_dir, 'client-ca.pem')
key = os.path.join(etcd_dir, 'client-key.pem')
cert = os.path.join(etcd_dir, 'client-cert.pem')
# Save the client credentials (in relation data) to the paths provided.
reldata.save_client_credentials(key, cert, ca)
api_opts = FlagManager('kube-apiserver')
# Never use stale data; always prefer what's coming in during context
# building. If it's stale, it's because what's in unitdata is stale.
data = api_opts.data
if data.get('--etcd-servers-strict') or data.get('--etcd-servers'):
api_opts.destroy('--etcd-cafile')
api_opts.destroy('--etcd-keyfile')
api_opts.destroy('--etcd-certfile')
api_opts.destroy('--etcd-servers', strict=True)
api_opts.destroy('--etcd-servers')
# Set the apiserver flags in the options manager
api_opts.add('--etcd-cafile', ca)
api_opts.add('--etcd-keyfile', key)
api_opts.add('--etcd-certfile', cert)
api_opts.add('--etcd-servers', connection_string, strict=True)
def render_files():
'''Use jinja templating to render the docker-compose.yml and master.json
file to contain the dynamic data for the configuration files.'''
context = {}
config = hookenv.config()
# Add the charm configuration data to the context.
context.update(config)
# Update the context with extra values: arch, and networking information
context.update({'arch': arch(),
'master_address': hookenv.unit_get('private-address'),
'public_address': hookenv.unit_get('public-address'),
'private_address': hookenv.unit_get('private-address')})
api_opts = FlagManager('kube-apiserver')
controller_opts = FlagManager('kube-controller-manager')
scheduler_opts = FlagManager('kube-scheduler')
# Get the tls paths from the layer data.
layer_options = layer.options('tls-client')
ca_cert_path = layer_options.get('ca_certificate_path')
server_cert_path = layer_options.get('server_certificate_path')
server_key_path = layer_options.get('server_key_path')
# Handle static options for now
api_opts.add('--min-request-timeout', '300')
api_opts.add('--v', '4')
api_opts.add('--client-ca-file', ca_cert_path)
api_opts.add('--tls-cert-file', server_cert_path)
api_opts.add('--tls-private-key-file', server_key_path)
scheduler_opts.add('--v', '2')
# Default to 3 minute resync. TODO: Make this configurable?
controller_opts.add('--min-resync-period', '3m')
controller_opts.add('--v', '2')
controller_opts.add('--root-ca-file', ca_cert_path)
context.update({'kube_apiserver_flags': api_opts.to_s(),
'kube_scheduler_flags': scheduler_opts.to_s(),
'kube_controller_manager_flags': controller_opts.to_s()})
# Render the configuration files that contains parameters for
# the apiserver, scheduler, and controller-manager
render_service('kube-apiserver', context)
render_service('kube-controller-manager', context)
render_service('kube-scheduler', context)
# explicitly render the generic defaults file
render('kube-defaults.defaults', '/etc/default/kube-defaults', context)
# when files change on disk, we need to inform systemd of the changes
call(['systemctl', 'daemon-reload'])
call(['systemctl', 'enable', 'kube-apiserver'])
call(['systemctl', 'enable', 'kube-controller-manager'])
call(['systemctl', 'enable', 'kube-scheduler'])
def render_service(service_name, context):
'''Render the systemd service by name.'''
unit_directory = '/lib/systemd/system'
source = '{0}.service'.format(service_name)
target = os.path.join(unit_directory, '{0}.service'.format(service_name))
render(source, target, context)
conf_directory = '/etc/default'
source = '{0}.defaults'.format(service_name)
target = os.path.join(conf_directory, service_name)
render(source, target, context)
def setup_basic_auth(username='admin', password='admin', user='admin'):
'''Create the htaccess file used for basic authentication.'''
srv_kubernetes = '/srv/kubernetes'
if not os.path.isdir(srv_kubernetes):
os.makedirs(srv_kubernetes)
htaccess = os.path.join(srv_kubernetes, 'basic_auth.csv')
with open(htaccess, 'w') as stream:
stream.write('{0},{1},{2}'.format(username, password, user))
def setup_tokens(token, username, user):
'''Create a token file for kubernetes authentication.'''
srv_kubernetes = '/srv/kubernetes'
if not os.path.isdir(srv_kubernetes):
os.makedirs(srv_kubernetes)
known_tokens = os.path.join(srv_kubernetes, 'known_tokens.csv')
if not token:
alpha = string.ascii_letters + string.digits
token = ''.join(random.SystemRandom().choice(alpha) for _ in range(32))
with open(known_tokens, 'w') as stream:
stream.write('{0},{1},{2}'.format(token, username, user))

View File

@ -0,0 +1,16 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

View File

@ -0,0 +1,160 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import os
import shutil
import subprocess
import tempfile
import logging
from contextlib import contextmanager
import charmtools.utils
from charmtools.build.tactics import Tactic
description = """
Update addon manifests for the charm.
This will clone the kubernetes repo and place the addons in
<charm>/templates/addons.
Can be run with no arguments and from any folder.
"""
log = logging.getLogger(__name__)
def clean_addon_dir(addon_dir):
""" Remove and recreate the addons folder """
log.debug("Cleaning " + addon_dir)
shutil.rmtree(addon_dir, ignore_errors=True)
os.makedirs(addon_dir)
@contextmanager
def kubernetes_repo():
""" Shallow clone kubernetes repo and clean up when we are done """
repo = "https://github.com/kubernetes/kubernetes.git"
path = tempfile.mkdtemp(prefix="kubernetes")
try:
log.info("Cloning " + repo)
cmd = ["git", "clone", "--depth", "1", repo, path]
process = subprocess.Popen(cmd, stderr=subprocess.PIPE)
stderr = process.communicate()[1].rstrip()
process.wait()
if process.returncode != 0:
log.error(stderr)
raise Exception("clone failed: exit code %d" % process.returncode)
log.debug(stderr)
yield path
finally:
shutil.rmtree(path)
def add_addon(source, dest):
""" Add an addon manifest from the given source.
Any occurrences of 'amd64' are replaced with '{{ arch }}' so the charm can
fill it in during deployment. """
if os.path.isdir(dest):
dest = os.path.join(dest, os.path.basename(source))
log.debug("Copying: %s -> %s" % (source, dest))
with open(source, "r") as f:
content = f.read()
content = content.replace("amd64", "{{ arch }}")
with open(dest, "w") as f:
f.write(content)
def update_addons(dest):
""" Update addons. This will clean the addons folder and add new manifests
from upstream. """
with kubernetes_repo() as repo:
log.info("Copying addons to charm")
clean_addon_dir(dest)
add_addon(repo + "/cluster/addons/dashboard/dashboard-controller.yaml",
dest)
add_addon(repo + "/cluster/addons/dashboard/dashboard-service.yaml",
dest)
add_addon(repo + "/cluster/addons/dns/kubedns-controller.yaml.in",
dest + "/kubedns-controller.yaml")
add_addon(repo + "/cluster/addons/dns/kubedns-svc.yaml.in",
dest + "/kubedns-svc.yaml")
influxdb = "/cluster/addons/cluster-monitoring/influxdb"
add_addon(repo + influxdb + "/grafana-service.yaml", dest)
add_addon(repo + influxdb + "/heapster-controller.yaml", dest)
add_addon(repo + influxdb + "/heapster-service.yaml", dest)
add_addon(repo + influxdb + "/influxdb-grafana-controller.yaml", dest)
add_addon(repo + influxdb + "/influxdb-service.yaml", dest)
# Entry points
class UpdateAddonsTactic(Tactic):
""" This tactic is used by charm-tools to dynamically populate the
template/addons folder at `charm build` time. """
@classmethod
def trigger(cls, entity, target=None, layer=None, next_config=None):
""" Determines which files the tactic should apply to. We only want
this tactic to trigger once, so let's use the templates/ folder
"""
relpath = entity.relpath(layer.directory) if layer else entity
return relpath == "templates"
@property
def dest(self):
""" The destination we are writing to. This isn't a Tactic thing,
it's just a helper for UpdateAddonsTactic """
return self.target / "templates" / "addons"
def __call__(self):
""" When the tactic is called, update addons and put them directly in
our build destination """
update_addons(self.dest)
def sign(self):
""" Return signatures for the charm build manifest. We need to do this
because the addon template files were added dynamically """
sigs = {}
for file in os.listdir(self.dest):
path = self.dest / file
relpath = path.relpath(self.target.directory)
sigs[relpath] = (
self.current.url,
"dynamic",
charmtools.utils.sign(path)
)
return sigs
def parse_args():
""" Parse args. This is solely done for the usage output with -h """
parser = argparse.ArgumentParser(description=description)
parser.parse_args()
def main():
""" Update addons into the layer's templates/addons folder """
parse_args()
dest = os.path.abspath(os.path.join(os.path.dirname(__file__),
"../templates/addons"))
update_addons(dest)
if __name__ == "__main__":
main()
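This tactic runs automatically at `charm build` time (it is wired up via the `tactics:` entry in the layer.yaml shown earlier), but it can also be invoked by hand to refresh the addon templates; the path below is an assumption based on the `tactics.update_addons` module reference:

```shell
./tactics/update_addons.py
```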

View File

@ -0,0 +1,7 @@
apiVersion: v1
kind: Secret
metadata:
name: ceph-secret
type: Opaque
data:
key: {{ secret }}

View File

@ -0,0 +1,18 @@
[global]
auth cluster required = {{ auth_supported }}
auth service required = {{ auth_supported }}
auth client required = {{ auth_supported }}
keyring = /etc/ceph/$cluster.$name.keyring
mon host = {{ mon_hosts }}
fsid = {{ fsid }}
log to syslog = {{ use_syslog }}
err to syslog = {{ use_syslog }}
clog to syslog = {{ use_syslog }}
mon cluster log to syslog = {{ use_syslog }}
debug mon = {{ loglevel }}/5
debug osd = {{ loglevel }}/5
[client]
log file = /var/log/ceph.log

View File

@ -0,0 +1,17 @@
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=127.0.0.1"
# The port on the local server to listen on.
KUBE_API_PORT="--insecure-port=8080"
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS="{{ kube_apiserver_flags }}"

View File

@ -0,0 +1,22 @@
[Unit]
Description=Kubernetes API Server
Documentation=http://kubernetes.io/docs/admin/kube-apiserver/
After=network.target
[Service]
EnvironmentFile=-/etc/default/kube-defaults
EnvironmentFile=-/etc/default/kube-apiserver
ExecStart=/usr/local/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,8 @@
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="{{ kube_controller_manager_flags }}"

View File

@ -0,0 +1,18 @@
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
[Service]
EnvironmentFile=-/etc/default/kube-defaults
EnvironmentFile=-/etc/default/kube-controller-manager
ExecStart=/usr/local/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,22 @@
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://127.0.0.1:8080"

View File

@ -0,0 +1,7 @@
###
# kubernetes scheduler config
# default config should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="{{ kube_scheduler_flags }}"

View File

@ -0,0 +1,17 @@
[Unit]
Description=Kubernetes Scheduler Plugin
Documentation=http://kubernetes.io/docs/admin/multiple-schedulers/
[Service]
EnvironmentFile=-/etc/default/kube-defaults
EnvironmentFile=-/etc/default/kube-scheduler
ExecStart=/usr/local/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,26 @@
# JUJU Internal Template used to enlist RBD volumes from the
# `create-rbd-pv` action. This is a temporary file on disk to enlist resources.
apiVersion: v1
kind: PersistentVolume
metadata:
name: {{ RBD_NAME }}
annotations:
volume.beta.kubernetes.io/storage-class: "rbd"
spec:
capacity:
storage: {{ RBD_SIZE }}M
accessModes:
- {{ PV_MODE }}
rbd:
monitors:
{% for host in monitors %}
- {{ host }}
{% endfor %}
pool: rbd
image: {{ RBD_NAME }}
user: admin
secretRef:
name: ceph-secret
fsType: {{ RBD_FS }}
readOnly: false
# persistentVolumeReclaimPolicy: Recycle

View File

@ -0,0 +1,25 @@
# Kubernetes Worker
### Building from the layer
You can clone the kubernetes-worker layer with git and build locally if you
have the charm package/snap installed.
```shell
# Install the snap
sudo snap install charm --channel=edge
# Set the build environment
export JUJU_REPOSITORY=$HOME
# Clone the layer and build it to our JUJU_REPOSITORY
git clone https://github.com/juju-solutions/kubernetes
cd kubernetes/cluster/juju/layers/kubernetes-worker
charm build -r
```
### Contributing
TBD

View File

@ -0,0 +1,52 @@
# Kubernetes Worker
## Usage
This charm deploys a container runtime and stands up the Kubernetes
worker applications: kubelet and kube-proxy.
In order for this charm to be useful, it should be deployed with its companion
charm [kubernetes-master](https://jujucharms.com/u/containers/kubernetes-master)
and linked with an SDN-Plugin.
This charm has also been bundled up for your convenience so you can skip the
above steps, and deploy it with a single command:
```shell
juju deploy canonical-kubernetes
```
For more information about [Canonical Kubernetes](https://jujucharms.com/canonical-kubernetes)
consult the bundle `README.md` file.
## Scale out
To add additional compute capacity to your Kubernetes workers, scale the
application with `juju add-unit`. New units will automatically join any related
kubernetes-master and report themselves as ready once deployment is complete.
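For example, to add two more workers to an existing deployment (the unit count is illustrative):

```shell
juju add-unit kubernetes-worker -n 2
```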
## Operational actions
The kubernetes-worker charm supports the following Operational Actions:
#### Pause
Pausing the workload enables administrators to both [drain](http://kubernetes.io/docs/user-guide/kubectl/kubectl_drain/) and [cordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_cordon/)
a unit for maintenance.
#### Resume
Resuming the workload will [uncordon](http://kubernetes.io/docs/user-guide/kubectl/kubectl_uncordon/) a paused unit. Workloads will automatically migrate unless otherwise directed via their application declaration.
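For example (the unit name is illustrative):

```shell
juju run-action kubernetes-worker/0 pause
# ... perform maintenance on the unit ...
juju run-action kubernetes-worker/0 resume
```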
## Known Limitations
Kubernetes workers currently only support 'phaux' HA scenarios. Even when configured with an HA cluster string, they will only ever contact the first unit in the cluster map. To enable a proper HA story, kubernetes-worker units are encouraged to proxy through a [kubeapi-load-balancer](https://jujucharms.com/kubeapi-load-balancer)
application. This enables an HA deployment without the need to
re-render configuration and disrupt the worker services.
External access to pods must be performed through a [Kubernetes
Ingress Resource](http://kubernetes.io/docs/user-guide/ingress/). More
information on ingress is available in the linked upstream documentation.

View File

@ -0,0 +1,17 @@
pause:
description: |
Cordon the unit, draining all active workloads.
resume:
description: |
Uncordon the unit, enabling workload scheduling.
microbot:
description: Launch microbot containers
params:
replicas:
type: integer
default: 3
description: Number of microbots to launch in Kubernetes.
delete:
type: boolean
default: False
description: Removes the microbot deployment, service, and ingress if True.

View File

@ -0,0 +1,71 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
from charmhelpers.core.hookenv import action_get
from charmhelpers.core.hookenv import action_set
from charmhelpers.core.hookenv import unit_public_ip
from charms.templating.jinja2 import render
from subprocess import call
context = {}
context['replicas'] = action_get('replicas')
context['delete'] = action_get('delete')
context['public_address'] = unit_public_ip()
if not context['replicas']:
context['replicas'] = 3
# Declare a kubectl template when invoking kubectl
kubectl = ['kubectl', '--kubeconfig=/srv/kubernetes/config']
# Remove deployment if requested
if context['delete']:
service_del = kubectl + ['delete', 'svc', 'microbot']
service_response = call(service_del)
deploy_del = kubectl + ['delete', 'deployment', 'microbot']
deploy_response = call(deploy_del)
ingress_del = kubectl + ['delete', 'ing', 'microbot-ingress']
ingress_response = call(ingress_del)
if ingress_response != 0:
action_set({'microbot-ing':
'Failed removal of microbot ingress resource.'})
if deploy_response != 0:
action_set({'microbot-deployment':
'Failed removal of microbot deployment resource.'})
if service_response != 0:
action_set({'microbot-service':
'Failed removal of microbot service resource.'})
sys.exit(0)
# Creation request
render('microbot-example.yaml', '/etc/kubernetes/addons/microbot.yaml',
context)
create_command = kubectl + ['create', '-f',
'/etc/kubernetes/addons/microbot.yaml']
create_response = call(create_command)
if create_response == 0:
action_set({'address':
'microbot.{}.xip.io'.format(context['public_address'])})
else:
action_set({'microbot-create': 'Failed microbot creation.'})
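For reference, the action above can be exercised roughly as follows (the unit name is illustrative; the parameters are those declared in the actions.yaml shown earlier):

```shell
juju run-action kubernetes-worker/0 microbot replicas=5
juju run-action kubernetes-worker/0 microbot delete=true
```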

View File

@ -0,0 +1,7 @@
#!/bin/bash
set -ex
kubectl --kubeconfig=/srv/kubernetes/config cordon $(hostname)
kubectl --kubeconfig=/srv/kubernetes/config drain $(hostname) --force
status-set 'waiting' 'Kubernetes unit paused'

View File

@ -0,0 +1,6 @@
#!/bin/bash
set -ex
kubectl --kubeconfig=/srv/kubernetes/config uncordon $(hostname)
status-set 'active' 'Kubernetes unit resumed'

View File

@ -0,0 +1,13 @@
options:
ingress:
type: boolean
default: true
description: |
Deploy the default http backend and ingress controller to handle
ingress requests.
labels:
type: string
default: ""
description: |
Labels can be used to organize and to select subsets of nodes in the
cluster. Declare node labels in key=value format, separated by spaces.
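As an illustration, labels might be applied after deployment like so (the label keys and values are examples only):

```shell
juju config kubernetes-worker labels="disk=ssd zone=us-east-1a"
```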

View File

@ -0,0 +1,13 @@
Copyright 2016 The Kubernetes Authors.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

View File

@ -0,0 +1,8 @@
#!/bin/sh
set -ux
# We had to bump inotify limits once in the past, which is why this oddly
# specific script lives here in kubernetes-worker.
sysctl fs.inotify > $DEBUG_SCRIPT_DIR/sysctl-limits
ls -l /proc/*/fd/* | grep inotify > $DEBUG_SCRIPT_DIR/inotify-instances

View File

@ -0,0 +1,13 @@
#!/bin/sh
set -ux
alias kubectl="kubectl --kubeconfig=/srv/kubernetes/config"
kubectl cluster-info > $DEBUG_SCRIPT_DIR/cluster-info
kubectl cluster-info dump > $DEBUG_SCRIPT_DIR/cluster-info-dump
for obj in pods svc ingress secrets pv pvc rc; do
kubectl describe $obj --all-namespaces > $DEBUG_SCRIPT_DIR/describe-$obj
done
for obj in nodes; do
kubectl describe $obj > $DEBUG_SCRIPT_DIR/describe-$obj
done

View File

@ -0,0 +1,13 @@
#!/bin/sh
set -ux
for service in kubelet kube-proxy; do
systemctl status $service > $DEBUG_SCRIPT_DIR/$service-systemctl-status
journalctl -u $service > $DEBUG_SCRIPT_DIR/$service-journal
done
mkdir -p $DEBUG_SCRIPT_DIR/etc-default
cp -v /etc/default/kube* $DEBUG_SCRIPT_DIR/etc-default
mkdir -p $DEBUG_SCRIPT_DIR/lib-systemd-system
cp -v /lib/systemd/system/kube* $DEBUG_SCRIPT_DIR/lib-systemd-system

View File

@ -0,0 +1,2 @@
# This stubs out charm-pre-install coming from layer-docker as a workaround for
# offline installs until https://github.com/juju/charm-tools/issues/301 is fixed.

File diff suppressed because one or more lines are too long (new image file, 26 KiB).

View File

@ -0,0 +1,21 @@
repo: https://github.com/kubernetes/kubernetes.git
includes:
- 'layer:basic'
- 'layer:docker'
- 'layer:tls-client'
- 'layer:debug'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
options:
basic:
packages:
- 'nfs-common'
- 'ceph-common'
- 'socat'
tls-client:
ca_certificate_path: '/srv/kubernetes/ca.crt'
server_certificate_path: '/srv/kubernetes/server.crt'
server_key_path: '/srv/kubernetes/server.key'
client_certificate_path: '/srv/kubernetes/client.crt'
client_key_path: '/srv/kubernetes/client.key'

View File

@ -0,0 +1,135 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from charmhelpers.core import unitdata
class FlagManager:
'''
FlagManager - A Python class for managing the flags to pass to an
application without remembering what's been set previously.
This is a blind class assuming the operator knows what they are doing.
Each instance of this class should be initialized with the intended
application to manage flags. Flags are then appended to a data-structure
and cached in unitdata for later recall.
The underlying data provider is backed by a SQLite database on each unit,
tracking the dictionary, provided by the 'charmhelpers' python package.
Summary:
opts = FlagManager('docker')
opts.add('bip', '192.168.22.2')
opts.to_s()
'''
def __init__(self, daemon, opts_path=None):
self.db = unitdata.kv()
self.daemon = daemon
if not self.db.get(daemon):
self.data = {}
else:
self.data = self.db.get(daemon)
def __save(self):
self.db.set(self.daemon, self.data)
def add(self, key, value, strict=False):
'''
Adds data to the map of values for the DockerOpts file.
Supports single values, or "multiopt variables". If you
have a flag only option, like --tlsverify, set the value
to None. To preserve the exact value, pass strict
eg:
opts.add('label', 'foo')
opts.add('label', 'foo, bar, baz')
opts.add('flagonly', None)
opts.add('cluster-store', 'consul://a:4001,b:4001,c:4001/swarm',
strict=True)
'''
if strict:
self.data['{}-strict'.format(key)] = value
self.__save()
return
if value:
values = [x.strip() for x in value.split(',')]
# handle updates
if key in self.data and self.data[key] is not None:
item_data = self.data[key]
for c in values:
c = c.strip()
if c not in item_data:
item_data.append(c)
self.data[key] = item_data
else:
# handle new
self.data[key] = values
else:
# handle flagonly
self.data[key] = None
self.__save()
def remove(self, key, value):
'''
Remove a flag value from the DockerOpts manager
Assuming the data is currently {'foo': ['bar', 'baz']}
d.remove('foo', 'bar')
> {'foo': ['baz']}
:params key:
:params value:
'''
self.data[key].remove(value)
self.__save()
def destroy(self, key, strict=False):
'''
Destructively remove all values and key from the FlagManager
Assuming the data is currently {'foo': ['bar', 'baz']}
d.destroy('foo')
>{}
:params key:
:params strict:
'''
try:
if strict:
self.data.pop('{}-strict'.format(key))
else:
self.data.pop(key)
except KeyError:
pass
def to_s(self):
'''
Render the flags to a single string, prepared for the Docker
Defaults file. Typically in /etc/default/docker
d.to_s()
> "--foo=bar --foo=baz"
'''
flags = []
for key in self.data:
if self.data[key] is None:
# handle flagonly
flags.append("{}".format(key))
elif '-strict' in key:
# handle strict values, and do it in 2 steps.
# If we rstrip -strict it strips a trailing s
proper_key = key.rstrip('strict').rstrip('-')
flags.append("{}={}".format(proper_key, self.data[key]))
else:
# handle multiopt and typical flags
for item in self.data[key]:
flags.append("{}={}".format(key, item))
return ' '.join(flags)

View File

@ -0,0 +1,30 @@
name: kubernetes-worker
summary: The workload bearing units of a kubernetes cluster
maintainers:
- Charles Butler <charles.butler@canonical.com>
- Matthew Bruzek <matthew.bruzek@canonical.com>
description: |
Kubernetes is an open-source platform for deploying, scaling, and operations
of application containers across a cluster of hosts. Kubernetes is portable
in that it works with public, private, and hybrid clouds. Extensible through
a pluggable infrastructure. Self healing in that it will automatically
restart and place containers on healthy nodes if a node ever goes away.
tags:
- misc
series:
- xenial
subordinate: false
requires:
kube-api-endpoint:
interface: http
kube-dns:
interface: kube-dns
provides:
cni:
interface: kubernetes-cni
scope: container
resources:
kubernetes:
type: file
filename: kubernetes.tar.gz
description: "An archive of kubernetes binaries for the worker."

View File

@ -0,0 +1,485 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from shlex import split
from subprocess import call, check_call, check_output
from subprocess import CalledProcessError
from socket import gethostname
from charms import layer
from charms.reactive import hook
from charms.reactive import set_state, remove_state
from charms.reactive import when, when_not
from charms.reactive.helpers import data_changed
from charms.kubernetes.flagmanager import FlagManager
from charms.templating.jinja2 import render
from charmhelpers.core import hookenv
from charmhelpers.core.host import service_stop
kubeconfig_path = '/srv/kubernetes/config'
@hook('upgrade-charm')
def remove_installed_state():
remove_state('kubernetes-worker.components.installed')
@hook('stop')
def shutdown():
''' When this unit is destroyed:
- delete the current node
- stop the kubelet service
- stop the kube-proxy service
- remove the 'kubernetes-worker.components.installed' state
'''
kubectl('delete', 'node', gethostname())
service_stop('kubelet')
service_stop('kube-proxy')
remove_state('kubernetes-worker.components.installed')
@when('docker.available')
@when_not('kubernetes-worker.components.installed')
def install_kubernetes_components():
''' Unpack the kubernetes worker binaries '''
charm_dir = os.getenv('CHARM_DIR')
# Get the resource via resource_get
try:
archive = hookenv.resource_get('kubernetes')
except Exception:
message = 'Error fetching the kubernetes resource.'
hookenv.log(message)
hookenv.status_set('blocked', message)
return
if not archive:
hookenv.log('Missing kubernetes resource.')
hookenv.status_set('blocked', 'Missing kubernetes resource.')
return
# Handle null resource publication; we check whether the filesize is < 1 MB
filesize = os.stat(archive).st_size
if filesize < 1000000:
hookenv.status_set('blocked', 'Incomplete kubernetes resource.')
return
hookenv.status_set('maintenance', 'Unpacking kubernetes resource.')
unpack_path = '{}/files/kubernetes'.format(charm_dir)
os.makedirs(unpack_path, exist_ok=True)
cmd = ['tar', 'xfvz', archive, '-C', unpack_path]
hookenv.log(cmd)
check_call(cmd)
apps = [
{'name': 'kubelet', 'path': '/usr/local/bin'},
{'name': 'kube-proxy', 'path': '/usr/local/bin'},
{'name': 'kubectl', 'path': '/usr/local/bin'},
{'name': 'loopback', 'path': '/opt/cni/bin'}
]
for app in apps:
unpacked = '{}/{}'.format(unpack_path, app['name'])
app_path = os.path.join(app['path'], app['name'])
install = ['install', '-v', '-D', unpacked, app_path]
hookenv.log(install)
check_call(install)
set_state('kubernetes-worker.components.installed')
@when('kubernetes-worker.components.installed')
def set_app_version():
''' Declare the application version to juju '''
cmd = ['kubelet', '--version']
version = check_output(cmd)
hookenv.application_version_set(version.split(b' v')[-1].rstrip())
@when('kubernetes-worker.components.installed')
@when_not('kube-dns.available')
def notify_user_transient_status():
''' Notify the user that we are in a transient state and the application
is still converging, potentially waiting on a remote service or sitting in a
detached wait loop. '''
# During deployment the worker has to start kubelet without cluster dns
# configured. This happens when the unit is the first one online in a service
# pool and is waiting to self-host the dns pod; it will later configure itself
# to query the dns service declared in the kube-system namespace.
hookenv.status_set('waiting', 'Waiting for cluster DNS.')
@when('kubernetes-worker.components.installed', 'kube-dns.available')
def charm_status(kube_dns):
'''Update the status message with the current status of kubelet.'''
update_kubelet_status()
def update_kubelet_status():
''' There are different states that the kubelet can be in, where we are
waiting for dns, waiting for cluster turnup, or ready to serve
applications.'''
if _systemctl_is_active('kubelet'):
hookenv.status_set('active', 'Kubernetes worker running.')
else:
# if kubelet is not running, we're waiting on something else to converge
hookenv.status_set('waiting', 'Waiting for kubelet to start.')
@when('kubernetes-worker.components.installed', 'kube-api-endpoint.available',
'tls_client.ca.saved', 'tls_client.client.certificate.saved',
'tls_client.client.key.saved', 'kube-dns.available', 'cni.available')
def start_worker(kube_api, kube_dns, cni):
''' Start kubelet using the provided API and DNS info.'''
servers = get_kube_api_servers(kube_api)
# Note that the DNS server doesn't necessarily exist at this point. We know
# what its IP will eventually be, though, so we can go ahead and configure
# kubelet with that info. This ensures that early pods are configured with
# the correct DNS even though the server isn't ready yet.
dns = kube_dns.details()
if (data_changed('kube-api-servers', servers) or
data_changed('kube-dns', dns)):
# Initialize a FlagManager object to add flags to unit data.
opts = FlagManager('kubelet')
# Append the DNS flags + data to the FlagManager object.
opts.add('--cluster-dns', dns['sdn-ip']) # FIXME: sdn-ip needs a rename
opts.add('--cluster-domain', dns['domain'])
create_config(servers[0])
render_init_scripts(servers)
set_state('kubernetes-worker.config.created')
restart_unit_services()
update_kubelet_status()
@when('cni.connected')
@when_not('cni.configured')
def configure_cni(cni):
''' Set worker configuration on the CNI relation. This lets the CNI
subordinate know that we're the worker so it can respond accordingly. '''
cni.set_config(is_master=False, kubeconfig_path=kubeconfig_path)
@when('config.changed.ingress')
def toggle_ingress_state():
    ''' Ingress is a toggled option. When it changes, remove the
    ingress.available state so the ingress handlers re-evaluate it. '''
remove_state('kubernetes-worker.ingress.available')
@when('docker.sdn.configured')
def sdn_changed():
'''The Software Defined Network changed on the container so restart the
kubernetes services.'''
restart_unit_services()
update_kubelet_status()
remove_state('docker.sdn.configured')
@when('kubernetes-worker.config.created')
@when_not('kubernetes-worker.ingress.available')
def render_and_launch_ingress():
''' If configuration has ingress RC enabled, launch the ingress load
balancer and default http backend. Otherwise attempt deletion. '''
config = hookenv.config()
# If ingress is enabled, launch the ingress controller
if config.get('ingress'):
launch_default_ingress_controller()
else:
hookenv.log('Deleting the http backend and ingress.')
kubectl_manifest('delete',
'/etc/kubernetes/addons/default-http-backend.yaml')
kubectl_manifest('delete',
'/etc/kubernetes/addons/ingress-replication-controller.yaml') # noqa
hookenv.close_port(80)
hookenv.close_port(443)
@when('kubernetes-worker.ingress.available')
def scale_ingress_controller():
''' Scale the number of ingress controller replicas to match the number of
nodes. '''
try:
output = kubectl('get', 'nodes', '-o', 'name')
count = len(output.splitlines())
kubectl('scale', '--replicas=%d' % count, 'rc/nginx-ingress-controller') # noqa
except CalledProcessError:
hookenv.log('Failed to scale ingress controllers. Will attempt again next update.') # noqa
@when('config.changed.labels', 'kubernetes-worker.config.created')
def apply_node_labels():
''' Parse the labels configuration option and apply the labels to the node.
'''
# scrub and try to format an array from the configuration option
config = hookenv.config()
user_labels = _parse_labels(config.get('labels'))
    # For diffing's sake, iterate over the previous label set.
    if config.previous('labels'):
        previous_labels = _parse_labels(config.previous('labels'))
        hookenv.log('previous labels: {}'.format(previous_labels))
    else:
        # This handles the first run, when there is no previous labels config.
        previous_labels = _parse_labels("")
# Calculate label removal
for label in previous_labels:
if label not in user_labels:
hookenv.log('Deleting node label {}'.format(label))
try:
_apply_node_label(label, delete=True)
except CalledProcessError:
hookenv.log('Error removing node label {}'.format(label))
# if the label is in user labels we do nothing here, it will get set
# during the atomic update below.
# Atomically set a label
for label in user_labels:
_apply_node_label(label)
def arch():
    '''Return the package architecture as a string.'''
# Get the package architecture for this system.
architecture = check_output(['dpkg', '--print-architecture']).rstrip()
# Convert the binary result into a string.
architecture = architecture.decode('utf-8')
return architecture
def create_config(server):
'''Create a kubernetes configuration for the worker unit.'''
# Get the options from the tls-client layer.
layer_options = layer.options('tls-client')
# Get all the paths to the tls information required for kubeconfig.
ca = layer_options.get('ca_certificate_path')
key = layer_options.get('client_key_path')
cert = layer_options.get('client_certificate_path')
# Create kubernetes configuration in the default location for ubuntu.
create_kubeconfig('/home/ubuntu/.kube/config', server, ca, key, cert,
user='ubuntu')
    # Make the config dir readable by the ubuntu user so juju scp works.
cmd = ['chown', '-R', 'ubuntu:ubuntu', '/home/ubuntu/.kube']
check_call(cmd)
# Create kubernetes configuration in the default location for root.
create_kubeconfig('/root/.kube/config', server, ca, key, cert,
user='root')
# Create kubernetes configuration for kubelet, and kube-proxy services.
create_kubeconfig(kubeconfig_path, server, ca, key, cert,
user='kubelet')
def render_init_scripts(api_servers):
    ''' We are related either to an api server directly or to a load balancer
    fronting the apiserver. Render the config files and prepare for launch. '''
context = {}
context.update(hookenv.config())
# Get the tls paths from the layer data.
layer_options = layer.options('tls-client')
context['ca_cert_path'] = layer_options.get('ca_certificate_path')
context['client_cert_path'] = layer_options.get('client_certificate_path')
context['client_key_path'] = layer_options.get('client_key_path')
unit_name = os.getenv('JUJU_UNIT_NAME').replace('/', '-')
context.update({'kube_api_endpoint': ','.join(api_servers),
'JUJU_UNIT_NAME': unit_name})
# Create a flag manager for kubelet to render kubelet_opts.
kubelet_opts = FlagManager('kubelet')
# Declare to kubelet it needs to read from kubeconfig
kubelet_opts.add('--require-kubeconfig', None)
kubelet_opts.add('--kubeconfig', kubeconfig_path)
kubelet_opts.add('--network-plugin', 'cni')
context['kubelet_opts'] = kubelet_opts.to_s()
# Create a flag manager for kube-proxy to render kube_proxy_opts.
kube_proxy_opts = FlagManager('kube-proxy')
kube_proxy_opts.add('--kubeconfig', kubeconfig_path)
context['kube_proxy_opts'] = kube_proxy_opts.to_s()
os.makedirs('/var/lib/kubelet', exist_ok=True)
    # Set the user for the kubelet config rendering.
    context['user'] = 'kubelet'
    # Set the user for the kube-proxy config rendering (this overrides the
    # value above; every template below is rendered with this same context).
    context['user'] = 'kube-proxy'
render('kube-default', '/etc/default/kube-default', context)
render('kubelet.defaults', '/etc/default/kubelet', context)
render('kube-proxy.defaults', '/etc/default/kube-proxy', context)
render('kube-proxy.service', '/lib/systemd/system/kube-proxy.service',
context)
render('kubelet.service', '/lib/systemd/system/kubelet.service', context)
def create_kubeconfig(kubeconfig, server, ca, key, certificate, user='ubuntu',
context='juju-context', cluster='juju-cluster'):
    '''Create a configuration for Kubernetes at the given path using the
    supplied arguments for the Kubernetes server, CA, key, certificate, user,
    context and cluster.'''
# Create the config file with the address of the master server.
cmd = 'kubectl config --kubeconfig={0} set-cluster {1} ' \
'--server={2} --certificate-authority={3} --embed-certs=true'
check_call(split(cmd.format(kubeconfig, cluster, server, ca)))
# Create the credentials using the client flags.
cmd = 'kubectl config --kubeconfig={0} set-credentials {1} ' \
'--client-key={2} --client-certificate={3} --embed-certs=true'
check_call(split(cmd.format(kubeconfig, user, key, certificate)))
# Create a default context with the cluster.
cmd = 'kubectl config --kubeconfig={0} set-context {1} ' \
'--cluster={2} --user={3}'
check_call(split(cmd.format(kubeconfig, context, cluster, user)))
# Make the config use this new context.
cmd = 'kubectl config --kubeconfig={0} use-context {1}'
check_call(split(cmd.format(kubeconfig, context)))
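# Illustration only (the values below are hypothetical): a call such as
#   create_kubeconfig('/root/.kube/config', 'https://10.0.0.5:443',
#                     ca, key, cert, user='root')
# runs, in order:
#   kubectl config --kubeconfig=/root/.kube/config set-cluster juju-cluster ...
#   kubectl config --kubeconfig=/root/.kube/config set-credentials root ...
#   kubectl config --kubeconfig=/root/.kube/config set-context juju-context ...
#   kubectl config --kubeconfig=/root/.kube/config use-context juju-context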
def launch_default_ingress_controller():
''' Launch the Kubernetes ingress controller & default backend (404) '''
context = {}
context['arch'] = arch()
addon_path = '/etc/kubernetes/addons/{}'
manifest = addon_path.format('default-http-backend.yaml')
# Render the default http backend (404) replicationcontroller manifest
render('default-http-backend.yaml', manifest, context)
hookenv.log('Creating the default http backend.')
kubectl_manifest('create', manifest)
# Render the ingress replication controller manifest
manifest = addon_path.format('ingress-replication-controller.yaml')
render('ingress-replication-controller.yaml', manifest, context)
if kubectl_manifest('create', manifest):
hookenv.log('Creating the ingress replication controller.')
set_state('kubernetes-worker.ingress.available')
hookenv.open_port(80)
hookenv.open_port(443)
else:
hookenv.log('Failed to create ingress controller. Will attempt again next update.') # noqa
hookenv.close_port(80)
hookenv.close_port(443)
def restart_unit_services():
'''Reload the systemd configuration and restart the services.'''
# Tell systemd to reload configuration from disk for all daemons.
call(['systemctl', 'daemon-reload'])
    # Ensure the services are available after rebooting.
call(['systemctl', 'enable', 'kubelet.service'])
call(['systemctl', 'enable', 'kube-proxy.service'])
# Restart the services.
hookenv.log('Restarting kubelet, and kube-proxy.')
call(['systemctl', 'restart', 'kubelet'])
call(['systemctl', 'restart', 'kube-proxy'])
def get_kube_api_servers(kube_api):
    '''Return the list of kubernetes api server addresses (with ports) for
    this relationship.'''
hosts = []
# Iterate over every service from the relation object.
for service in kube_api.services():
for unit in service['hosts']:
hosts.append('https://{0}:{1}'.format(unit['hostname'],
unit['port']))
return hosts
def kubectl(*args):
''' Run a kubectl cli command with a config file. Returns stdout and throws
an error if the command fails. '''
command = ['kubectl', '--kubeconfig=' + kubeconfig_path] + list(args)
hookenv.log('Executing {}'.format(command))
return check_output(command)
def kubectl_success(*args):
    ''' Runs kubectl with the given args. Returns True if successful, False if
    not. '''
try:
kubectl(*args)
return True
except CalledProcessError:
return False
def kubectl_manifest(operation, manifest):
''' Wrap the kubectl creation command when using filepath resources
:param operation - one of get, create, delete, replace
:param manifest - filepath to the manifest
'''
# Deletions are a special case
if operation == 'delete':
# Ensure we immediately remove requested resources with --now
return kubectl_success(operation, '-f', manifest, '--now')
else:
# Guard against an error re-creating the same manifest multiple times
if operation == 'create':
            # If we already have the definition, it's probably safe to assume
            # creation succeeded.
if kubectl_success('get', '-f', manifest):
hookenv.log('Skipping definition for {}'.format(manifest))
return True
# Execute the requested command that did not match any of the special
# cases above
return kubectl_success(operation, '-f', manifest)
def _systemctl_is_active(application):
''' Poll systemctl to determine if the application is running '''
cmd = ['systemctl', 'is-active', application]
try:
raw = check_output(cmd)
return b'active' in raw
except Exception:
return False
def _apply_node_label(label, delete=False):
''' Invoke kubectl to apply node label changes '''
hostname = gethostname()
# TODO: Make this part of the kubectl calls instead of a special string
cmd_base = 'kubectl --kubeconfig={0} label node {1} {2}'
if delete is True:
label_key = label.split('=')[0]
cmd = cmd_base.format(kubeconfig_path, hostname, label_key)
cmd = cmd + '-'
else:
cmd = cmd_base.format(kubeconfig_path, hostname, label)
check_call(split(cmd))
def _parse_labels(labels):
    ''' Parse labels from a space-separated string of key=value pairs. '''
label_array = labels.split(' ')
sanitized_labels = []
for item in label_array:
if '=' in item:
sanitized_labels.append(item)
else:
hookenv.log('Skipping malformed option: {}'.format(item))
return sanitized_labels
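# Illustration only (example label values are invented): given the
# space-separated key=value format accepted by _parse_labels above, a config
# change from 'mylabel=enabled gpu=on' to 'mylabel=enabled' would yield
#   previous_labels = ['mylabel=enabled', 'gpu=on']
#   user_labels     = ['mylabel=enabled']
# so apply_node_labels() deletes 'gpu' from the node and then re-applies
# 'mylabel=enabled' in the atomic update loop.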

View File

@ -0,0 +1,43 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: default-http-backend
spec:
replicas: 1
selector:
app: default-http-backend
template:
metadata:
labels:
app: default-http-backend
spec:
terminationGracePeriodSeconds: 60
containers:
- name: default-http-backend
        # Any image is permissible as long as:
# 1. It serves a 404 page at /
# 2. It serves 200 on a /healthz endpoint
image: gcr.io/google_containers/defaultbackend:1.0
livenessProbe:
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: default-http-backend
labels:
app: default-http-backend
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: default-http-backend

View File

@ -0,0 +1,47 @@
apiVersion: v1
kind: ReplicationController
metadata:
name: nginx-ingress-controller
labels:
k8s-app: nginx-ingress-lb
spec:
replicas: 1
selector:
k8s-app: nginx-ingress-lb
template:
metadata:
labels:
k8s-app: nginx-ingress-lb
name: nginx-ingress-lb
spec:
terminationGracePeriodSeconds: 60
# hostPort doesn't work with CNI, so we have to use hostNetwork instead
# see https://github.com/kubernetes/kubernetes/issues/23920
hostNetwork: true
containers:
- image: gcr.io/google_containers/nginx-ingress-controller:0.8.3
name: nginx-ingress-lb
imagePullPolicy: Always
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 30
timeoutSeconds: 5
# use downward API
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- containerPort: 80
- containerPort: 443
args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend

View File

@ -0,0 +1,22 @@
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master={{ kube_api_endpoint }}"

View File

@ -0,0 +1 @@
KUBE_PROXY_ARGS="{{ kube_proxy_opts }}"

View File

@ -0,0 +1,19 @@
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=http://kubernetes.io/docs/admin/kube-proxy/
After=network.target
[Service]
EnvironmentFile=-/etc/default/kube-default
EnvironmentFile=-/etc/default/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,14 @@
# kubernetes kubelet (node) config
# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"
# The port for the info server to serve on
KUBELET_PORT="--port=10250"
# You may leave this blank to use the actual hostname. If you override this,
# reachability problems become your own issue.
# KUBELET_HOSTNAME="--hostname-override={{ JUJU_UNIT_NAME }}"
# Add your own!
KUBELET_ARGS="{{ kubelet_opts }}"

View File

@ -0,0 +1,22 @@
[Unit]
Description=Kubernetes Kubelet Server
Documentation=http://kubernetes.io/docs/admin/kubelet/
After=docker.service
Requires=docker.service
[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/default/kube-default
EnvironmentFile=-/etc/default/kubelet
ExecStart=/usr/local/bin/kubelet \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBELET_ADDRESS \
$KUBELET_PORT \
$KUBELET_HOSTNAME \
$KUBE_ALLOW_PRIV \
$KUBELET_ARGS
Restart=on-failure
[Install]
WantedBy=multi-user.target

View File

@ -0,0 +1,63 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
creationTimestamp: null
labels:
app: microbot
name: microbot
spec:
replicas: {{ replicas }}
selector:
matchLabels:
app: microbot
strategy: {}
template:
metadata:
creationTimestamp: null
labels:
app: microbot
spec:
containers:
- image: dontrebootme/microbot:v1
imagePullPolicy: ""
name: microbot
ports:
- containerPort: 80
livenessProbe:
httpGet:
path: /
port: 80
initialDelaySeconds: 5
timeoutSeconds: 30
resources: {}
restartPolicy: Always
serviceAccountName: ""
status: {}
---
apiVersion: v1
kind: Service
metadata:
name: microbot
labels:
app: microbot
spec:
ports:
- port: 80
protocol: TCP
targetPort: 80
selector:
app: microbot
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: microbot-ingress
spec:
rules:
- host: microbot.{{ public_address }}.xip.io
http:
paths:
- path: /
backend:
serviceName: microbot
servicePort: 80

View File

@ -0,0 +1 @@
charms.templating.jinja2>=0.0.1,<2.0.0

View File

@ -1,112 +0,0 @@
# kubernetes
[Kubernetes](https://github.com/kubernetes/kubernetes) is an open
source system for managing application containers across multiple hosts.
This version of Kubernetes uses [Docker](http://www.docker.io/) to package,
instantiate and run containerized applications.
This charm is an encapsulation of the
[Running Kubernetes locally via
Docker](http://kubernetes.io/docs/getting-started-guides/docker)
document. The released hyperkube image (`gcr.io/google_containers/hyperkube`)
is currently pulled from a [Google owned container
registry](https://cloud.google.com/container-registry/). For this charm to
work it will need access to the registry to `docker pull` the images.
This charm was built from other charm layers using the reactive framework. The
`layer:docker` is the base layer. For more information please read [Getting
Started Developing charms](https://jujucharms.com/docs/devel/developer-getting-started)
# Deployment
The kubernetes charms require a relation to a distributed key value store
(ETCD) which Kubernetes uses for persistent storage of all of its REST API
objects.
```
juju deploy etcd
juju deploy kubernetes
juju add-relation kubernetes etcd
```
# Configuration
For your convenience this charm supports some configuration options to set up
a Kubernetes cluster that works in your environment:
**version**: Set the version of the Kubernetes containers to deploy. The
version string must be in the following format "v#.#.#" where the numbers
match with the
[kubernetes release labels](https://github.com/kubernetes/kubernetes/releases)
of the [kubernetes github project](https://github.com/kubernetes/kubernetes).
Changing the version causes all the Kubernetes containers to be restarted
(see the validation sketch after this list).
**cidr**: Set the IP range for the Kubernetes cluster. eg: 10.1.0.0/16
**dns_domain**: Set the DNS domain for the Kubernetes cluster.
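For illustration, here is a minimal sketch (not part of the charm; the helper
name is invented) of how a version string in the "v#.#.#" format described
above could be validated:
```
import re

# Hypothetical helper, not part of the charm: checks the "v#.#.#" format
# described for the version option above.
VERSION_RE = re.compile(r'^v\d+\.\d+\.\d+$')

def is_valid_version(version):
    '''Return True if the string looks like a "v#.#.#" release label.'''
    return bool(VERSION_RE.match(version))

print(is_valid_version('v1.2.3'))   # True
print(is_valid_version('1.2.3'))    # False, the leading "v" is required
```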
# Storage
The kubernetes charm is built to handle multiple storage devices if the cloud
provider works with
[Juju storage](https://jujucharms.com/docs/devel/charms-storage).
The 16.04 (xenial) release introduced [ZFS](https://en.wikipedia.org/wiki/ZFS)
to Ubuntu. The xenial charm can use ZFS with a raidz pool. A raidz pool
distributes parity along with the data (similar to a raid5 pool) and can suffer
the loss of one drive while still retaining data. The raidz pool requires a
minimum of 3 disks, but will accept more if they are provided.
You can add storage to the kubernetes charm in increments of 3 or greater:
```
juju add-storage kubernetes/0 disk-pool=ebs,3,1G
```
**Note**: Due to a limitation of raidz you can not add individual disks to an
existing pool. Should you need to expand the storage of the raidz pool, the
additional add-storage commands must add the same number of disks as the
original command. At this point the charm will have two raidz pools added
together, each of which can tolerate the loss of one disk.
The storage code handles the addition of devices to the charm and, when it
receives three disks, creates a raidz pool that is mounted at the /srv/kubernetes
directory by default. If you need the storage in another location you must
change the `mount-point` value in layer.yaml before the charm is deployed.
To avoid data loss you must attach the storage before making the connection to
the etcd cluster.
## State Events
While this charm is meant to be a top layer, it can be used to build other
solutions. This charm sets or removes states in the reactive framework to
which other layers can react appropriately. The states that other layers would
be interested in are as follows (a small handler sketch follows this list):
**kubelet.available** - The hyperkube container has been run with the kubelet
service and configuration that started the apiserver, controller-manager and
scheduler containers.
**proxy.available** - The hyperkube container has been run with the proxy
service and configuration that handles Kubernetes networking.
**kubectl.package.created** - Indicates the availability of the `kubectl`
application along with the configuration needed to contact the cluster
securely. You will need to download the `/home/ubuntu/kubectl_package.tar.gz`
from the kubernetes leader unit to your machine so you can control the cluster.
**kubedns.available** - Indicates when the Domain Name System (DNS) for the
cluster is operational.
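As a rough illustration, a layer built on top of this charm could react to
these states with a handler like the following. This is a minimal sketch using
the reactive decorators; the addon name, state, and log message are invented:
```
from charms.reactive import when, when_not, set_state
from charmhelpers.core import hookenv

@when('kubelet.available', 'kubedns.available')
@when_not('example-addon.deployed')
def deploy_example_addon():
    '''Example only: runs once kubelet and the cluster DNS are reported up.'''
    hookenv.log('kubelet and kubedns are available; deploying example addon.')
    # kubectl create -f <addon manifest> would go here.
    set_state('example-addon.deployed')
```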
# Kubernetes information
- [Kubernetes github project](https://github.com/kubernetes/kubernetes)
- [Kubernetes issue tracker](https://github.com/kubernetes/kubernetes/issues)
- [Kubernetes Documentation](http://kubernetes.io/docs/)
- [Kubernetes releases](https://github.com/kubernetes/kubernetes/releases)
# Contact
* Charm Author: Matthew Bruzek &lt;Matthew.Bruzek@canonical.com&gt;
* Charm Contributor: Charles Butler &lt;Charles.Butler@canonical.com&gt;
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/cluster/juju/layers/kubernetes/README.md?pixel)]()

View File

@ -1,2 +0,0 @@
guestbook-example:
description: Launch the guestbook example in your k8s cluster

View File

@ -1,35 +0,0 @@
#!/bin/bash
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Launch the Guestbook example in Kubernetes. This will use the pod and service
# definitions from `files/guestbook-example/*.yaml` to launch a leader/follower
# redis cluster, with a web front-end to collect user data and store it in redis.
# This example app can easily scale across multiple nodes, and exercises the
# networking, pod creation/scale, service definition, and replica controller of
# kubernetes.
#
# Lifted from github.com/kubernetes/kubernetes/examples/guestbook-example
set -e
if [ ! -d files/guestbook-example ]; then
mkdir -p files/guestbook-example
curl -o $CHARM_DIR/files/guestbook-example/guestbook-all-in-one.yaml https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/guestbook/all-in-one/guestbook-all-in-one.yaml
fi
kubectl create -f files/guestbook-example/guestbook-all-in-one.yaml

View File

@ -1,21 +0,0 @@
options:
version:
type: string
default: "v1.2.3"
description: |
The version of Kubernetes to use in this charm. The version is inserted
in the configuration files that specify the hyperkube container to use
when starting a Kubernetes cluster. Changing this value will restart the
Kubernetes cluster.
cidr:
type: string
default: 10.1.0.0/16
description: |
Network CIDR to assign to Kubernetes service groups. This must not
overlap with any IP ranges assigned to nodes for pods.
dns_domain:
type: string
default: cluster.local
description: |
The domain name to use for the Kubernetes cluster by the
kubedns service.

File diff suppressed because one or more lines are too long

Before: image, 76 KiB (removed)

View File

@ -1,6 +0,0 @@
includes: ['layer:leadership', 'layer:docker', 'layer:flannel', 'layer:storage', 'layer:tls', 'interface:etcd']
repo: https://github.com/mbruzek/layer-k8s.git
options:
storage:
storage-driver: zfs
mount-point: '/srv/kubernetes'

View File

@ -1,485 +0,0 @@
#!/usr/bin/env python
# Copyright 2015 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
from shlex import split
from subprocess import call
from subprocess import check_call
from subprocess import check_output
from charms.docker.compose import Compose
from charms.reactive import hook
from charms.reactive import remove_state
from charms.reactive import set_state
from charms.reactive import when
from charms.reactive import when_any
from charms.reactive import when_not
from charmhelpers.core import hookenv
from charmhelpers.core.hookenv import is_leader
from charmhelpers.core.hookenv import leader_set
from charmhelpers.core.hookenv import leader_get
from charmhelpers.core.templating import render
from charmhelpers.core import unitdata
from charmhelpers.core.host import chdir
import tlslib
@when('leadership.is_leader')
def i_am_leader():
'''The leader is the Kubernetes master node. '''
leader_set({'master-address': hookenv.unit_private_ip()})
@when_not('tls.client.authorization.required')
def configure_easrsa():
'''Require the tls layer to generate certificates with "clientAuth". '''
# By default easyrsa generates the server certificates without clientAuth
# Setting this state before easyrsa is configured ensures the tls layer is
# configured to generate certificates with client authentication.
set_state('tls.client.authorization.required')
domain = hookenv.config().get('dns_domain')
cidr = hookenv.config().get('cidr')
sdn_ip = get_sdn_ip(cidr)
# Create extra sans that the tls layer will add to the server cert.
extra_sans = [
sdn_ip,
'kubernetes',
'kubernetes.{0}'.format(domain),
'kubernetes.default',
'kubernetes.default.svc',
'kubernetes.default.svc.{0}'.format(domain)
]
unitdata.kv().set('extra_sans', extra_sans)
@hook('config-changed')
def config_changed():
'''If the configuration values change, remove the available states.'''
config = hookenv.config()
if any(config.changed(key) for key in config.keys()):
hookenv.log('The configuration options have changed.')
# Use the Compose class that encapsulates the docker-compose commands.
compose = Compose('files/kubernetes')
if is_leader():
hookenv.log('Removing master container and kubelet.available state.') # noqa
            # Stop and remove the Kubernetes master and proxy containers.
compose.kill('master')
compose.rm('master')
compose.kill('proxy')
compose.rm('proxy')
# Remove the state so the code can react to restarting kubelet.
remove_state('kubelet.available')
else:
hookenv.log('Removing kubelet container and kubelet.available state.') # noqa
# Stop and remove the Kubernetes kubelet container.
compose.kill('kubelet')
compose.rm('kubelet')
# Remove the state so the code can react to restarting kubelet.
remove_state('kubelet.available')
hookenv.log('Removing proxy container and proxy.available state.')
# Stop and remove the Kubernetes proxy container.
compose.kill('proxy')
compose.rm('proxy')
# Remove the state so the code can react to restarting proxy.
remove_state('proxy.available')
if config.changed('version'):
hookenv.log('The version changed removing the states so the new '
'version of kubectl will be downloaded.')
remove_state('kubectl.downloaded')
remove_state('kubeconfig.created')
@when('tls.server.certificate available')
@when_not('k8s.server.certificate available')
def server_cert():
'''When the server certificate is available, get the server certificate
from the charm unitdata and write it to the kubernetes directory. '''
server_cert = '/srv/kubernetes/server.crt'
server_key = '/srv/kubernetes/server.key'
# Save the server certificate from unit data to the destination.
tlslib.server_cert(None, server_cert, user='ubuntu', group='ubuntu')
# Copy the server key from the default location to the destination.
tlslib.server_key(None, server_key, user='ubuntu', group='ubuntu')
set_state('k8s.server.certificate available')
@when('tls.client.certificate available')
@when_not('k8s.client.certficate available')
def client_cert():
'''When the client certificate is available, get the client certificate
from the charm unitdata and write it to the kubernetes directory. '''
client_cert = '/srv/kubernetes/client.crt'
client_key = '/srv/kubernetes/client.key'
# Save the client certificate from the default location to the destination.
tlslib.client_cert(None, client_cert, user='ubuntu', group='ubuntu')
# Copy the client key from the default location to the destination.
tlslib.client_key(None, client_key, user='ubuntu', group='ubuntu')
set_state('k8s.client.certficate available')
@when('tls.certificate.authority available')
@when_not('k8s.certificate.authority available')
def ca():
'''When the Certificate Authority is available, copy the CA from the
default location to the /srv/kubernetes directory. '''
ca_crt = '/srv/kubernetes/ca.crt'
# Copy the Certificate Authority to the destination directory.
tlslib.ca(None, ca_crt, user='ubuntu', group='ubuntu')
set_state('k8s.certificate.authority available')
@when('kubelet.available', 'leadership.is_leader')
@when_not('kubedns.available', 'skydns.available')
def launch_dns():
'''Create the "kube-system" namespace, the kubedns resource controller,
and the kubedns service. '''
hookenv.log('Creating kubernetes kubedns on the master node.')
# Only launch and track this state on the leader.
# Launching duplicate kubeDNS rc will raise an error
# Run a command to check if the apiserver is responding.
return_code = call(split('kubectl cluster-info'))
if return_code != 0:
hookenv.log('kubectl command failed, waiting for apiserver to start.')
remove_state('kubedns.available')
# Return without setting kubedns.available so this method will retry.
return
# Check for the "kube-system" namespace.
return_code = call(split('kubectl get namespace kube-system'))
if return_code != 0:
# Create the kube-system namespace that is used by the kubedns files.
check_call(split('kubectl create namespace kube-system'))
# Check for the kubedns replication controller.
return_code = call(split('kubectl get -f files/manifests/kubedns-controller.yaml'))
if return_code != 0:
# Create the kubedns replication controller from the rendered file.
check_call(split('kubectl create -f files/manifests/kubedns-controller.yaml'))
# Check for the kubedns service.
return_code = call(split('kubectl get -f files/manifests/kubedns-svc.yaml'))
if return_code != 0:
# Create the kubedns service from the rendered file.
check_call(split('kubectl create -f files/manifests/kubedns-svc.yaml'))
set_state('kubedns.available')
@when('skydns.available', 'leadership.is_leader')
def convert_to_kubedns():
'''Delete the skydns containers to make way for the kubedns containers.'''
    hookenv.log('Deleting the old skydns deployment.')
# Delete the skydns replication controller.
return_code = call(split('kubectl delete rc kube-dns-v11'))
# Delete the skydns service.
return_code = call(split('kubectl delete svc kube-dns'))
remove_state('skydns.available')
@when('docker.available')
@when_not('etcd.available')
def relation_message():
    '''Take over messaging to let the user know a relation to the ETCD
    cluster is required before going any further. '''
status_set('waiting', 'Waiting for relation to ETCD')
@when('kubeconfig.created')
@when('etcd.available')
@when_not('kubelet.available', 'proxy.available')
def start_kubelet(etcd):
    '''Run the hyperkube container that starts the kubernetes services.
    When this unit is the leader, run the master services (apiserver,
    controller, scheduler, proxy) using master.json from the rendered
    manifest directory. When a follower, start the node services (kubelet
    and proxy). '''
render_files(etcd)
# Use the Compose class that encapsulates the docker-compose commands.
compose = Compose('files/kubernetes')
status_set('maintenance', 'Starting the Kubernetes services.')
if is_leader():
compose.up('master')
compose.up('proxy')
set_state('kubelet.available')
# Open the secure port for api-server.
hookenv.open_port(6443)
else:
# Start the Kubernetes kubelet container using docker-compose.
compose.up('kubelet')
set_state('kubelet.available')
# Start the Kubernetes proxy container using docker-compose.
compose.up('proxy')
set_state('proxy.available')
status_set('active', 'Kubernetes services started')
@when('docker.available')
@when_not('kubectl.downloaded')
def download_kubectl():
'''Download the kubectl binary to test and interact with the cluster.'''
status_set('maintenance', 'Downloading the kubectl binary')
version = hookenv.config()['version']
cmd = 'wget -nv -O /usr/local/bin/kubectl https://storage.googleapis.com' \
'/kubernetes-release/release/{0}/bin/linux/{1}/kubectl'
cmd = cmd.format(version, arch())
    hookenv.log('Downloading kubectl: {0}'.format(cmd))
check_call(split(cmd))
cmd = 'chmod +x /usr/local/bin/kubectl'
check_call(split(cmd))
set_state('kubectl.downloaded')
@when('kubectl.downloaded', 'leadership.is_leader', 'k8s.certificate.authority available', 'k8s.client.certficate available') # noqa
@when_not('kubeconfig.created')
def master_kubeconfig():
'''Create the kubernetes configuration for the master unit. The master
should create a package with the client credentials so the user can
interact securely with the apiserver.'''
hookenv.log('Creating Kubernetes configuration for master node.')
directory = '/srv/kubernetes'
ca = '/srv/kubernetes/ca.crt'
key = '/srv/kubernetes/client.key'
cert = '/srv/kubernetes/client.crt'
# Get the public address of the apiserver so users can access the master.
server = 'https://{0}:{1}'.format(hookenv.unit_public_ip(), '6443')
# Create the client kubeconfig so users can access the master node.
create_kubeconfig(directory, server, ca, key, cert)
# Copy the kubectl binary to this directory.
cmd = 'cp -v /usr/local/bin/kubectl {0}'.format(directory)
check_call(split(cmd))
# Use a context manager to run the tar command in a specific directory.
with chdir(directory):
# Create a package with kubectl and the files to use it externally.
cmd = 'tar -cvzf /home/ubuntu/kubectl_package.tar.gz ca.crt ' \
'client.key client.crt kubectl kubeconfig'
check_call(split(cmd))
# This sets up the client workspace consistently on the leader and nodes.
node_kubeconfig()
set_state('kubeconfig.created')
@when('kubectl.downloaded', 'k8s.certificate.authority available', 'k8s.server.certificate available') # noqa
@when_not('kubeconfig.created', 'leadership.is_leader')
def node_kubeconfig():
'''Create the kubernetes configuration (kubeconfig) for this unit.
    The nodes will create a kubeconfig with the server credentials so
    the services can interact securely with the apiserver.'''
hookenv.log('Creating Kubernetes configuration for worker node.')
directory = '/var/lib/kubelet'
ca = '/srv/kubernetes/ca.crt'
cert = '/srv/kubernetes/server.crt'
key = '/srv/kubernetes/server.key'
# Get the private address of the apiserver for communication between units.
server = 'https://{0}:{1}'.format(leader_get('master-address'), '6443')
# Create the kubeconfig for the other services.
kubeconfig = create_kubeconfig(directory, server, ca, key, cert)
# Install the kubeconfig in the root user's home directory.
install_kubeconfig(kubeconfig, '/root/.kube', 'root')
    # Install the kubeconfig in the ubuntu user's home directory.
install_kubeconfig(kubeconfig, '/home/ubuntu/.kube', 'ubuntu')
set_state('kubeconfig.created')
@when('proxy.available')
@when_not('cadvisor.available')
def start_cadvisor():
'''Start the cAdvisor container that gives metrics about the other
application containers on this system. '''
compose = Compose('files/kubernetes')
compose.up('cadvisor')
hookenv.open_port(8088)
status_set('active', 'cadvisor running on port 8088')
set_state('cadvisor.available')
@when('kubelet.available', 'kubeconfig.created')
@when_any('proxy.available', 'cadvisor.available', 'kubedns.available')
def final_message():
'''Issue some final messages when the services are started. '''
# TODO: Run a simple/quick health checks before issuing this message.
status_set('active', 'Kubernetes running.')
def gather_sdn_data():
'''Get the Software Defined Network (SDN) information and return it as a
dictionary. '''
sdn_data = {}
# The dictionary named 'pillar' is a construct of the k8s template files.
pillar = {}
# SDN Providers pass data via the unitdata.kv module
db = unitdata.kv()
# Ideally the DNS address should come from the sdn cidr.
subnet = db.get('sdn_subnet')
if subnet:
# Generate the DNS ip address on the SDN cidr (this is desired).
pillar['dns_server'] = get_dns_ip(subnet)
else:
        # There is no SDN cidr; fall back to the kubernetes config cidr option.
pillar['dns_server'] = get_dns_ip(hookenv.config().get('cidr'))
# The pillar['dns_domain'] value is used in the kubedns-controller.yaml
pillar['dns_domain'] = hookenv.config().get('dns_domain')
# Use a 'pillar' dictionary so we can reuse the upstream kubedns templates.
sdn_data['pillar'] = pillar
return sdn_data
def install_kubeconfig(kubeconfig, directory, user):
    '''Copy the kubeconfig file to a new directory, creating directories
    if necessary. '''
# The file and directory must be owned by the correct user.
chown = 'chown {0}:{0} {1}'
if not os.path.isdir(directory):
os.makedirs(directory)
        # Change the ownership of the directory to the right user.
check_call(split(chown.format(user, directory)))
# kubectl looks for a file named "config" in the ~/.kube directory.
config = os.path.join(directory, 'config')
# Copy the kubeconfig file to the directory renaming it to "config".
cmd = 'cp -v {0} {1}'.format(kubeconfig, config)
check_call(split(cmd))
# Change the ownership of the config file to the right user.
check_call(split(chown.format(user, config)))
def create_kubeconfig(directory, server, ca, key, cert, user='ubuntu'):
'''Create a configuration for kubernetes in a specific directory using
the supplied arguments, return the path to the file.'''
context = 'default-context'
cluster_name = 'kubernetes'
# Ensure the destination directory exists.
if not os.path.isdir(directory):
os.makedirs(directory)
# The configuration file should be in this directory named kubeconfig.
kubeconfig = os.path.join(directory, 'kubeconfig')
# Create the config file with the address of the master server.
cmd = 'kubectl config set-cluster --kubeconfig={0} {1} ' \
'--server={2} --certificate-authority={3}'
check_call(split(cmd.format(kubeconfig, cluster_name, server, ca)))
# Create the credentials using the client flags.
cmd = 'kubectl config set-credentials --kubeconfig={0} {1} ' \
'--client-key={2} --client-certificate={3}'
check_call(split(cmd.format(kubeconfig, user, key, cert)))
# Create a default context with the cluster.
cmd = 'kubectl config set-context --kubeconfig={0} {1} ' \
'--cluster={2} --user={3}'
check_call(split(cmd.format(kubeconfig, context, cluster_name, user)))
# Make the config use this new context.
cmd = 'kubectl config use-context --kubeconfig={0} {1}'
check_call(split(cmd.format(kubeconfig, context)))
hookenv.log('kubectl configuration created at {0}.'.format(kubeconfig))
return kubeconfig
def get_dns_ip(cidr):
'''Get an IP address for the DNS server on the provided cidr.'''
# Remove the range from the cidr.
ip = cidr.split('/')[0]
# Take the last octet off the IP address and replace it with 10.
return '.'.join(ip.split('.')[0:-1]) + '.10'
def get_sdn_ip(cidr):
'''Get the IP address for the SDN gateway based on the provided cidr.'''
# Remove the range from the cidr.
ip = cidr.split('/')[0]
# Remove the last octet and replace it with 1.
return '.'.join(ip.split('.')[0:-1]) + '.1'
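# For illustration (not part of the original charm code): with the charm's
# default cidr of 10.1.0.0/16, the helpers above derive the addresses like so:
#
#   get_dns_ip('10.1.0.0/16')  ->  '10.1.0.10'   (cluster DNS service IP)
#   get_sdn_ip('10.1.0.0/16')  ->  '10.1.0.1'    (SDN gateway IP)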
def render_files(reldata=None):
'''Use jinja templating to render the docker-compose.yml and master.json
file to contain the dynamic data for the configuration files.'''
context = {}
# Load the context data with SDN data.
context.update(gather_sdn_data())
# Add the charm configuration data to the context.
context.update(hookenv.config())
if reldata:
connection_string = reldata.get_connection_string()
# Define where the etcd tls files will be kept.
etcd_dir = '/etc/ssl/etcd'
# Create paths to the etcd client ca, key, and cert file locations.
ca = os.path.join(etcd_dir, 'client-ca.pem')
key = os.path.join(etcd_dir, 'client-key.pem')
cert = os.path.join(etcd_dir, 'client-cert.pem')
# Save the client credentials (in relation data) to the paths provided.
reldata.save_client_credentials(key, cert, ca)
# Update the context so the template has the etcd information.
context.update({'etcd_dir': etcd_dir,
'connection_string': connection_string,
'etcd_ca': ca,
'etcd_key': key,
'etcd_cert': cert})
charm_dir = hookenv.charm_dir()
rendered_kube_dir = os.path.join(charm_dir, 'files/kubernetes')
if not os.path.exists(rendered_kube_dir):
os.makedirs(rendered_kube_dir)
rendered_manifest_dir = os.path.join(charm_dir, 'files/manifests')
if not os.path.exists(rendered_manifest_dir):
os.makedirs(rendered_manifest_dir)
# Update the context with extra values, arch, manifest dir, and private IP.
context.update({'arch': arch(),
'master_address': leader_get('master-address'),
'manifest_directory': rendered_manifest_dir,
'public_address': hookenv.unit_get('public-address'),
'private_address': hookenv.unit_get('private-address')})
# Adapted from: http://kubernetes.io/docs/getting-started-guides/docker/
target = os.path.join(rendered_kube_dir, 'docker-compose.yml')
# Render the files/kubernetes/docker-compose.yml file that contains the
# definition for kubelet and proxy.
render('docker-compose.yml', target, context)
if is_leader():
# Source: https://github.com/kubernetes/...master/cluster/images/hyperkube # noqa
target = os.path.join(rendered_manifest_dir, 'master.json')
# Render the files/manifests/master.json that contains parameters for
# the apiserver, controller, and controller-manager
render('master.json', target, context)
# Source: ...cluster/addons/dns/kubedns-svc.yaml.in
target = os.path.join(rendered_manifest_dir, 'kubedns-svc.yaml')
# Render files/kubernetes/kubedns-svc.yaml for the DNS service.
render('kubedns-svc.yaml', target, context)
# Source: ...cluster/addons/dns/kubedns-controller.yaml.in
target = os.path.join(rendered_manifest_dir, 'kubedns-controller.yaml')
# Render files/kubernetes/kubedns-controller.yaml for the DNS pod.
render('kubedns-controller.yaml', target, context)
def status_set(level, message):
'''Output status message with leadership information.'''
if is_leader():
message = '{0} (master) '.format(message)
hookenv.status_set(level, message)
def arch():
'''Return the package architecture as a string. Raise an exception if the
architecture is not supported by kubernetes.'''
# Get the package architecture for this system.
architecture = check_output(['dpkg', '--print-architecture']).rstrip()
# Convert the binary result into a string.
architecture = architecture.decode('utf-8')
# Validate the architecture is supported by kubernetes.
if architecture not in ['amd64', 'arm', 'arm64', 'ppc64le', 's390x']:
message = 'Unsupported machine architecture: {0}'.format(architecture)
status_set('blocked', message)
raise Exception(message)
return architecture

View File

@ -1,134 +0,0 @@
# http://kubernetes.io/docs/getting-started-guides/docker/
# # Start kubelet and then start master components as pods
# docker run \
# --net=host \
# --pid=host \
# --privileged \
# --restart=on-failure \
# -d \
# -v /sys:/sys:ro \
# -v /var/run:/var/run:rw \
# -v /:/rootfs:ro \
# -v /var/lib/docker/:/var/lib/docker:rw \
# -v /var/lib/kubelet/:/var/lib/kubelet:rw \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube kubelet \
# --address=0.0.0.0 \
# --allow-privileged=true \
# --enable-server \
# --api-servers=http://localhost:8080 \
# --config=/etc/kubernetes/manifests-multi \
# --cluster-dns=10.0.0.10 \
# --cluster-domain=cluster.local \
# --containerized \
# --v=2
master:
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
net: host
pid: host
privileged: true
restart: always
volumes:
- /:/rootfs:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:rw
- /var/lib/kubelet/:/var/lib/kubelet:rw
- /var/run:/var/run:rw
- {{ manifest_directory }}:/etc/kubernetes/manifests:rw
- /srv/kubernetes:/srv/kubernetes
command: |
/hyperkube kubelet
--address="0.0.0.0"
--allow-privileged=true
--api-servers=http://localhost:8080
--cluster-dns={{ pillar['dns_server'] }}
--cluster-domain={{ pillar['dns_domain'] }}
--config=/etc/kubernetes/manifests
--containerized
--hostname-override="{{ private_address }}"
--tls-cert-file="/srv/kubernetes/server.crt"
--tls-private-key-file="/srv/kubernetes/server.key"
--v=2
# Start kubelet without the config option and only kubelet starts.
# kubelet gets the tls credentials from /var/lib/kubelet/kubeconfig
# docker run \
# --net=host \
# --pid=host \
# --privileged \
# --restart=on-failure \
# -d \
# -v /sys:/sys:ro \
# -v /var/run:/var/run:rw \
# -v /:/rootfs:ro \
# -v /var/lib/docker/:/var/lib/docker:rw \
# -v /var/lib/kubelet/:/var/lib/kubelet:rw \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube kubelet \
# --allow-privileged=true \
# --api-servers=http://${MASTER_IP}:8080 \
# --address=0.0.0.0 \
# --enable-server \
# --cluster-dns=10.0.0.10 \
# --cluster-domain=cluster.local \
# --containerized \
# --v=2
kubelet:
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
net: host
pid: host
privileged: true
restart: always
volumes:
- /:/rootfs:ro
- /sys:/sys:ro
- /var/lib/docker/:/var/lib/docker:rw
- /var/lib/kubelet/:/var/lib/kubelet:rw
- /var/run:/var/run:rw
- /srv/kubernetes:/srv/kubernetes
command: |
/hyperkube kubelet
--address="0.0.0.0"
--allow-privileged=true
--api-servers=https://{{ master_address }}:6443
--cluster-dns={{ pillar['dns_server'] }}
--cluster-domain={{ pillar['dns_domain'] }}
--containerized
--hostname-override="{{ private_address }}"
--v=2
# docker run \
# -d \
# --net=host \
# --privileged \
# --restart=on-failure \
# gcr.io/google_containers/hyperkube-${ARCH}:v${K8S_VERSION} \
# /hyperkube proxy \
# --master=http://${MASTER_IP}:8080 \
# --v=2
proxy:
net: host
privileged: true
restart: always
image: gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}
command: |
/hyperkube proxy
--master=http://{{ master_address }}:8080
--v=2
# cAdvisor (Container Advisor) provides container users an understanding of
# the resource usage and performance characteristics of their running containers.
cadvisor:
image: google/cadvisor:latest
volumes:
- /:/rootfs:ro
- /var/run:/var/run:rw
- /sys:/sys:ro
- /var/lib/docker:/var/lib/docker:ro
ports:
- 8088:8080
restart: always

View File

@ -1,146 +0,0 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/addons/dns/kubedns-controller.yaml.base
# Warning: This is a file generated from the base underscore template file: kubedns-controller.yaml.base
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
spec:
# replicas: not specified here:
  # 1. So that the Addon Manager does not reconcile this replicas parameter.
# 2. Default is 1.
# 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
strategy:
rollingUpdate:
maxSurge: 10%
maxUnavailable: 0
selector:
matchLabels:
k8s-app: kube-dns
template:
metadata:
labels:
k8s-app: kube-dns
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ''
scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
spec:
containers:
- name: kubedns
image: gcr.io/google_containers/k8s-dns-kube-dns-{{ arch }}:1.11.0
resources:
# TODO: Set memory limits when we've profiled the container for large
# clusters, then set request = limit to keep this container in
# guaranteed class. Currently, this container falls into the
# "burstable" category so the kubelet doesn't backoff from restarting it.
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
livenessProbe:
httpGet:
path: /healthcheck/kubedns
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
readinessProbe:
httpGet:
path: /readiness
port: 8081
scheme: HTTP
# we poll on pod startup for the Kubernetes master service and
# only setup the /readiness HTTP server once that's available.
initialDelaySeconds: 3
timeoutSeconds: 5
args:
# command = "/kube-dns"
- --domain={{ pillar['dns_domain'] }}.
- --dns-port=10053
- --config-map=kube-dns
- --v=2
- --kube_master_url=http://{{ private_address }}:8080
{{ pillar['federations_domain_map'] }}
env:
- name: PROMETHEUS_PORT
value: "10055"
ports:
- containerPort: 10053
name: dns-local
protocol: UDP
- containerPort: 10053
name: dns-tcp-local
protocol: TCP
- containerPort: 10055
name: metrics
protocol: TCP
- name: dnsmasq
image: gcr.io/google_containers/kube-dnsmasq-{{ arch }}:1.4
livenessProbe:
httpGet:
path: /healthcheck/dnsmasq
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --cache-size=1000
- --no-resolv
- --server=127.0.0.1#10053
- --log-facility=-
ports:
- containerPort: 53
name: dns
protocol: UDP
- containerPort: 53
name: dns-tcp
protocol: TCP
- name: sidecar
image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.11.0
livenessProbe:
httpGet:
path: /metrics
port: 10054
scheme: HTTP
initialDelaySeconds: 60
timeoutSeconds: 5
successThreshold: 1
failureThreshold: 5
args:
- --v=2
- --logtostderr
- --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.{{ pillar['dns_domain'] }},5,A
- --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.{{ pillar['dns_domain'] }},5,A
ports:
- containerPort: 10054
name: metrics
protocol: TCP
resources:
requests:
memory: 20Mi
cpu: 10m
dnsPolicy: Default # Don't use cluster DNS.

View File

@ -1,38 +0,0 @@
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# This file should be kept in sync with cluster/addons/dns/kubedns-svc.yaml.base
# Warning: This is a file generated from the base underscore template file: kubedns-svc.yaml.base
apiVersion: v1
kind: Service
metadata:
name: kube-dns
namespace: kube-system
labels:
k8s-app: kube-dns
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeDNS"
spec:
selector:
k8s-app: kube-dns
clusterIP: {{ pillar['dns_server'] }}
ports:
- name: dns
port: 53
protocol: UDP
- name: dns-tcp
port: 53
protocol: TCP

View File

@ -1,106 +0,0 @@
{
"apiVersion": "v1",
"kind": "Pod",
"metadata": {"name":"k8s-master"},
"spec":{
"hostNetwork": true,
"containers":[
{
"name": "controller-manager",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"controller-manager",
"--master=127.0.0.1:8080",
"--service-account-private-key-file=/srv/kubernetes/server.key",
"--root-ca-file=/srv/kubernetes/ca.crt",
"--min-resync-period=3m",
"--v=2"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
}
]
},
{
"name": "apiserver",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"apiserver",
"--service-cluster-ip-range={{ cidr }}",
"--insecure-bind-address=0.0.0.0",
{% if etcd_dir -%}
"--etcd-cafile={{ etcd_ca }}",
"--etcd-keyfile={{ etcd_key }}",
"--etcd-certfile={{ etcd_cert }}",
{%- endif %}
"--etcd-servers={{ connection_string }}",
"--admission-control=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota",
"--client-ca-file=/srv/kubernetes/ca.crt",
"--basic-auth-file=/srv/kubernetes/basic_auth.csv",
"--min-request-timeout=300",
"--tls-cert-file=/srv/kubernetes/server.crt",
"--tls-private-key-file=/srv/kubernetes/server.key",
"--token-auth-file=/srv/kubernetes/known_tokens.csv",
"--allow-privileged=true",
"--v=4"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/srv/kubernetes"
},
{% if etcd_dir -%}
{
"name": "etcd-tls",
"mountPath": "{{ etcd_dir }}"
}
{%- endif %}
]
},
{
"name": "scheduler",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/hyperkube",
"scheduler",
"--master=127.0.0.1:8080",
"--v=2"
]
},
{
"name": "setup",
"image": "gcr.io/google_containers/hyperkube-{{ arch }}:{{ version }}",
"command": [
"/setup-files.sh",
"IP:{{ private_address }},IP:{{ public_address }},DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"
],
"volumeMounts": [
{
"name": "data",
"mountPath": "/data"
}
]
}
],
"volumes": [
{
"hostPath": {
"path": "/srv/kubernetes"
},
"name": "data"
},
{% if etcd_dir -%}
{
"hostPath": {
"path": "{{ etcd_dir }}"
},
"name": "etcd-tls"
}
{%- endif %}
]
}
}

View File

@ -1,5 +0,0 @@
tests: "*kubernetes*"
bootstrap: false
reset: false
python_packages:
- tox

View File

@ -27,11 +27,14 @@ cluster/gce/gci/configure-helper.sh: sed -i -e "s@{{pillar\['allow_privileged'\
cluster/gce/trusty/configure-helper.sh: sed -i -e "s@{{ *storage_backend *}}@${STORAGE_BACKEND:-}@g" "${temp_file}"
cluster/gce/trusty/configure-helper.sh: sed -i -e "s@{{pillar\['allow_privileged'\]}}@true@g" "${src_file}"
cluster/gce/util.sh: local node_ip=$(gcloud compute instances describe --project "${PROJECT}" --zone "${ZONE}" \
cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(kubeconfig, cluster_name, server, ca)))
cluster/juju/layers/kubernetes/reactive/k8s.py: check_call(split(cmd.format(kubeconfig, context, cluster_name, user)))
cluster/juju/layers/kubernetes/reactive/k8s.py: client_key = '/srv/kubernetes/client.key'
cluster/juju/layers/kubernetes/reactive/k8s.py: cluster_name = 'kubernetes'
cluster/juju/layers/kubernetes/reactive/k8s.py: tlslib.client_key(None, client_key, user='ubuntu', group='ubuntu')
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py: context['pillar'] = {'num_nodes': get_node_count()}
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py: cluster_dns.set_dns_info(53, hookenv.config('dns_domain'), dns_ip)
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py: ip = service_cidr().split('/')[0]
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py: ip = service_cidr().split('/')[0]
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py:def send_cluster_dns_detail(cluster_dns):
cluster/juju/layers/kubernetes-master/reactive/kubernetes_master.py:def service_cidr():
cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py: context.update({'kube_api_endpoint': ','.join(api_servers),
cluster/juju/layers/kubernetes-worker/reactive/kubernetes_worker.py:def render_init_scripts(api_servers):
cluster/lib/logging.sh: local source_file=${BASH_SOURCE[$frame_no]}
cluster/lib/logging.sh: local source_file=${BASH_SOURCE[$stack_skip]}
cluster/log-dump.sh: local -r node_name="${1}"