Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-07-26 05:03:09 +00:00)
Merge pull request #40108 from MrHohn/addon-ensure-exist
Automatic merge from submit-queue

Supports 'ensure exist' class addon in Addon-manager

Fixes #39561, fixes #37047 and fixes #36411. Depends on #40057.

This PR splits cluster addons into two categories:
- Reconcile: Addons that need to be reconciled (`kube-dns` for instance).
- EnsureExists: Addons that need to exist but remain changeable (`default-storage-class`).

The behavior for an 'EnsureExists' class addon is:
- Create it if it does not exist.
- Users can make any modification they want; addon-manager will not reconcile it.
- If it is deleted, addon-manager will recreate it from the given template.
- It will not be updated/clobbered during upgrade.

As Brian pointed out in [#37048/comment](https://github.com/kubernetes/kubernetes/issues/37048#issuecomment-272510835), this may not be the best solution for addon-manager. However, #39561 needs to be fixed in 1.6 and we might not have enough bandwidth for a bigger surgery.

@mikedanese @thockin cc @kubernetes/sig-cluster-lifecycle-misc

---

Tasks for this PR:
- [x] Support the 'ensure exist' class addon and switch to the new labels in addon-manager.
- [x] Update READMEs regarding the new behavior of addon-manager.
- [x] Update `test/e2e/addon_update.go` to match the new behavior.
- [x] Go through all current addons and apply the new labels according to what each one needs.
- [x] Bump addon-manager and update its template files.
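For orientation, a minimal sketch of how the two modes look from the outside (the resource kinds in the queries are only examples; nothing below is taken verbatim from the PR): an addon opts into a mode through a single label, and that mode can be inspected or selected with ordinary label queries.

```bash
# An addon opts into a mode via one label on its manifest:
#   addonmanager.kubernetes.io/mode: Reconcile     -> kept in sync with the on-disk template
#   addonmanager.kubernetes.io/mode: EnsureExists  -> created if missing, otherwise untouched

# Show kube-system objects together with their addon-manager mode
# (-L prints the label value as an extra column).
kubectl get deployments,services --namespace=kube-system \
  -L addonmanager.kubernetes.io/mode

# Select only the reconciled addons, the same way addon-manager does.
kubectl get deployments,services --namespace=kube-system \
  -l addonmanager.kubernetes.io/mode=Reconcile
```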
This commit is contained in: commit b6b3ff59be
@ -1,53 +1,34 @@
# Cluster add-ons

## Overview

Cluster add-ons are resources like Services and Deployments (with pods) that are
shipped with the Kubernetes binaries and are considered an inherent part of
Kubernetes clusters.

There are currently two classes of add-ons:
- Add-ons that will be reconciled.
- Add-ons that will be created if they don't exist.

More details can be found in [addon-manager/README.md](addon-manager/README.md).

## Cooperating Horizontal / Vertical Auto-Scaling with "reconcile class addons"

"Reconcile" class addons will be periodically reconciled to the original state given
by the initial config. In order to make Horizontal / Vertical Auto-scaling functional,
the related fields in the config should be left unset. More specifically, leave
`replicas` in `ReplicationController` / `Deployment` / `ReplicaSet` unset for
Horizontal Scaling, and leave `resources` for containers unset for Vertical Scaling.
The periodic reconcile won't clobber these fields, so they can be managed by the
Horizontal / Vertical Auto-scaler.

## Add-on naming

The suggested naming for most resources is `<basename>` (with no version number).
Resources like `Pod`, `ReplicationController` and `DaemonSet` are the exception:
it is hard to update a `Pod` because many of its fields are immutable, and for
`ReplicationController` and `DaemonSet` an in-place update may not trigger the
underlying pods to be re-created. You probably need to change their names during
an update to trigger a complete deletion and creation.

(The old sections of this README — the `/etc/kubernetes/addons` apply loop, the
mandatory `kubernetes.io/cluster-service: true` label, versioned add-on naming and
the add-on update procedure — are removed here; their replacements live in the text
above and in addon-manager/README.md.)
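To make the auto-scaling note above concrete, here is a hedged sketch (the Deployment name and the HPA numbers are invented for illustration): the addon template omits `spec.replicas`, so the periodic reconcile never overwrites what the autoscaler sets.

```bash
# Hypothetical reconcile-class addon whose replica count is owned by an HPA.
# Note: no .spec.replicas in the template, so `kubectl apply` reconciliation
# leaves the autoscaler-managed count alone.
cat <<'EOF' | kubectl apply --namespace=kube-system -f -
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-addon
  namespace: kube-system
  labels:
    k8s-app: example-addon
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  template:
    metadata:
      labels:
        k8s-app: example-addon
    spec:
      containers:
      - name: example-addon
        image: gcr.io/google_containers/serve_hostname:v1.4
EOF

# Let a Horizontal Pod Autoscaler own the replica count instead.
kubectl autoscale deployment example-addon --namespace=kube-system \
  --min=1 --max=5 --cpu-percent=80
```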
@ -1,3 +1,6 @@
### Version 6.4-alpha.3 (Fri February 24 2017 Zihong Zheng <zihongz@google.com>)
- Support 'ensure exist' class addon and use addon-manager specific label.

### Version 6.4-alpha.2 (Wed February 16 2017 Zihong Zheng <zihongz@google.com>)
- Update kubectl to v1.6.0-alpha.2 to use HPA in autoscaling/v1 instead of extensions/v1beta1.
@ -15,7 +15,10 @@
IMAGE=gcr.io/google-containers/kube-addon-manager
ARCH?=amd64
TEMP_DIR:=$(shell mktemp -d)
VERSION=v6.4-alpha.3
# TODO: Current Addon Manager is built with kubectl on head
# (GitCommit:"17375fc59fff39135af63bd1750bb07c36ef873b").
# Should use next released kubectl once available.
KUBECTL_VERSION?=v1.6.0-alpha.2

ifeq ($(ARCH),amd64)
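A hedged sketch of how the bumped version might be built and published (the `build` and `push` target names are assumptions about this Makefile; only `VERSION`, `ARCH` and `KUBECTL_VERSION` are visible in the hunk above):

```bash
# Hypothetical release invocation for one architecture.
make build push ARCH=amd64 VERSION=v6.4-alpha.3
```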
@ -1,15 +1,35 @@
### Addon-manager

addon-manager manages two classes of addons with given template files.
- Addons with label `addonmanager.kubernetes.io/mode=Reconcile` will be periodically
  reconciled. Direct manipulation of these addons through the apiserver is discouraged
  because addon-manager will bring them back to the original state. In particular:
    - An addon will be re-created if it is deleted.
    - An addon will be periodically reconfigured to the state given by the supplied
      fields in the template file.
    - An addon will be deleted when its manifest file is deleted.
- Addons with label `addonmanager.kubernetes.io/mode=EnsureExists` will only be checked
  for existence. Users can edit these addons as they want. In particular:
    - An addon will only be created/re-created from the given template file when there
      is no instance of the resource with that name.
    - An addon will not be deleted when its manifest file is deleted.

Notes:
- The label `kubernetes.io/cluster-service=true` is deprecated (only for Addon Manager).
  In a future release (after one year), Addon Manager may not respect it anymore. For now,
  addons that have this label but do not have `addonmanager.kubernetes.io/mode=EnsureExists`
  will be treated as "reconcile class addons".
- Resources under $ADDON_PATH (default `/etc/kubernetes/addons/`) need to have one of
  these two labels, and namespaced resources need to be in the `kube-system` namespace.
  Otherwise they will be omitted.
- The label and namespace rules above do not apply to `/opt/namespace.yaml` and
  resources under `/etc/kubernetes/admission-controls/`; addon-manager will attempt to
  create them regardless during startup.
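A quick way to see the Reconcile behavior described above (a sketch; `kube-dns` is used only because it is the canonical reconciled addon, and the sleep is a stand-in for one apply interval):

```bash
# Deleting a Reconcile-class addon does not stick: the next apply cycle
# re-creates it from its template under the addon directory.
kubectl --namespace=kube-system delete deployment kube-dns
sleep 60   # roughly one addon-manager apply interval (assumed)
kubectl --namespace=kube-system get deployment kube-dns   # present again
```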
#### How to release

The `addon-manager` is built for multiple architectures.

1. Change something in the source
2. Bump `VERSION` in the `Makefile`
3. Bump `KUBECTL_VERSION` in the `Makefile` if required
@ -37,6 +37,16 @@ ADDON_PATH=${ADDON_PATH:-/etc/kubernetes/addons}

SYSTEM_NAMESPACE=kube-system

# Addons could use this label with two modes:
# - ADDON_MANAGER_LABEL=Reconcile
# - ADDON_MANAGER_LABEL=EnsureExists
ADDON_MANAGER_LABEL="addonmanager.kubernetes.io/mode"
# This label is deprecated (only for Addon Manager). In a future release
# addon-manager may not respect it anymore. Addons with
# CLUSTER_SERVICE_LABEL=true and without ADDON_MANAGER_LABEL=EnsureExists
# will be reconciled for now.
CLUSTER_SERVICE_LABEL="kubernetes.io/cluster-service"

# Remember that you can't log from functions that print some output (because
# logs are also printed on stdout).
# $1 level
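A usage sketch for the variables above (running the script by hand like this is an assumption; in a real cluster it runs inside the kube-addon-manager static pod):

```bash
# Point the manager at a non-default manifest directory for a test run;
# KUBECTL_OPTS is passed through to every kubectl invocation in the script.
ADDON_PATH=/tmp/test-addons KUBECTL_OPTS="--kubeconfig=/root/.kube/config" \
  bash kube-addons.sh
```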
@ -70,28 +80,6 @@ function log() {
    esac
}

(The run_until_success() helper — which retried a given command a number of times
with a delay between tries — is removed here.)

# $1 filename of addon to start.
# $2 count of tries to start the addon.
# $3 delay in seconds between two consecutive tries
@ -133,7 +121,7 @@ function annotate_addons() {

  # Annotating objects that already have this annotation should fail.
  # Only try once for now.
  ${KUBECTL} ${KUBECTL_OPTS} annotate ${obj_type} --namespace=${SYSTEM_NAMESPACE} -l ${CLUSTER_SERVICE_LABEL}=true \
    kubectl.kubernetes.io/last-applied-configuration='' --overwrite=false

  if [[ $? -eq 0 ]]; then
|
|||||||
}
|
}
|
||||||
|
|
||||||
# $1 enable --prune or not.
|
# $1 enable --prune or not.
|
||||||
# $2 additional option for command.
|
function reconcile_addons() {
|
||||||
function update_addons() {
|
|
||||||
local -r enable_prune=$1;
|
local -r enable_prune=$1;
|
||||||
local -r additional_opt=$2;
|
|
||||||
|
|
||||||
run_until_success "${KUBECTL} ${KUBECTL_OPTS} apply --namespace=${SYSTEM_NAMESPACE} -f ${ADDON_PATH} \
|
# TODO: Remove the first command in future release.
|
||||||
--prune=${enable_prune} -l kubernetes.io/cluster-service=true --recursive ${additional_opt}" 3 5
|
# Adding this for backward compatibility. Old addons have CLUSTER_SERVICE_LABEL=true and don't have
|
||||||
|
# ADDON_MANAGER_LABEL=EnsureExists will still be reconciled.
|
||||||
|
# Filter out `configured` message to not noisily log.
|
||||||
|
# `created`, `pruned` and errors will be logged.
|
||||||
|
log INFO "== Reconciling with deprecated label =="
|
||||||
|
${KUBECTL} ${KUBECTL_OPTS} apply --namespace=${SYSTEM_NAMESPACE} -f ${ADDON_PATH} \
|
||||||
|
-l ${CLUSTER_SERVICE_LABEL}=true,${ADDON_MANAGER_LABEL}!=EnsureExists \
|
||||||
|
--prune=${enable_prune} --recursive | grep -v configured
|
||||||
|
|
||||||
if [[ $? -eq 0 ]]; then
|
log INFO "== Reconciling with addon-manager label =="
|
||||||
log INFO "== Kubernetes addon update completed successfully at $(date -Is) =="
|
${KUBECTL} ${KUBECTL_OPTS} apply --namespace=${SYSTEM_NAMESPACE} -f ${ADDON_PATH} \
|
||||||
else
|
-l ${CLUSTER_SERVICE_LABEL}!=true,${ADDON_MANAGER_LABEL}=Reconcile \
|
||||||
log WRN "== Kubernetes addon update completed with errors at $(date -Is) =="
|
--prune=${enable_prune} --recursive | grep -v configured
|
||||||
fi
|
|
||||||
|
log INFO "== Kubernetes addon reconcile completed at $(date -Is) =="
|
||||||
|
}
|
||||||
|
|
||||||
|
function ensure_addons() {
|
||||||
|
# Create objects already exist should fail.
|
||||||
|
# Filter out `AlreadyExists` message to not noisily log.
|
||||||
|
${KUBECTL} ${KUBECTL_OPTS} create --namespace=${SYSTEM_NAMESPACE} -f ${ADDON_PATH} \
|
||||||
|
-l ${ADDON_MANAGER_LABEL}=EnsureExists --recursive 2>&1 | grep -v AlreadyExists
|
||||||
|
|
||||||
|
log INFO "== Kubernetes addon ensure completed at $(date -Is) =="
|
||||||
}
|
}
|
||||||
|
|
||||||
# The business logic for whether a given object should be created
|
# The business logic for whether a given object should be created
|
||||||
@ -188,9 +191,11 @@ for obj in $(find /etc/kubernetes/admission-controls \( -name \*.yaml -o -name \
|
|||||||
log INFO "++ obj ${obj} is created ++"
|
log INFO "++ obj ${obj} is created ++"
|
||||||
done
|
done
|
||||||
|
|
||||||
|
# TODO: The annotate and spin up parts should be removed after 1.6 is released.
|
||||||
|
|
||||||
# Fake the "kubectl.kubernetes.io/last-applied-configuration" annotation on old resources
|
# Fake the "kubectl.kubernetes.io/last-applied-configuration" annotation on old resources
|
||||||
# in order to clean them up by `kubectl apply --prune`.
|
# in order to clean them up by `kubectl apply --prune`.
|
||||||
# RCs have to be annotated for 1.4->1.5 upgrade, because we are migrating from RCs to deployments for all default addons.
|
# RCs have to be annotated for 1.4->1.5+ upgrade, because we migrated from RCs to deployments for all default addons in 1.5.
|
||||||
# Other types resources will also need this fake annotation if their names are changed,
|
# Other types resources will also need this fake annotation if their names are changed,
|
||||||
# otherwise they would be leaked during upgrade.
|
# otherwise they would be leaked during upgrade.
|
||||||
log INFO "== Annotating the old addon resources at $(date -Is) =="
|
log INFO "== Annotating the old addon resources at $(date -Is) =="
|
||||||
@ -202,7 +207,8 @@ annotate_addons Deployment
|
|||||||
# The new Deployments will not fight for pods created by old RCs with the same label because the additional `pod-template-hash` label.
|
# The new Deployments will not fight for pods created by old RCs with the same label because the additional `pod-template-hash` label.
|
||||||
# Apply will fail if some fields are modified but not are allowed, in that case should bump up addon version and name (e.g. handle externally).
|
# Apply will fail if some fields are modified but not are allowed, in that case should bump up addon version and name (e.g. handle externally).
|
||||||
log INFO "== Executing apply to spin up new addon resources at $(date -Is) =="
|
log INFO "== Executing apply to spin up new addon resources at $(date -Is) =="
|
||||||
update_addons false
|
reconcile_addons false
|
||||||
|
ensure_addons
|
||||||
|
|
||||||
# Wait for new addons to be spinned up before delete old resources
|
# Wait for new addons to be spinned up before delete old resources
|
||||||
log INFO "== Wait for addons to be spinned up at $(date -Is) =="
|
log INFO "== Wait for addons to be spinned up at $(date -Is) =="
|
||||||
@ -215,7 +221,8 @@ log INFO "== Entering periodical apply loop at $(date -Is) =="
|
|||||||
while true; do
|
while true; do
|
||||||
start_sec=$(date +"%s")
|
start_sec=$(date +"%s")
|
||||||
# Only print stderr for the readability of logging
|
# Only print stderr for the readability of logging
|
||||||
update_addons true ">/dev/null"
|
reconcile_addons true
|
||||||
|
ensure_addons
|
||||||
end_sec=$(date +"%s")
|
end_sec=$(date +"%s")
|
||||||
len_sec=$((${end_sec}-${start_sec}))
|
len_sec=$((${end_sec}-${start_sec}))
|
||||||
# subtract the time passed from the sleep time
|
# subtract the time passed from the sleep time
|
||||||
|
The remaining manifest changes are mechanical: each addon template gains an addon-manager
mode label under `metadata.labels`, next to the existing `kubernetes.io/cluster-service: "true"`
label.

Gaining `addonmanager.kubernetes.io/mode: Reconcile` (one added line per manifest): the
calico-etcd Service and workload, the calico-policy controller, glbc and its
GLBCDefaultBackend Service, the heapster workloads and the Heapster, Grafana and InfluxDB
Services across the monitoring variants (including influxGrafana), the kubernetes-dashboard
workload and Service, kube-dns-autoscaler, the three kube-dns workload templates and the
three KubeDNS Services plus one further small kube-dns manifest, the kubelet-cluster-admin
and todo-remove-grabbag-cluster-admin ClusterRoleBindings, the elasticsearch-logging
workload and Elasticsearch Service, fluentd-es, the kibana-logging workload and Kibana
Service, fluentd-gcp, the node-problem-detector resources (including the npd-binding
ClusterRoleBindings), the apiserver-node-proxy binding and node-proxy role, and the
kube-registry PersistentVolume, its claim, the kube-registry controller and the
KubeRegistry Service.

Gaining `addonmanager.kubernetes.io/mode: EnsureExists`: the default StorageClass manifests
for aws-ebs, azure-disk, gce-pd and cinder (each keeps its
`storageclass.beta.kubernetes.io/is-default-class: "true"` annotation).
@ -11,7 +11,7 @@ (this hunk appears twice, once per addon-manager static-pod JSON manifest)
  "containers": [
    {
      "name": "kube-addon-manager",
      "image": "REGISTRY/kube-addon-manager-ARCH:v6.4-alpha.3",
      "resources": {
        "requests": {
          "cpu": "5m",

@ -13,7 +13,7 @@ spec:
  # - cluster/images/hyperkube/static-pods/addon-manager-singlenode.json
  # - cluster/images/hyperkube/static-pods/addon-manager-multinode.json
  # - test/kubemark/resources/manifests/kube-addon-manager.yaml
  image: gcr.io/google-containers/kube-addon-manager:v6.4-alpha.3
  command:
  - /bin/bash
  - -c
@ -26,6 +26,7 @@ import (

	"golang.org/x/crypto/ssh"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/kubernetes/pkg/client/clientset_generated/clientset"
	"k8s.io/kubernetes/test/e2e/framework"
@ -35,126 +36,156 @@

// TODO: it would probably be slightly better to build up the objects
// in the code and then serialize to yaml.

The old `addon_controller_v1` / `addon_controller_v2`, `addon_service_v1` / `addon_service_v2`
and wrong-label `invalid_addon_controller_v1` template strings are replaced by a new set of
YAML templates:

- `reconcile_addon_controller` and `reconcile_addon_controller_updated`: a ReplicationController
  named `addon-reconcile-test` (2 replicas of `gcr.io/google_containers/serve_hostname:v1.4`,
  port 9376/TCP, selector and labels `k8s-app: addon-reconcile-test`) labeled
  `kubernetes.io/cluster-service: "true"` and `addonmanager.kubernetes.io/mode: Reconcile`;
  the updated variant additionally carries `newLabel: addon-reconcile-test`. The manager
  should update "reconcile" class addons.
- `ensure_exists_addon_service` and `ensure_exists_addon_service_updated`: a Service named
  `addon-ensure-exists-test` (port 9376/TCP, selector `k8s-app: addon-ensure-exists-test`)
  labeled `addonmanager.kubernetes.io/mode: EnsureExists`; the updated variant additionally
  carries `newLabel: addon-ensure-exists-test`. The manager should create but not update
  "ensure exist" class addons.
- `deprecated_label_addon_service` and `deprecated_label_addon_service_updated`: a Service
  named `addon-deprecated-label-test` labeled only with the deprecated
  `kubernetes.io/cluster-service: "true"`; the updated variant additionally carries
  `newLabel: addon-deprecated-label-test`. The manager should still update addons that only
  have the deprecated label.
- `invalid_addon_controller`: a ReplicationController named `invalid-addon-test` labeled
  `addonmanager.kubernetes.io/mode: NotMatch`. The manager should not create addons without
  a valid label.
@ -164,49 +195,10 @@ spec:
        protocol: TCP
`

(The old wrong-label `invalid_addon_service_v1` and wrong-namespace `invalid_addon_service_v2`
templates are removed here, together with the `defaultNsName = metav1.NamespaceDefault`
constant.)

const (
	addonTestPollInterval = 3 * time.Second
	addonTestPollTimeout  = 5 * time.Minute
	addonNsName           = metav1.NamespaceSystem
)

type stringPair struct {
|
|||||||
defer sshExec(sshClient, fmt.Sprintf("rm -rf %s", temporaryRemotePathPrefix)) // ignore the result in cleanup
|
defer sshExec(sshClient, fmt.Sprintf("rm -rf %s", temporaryRemotePathPrefix)) // ignore the result in cleanup
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("mkdir -p %s", temporaryRemotePath))
|
sshExecAndVerify(sshClient, fmt.Sprintf("mkdir -p %s", temporaryRemotePath))
|
||||||
|
|
||||||
rcv1 := "addon-controller-v1.yaml"
|
rcAddonReconcile := "addon-reconcile-controller.yaml"
|
||||||
rcv2 := "addon-controller-v2.yaml"
|
rcAddonReconcileUpdated := "addon-reconcile-controller-Updated.yaml"
|
||||||
rcInvalid := "invalid-addon-controller-v1.yaml"
|
rcInvalid := "invalid-addon-controller.yaml"
|
||||||
|
|
||||||
svcv1 := "addon-service-v1.yaml"
|
svcAddonDeprecatedLabel := "addon-deprecated-label-service.yaml"
|
||||||
svcv2 := "addon-service-v2.yaml"
|
svcAddonDeprecatedLabelUpdated := "addon-deprecated-label-service-updated.yaml"
|
||||||
svcInvalid := "invalid-addon-service-v1.yaml"
|
svcAddonEnsureExists := "addon-ensure-exists-service.yaml"
|
||||||
svcInvalidv2 := "invalid-addon-service-v2.yaml"
|
svcAddonEnsureExistsUpdated := "addon-ensure-exists-service-updated.yaml"
|
||||||
|
|
||||||
var remoteFiles []stringPair = []stringPair{
|
var remoteFiles []stringPair = []stringPair{
|
||||||
{fmt.Sprintf(addon_controller_v1, addonNsName), rcv1},
|
{fmt.Sprintf(reconcile_addon_controller, addonNsName), rcAddonReconcile},
|
||||||
{fmt.Sprintf(addon_controller_v2, addonNsName), rcv2},
|
{fmt.Sprintf(reconcile_addon_controller_updated, addonNsName), rcAddonReconcileUpdated},
|
||||||
{fmt.Sprintf(addon_service_v1, addonNsName), svcv1},
|
{fmt.Sprintf(deprecated_label_addon_service, addonNsName), svcAddonDeprecatedLabel},
|
||||||
{fmt.Sprintf(addon_service_v2, addonNsName), svcv2},
|
{fmt.Sprintf(deprecated_label_addon_service_updated, addonNsName), svcAddonDeprecatedLabelUpdated},
|
||||||
{fmt.Sprintf(invalid_addon_controller_v1, addonNsName), rcInvalid},
|
{fmt.Sprintf(ensure_exists_addon_service, addonNsName), svcAddonEnsureExists},
|
||||||
{fmt.Sprintf(invalid_addon_service_v1, addonNsName), svcInvalid},
|
{fmt.Sprintf(ensure_exists_addon_service_updated, addonNsName), svcAddonEnsureExistsUpdated},
|
||||||
{fmt.Sprintf(invalid_addon_service_v2, defaultNsName), svcInvalidv2},
|
{fmt.Sprintf(invalid_addon_controller, addonNsName), rcInvalid},
|
||||||
}
|
}
|
||||||
|
|
||||||
for _, p := range remoteFiles {
|
for _, p := range remoteFiles {
|
||||||
@ -292,51 +284,54 @@ var _ = framework.KubeDescribe("Addon update", func() {
|
|||||||
defer sshExec(sshClient, fmt.Sprintf("sudo rm -rf %s", destinationDirPrefix)) // ignore result in cleanup
|
defer sshExec(sshClient, fmt.Sprintf("sudo rm -rf %s", destinationDirPrefix)) // ignore result in cleanup
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo mkdir -p %s", destinationDir))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo mkdir -p %s", destinationDir))
|
||||||
|
|
||||||
By("copy invalid manifests to the destination dir (without kubernetes.io/cluster-service label)")
|
By("copy invalid manifests to the destination dir")
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcInvalid, destinationDir, rcInvalid))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcInvalid, destinationDir, rcInvalid))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcInvalid, destinationDir, svcInvalid))
|
|
||||||
// we will verify at the end of the test that the objects weren't created from the invalid manifests
|
// we will verify at the end of the test that the objects weren't created from the invalid manifests
|
||||||
|
|
||||||
By("copy new manifests")
|
By("copy new manifests")
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcv1, destinationDir, rcv1))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcAddonReconcile, destinationDir, rcAddonReconcile))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcv1, destinationDir, svcv1))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcAddonDeprecatedLabel, destinationDir, svcAddonDeprecatedLabel))
|
||||||
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcAddonEnsureExists, destinationDir, svcAddonEnsureExists))
|
||||||
|
// Delete the "ensure exist class" addon at the end.
|
||||||
|
defer func() {
|
||||||
|
framework.Logf("Cleaning up ensure exist class addon.")
|
||||||
|
Expect(f.ClientSet.Core().Services(addonNsName).Delete("addon-ensure-exists-test", nil)).NotTo(HaveOccurred())
|
||||||
|
}()
|
||||||
|
|
||||||
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-test", true)
|
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-reconcile-test", true)
|
||||||
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-test-v1", true)
|
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-deprecated-label-test", true)
|
||||||
|
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-ensure-exists-test", true)
|
||||||
|
|
||||||
|
// Replace the manifests with new contents.
|
||||||
By("update manifests")
|
By("update manifests")
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcv2, destinationDir, rcv2))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, rcAddonReconcileUpdated, destinationDir, rcAddonReconcile))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcv2, destinationDir, svcv2))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcAddonDeprecatedLabelUpdated, destinationDir, svcAddonDeprecatedLabel))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, rcv1))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo cp %s/%s %s/%s", temporaryRemotePath, svcAddonEnsureExistsUpdated, destinationDir, svcAddonEnsureExists))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, svcv1))
|
|
||||||
/**
|
|
||||||
* Note that we have a small race condition here - the kube-addon-updater
|
|
||||||
* May notice that a new rc/service file appeared, while the old one will still be there.
|
|
||||||
* But it is ok - as long as we don't have rolling update, the result will be the same
|
|
||||||
*/
|
|
||||||
|
|
||||||
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-test-updated", true)
|
// Wait for updated addons to have the new added label.
|
||||||
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-test-v2", true)
|
reconcileSelector := labels.SelectorFromSet(labels.Set(map[string]string{"newLabel": "addon-reconcile-test"}))
|
||||||
|
waitForReplicationControllerwithSelectorInAddonTest(f.ClientSet, addonNsName, true, reconcileSelector)
|
||||||
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-test", false)
|
deprecatedLabelSelector := labels.SelectorFromSet(labels.Set(map[string]string{"newLabel": "addon-deprecated-label-test"}))
|
||||||
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-test-v1", false)
|
waitForServicewithSelectorInAddonTest(f.ClientSet, addonNsName, true, deprecatedLabelSelector)
|
||||||
|
// "Ensure exist class" addon should not be updated.
|
||||||
|
ensureExistSelector := labels.SelectorFromSet(labels.Set(map[string]string{"newLabel": "addon-ensure-exists-test"}))
|
||||||
|
waitForServicewithSelectorInAddonTest(f.ClientSet, addonNsName, false, ensureExistSelector)
|
||||||
|
|
||||||
By("remove manifests")
|
By("remove manifests")
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, rcv2))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, rcAddonReconcile))
|
||||||
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, svcv2))
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, svcAddonDeprecatedLabel))
|
||||||
|
sshExecAndVerify(sshClient, fmt.Sprintf("sudo rm %s/%s", destinationDir, svcAddonEnsureExists))
|
||||||
|
|
||||||
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-test-updated", false)
|
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-reconcile-test", false)
|
||||||
waitForReplicationControllerInAddonTest(f.ClientSet, addonNsName, "addon-test-v2", false)
|
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-deprecated-label-test", false)
|
||||||
|
// "Ensure exist class" addon will not be deleted when manifest is removed.
|
||||||
|
waitForServiceInAddonTest(f.ClientSet, addonNsName, "addon-ensure-exists-test", true)
|
||||||
|
|
||||||
By("verify invalid API addons weren't created")
|
By("verify invalid addons weren't created")
|
||||||
_, err = f.ClientSet.Core().ReplicationControllers(addonNsName).Get("invalid-addon-test-v1", metav1.GetOptions{})
|
_, err = f.ClientSet.Core().ReplicationControllers(addonNsName).Get("invalid-addon-test", metav1.GetOptions{})
|
||||||
Expect(err).To(HaveOccurred())
|
|
||||||
_, err = f.ClientSet.Core().Services(addonNsName).Get("ivalid-addon-test", metav1.GetOptions{})
|
|
||||||
Expect(err).To(HaveOccurred())
|
|
||||||
_, err = f.ClientSet.Core().Services(defaultNsName).Get("ivalid-addon-test-v2", metav1.GetOptions{})
|
|
||||||
Expect(err).To(HaveOccurred())
|
Expect(err).To(HaveOccurred())
|
||||||
|
|
||||||
// invalid addons will be deleted by the deferred function
|
// Invalid addon manifests and the "ensure exist class" addon will be deleted by the deferred function.
|
||||||
})
|
})
|
||||||
})
|
})
|
||||||
|
|
||||||
@@ -348,6 +343,15 @@ func waitForReplicationControllerInAddonTest(c clientset.Interface, addonNamespa
  framework.ExpectNoError(framework.WaitForReplicationController(c, addonNamespace, name, exist, addonTestPollInterval, addonTestPollTimeout))
  }

+ func waitForServicewithSelectorInAddonTest(c clientset.Interface, addonNamespace string, exist bool, selector labels.Selector) {
+     framework.ExpectNoError(framework.WaitForServiceWithSelector(c, addonNamespace, selector, exist, addonTestPollInterval, addonTestPollTimeout))
+ }
+
+ func waitForReplicationControllerwithSelectorInAddonTest(c clientset.Interface, addonNamespace string, exist bool, selector labels.Selector) {
+     framework.ExpectNoError(framework.WaitForReplicationControllerwithSelector(c, addonNamespace, selector, exist, addonTestPollInterval,
+         addonTestPollTimeout))
+ }
+
  // TODO use the framework.SSH code, either adding an SCP to it or copying files
  // differently.
  func getMasterSSHClient() (*ssh.Client, error) {
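The new wrappers take a `labels.Selector` rather than an object name, so the test can assert on the label that the updated manifests are expected to carry. A minimal sketch (not part of this commit; the `k8s.io/apimachinery/pkg/labels` import path is an assumption for this Kubernetes version) of how such a selector is built and what it serializes to:

```go
package main

import (
	"fmt"

	// Assumed import path for the same labels package the e2e test uses.
	"k8s.io/apimachinery/pkg/labels"
)

func main() {
	// The test builds selectors from the single label it expects the
	// updated addon manifests to apply.
	sel := labels.SelectorFromSet(labels.Set(map[string]string{"newLabel": "addon-reconcile-test"}))

	// Selector.String() is what ends up in metav1.ListOptions{LabelSelector: ...}
	// inside the framework helpers; this prints "newLabel=addon-reconcile-test".
	fmt.Println(sel.String())
}
```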
@@ -1456,17 +1456,11 @@ func WaitForService(c clientset.Interface, namespace, name string, exist bool, i
  _, err := c.Core().Services(namespace).Get(name, metav1.GetOptions{})
  switch {
  case err == nil:
-     if !exist {
-         return false, nil
-     }
      Logf("Service %s in namespace %s found.", name, namespace)
-     return true, nil
+     return exist, nil
  case apierrs.IsNotFound(err):
-     if exist {
-         return false, nil
-     }
      Logf("Service %s in namespace %s disappeared.", name, namespace)
-     return true, nil
+     return !exist, nil
  default:
      Logf("Get service %s in namespace %s failed: %v", name, namespace, err)
      return false, nil
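This change collapses the old `if !exist { return false, nil }` branches into `return exist, nil` and `return !exist, nil`. A minimal standalone sketch (not from the commit) of the resulting truth table, where the returned bool means "stop polling":

```go
package main

import "fmt"

// conditionMet mirrors the simplified WaitForService logic: found && exist or
// absent && !exist ends the wait; any other combination keeps polling.
func conditionMet(found, exist bool) bool {
	if found {
		return exist
	}
	return !exist
}

func main() {
	fmt.Println(conditionMet(true, true))   // true: service found, and we waited for it to appear
	fmt.Println(conditionMet(true, false))  // false: still waiting for it to disappear
	fmt.Println(conditionMet(false, false)) // true: service gone, as requested
	fmt.Println(conditionMet(false, true))  // false: still waiting for it to appear
}
```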
@@ -1479,6 +1473,30 @@ func WaitForService(c clientset.Interface, namespace, name string, exist bool, i
  return nil
  }

+ // WaitForServiceWithSelector waits until any service with given selector appears (exist == true), or disappears (exist == false)
+ func WaitForServiceWithSelector(c clientset.Interface, namespace string, selector labels.Selector, exist bool, interval,
+     timeout time.Duration) error {
+     err := wait.PollImmediate(interval, timeout, func() (bool, error) {
+         services, err := c.Core().Services(namespace).List(metav1.ListOptions{LabelSelector: selector.String()})
+         switch {
+         case len(services.Items) != 0:
+             Logf("Service with %s in namespace %s found.", selector.String(), namespace)
+             return exist, nil
+         case len(services.Items) == 0:
+             Logf("Service with %s in namespace %s disappeared.", selector.String(), namespace)
+             return !exist, nil
+         default:
+             Logf("List service with %s in namespace %s failed: %v", selector.String(), namespace, err)
+             return false, nil
+         }
+     })
+     if err != nil {
+         stateMsg := map[bool]string{true: "to appear", false: "to disappear"}
+         return fmt.Errorf("error waiting for service with %s in namespace %s %s: %v", selector.String(), namespace, stateMsg[exist], err)
+     }
+     return nil
+ }
+
  //WaitForServiceEndpointsNum waits until the amount of endpoints that implement service to expectNum.
  func WaitForServiceEndpointsNum(c clientset.Interface, namespace, serviceName string, expectNum int, interval, timeout time.Duration) error {
  return wait.Poll(interval, timeout, func() (bool, error) {
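A hypothetical caller of the new helper (not part of this commit; the clientset import path, the `kube-system` namespace, and the label key/value are assumptions for illustration):

```go
package example

import (
	"time"

	"k8s.io/apimachinery/pkg/labels" // assumed import path for this era
	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/clientset"
	"k8s.io/kubernetes/test/e2e/framework"
)

// waitForLabeledAddonService polls until a Service carrying the given label
// shows up in kube-system, or returns the timeout error from the framework.
func waitForLabeledAddonService(c clientset.Interface, key, value string, interval, timeout time.Duration) error {
	selector := labels.SelectorFromSet(labels.Set(map[string]string{key: value}))
	return framework.WaitForServiceWithSelector(c, "kube-system", selector, true, interval, timeout)
}
```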
@@ -1524,6 +1542,30 @@ func WaitForReplicationController(c clientset.Interface, namespace, name string,
  return nil
  }

+ // WaitForReplicationControllerwithSelector waits until any RC with given selector appears (exist == true), or disappears (exist == false)
+ func WaitForReplicationControllerwithSelector(c clientset.Interface, namespace string, selector labels.Selector, exist bool, interval,
+     timeout time.Duration) error {
+     err := wait.PollImmediate(interval, timeout, func() (bool, error) {
+         rcs, err := c.Core().ReplicationControllers(namespace).List(metav1.ListOptions{LabelSelector: selector.String()})
+         switch {
+         case len(rcs.Items) != 0:
+             Logf("ReplicationController with %s in namespace %s found.", selector.String(), namespace)
+             return exist, nil
+         case len(rcs.Items) == 0:
+             Logf("ReplicationController with %s in namespace %s disappeared.", selector.String(), namespace)
+             return !exist, nil
+         default:
+             Logf("List ReplicationController with %s in namespace %s failed: %v", selector.String(), namespace, err)
+             return false, nil
+         }
+     })
+     if err != nil {
+         stateMsg := map[bool]string{true: "to appear", false: "to disappear"}
+         return fmt.Errorf("error waiting for ReplicationControllers with %s in namespace %s %s: %v", selector.String(), namespace, stateMsg[exist], err)
+     }
+     return nil
+ }
+
  func WaitForEndpoint(c clientset.Interface, ns, name string) error {
  for t := time.Now(); time.Since(t) < EndpointRegisterTimeout; time.Sleep(Poll) {
  endpoint, err := c.Core().Endpoints(ns).Get(name, metav1.GetOptions{})
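The ReplicationController counterpart can be driven in the opposite direction, e.g. waiting for a reconciled addon's RC to be cleaned up after its manifest is removed. A hypothetical sketch under the same import-path assumptions as above:

```go
package example

import (
	"time"

	"k8s.io/apimachinery/pkg/labels" // assumed import path for this era
	clientset "k8s.io/kubernetes/pkg/client/clientset_generated/clientset"
	"k8s.io/kubernetes/test/e2e/framework"
)

// waitForAddonRCGone polls until no ReplicationController with the given label
// remains in the namespace (the exist == false path of the new helper).
func waitForAddonRCGone(c clientset.Interface, namespace, key, value string, interval, timeout time.Duration) error {
	selector := labels.SelectorFromSet(labels.Set(map[string]string{key: value}))
	return framework.WaitForReplicationControllerwithSelector(c, namespace, selector, false, interval, timeout)
}
```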
@@ -9,7 +9,7 @@ spec:
  hostNetwork: true
  containers:
  - name: kube-addon-manager
-   image: {{kube_docker_registry}}/kube-addon-manager:v6.4-alpha.2
+   image: {{kube_docker_registry}}/kube-addon-manager:v6.4-alpha.3
  command:
  - /bin/bash
  - -c