Generate a token for kube-proxy.

Tested on GCE.
Includes untested modifications for AWS and Vagrant.
No changes for any other distros.
It will probably work on other up-to-date providers,
but beware: the symptom of breakage would be that
service proxying stops working.

 1. Generates a token for kube-proxy in the AWS, GCE, and Vagrant setup scripts.
 1. Distributes the token via the salt-overlay, and via salt to /var/lib/kube-proxy/kubeconfig.
 1. Changes the kube-proxy args (see the sketch after this list):
   - uses the --kubeconfig argument
   - changes the --master argument from http://MASTER:7080 to https://MASTER
     - http -> https
     - explicit port 7080 -> implied 443
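
A rough sketch of the resulting invocation; MASTER is a placeholder for the
apiserver address, and the exact wiring of these flags lives in the salt
templates rather than a literal command line like this:

    # Before: plain http to the master on port 7080.
    kube-proxy --master=http://MASTER:7080

    # After: https on the implied port 443, authenticating with the token in
    # the kubeconfig that salt drops at /var/lib/kube-proxy/kubeconfig.
    kube-proxy --master=https://MASTER \
      --kubeconfig=/var/lib/kube-proxy/kubeconfig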

Possible ways this might break other distros, and mitigations:

Mitigation: there is a default empty kubeconfig file.
If a distro does not populate the salt-overlay, then
it should get the empty file, which parses to an empty
config object; combined with the --master argument,
kube-proxy should still work.
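
As a sketch of that fallback path (assuming a distro that never writes the
salt-overlay entry; MASTER is again a placeholder):

    # Hypothetical fallback: the default kubeconfig exists but is empty.
    : > /var/lib/kube-proxy/kubeconfig

    # An empty file parses to an empty config object and contributes no
    # settings; the apiserver address still comes from --master, so per the
    # mitigation above service proxying should keep working.
    kube-proxy --master=https://MASTER \
      --kubeconfig=/var/lib/kube-proxy/kubeconfig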

Mitigation:
  - azure: special case to use 7080.
  - rackspace: way out of date, so don't care.
  - vsphere: way out of date, so don't care.
  - other distros: not using salt.
commit 9044177bb6
parent ee5cad84e0
Author: Eric Tune
Date:   2015-04-24 09:27:11 -07:00

7 changed files with 111 additions and 13 deletions

@@ -238,7 +238,7 @@ EOF
 }
 
 # This should only happen on cluster initialization. Uses
-# KUBE_BEARER_TOKEN, KUBELET_TOKEN, and /dev/urandom to generate
+# KUBE_BEARER_TOKEN, KUBELET_TOKEN, and KUBE_PROXY_TOKEN to generate
 # known_tokens.csv (KNOWN_TOKENS_FILE). After the first boot and
 # on upgrade, this file exists on the master-pd and should never
 # be touched again (except perhaps an additional service account,
@@ -248,13 +248,40 @@ function create-salt-auth() {
   mkdir -p /srv/salt-overlay/salt/kube-apiserver
   (umask 077;
     echo "${KUBE_BEARER_TOKEN},admin,admin" > "${KNOWN_TOKENS_FILE}";
-    echo "${KUBELET_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}")
+    echo "${KUBELET_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}";
+    echo "${KUBE_PROXY_TOKEN},kube_proxy,kube_proxy" >> "${KNOWN_TOKENS_FILE}")
 
   mkdir -p /srv/salt-overlay/salt/kubelet
   kubelet_auth_file="/srv/salt-overlay/salt/kubelet/kubernetes_auth"
   (umask 077;
     echo "{\"BearerToken\": \"${KUBELET_TOKEN}\", \"Insecure\": true }" > "${kubelet_auth_file}")
 
+  mkdir -p /srv/salt-overlay/salt/kube-proxy
+  kube_proxy_kubeconfig_file="/srv/salt-overlay/salt/kube-proxy/kubeconfig"
+  # Make a kubeconfig file with the token.
+  # TODO(etune): put apiserver certs into secret too, and reference from authfile,
+  # so that "Insecure" is not needed.
+  (umask 077;
+  cat > "${kube_proxy_kubeconfig_file}" <<EOF
+apiVersion: v1
+kind: Config
+users:
+- name: kube-proxy
+  user:
+    token: ${KUBE_PROXY_TOKEN}
+clusters:
+- name: local
+  cluster:
+    insecure-skip-tls-verify: true
+contexts:
+- context:
+    cluster: local
+    user: kube-proxy
+  name: service-account-context
+current-context: service-account-context
+EOF
+)
+
   # Generate tokens for other "service accounts". Append to known_tokens.
   #
   # NB: If this list ever changes, this script actually has to