mirror of https://github.com/k3s-io/kubernetes.git
synced 2025-09-16 22:53:22 +00:00
Switch DNS addons from skydns to kubedns
Unified the skydns templates using a simple underscore-based template, and added sed transform scripts to generate the salt and sed yaml templates. Moved all content out of cluster/addons/dns into build/kube-dns and saltbase/salt/kube-dns.
@@ -1,6 +0,0 @@
# Maintainers

Tim Hockin <thockin@google.com>


[]()
@@ -1,3 +0,0 @@
assignees:
- ArtfulCoder
- thockin
@@ -1,262 +0,0 @@
# DNS in Kubernetes

Kubernetes offers a DNS cluster addon, which most of the supported environments
enable by default. We use [SkyDNS](https://github.com/skynetservices/skydns)
as the DNS server, with some custom logic to slave it to the kubernetes API
server.

## What things get DNS names?
The only objects to which we are assigning DNS names are Services. Every
Kubernetes Service is assigned a virtual IP address which is stable as long as
the Service exists (as compared to Pod IPs, which can change over time due to
crashes or scheduling changes). This maps well to DNS, which has a long
history of clients that, on purpose or on accident, do not respect DNS TTLs
(see the previous remark about Pod IPs changing).

## Where does resolution work?
Kubernetes Service DNS names can be resolved using standard methods (e.g.
[`gethostbyname`](http://linux.die.net/man/3/gethostbyname)) inside any pod,
except pods which have the `hostNetwork` field set to `true`.

## Supported DNS schema
The following sections detail the record types and layout that are supported.
Any other layout, names, or queries that happen to work are considered
implementation details and are subject to change without warning.

### Services

#### A records
"Normal" (not headless) Services are assigned a DNS A record for a name of the
form `my-svc.my-namespace.svc.cluster.local`. This resolves to the cluster IP
of the Service.

"Headless" (without a cluster IP) Services are also assigned a DNS A record for
a name of the form `my-svc.my-namespace.svc.cluster.local`. Unlike normal
Services, this resolves to the set of IPs of the pods selected by the Service.
Clients are expected to consume the set or else use standard round-robin
selection from the set.

### SRV records
SRV records are created for named ports that are part of normal or headless
Services.
For each named port, the SRV record has the form
`_my-port-name._my-port-protocol.my-svc.my-namespace.svc.cluster.local`.
For a regular service, this resolves to the port number and the CNAME:
`my-svc.my-namespace.svc.cluster.local`.
For a headless service, this resolves to multiple answers, one for each pod
that is backing the service, each containing the port number and a CNAME of the
pod of the form `auto-generated-name.my-svc.my-namespace.svc.cluster.local`.

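For instance, assuming a Service `my-svc` in `my-namespace` with a named TCP port `http`, an SRV lookup from inside a pod might look like the sketch below. The output shape is illustrative only; busybox's `nslookup` cannot query SRV records, so this assumes an image that ships `dig`:

```sh
# Illustrative SRV query; the priority, weight, and port values are examples.
dig +short SRV _http._tcp.my-svc.my-namespace.svc.cluster.local
# 10 10 80 my-svc.my-namespace.svc.cluster.local.
```
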
### Backwards compatibility
Previous versions of kube-dns made names of the form
`my-svc.my-namespace.cluster.local` (the 'svc' level was added later). This
is no longer supported.

### Pods

#### A Records
When enabled, pods are assigned a DNS A record in the form of `pod-ip-address.my-namespace.pod.cluster.local`.

For example, a pod with IP `1.2.3.4` in the namespace `default` with a DNS name of `cluster.local` would have the entry `1-2-3-4.default.pod.cluster.local`.

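A quick way to check such a record from inside another pod (the IP-derived name below is just the example above, not a real pod):

```sh
# Illustrative lookup of a pod A record.
nslookup 1-2-3-4.default.pod.cluster.local
```
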
#### A Records and hostname based on Pod's hostname and subdomain fields
Currently, when a pod is created, its hostname is the Pod's `metadata.name` value.

With v1.2, users can specify a Pod annotation, `pod.beta.kubernetes.io/hostname`, to specify what the Pod's hostname should be.
If the annotation is specified, the annotation value takes precedence over the Pod's name as the hostname of the pod.
For example, given a Pod with the annotation `pod.beta.kubernetes.io/hostname: my-pod-name`, the Pod will have its hostname set to "my-pod-name".

With v1.3, the PodSpec has a `hostname` field, which can be used to specify the Pod's hostname. This field value takes precedence over the
`pod.beta.kubernetes.io/hostname` annotation value.

v1.2 introduces a beta feature where the user can specify a Pod annotation, `pod.beta.kubernetes.io/subdomain`, to specify what the Pod's subdomain should be.
If the annotation is specified, the fully qualified Pod hostname will be "<hostname>.<subdomain>.<pod namespace>.svc.<cluster domain>".
For example, given a Pod with the hostname annotation set to "foo" and the subdomain annotation set to "bar", in namespace "my-namespace", the Pod's FQDN will be "foo.bar.my-namespace.svc.cluster.local".

With v1.3, the PodSpec has a `subdomain` field, which can be used to specify the Pod's subdomain. This field value takes precedence over the
`pod.beta.kubernetes.io/subdomain` annotation value.

Example:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  hostname: busybox-1
  subdomain: default
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    name: busybox
```

If there exists a headless service in the same namespace as the pod and with the same name as the subdomain, the cluster's KubeDNS server will also return an A record for the Pod's fully qualified hostname.
Given a Pod with the hostname set to "foo" and the subdomain set to "bar", and a headless Service named "bar" in the same namespace, the pod will see its own FQDN as "foo.bar.my-namespace.svc.cluster.local". DNS serves an A record at that name, pointing to the Pod's IP.

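Continuing the `busybox` example above, a minimal sketch of the matching headless Service. The selector assumes the Pod also carries a `name: busybox` label (which the example manifest does not set), and the port value is illustrative:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default        # must match the Pod's subdomain
  namespace: default
spec:
  clusterIP: None      # headless
  selector:
    name: busybox      # assumed Pod label
  ports:
  - port: 1234         # illustrative
```
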
With v1.2, the Endpoints object also has a new annotation, `endpoints.beta.kubernetes.io/hostnames-map`. Its value is the JSON representation of map[string(IP)][endpoints.HostRecord], for example: '{"10.245.1.6":{"HostName":"my-webserver"}}'.
If the Endpoints are for a headless service, then A records are created with the format <hostname>.<service name>.<pod namespace>.svc.<cluster domain>.
For the example JSON, if the endpoints are for a headless service named "bar", and one of the endpoints has IP "10.245.1.6", then an A record is created with the name "my-webserver.bar.my-namespace.svc.cluster.local" and the A record lookup returns "10.245.1.6".
This endpoints annotation generally does not need to be specified by end-users, but can be used by the internal service controller to deliver the aforementioned feature.

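A sketch of that annotation on an Endpoints object (all names and the IP are illustrative, and in practice the annotation is written by the service controller, not by hand):

```yaml
apiVersion: v1
kind: Endpoints
metadata:
  name: bar
  namespace: my-namespace
  annotations:
    endpoints.beta.kubernetes.io/hostnames-map: '{"10.245.1.6":{"HostName":"my-webserver"}}'
subsets:
- addresses:
  - ip: 10.245.1.6
  ports:
  - port: 80
```
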
With v1.3, the Endpoints object can specify the `hostname` for any endpoint, along with its IP. The hostname field takes precedence over the hostname value
that might have been specified via the `endpoints.beta.kubernetes.io/hostnames-map` annotation.

With v1.3, the following annotations are deprecated: `pod.beta.kubernetes.io/hostname`, `pod.beta.kubernetes.io/subdomain`, `endpoints.beta.kubernetes.io/hostnames-map`.

## How do I find the DNS server?
The DNS server itself runs as a Kubernetes Service. This gives it a stable IP
address. When you run the SkyDNS service, you want to assign a static IP to use for
the Service. For example, if you assign the DNS Service IP as `10.0.0.10`, you
can configure your kubelet to pass that on to each container as a DNS server.

Of course, giving services a name is just half of the problem - DNS names need a
domain also. This implementation uses a configurable local domain, which can
also be passed to containers by kubelet as a DNS search suffix.

## How do I configure it?
The easiest way to use DNS is to use a supported kubernetes cluster setup,
which should have the required logic to read some config variables and plumb
them all the way down to kubelet.

Supported environments offer the following config flags, which are used at
cluster turn-up to create the SkyDNS pods and configure the kubelets. For
example, see `cluster/gce/config-default.sh`.

```sh
ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
DNS_SERVER_IP="10.0.0.10"
DNS_DOMAIN="cluster.local"
DNS_REPLICAS=1
```

This enables DNS with a DNS Service IP of `10.0.0.10` and a local domain of
`cluster.local`, served by a single copy of SkyDNS.

If you are not using a supported cluster setup, you will have to replicate some
of this yourself. First, each kubelet needs to run with the following flags
set:

```
--cluster-dns=<DNS service ip>
--cluster-domain=<default local domain>
```

Second, you need to start the DNS server ReplicationController and Service. See
the example files ([ReplicationController](skydns-rc.yaml.in) and
[Service](skydns-svc.yaml.in)), but keep in mind that these are templated for
Salt. You will need to replace the `{{ <param> }}` blocks with your own values
for the config variables mentioned above. Other than the templating, these are
normal kubernetes objects, and can be instantiated with `kubectl create`.

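A rough sketch of doing that substitution by hand. The pillar parameter names below are assumptions; check the actual `{{ ... }}` blocks in your copy of the templates:

```sh
# Hypothetical parameter names; inspect skydns-rc.yaml.in for the real ones.
sed -e "s/{{ pillar\['dns_replicas'\] }}/1/g" \
    -e "s/{{ pillar\['dns_domain'\] }}/cluster.local/g" \
    -e "s/{{ pillar\['dns_server'\] }}/10.0.0.10/g" \
    skydns-rc.yaml.in > skydns-rc.yaml
kubectl create -f skydns-rc.yaml
```
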
## How do I test if it is working?
First deploy DNS as described above.

### 1 Create a simple Pod to use as a test environment.

Create a file named busybox.yaml with the
following contents:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
  restartPolicy: Always
```

Then create a pod using this file:

```
kubectl create -f busybox.yaml
```

### 2 Wait for this pod to go into the running state.

You can get its status with:
```
kubectl get pods busybox
```

You should see:
```
NAME      READY     STATUS    RESTARTS   AGE
busybox   1/1       Running   0          <some-time>
```

### 3 Validate DNS works
Once that pod is running, you can exec nslookup in that environment:
```
kubectl exec busybox -- nslookup kubernetes.default
```

You should see something like:
```
Server:    10.0.0.10
Address 1: 10.0.0.10

Name:      kubernetes.default
Address 1: 10.0.0.1
```

If you see that, DNS is working correctly.

## How does it work?
SkyDNS depends on etcd for what to serve, but it doesn't really need all of
what etcd offers (at least not in the way we use it). For simplicity, we run
etcd and SkyDNS together in a pod, and we do not try to link etcd instances
across replicas. A helper container called [kube2sky](kube2sky/) also runs in
the pod and acts as a bridge between Kubernetes and SkyDNS. It finds the
Kubernetes master through the `kubernetes` service (via environment
variables), pulls service info from the master, and writes that to etcd for
SkyDNS to find.

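Concretely, kube2sky writes SkyDNS-format JSON values under keys derived from the reversed DNS name. A sketch of what one Service record might look like in etcd; the trailing hash label and the exact set of JSON fields are illustrative:

```sh
etcdctl get /skydns/local/cluster/svc/default/kubernetes/6666f6e6
# {"host":"10.0.0.1","priority":10,"weight":10,"ttl":30}
```
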
## Inheriting DNS from the node
When running a pod, kubelet will prepend the cluster DNS server and search
paths to the node's own DNS settings. If the node is able to resolve DNS names
specific to the larger environment, pods should be able to, also. See "Known
issues" below for a caveat.

If you don't want this, or if you want a different DNS config for pods, you can
use the kubelet's `--resolv-conf` flag. Setting it to "" means that pods will
not inherit DNS. Setting it to a valid file path means that kubelet will use
this file instead of `/etc/resolv.conf` for DNS inheritance.

## Known issues
Kubernetes installs do not configure the nodes' resolv.conf files to use the
cluster DNS by default, because that process is inherently distro-specific.
This should probably be implemented eventually.

Linux's libc is impossibly stuck ([see this bug from
2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just
3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to
consume 1 `nameserver` record and 3 `search` records. This means that if a
local installation already uses 3 `nameserver`s or uses more than 3 `search`es,
some of those settings will be lost. As a partial workaround, the node can run
`dnsmasq`, which will provide more `nameserver` entries, but not more `search`
entries. You can also use kubelet's `--resolv-conf` flag.

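For illustration, a pod's /etc/resolv.conf under the default config above might start like this, showing the one nameserver and three search records Kubernetes consumes (the node's own entries, if any, follow):

```sh
# Illustrative pod resolv.conf, assuming namespace "default" and the
# DNS_SERVER_IP/DNS_DOMAIN values shown earlier.
nameserver 10.0.0.10
search default.svc.cluster.local svc.cluster.local cluster.local
```
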
## Making changes
Please observe the release process for making changes to the `kube2sky`
image that is documented in [RELEASES.md](kube2sky/RELEASES.md). Any significant changes
to the YAML template for `kube-dns` should result in a bump of the version number
for the `kube-dns` replication controller as well as the `version` label. This
will permit a rolling update of `kube-dns`.

[]()
cluster/addons/dns/kube2sky/.gitignore (vendored)
@@ -1 +0,0 @@
kube2sky
@@ -1,40 +0,0 @@
## Version 1.15 (Apr 7 2016 Lucas Käldström <lucas.kaldstrom@hotmail.co.uk>)
- No code changes since 1.14
- Built in a dockerized env instead of using go on host to make it more reliable. `1.15` was built with `go1.6`
- Made it possible to compile this image for multiple architectures, so the main naming of this image is
  now `gcr.io/google_containers/kube2sky-arch:tag`. `arch` may be one of `amd64`, `arm`, `arm64` or `ppc64le`.
  `gcr.io/google_containers/kube2sky:tag` is still pushed for backward compatibility

## Version 1.14 (Mar 4 2016 Abhishek Shah <abshah@google.com>)
- If an Endpoints object has the hostnames-map annotation (endpoints.net.beta.kubernetes.io/hostnames-map),
  the hostnames supplied via the annotation will be used to generate A records for the headless Service.

## Version 1.13 (Mar 1 2016 Prashanth.B <beeps@google.com>)
- Synchronously wait for the Kubernetes service at startup.
- Add a SIGTERM/SIGINT handler.

## Version 1.12 (Dec 15 2015 Abhishek Shah <abshah@google.com>)
- Gave pods their own cache store. (034ecbd)
- Allow pods to have dns. (717660a)

## Version 1.10 (Jun 19 2015 Tim Hockin <thockin@google.com>)
- Fall back on service account tokens if no other auth is specified.

## Version 1.9 (May 28 2015 Abhishek Shah <abshah@google.com>)
- Add SRV support.

## Version 1.8 (May 28 2015 Vishnu Kannan <vishnuk@google.com>)
- Avoid making connections to the master insecure by default
- Let users override the master URL in kubeconfig via a flag

## Version 1.7 (May 25 2015 Vishnu Kannan <vishnuk@google.com>)
- Adding support for headless services. All pods backing a headless service are
  addressable via DNS RR.

## Version 1.4 (Fri May 15 2015 Tim Hockin <thockin@google.com>)
- First Changelog entry
@@ -1,19 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

FROM BASEIMAGE
MAINTAINER Tim Hockin <thockin@google.com>
ADD kube2sky /
ADD kube2sky.go /
ENTRYPOINT ["/kube2sky"]
@@ -1,86 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Makefile for the Docker image gcr.io/google_containers/kube2sky
# MAINTAINER: Tim Hockin <thockin@google.com>
# If you update this image please bump the tag value before pushing.
#
# Usage:
#   [ARCH=amd64] [TAG=1.14] [REGISTRY=gcr.io/google_containers] [BASEIMAGE=busybox] make (build|push)

# Default registry, arch and tag. This can be overwritten by arguments to make
ARCH?=amd64
TAG?=1.15
REGISTRY?=gcr.io/google_containers
GOLANG_VERSION=1.6
GOARM=6
KUBE_ROOT=$(shell pwd)/../../../..
TEMP_DIR:=$(shell mktemp -d)

ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
endif
ifeq ($(ARCH),arm)
BASEIMAGE?=armel/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif

all: container

kube2sky: kube2sky.go
	# Only build kube2sky. This requires go and godep in PATH
	CGO_ENABLED=0 GOARCH=$(ARCH) GOARM=$(GOARM) go build -a -installsuffix cgo --ldflags '-w' ./kube2sky.go

container:
	# Copy the content in this dir to the temp dir
	cp ./* $(TEMP_DIR)

	# Build the binary dockerized. Mount the whole Kubernetes source first, and then the temporary dir to kube2sky source.
	# It runs "make kube2sky" inside the docker container, and the binary is put in the temporary dir.
	docker run -it \
	    -v $(KUBE_ROOT):/go/src/k8s.io/kubernetes \
	    -v $(TEMP_DIR):/go/src/k8s.io/kubernetes/cluster/addons/dns/kube2sky \
	    golang:$(GOLANG_VERSION) /bin/bash -c \
	    "go get github.com/tools/godep \
	    && make -C /go/src/k8s.io/kubernetes/cluster/addons/dns/kube2sky kube2sky ARCH=$(ARCH)"

	# Replace BASEIMAGE with the real base image
	cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile

	# And build the image
	docker build -t $(REGISTRY)/kube2sky-$(ARCH):$(TAG) $(TEMP_DIR)

push: container
	gcloud docker push $(REGISTRY)/kube2sky-$(ARCH):$(TAG)
ifeq ($(ARCH),amd64)
	# Backward compatibility. TODO: deprecate this image tag
	docker tag -f $(REGISTRY)/kube2sky-$(ARCH):$(TAG) $(REGISTRY)/kube2sky:$(TAG)
	gcloud docker push $(REGISTRY)/kube2sky:$(TAG)
endif

clean:
	rm -f kube2sky

test: clean
	go test -v --vmodule=*=4

.PHONY: all kube2sky container push clean test
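For example, a cross-build and push for arm64 under these defaults might look like the following (the registry value is a placeholder, not a real repository):

```sh
# Pushes gcr.io/my-registry/kube2sky-arm64:1.15; REGISTRY here is hypothetical.
make push ARCH=arm64 TAG=1.15 REGISTRY=gcr.io/my-registry
```
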
@@ -1,37 +0,0 @@
# kube2sky

A bridge between Kubernetes and SkyDNS. This will watch the kubernetes API for
changes in Services and then publish those changes to SkyDNS through etcd.

For now, this is expected to be run in a pod alongside the etcd and SkyDNS
containers.

## Namespaces

Kubernetes namespaces become another level of the DNS hierarchy. See the
description of `--domain` below.

## Flags

`--domain`: Set the domain under which all DNS names will be hosted. For
example, if this is set to `kubernetes.io`, then a service named "nifty" in the
"default" namespace would be exposed through DNS as
"nifty.default.svc.kubernetes.io".

`--v`: Set logging level

`--etcd-mutation-timeout`: For how long the application will keep retrying etcd
mutation (insertion or removal of a dns entry) before giving up and crashing.

`--etcd-server`: The etcd server that is being used by skydns.

`--kube-master-url`: URL of kubernetes master. Required if `--kubecfg-file` is not set.

`--kubecfg-file`: Path to kubecfg file that contains the master URL and tokens to authenticate with the master.

`--log-dir`: If non empty, write log files in this directory

`--logtostderr`: Logs to stderr instead of files

[]()
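A sketch of a typical invocation inside the kube-dns pod (all flag values are examples, not required defaults):

```sh
kube2sky --domain=cluster.local \
  --etcd-server=http://127.0.0.1:4001 \
  --healthz-port=8081 \
  --logtostderr
```
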
@@ -1,43 +0,0 @@
# Cutting a release

Until we have a proper setup for building this automatically with every binary
release, here are the steps for making a release. We make releases when they
are ready, not on every PR.

1. Build the container for testing: `make container PREFIX=<your-docker-hub> TAG=rc`

2. Manually deploy this to your own cluster by updating the replication
   controller and deleting the running pod(s).

3. Verify it works.

4. Update the TAG version in `Makefile` and update the `Changelog`. Update the
   `*.yaml.in` to point to the new tag. Send a PR but mark it as "DO NOT MERGE".

5. Once the PR is approved, build and push the container for real for all architectures:

   ```console
   # Build for linux/amd64 (default)
   $ make push ARCH=amd64
   # ---> gcr.io/google_containers/kube2sky-amd64:TAG
   # ---> gcr.io/google_containers/kube2sky:TAG (image with backwards-compatible naming)

   $ make push ARCH=arm
   # ---> gcr.io/google_containers/kube2sky-arm:TAG

   $ make push ARCH=arm64
   # ---> gcr.io/google_containers/kube2sky-arm64:TAG

   $ make push ARCH=ppc64le
   # ---> gcr.io/google_containers/kube2sky-ppc64le:TAG
   ```

6. Manually deploy this to your own cluster by updating the replication
   controller and deleting the running pod(s).

7. Verify it works.

8. Allow the PR to be merged.

[]()
@@ -1,697 +0,0 @@
/*
Copyright 2014 The Kubernetes Authors All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

// kube2sky is a bridge between Kubernetes and SkyDNS. It watches the
// Kubernetes master for changes in Services and manifests them into etcd for
// SkyDNS to serve as DNS records.
package main

import (
	"encoding/json"
	"fmt"
	"hash/fnv"
	"net/http"
	"net/url"
	"os"
	"os/signal"
	"strings"
	"sync"
	"syscall"
	"time"

	etcd "github.com/coreos/go-etcd/etcd"
	"github.com/golang/glog"
	skymsg "github.com/skynetservices/skydns/msg"
	flag "github.com/spf13/pflag"
	kapi "k8s.io/kubernetes/pkg/api"
	"k8s.io/kubernetes/pkg/api/endpoints"
	"k8s.io/kubernetes/pkg/api/unversioned"
	kcache "k8s.io/kubernetes/pkg/client/cache"
	"k8s.io/kubernetes/pkg/client/restclient"
	kclient "k8s.io/kubernetes/pkg/client/unversioned"
	kclientcmd "k8s.io/kubernetes/pkg/client/unversioned/clientcmd"
	kframework "k8s.io/kubernetes/pkg/controller/framework"
	kselector "k8s.io/kubernetes/pkg/fields"
	etcdutil "k8s.io/kubernetes/pkg/storage/etcd/util"
	utilflag "k8s.io/kubernetes/pkg/util/flag"
	"k8s.io/kubernetes/pkg/util/validation"
	"k8s.io/kubernetes/pkg/util/wait"
)

// The name of the "master" Kubernetes Service.
const kubernetesSvcName = "kubernetes"

var (
	argDomain              = flag.String("domain", "cluster.local", "domain under which to create names")
	argEtcdMutationTimeout = flag.Duration("etcd-mutation-timeout", 10*time.Second, "crash after retrying etcd mutation for a specified duration")
	argEtcdServer          = flag.String("etcd-server", "http://127.0.0.1:4001", "URL to etcd server")
	argKubecfgFile         = flag.String("kubecfg-file", "", "Location of kubecfg file for access to kubernetes master service; --kube-master-url overrides the URL part of this; if neither this nor --kube-master-url are provided, defaults to service account tokens")
	argKubeMasterURL       = flag.String("kube-master-url", "", "URL to reach kubernetes master. Env variables in this flag will be expanded.")
	healthzPort            = flag.Int("healthz-port", 8081, "port on which to serve a kube2sky HTTP readiness probe.")
)

const (
	// Maximum number of attempts to connect to etcd server.
	maxConnectAttempts = 12
	// Resync period for the kube controller loop.
	resyncPeriod = 30 * time.Minute
	// A subdomain added to the user specified domain for all services.
	serviceSubdomain = "svc"
	// A subdomain added to the user specified domain for all pods.
	podSubdomain = "pod"
)
type etcdClient interface {
	Set(path, value string, ttl uint64) (*etcd.Response, error)
	RawGet(key string, sort, recursive bool) (*etcd.RawResponse, error)
	Delete(path string, recursive bool) (*etcd.Response, error)
}

type nameNamespace struct {
	name      string
	namespace string
}

type kube2sky struct {
	// Etcd client.
	etcdClient etcdClient
	// DNS domain name.
	domain string
	// Etcd mutation timeout.
	etcdMutationTimeout time.Duration
	// A cache that contains all the endpoints in the system.
	endpointsStore kcache.Store
	// A cache that contains all the services in the system.
	servicesStore kcache.Store
	// A cache that contains all the pods in the system.
	podsStore kcache.Store
	// Lock for controlling access to headless services.
	mlock sync.Mutex
}

// Removes 'subdomain' from etcd.
func (ks *kube2sky) removeDNS(subdomain string) error {
	glog.V(2).Infof("Removing %s from DNS", subdomain)
	resp, err := ks.etcdClient.RawGet(skymsg.Path(subdomain), false, true)
	if err != nil {
		return err
	}
	if resp.StatusCode == http.StatusNotFound {
		glog.V(2).Infof("Subdomain %q does not exist in etcd", subdomain)
		return nil
	}
	_, err = ks.etcdClient.Delete(skymsg.Path(subdomain), true)
	return err
}

func (ks *kube2sky) writeSkyRecord(subdomain string, data string) error {
	// Set with no TTL, and hope that kubernetes events are accurate.
	_, err := ks.etcdClient.Set(skymsg.Path(subdomain), data, uint64(0))
	return err
}

// Generates skydns records for a headless service.
func (ks *kube2sky) newHeadlessService(subdomain string, service *kapi.Service) error {
	// Create an A record for every pod in the service.
	// This record must be periodically updated.
	// Format is as follows:
	// For a service x, with pods a and b create DNS records,
	// a.x.ns.domain. and, b.x.ns.domain.
	ks.mlock.Lock()
	defer ks.mlock.Unlock()
	key, err := kcache.MetaNamespaceKeyFunc(service)
	if err != nil {
		return err
	}
	e, exists, err := ks.endpointsStore.GetByKey(key)
	if err != nil {
		return fmt.Errorf("failed to get endpoints object from endpoints store - %v", err)
	}
	if !exists {
		glog.V(1).Infof("Could not find endpoints for service %q in namespace %q. DNS records will be created once endpoints show up.", service.Name, service.Namespace)
		return nil
	}
	if e, ok := e.(*kapi.Endpoints); ok {
		return ks.generateRecordsForHeadlessService(subdomain, e, service)
	}
	return nil
}

func getSkyMsg(ip string, port int) *skymsg.Service {
	return &skymsg.Service{
		Host:     ip,
		Port:     port,
		Priority: 10,
		Weight:   10,
		Ttl:      30,
	}
}

func (ks *kube2sky) generateRecordsForHeadlessService(subdomain string, e *kapi.Endpoints, svc *kapi.Service) error {
	// TODO: remove this after v1.4 is released and the old annotations are EOL
	podHostnames, err := getPodHostnamesFromAnnotation(e.Annotations)
	if err != nil {
		return err
	}
	for idx := range e.Subsets {
		for subIdx := range e.Subsets[idx].Addresses {
			address := &e.Subsets[idx].Addresses[subIdx]
			endpointIP := address.IP
			b, err := json.Marshal(getSkyMsg(endpointIP, 0))
			if err != nil {
				return err
			}
			recordValue := string(b)
			var recordLabel string
			if hostLabel, exists := getHostname(address, podHostnames); exists {
				recordLabel = hostLabel
			} else {
				recordLabel = getHash(recordValue)
			}

			recordKey := buildDNSNameString(subdomain, recordLabel)

			glog.V(2).Infof("Setting DNS record: %v -> %q\n", recordKey, recordValue)
			if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
				return err
			}
			for portIdx := range e.Subsets[idx].Ports {
				endpointPort := &e.Subsets[idx].Ports[portIdx]
				portSegment := buildPortSegmentString(endpointPort.Name, endpointPort.Protocol)
				if portSegment != "" {
					err := ks.generateSRVRecord(subdomain, portSegment, recordLabel, recordKey, int(endpointPort.Port))
					if err != nil {
						return err
					}
				}
			}
		}
	}

	return nil
}

func getHostname(address *kapi.EndpointAddress, podHostnames map[string]endpoints.HostRecord) (string, bool) {
	if len(address.Hostname) > 0 {
		return address.Hostname, true
	}
	if hostRecord, exists := podHostnames[address.IP]; exists && len(validation.IsDNS1123Label(hostRecord.HostName)) == 0 {
		return hostRecord.HostName, true
	}
	return "", false
}

func getPodHostnamesFromAnnotation(annotations map[string]string) (map[string]endpoints.HostRecord, error) {
	hostnames := map[string]endpoints.HostRecord{}

	if annotations != nil {
		if serializedHostnames, exists := annotations[endpoints.PodHostnamesAnnotation]; exists && len(serializedHostnames) > 0 {
			err := json.Unmarshal([]byte(serializedHostnames), &hostnames)
			if err != nil {
				return nil, err
			}
		}
	}
	return hostnames, nil
}

func (ks *kube2sky) getServiceFromEndpoints(e *kapi.Endpoints) (*kapi.Service, error) {
	key, err := kcache.MetaNamespaceKeyFunc(e)
	if err != nil {
		return nil, err
	}
	obj, exists, err := ks.servicesStore.GetByKey(key)
	if err != nil {
		return nil, fmt.Errorf("failed to get service object from services store - %v", err)
	}
	if !exists {
		glog.V(1).Infof("could not find service for endpoint %q in namespace %q", e.Name, e.Namespace)
		return nil, nil
	}
	if svc, ok := obj.(*kapi.Service); ok {
		return svc, nil
	}
	return nil, fmt.Errorf("got a non service object in services store %v", obj)
}

func (ks *kube2sky) addDNSUsingEndpoints(subdomain string, e *kapi.Endpoints) error {
	ks.mlock.Lock()
	defer ks.mlock.Unlock()
	svc, err := ks.getServiceFromEndpoints(e)
	if err != nil {
		return err
	}
	if svc == nil || kapi.IsServiceIPSet(svc) {
		// No headless service found corresponding to endpoints object.
		return nil
	}
	// Remove existing DNS entry.
	if err := ks.removeDNS(subdomain); err != nil {
		return err
	}
	return ks.generateRecordsForHeadlessService(subdomain, e, svc)
}

func (ks *kube2sky) handleEndpointAdd(obj interface{}) {
	if e, ok := obj.(*kapi.Endpoints); ok {
		name := buildDNSNameString(ks.domain, serviceSubdomain, e.Namespace, e.Name)
		ks.mutateEtcdOrDie(func() error { return ks.addDNSUsingEndpoints(name, e) })
	}
}

func (ks *kube2sky) handlePodCreate(obj interface{}) {
	if e, ok := obj.(*kapi.Pod); ok {
		// If the pod ip is not yet available, do not attempt to create.
		if e.Status.PodIP != "" {
			name := buildDNSNameString(ks.domain, podSubdomain, e.Namespace, santizeIP(e.Status.PodIP))
			ks.mutateEtcdOrDie(func() error { return ks.generateRecordsForPod(name, e) })
		}
	}
}

func (ks *kube2sky) handlePodUpdate(old interface{}, new interface{}) {
	oldPod, okOld := old.(*kapi.Pod)
	newPod, okNew := new.(*kapi.Pod)

	// Validate that the objects are good
	if okOld && okNew {
		if oldPod.Status.PodIP != newPod.Status.PodIP {
			ks.handlePodDelete(oldPod)
			ks.handlePodCreate(newPod)
		}
	} else if okNew {
		ks.handlePodCreate(newPod)
	} else if okOld {
		ks.handlePodDelete(oldPod)
	}
}

func (ks *kube2sky) handlePodDelete(obj interface{}) {
	if e, ok := obj.(*kapi.Pod); ok {
		if e.Status.PodIP != "" {
			name := buildDNSNameString(ks.domain, podSubdomain, e.Namespace, santizeIP(e.Status.PodIP))
			ks.mutateEtcdOrDie(func() error { return ks.removeDNS(name) })
		}
	}
}

func (ks *kube2sky) generateRecordsForPod(subdomain string, service *kapi.Pod) error {
	b, err := json.Marshal(getSkyMsg(service.Status.PodIP, 0))
	if err != nil {
		return err
	}
	recordValue := string(b)
	recordLabel := getHash(recordValue)
	recordKey := buildDNSNameString(subdomain, recordLabel)

	glog.V(2).Infof("Setting DNS record: %v -> %q, with recordKey: %v\n", subdomain, recordValue, recordKey)
	if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
		return err
	}

	return nil
}

func (ks *kube2sky) generateRecordsForPortalService(subdomain string, service *kapi.Service) error {
	b, err := json.Marshal(getSkyMsg(service.Spec.ClusterIP, 0))
	if err != nil {
		return err
	}
	recordValue := string(b)
	recordLabel := getHash(recordValue)
	recordKey := buildDNSNameString(subdomain, recordLabel)

	glog.V(2).Infof("Setting DNS record: %v -> %q, with recordKey: %v\n", subdomain, recordValue, recordKey)
	if err := ks.writeSkyRecord(recordKey, recordValue); err != nil {
		return err
	}
	// Generate SRV Records
	for i := range service.Spec.Ports {
		port := &service.Spec.Ports[i]
		portSegment := buildPortSegmentString(port.Name, port.Protocol)
		if portSegment != "" {
			err = ks.generateSRVRecord(subdomain, portSegment, recordLabel, subdomain, int(port.Port))
			if err != nil {
				return err
			}
		}
	}
	return nil
}

func santizeIP(ip string) string {
	return strings.Replace(ip, ".", "-", -1)
}

func buildPortSegmentString(portName string, portProtocol kapi.Protocol) string {
	if portName == "" {
		// we don't create a random name
		return ""
	}

	if portProtocol == "" {
		glog.Errorf("Port Protocol not set. port segment string cannot be created.")
		return ""
	}

	return fmt.Sprintf("_%s._%s", portName, strings.ToLower(string(portProtocol)))
}

func (ks *kube2sky) generateSRVRecord(subdomain, portSegment, recordName, cName string, portNumber int) error {
	recordKey := buildDNSNameString(subdomain, portSegment, recordName)
	srv_rec, err := json.Marshal(getSkyMsg(cName, portNumber))
	if err != nil {
		return err
	}
	if err := ks.writeSkyRecord(recordKey, string(srv_rec)); err != nil {
		return err
	}
	return nil
}

func (ks *kube2sky) addDNS(subdomain string, service *kapi.Service) error {
	// if ClusterIP is not set, a DNS entry should not be created
	if !kapi.IsServiceIPSet(service) {
		return ks.newHeadlessService(subdomain, service)
	}
	if len(service.Spec.Ports) == 0 {
		glog.Infof("Unexpected service with no ports, this should not have happened: %v", service)
	}
	return ks.generateRecordsForPortalService(subdomain, service)
}

// Implements retry logic for arbitrary mutator. Crashes after retrying for
// etcd-mutation-timeout.
func (ks *kube2sky) mutateEtcdOrDie(mutator func() error) {
	timeout := time.After(ks.etcdMutationTimeout)
	for {
		select {
		case <-timeout:
			glog.Fatalf("Failed to mutate etcd for %v using mutator: %v", ks.etcdMutationTimeout, mutator())
		default:
			if err := mutator(); err != nil {
				delay := 50 * time.Millisecond
				glog.V(1).Infof("Failed to mutate etcd using mutator: %v due to: %v. Will retry in: %v", mutator, err, delay)
				time.Sleep(delay)
			} else {
				return
			}
		}
	}
}
func buildDNSNameString(labels ...string) string {
	var res string
	for _, label := range labels {
		if res == "" {
			res = label
		} else {
			res = fmt.Sprintf("%s.%s", label, res)
		}
	}
	return res
}

// Returns a cache.ListWatch that gets all changes to services.
func createServiceLW(kubeClient *kclient.Client) *kcache.ListWatch {
	return kcache.NewListWatchFromClient(kubeClient, "services", kapi.NamespaceAll, kselector.Everything())
}

// Returns a cache.ListWatch that gets all changes to endpoints.
func createEndpointsLW(kubeClient *kclient.Client) *kcache.ListWatch {
	return kcache.NewListWatchFromClient(kubeClient, "endpoints", kapi.NamespaceAll, kselector.Everything())
}

// Returns a cache.ListWatch that gets all changes to pods.
func createEndpointsPodLW(kubeClient *kclient.Client) *kcache.ListWatch {
	return kcache.NewListWatchFromClient(kubeClient, "pods", kapi.NamespaceAll, kselector.Everything())
}

func (ks *kube2sky) newService(obj interface{}) {
	if s, ok := obj.(*kapi.Service); ok {
		name := buildDNSNameString(ks.domain, serviceSubdomain, s.Namespace, s.Name)
		ks.mutateEtcdOrDie(func() error { return ks.addDNS(name, s) })
	}
}

func (ks *kube2sky) removeService(obj interface{}) {
	if s, ok := obj.(*kapi.Service); ok {
		name := buildDNSNameString(ks.domain, serviceSubdomain, s.Namespace, s.Name)
		ks.mutateEtcdOrDie(func() error { return ks.removeDNS(name) })
	}
}

func (ks *kube2sky) updateService(oldObj, newObj interface{}) {
	// TODO: We shouldn't leave etcd in a state where it doesn't have a
	// record for a Service. This removal is needed to completely clean
	// the directory of a Service, which has SRV records and A records
	// that are hashed according to oldObj. Unfortunately, this is the
	// easiest way to purge the directory.
	ks.removeService(oldObj)
	ks.newService(newObj)
}

func newEtcdClient(etcdServer string) (*etcd.Client, error) {
	var (
		client *etcd.Client
		err    error
	)
	for attempt := 1; attempt <= maxConnectAttempts; attempt++ {
		if _, err = etcdutil.GetEtcdVersion(etcdServer); err == nil {
			break
		}
		if attempt == maxConnectAttempts {
			break
		}
		glog.Infof("[Attempt: %d] Attempting access to etcd after 5 second sleep", attempt)
		time.Sleep(5 * time.Second)
	}
	if err != nil {
		return nil, fmt.Errorf("failed to connect to etcd server: %v, error: %v", etcdServer, err)
	}
	glog.Infof("Etcd server found: %v", etcdServer)

	// loop until we have > 0 machines && machines[0] != ""
	poll, timeout := 1*time.Second, 10*time.Second
	if err := wait.Poll(poll, timeout, func() (bool, error) {
		if client = etcd.NewClient([]string{etcdServer}); client == nil {
			return false, fmt.Errorf("etcd.NewClient returned nil")
		}
		client.SyncCluster()
		machines := client.GetCluster()
		if len(machines) == 0 || len(machines[0]) == 0 {
			return false, nil
		}
		return true, nil
	}); err != nil {
		return nil, fmt.Errorf("Timed out after %s waiting for at least 1 synchronized etcd server in the cluster. Error: %v", timeout, err)
	}
	return client, nil
}

func expandKubeMasterURL() (string, error) {
	parsedURL, err := url.Parse(os.ExpandEnv(*argKubeMasterURL))
	if err != nil {
		return "", fmt.Errorf("failed to parse --kube-master-url %s - %v", *argKubeMasterURL, err)
	}
	if parsedURL.Scheme == "" || parsedURL.Host == "" || parsedURL.Host == ":" {
		return "", fmt.Errorf("invalid --kube-master-url specified %s", *argKubeMasterURL)
	}
	return parsedURL.String(), nil
}

// TODO: evaluate using pkg/client/clientcmd
func newKubeClient() (*kclient.Client, error) {
	var (
		config    *restclient.Config
		err       error
		masterURL string
	)
	// If the user specified --kube-master-url, expand env vars and verify it.
	if *argKubeMasterURL != "" {
		masterURL, err = expandKubeMasterURL()
		if err != nil {
			return nil, err
		}
	}

	if masterURL != "" && *argKubecfgFile == "" {
		// Only --kube-master-url was provided.
		config = &restclient.Config{
			Host:          masterURL,
			ContentConfig: restclient.ContentConfig{GroupVersion: &unversioned.GroupVersion{Version: "v1"}},
		}
	} else {
		// We either have:
		// 1) --kube-master-url and --kubecfg-file
		// 2) just --kubecfg-file
		// 3) neither flag
		// In any case, the logic is the same. If (3), this will automatically
		// fall back on the service account token.
		overrides := &kclientcmd.ConfigOverrides{}
		overrides.ClusterInfo.Server = masterURL                                     // might be "", but that is OK
		rules := &kclientcmd.ClientConfigLoadingRules{ExplicitPath: *argKubecfgFile} // might be "", but that is OK
		if config, err = kclientcmd.NewNonInteractiveDeferredLoadingClientConfig(rules, overrides).ClientConfig(); err != nil {
			return nil, err
		}
	}

	glog.Infof("Using %s for kubernetes master", config.Host)
	glog.Infof("Using kubernetes API %v", config.GroupVersion)
	return kclient.New(config)
}

func watchForServices(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
	serviceStore, serviceController := kframework.NewInformer(
		createServiceLW(kubeClient),
		&kapi.Service{},
		resyncPeriod,
		kframework.ResourceEventHandlerFuncs{
			AddFunc:    ks.newService,
			DeleteFunc: ks.removeService,
			UpdateFunc: ks.updateService,
		},
	)
	go serviceController.Run(wait.NeverStop)
	return serviceStore
}

func watchEndpoints(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
	eStore, eController := kframework.NewInformer(
		createEndpointsLW(kubeClient),
		&kapi.Endpoints{},
		resyncPeriod,
		kframework.ResourceEventHandlerFuncs{
			AddFunc: ks.handleEndpointAdd,
			UpdateFunc: func(oldObj, newObj interface{}) {
				// TODO: Avoid unwanted updates.
				ks.handleEndpointAdd(newObj)
			},
		},
	)

	go eController.Run(wait.NeverStop)
	return eStore
}

func watchPods(kubeClient *kclient.Client, ks *kube2sky) kcache.Store {
	eStore, eController := kframework.NewInformer(
		createEndpointsPodLW(kubeClient),
		&kapi.Pod{},
		resyncPeriod,
		kframework.ResourceEventHandlerFuncs{
			AddFunc: ks.handlePodCreate,
			UpdateFunc: func(oldObj, newObj interface{}) {
				ks.handlePodUpdate(oldObj, newObj)
			},
			DeleteFunc: ks.handlePodDelete,
		},
	)

	go eController.Run(wait.NeverStop)
	return eStore
}

func getHash(text string) string {
	h := fnv.New32a()
	h.Write([]byte(text))
	return fmt.Sprintf("%x", h.Sum32())
}

// waitForKubernetesService waits for the "kubernetes" master service.
// Since the health probe on the kube2sky container is essentially an nslookup
// of this service, we cannot serve any DNS records if it doesn't show up.
// Once the Service is found, we start replying on this container's readiness
// probe endpoint.
func waitForKubernetesService(client *kclient.Client) (svc *kapi.Service) {
	name := fmt.Sprintf("%v/%v", kapi.NamespaceDefault, kubernetesSvcName)
	glog.Infof("Waiting for service: %v", name)
	var err error
	servicePollInterval := 1 * time.Second
	for {
		svc, err = client.Services(kapi.NamespaceDefault).Get(kubernetesSvcName)
		if err != nil || svc == nil {
			glog.Infof("Ignoring error while waiting for service %v: %v. Sleeping %v before retrying.", name, err, servicePollInterval)
			time.Sleep(servicePollInterval)
			continue
		}
		break
	}
	return
}

// setupSignalHandlers runs a goroutine that waits on SIGINT or SIGTERM and logs it
// before exiting.
func setupSignalHandlers() {
	// Buffered so a signal delivered before the goroutine is scheduled is not lost.
	sigChan := make(chan os.Signal, 1)
	signal.Notify(sigChan, syscall.SIGINT, syscall.SIGTERM)
	// This program should always exit gracefully logging that it received
	// either a SIGINT or SIGTERM. Since kube2sky is run in a container
	// without a liveness probe as part of the kube-dns pod, it shouldn't
	// restart unless the pod is deleted. If it restarts without logging
	// anything it means something is seriously wrong.
	// TODO: Remove once #22290 is fixed.
	go func() {
		glog.Fatalf("Received signal %s", <-sigChan)
	}()
}

// setupHealthzHandlers sets up a readiness and liveness endpoint for kube2sky.
func setupHealthzHandlers(ks *kube2sky) {
	http.HandleFunc("/readiness", func(w http.ResponseWriter, req *http.Request) {
		fmt.Fprintf(w, "ok\n")
	})
}

func main() {
	flag.CommandLine.SetNormalizeFunc(utilflag.WarnWordSepNormalizeFunc)
	flag.Parse()
	var err error
	setupSignalHandlers()
	// TODO: Validate input flags.
	domain := *argDomain
	if !strings.HasSuffix(domain, ".") {
		domain = fmt.Sprintf("%s.", domain)
	}
	ks := kube2sky{
		domain:              domain,
		etcdMutationTimeout: *argEtcdMutationTimeout,
	}
	if ks.etcdClient, err = newEtcdClient(*argEtcdServer); err != nil {
		glog.Fatalf("Failed to create etcd client - %v", err)
	}

	kubeClient, err := newKubeClient()
	if err != nil {
		glog.Fatalf("Failed to create a kubernetes client: %v", err)
	}
	// Wait synchronously for the Kubernetes service and add a DNS record for it.
	ks.newService(waitForKubernetesService(kubeClient))
	glog.Infof("Successfully added DNS record for Kubernetes service.")

	ks.endpointsStore = watchEndpoints(kubeClient, &ks)
	ks.servicesStore = watchForServices(kubeClient, &ks)
	ks.podsStore = watchPods(kubeClient, &ks)

	// We declare kube2sky ready when:
	// 1. It has retrieved the Kubernetes master service from the apiserver. If this
	//    doesn't happen skydns will fail its liveness probe assuming that it can't
	//    perform any cluster local DNS lookups.
	// 2. It has set up the 3 watches above.
	// Once ready this container never flips to not-ready.
	setupHealthzHandlers(&ks)
	glog.Fatal(http.ListenAndServe(fmt.Sprintf(":%d", *healthzPort), nil))
}
@@ -1,467 +0,0 @@
|
||||
/*
|
||||
Copyright 2015 The Kubernetes Authors All rights reserved.
|
||||
|
||||
Licensed under the Apache License, Version 2.0 (the "License");
|
||||
you may not use this file except in compliance with the License.
|
||||
You may obtain a copy of the License at
|
||||
|
||||
http://www.apache.org/licenses/LICENSE-2.0
|
||||
|
||||
Unless required by applicable law or agreed to in writing, software
|
||||
distributed under the License is distributed on an "AS IS" BASIS,
|
||||
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
*/
|
||||
|
||||
package main
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"net/http"
|
||||
"path"
|
||||
"strings"
|
||||
"testing"
|
||||
"time"
|
||||
|
||||
"github.com/coreos/go-etcd/etcd"
|
||||
"github.com/stretchr/testify/assert"
|
||||
"github.com/stretchr/testify/require"
|
||||
kapi "k8s.io/kubernetes/pkg/api"
|
||||
"k8s.io/kubernetes/pkg/client/cache"
|
||||
)
|
||||
|
||||
type fakeEtcdClient struct {
|
||||
// TODO: Convert this to real fs to better simulate etcd behavior.
|
||||
writes map[string]string
|
||||
}
|
||||
|
||||
func (ec *fakeEtcdClient) Set(key, value string, ttl uint64) (*etcd.Response, error) {
|
||||
ec.writes[key] = value
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (ec *fakeEtcdClient) Delete(key string, recursive bool) (*etcd.Response, error) {
|
||||
for p := range ec.writes {
|
||||
if (recursive && strings.HasPrefix(p, key)) || (!recursive && p == key) {
|
||||
delete(ec.writes, p)
|
||||
}
|
||||
}
|
||||
return nil, nil
|
||||
}
|
||||
|
||||
func (ec *fakeEtcdClient) RawGet(key string, sort, recursive bool) (*etcd.RawResponse, error) {
|
||||
values := ec.Get(key)
|
||||
if len(values) == 0 {
|
||||
return &etcd.RawResponse{StatusCode: http.StatusNotFound}, nil
|
||||
}
|
||||
return &etcd.RawResponse{StatusCode: http.StatusOK}, nil
|
||||
}
|
||||
|
||||
func (ec *fakeEtcdClient) Get(key string) []string {
|
||||
values := make([]string, 0, 10)
|
||||
minSeparatorCount := 0
|
||||
key = strings.ToLower(key)
|
||||
for path := range ec.writes {
|
||||
if strings.HasPrefix(path, key) {
|
||||
separatorCount := strings.Count(path, "/")
|
||||
if minSeparatorCount == 0 || separatorCount < minSeparatorCount {
|
||||
minSeparatorCount = separatorCount
|
||||
values = values[:0]
|
||||
values = append(values, ec.writes[path])
|
||||
} else if separatorCount == minSeparatorCount {
|
||||
values = append(values, ec.writes[path])
|
||||
}
|
||||
}
|
||||
}
|
||||
return values
|
||||
}
|
||||
|
||||
const (
|
||||
testDomain = "cluster.local."
|
||||
basePath = "/skydns/local/cluster"
|
||||
serviceSubDomain = "svc"
|
||||
podSubDomain = "pod"
|
||||
)
|
||||
|
||||
func newKube2Sky(ec etcdClient) *kube2sky {
|
||||
return &kube2sky{
|
||||
etcdClient: ec,
|
||||
domain: testDomain,
|
||||
etcdMutationTimeout: time.Second,
|
||||
endpointsStore: cache.NewStore(cache.MetaNamespaceKeyFunc),
|
||||
servicesStore: cache.NewStore(cache.MetaNamespaceKeyFunc),
|
||||
}
|
||||
}
|
||||
|
||||
func getEtcdPathForA(name, namespace, subDomain string) string {
|
||||
return path.Join(basePath, subDomain, namespace, name)
|
||||
}
|
||||
|
||||
func getEtcdPathForSRV(portName, protocol, name, namespace string) string {
|
||||
return path.Join(basePath, serviceSubDomain, namespace, name, fmt.Sprintf("_%s", strings.ToLower(protocol)), fmt.Sprintf("_%s", strings.ToLower(portName)))
|
||||
}
|
||||
|
||||
type hostPort struct {
|
||||
Host string `json:"host"`
|
||||
Port int `json:"port"`
|
||||
}
|
||||
|
||||
func getHostPort(service *kapi.Service) *hostPort {
|
||||
return &hostPort{
|
||||
Host: service.Spec.ClusterIP,
|
||||
Port: int(service.Spec.Ports[0].Port),
|
||||
}
|
||||
}
|
||||
|
||||
func getHostPortFromString(data string) (*hostPort, error) {
|
||||
var res hostPort
|
||||
err := json.Unmarshal([]byte(data), &res)
|
||||
return &res, err
|
||||
}
|
||||
|
||||
func assertDnsServiceEntryInEtcd(t *testing.T, ec *fakeEtcdClient, serviceName, namespace string, expectedHostPort *hostPort) {
|
||||
key := getEtcdPathForA(serviceName, namespace, serviceSubDomain)
|
||||
values := ec.Get(key)
|
||||
//require.True(t, exists)
|
||||
require.True(t, len(values) > 0, "entry not found.")
|
||||
actualHostPort, err := getHostPortFromString(values[0])
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, expectedHostPort.Host, actualHostPort.Host)
|
||||
}
|
||||
|
||||
func assertDnsPodEntryInEtcd(t *testing.T, ec *fakeEtcdClient, podIP, namespace string) {
|
||||
key := getEtcdPathForA(podIP, namespace, podSubDomain)
|
||||
values := ec.Get(key)
|
||||
//require.True(t, exists)
|
||||
require.True(t, len(values) > 0, "entry not found.")
|
||||
}
|
||||
|
||||
func assertDnsPodEntryNotInEtcd(t *testing.T, ec *fakeEtcdClient, podIP, namespace string) {
|
||||
key := getEtcdPathForA(podIP, namespace, podSubDomain)
|
||||
values := ec.Get(key)
|
||||
//require.True(t, exists)
|
||||
require.True(t, len(values) == 0, "entry found.")
|
||||
}
|
||||
|
||||
func assertSRVEntryInEtcd(t *testing.T, ec *fakeEtcdClient, portName, protocol, serviceName, namespace string, expectedPortNumber, expectedEntriesCount int) {
|
||||
srvKey := getEtcdPathForSRV(portName, protocol, serviceName, namespace)
|
||||
values := ec.Get(srvKey)
|
||||
assert.Equal(t, expectedEntriesCount, len(values))
|
||||
for i := range values {
|
||||
actualHostPort, err := getHostPortFromString(values[i])
|
||||
require.NoError(t, err)
|
||||
assert.Equal(t, expectedPortNumber, actualHostPort.Port)
|
||||
}
|
||||
}
|
||||
|
||||
func newHeadlessService(namespace, serviceName string) kapi.Service {
|
||||
service := kapi.Service{
|
||||
ObjectMeta: kapi.ObjectMeta{
|
||||
Name: serviceName,
|
||||
Namespace: namespace,
|
||||
},
|
||||
Spec: kapi.ServiceSpec{
|
||||
ClusterIP: "None",
|
||||
Ports: []kapi.ServicePort{
|
||||
{Port: 0},
|
||||
},
|
||||
},
|
||||
}
|
||||
return service
|
||||
}
|
||||
|
||||
func newService(namespace, serviceName, clusterIP, portName string, portNumber int) kapi.Service {
|
||||
service := kapi.Service{
|
||||
ObjectMeta: kapi.ObjectMeta{
|
||||
Name: serviceName,
|
||||
Namespace: namespace,
|
||||
},
|
||||
Spec: kapi.ServiceSpec{
|
||||
ClusterIP: clusterIP,
|
||||
Ports: []kapi.ServicePort{
|
||||
{Port: int32(portNumber), Name: portName, Protocol: "TCP"},
|
||||
},
|
||||
},
|
||||
}
|
||||
return service
|
||||
}
|
||||
|
||||
func newPod(namespace, podName, podIP string) kapi.Pod {
	pod := kapi.Pod{
		ObjectMeta: kapi.ObjectMeta{
			Name:      podName,
			Namespace: namespace,
		},
		Status: kapi.PodStatus{
			PodIP: podIP,
		},
	}
	return pod
}

func newSubset() kapi.EndpointSubset {
	subset := kapi.EndpointSubset{
		Addresses: []kapi.EndpointAddress{},
		Ports:     []kapi.EndpointPort{},
	}
	return subset
}

func newSubsetWithOnePort(portName string, port int, ips ...string) kapi.EndpointSubset {
	subset := newSubset()
	subset.Ports = append(subset.Ports, kapi.EndpointPort{Port: int32(port), Name: portName, Protocol: "TCP"})
	for _, ip := range ips {
		subset.Addresses = append(subset.Addresses, kapi.EndpointAddress{IP: ip})
	}
	return subset
}

func newSubsetWithTwoPorts(portName1 string, portNumber1 int, portName2 string, portNumber2 int, ips ...string) kapi.EndpointSubset {
	subset := newSubsetWithOnePort(portName1, portNumber1, ips...)
	subset.Ports = append(subset.Ports, kapi.EndpointPort{Port: int32(portNumber2), Name: portName2, Protocol: "TCP"})
	return subset
}

func newEndpoints(service kapi.Service, subsets ...kapi.EndpointSubset) kapi.Endpoints {
	endpoints := kapi.Endpoints{
		ObjectMeta: service.ObjectMeta,
		Subsets:    []kapi.EndpointSubset{},
	}
	endpoints.Subsets = append(endpoints.Subsets, subsets...)
	return endpoints
}

func TestHeadlessService(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newHeadlessService(testNamespace, testService)
	assert.NoError(t, k2s.servicesStore.Add(&service))
	endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"))

	// We expect 4 records: one A record per endpoint IP across both subsets.
	expectedDNSRecords := 4
	assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
	k2s.newService(&service)
	assert.Equal(t, expectedDNSRecords, len(ec.writes))
	k2s.removeService(&service)
	assert.Empty(t, ec.writes)
}

func TestHeadlessServiceWithNamedPorts(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newHeadlessService(testNamespace, testService)
	assert.NoError(t, k2s.servicesStore.Add(&service))
	endpoints := newEndpoints(service, newSubsetWithTwoPorts("http1", 80, "http2", 81, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("https", 443, "10.0.0.3", "10.0.0.4"))

	// We expect 10 records: 6 SRV records (2 endpoints x 2 named ports in the
	// first subset, 2 endpoints x 1 named port in the second) plus 4 A records,
	// one per endpoint IP.
	expectedDNSRecords := 10
	assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
	k2s.newService(&service)
	assert.Equal(t, expectedDNSRecords, len(ec.writes))
	assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 2)
	assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 81, 2)
	assertSRVEntryInEtcd(t, ec, "https", "tcp", testService, testNamespace, 443, 2)

	// Drop the second subset; only the first subset's records should remain.
	endpoints.Subsets = endpoints.Subsets[:1]
	k2s.handleEndpointAdd(&endpoints)
	// We expect 6 records: 4 SRV records and 2 A records.
	expectedDNSRecords = 6
	assert.Equal(t, expectedDNSRecords, len(ec.writes))
	assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 2)
	assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 81, 2)

	k2s.removeService(&service)
	assert.Empty(t, ec.writes)
}

func TestHeadlessServiceEndpointsUpdate(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newHeadlessService(testNamespace, testService)
	assert.NoError(t, k2s.servicesStore.Add(&service))
	endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"))

	expectedDNSRecords := 2
	assert.NoError(t, k2s.endpointsStore.Add(&endpoints))
	k2s.newService(&service)
	assert.Equal(t, expectedDNSRecords, len(ec.writes))
	endpoints.Subsets = append(endpoints.Subsets,
		newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"),
	)
	expectedDNSRecords = 4
	k2s.handleEndpointAdd(&endpoints)

	assert.Equal(t, expectedDNSRecords, len(ec.writes))
	k2s.removeService(&service)
	assert.Empty(t, ec.writes)
}

func TestHeadlessServiceWithDelayedEndpointsAddition(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newHeadlessService(testNamespace, testService)
	assert.NoError(t, k2s.servicesStore.Add(&service))
	// Headless service DNS records should not be created since the
	// corresponding endpoints object doesn't exist yet.
	k2s.newService(&service)
	assert.Empty(t, ec.writes)

	// Add an endpoints object for the service.
	endpoints := newEndpoints(service, newSubsetWithOnePort("", 80, "10.0.0.1", "10.0.0.2"), newSubsetWithOnePort("", 8080, "10.0.0.3", "10.0.0.4"))
	// We expect 4 records, one A record per endpoint IP.
	expectedDNSRecords := 4
	k2s.handleEndpointAdd(&endpoints)
	assert.Equal(t, expectedDNSRecords, len(ec.writes))
}

// TODO: Test service updates for headless services.

func TestAddSinglePortService(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newService(testNamespace, testService, "1.2.3.4", "", 0)
	k2s.newService(&service)
	expectedValue := getHostPort(&service)
	assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
}

func TestUpdateSinglePortService(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newService(testNamespace, testService, "1.2.3.4", "", 0)
	k2s.newService(&service)
	assert.Len(t, ec.writes, 1)
	newService := service
	newService.Spec.ClusterIP = "0.0.0.0"
	k2s.updateService(&service, &newService)
	expectedValue := getHostPort(&newService)
	assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
}

func TestDeleteSinglePortService(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)
	service := newService(testNamespace, testService, "1.2.3.4", "", 80)
	// Add the service.
	k2s.newService(&service)
	assert.Len(t, ec.writes, 1)
	// Delete the service.
	k2s.removeService(&service)
	assert.Empty(t, ec.writes)
}

func TestServiceWithNamePort(t *testing.T) {
	const (
		testService   = "testservice"
		testNamespace = "default"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)

	// Create the service; one A record and one SRV record are written.
	service := newService(testNamespace, testService, "1.2.3.4", "http1", 80)
	k2s.newService(&service)
	expectedValue := getHostPort(&service)
	assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
	assertSRVEntryInEtcd(t, ec, "http1", "tcp", testService, testNamespace, 80, 1)
	assert.Len(t, ec.writes, 2)

	// Rename the port; the SRV record should move to the new name.
	newService := service
	newService.Spec.Ports[0].Name = "http2"
	k2s.updateService(&service, &newService)
	expectedValue = getHostPort(&newService)
	assertDnsServiceEntryInEtcd(t, ec, testService, testNamespace, expectedValue)
	assertSRVEntryInEtcd(t, ec, "http2", "tcp", testService, testNamespace, 80, 1)
	assert.Len(t, ec.writes, 2)

	// Delete the service.
	k2s.removeService(&service)
	assert.Empty(t, ec.writes)
}

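// buildDNSNameString prepends each successive label, so arguments are given
// from the TLD outward and the result reads back in DNS order.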
func TestBuildDNSName(t *testing.T) {
	expectedDNSName := "name.ns.svc.cluster.local."
	assert.Equal(t, expectedDNSName, buildDNSNameString("local.", "cluster", "svc", "ns", "name"))
	newExpectedDNSName := "00.name.ns.svc.cluster.local."
	assert.Equal(t, newExpectedDNSName, buildDNSNameString(expectedDNSName, "00"))
}

func TestPodDns(t *testing.T) {
	const (
		testPodIP      = "1.2.3.4"
		sanitizedPodIP = "1-2-3-4"
		testNamespace  = "default"
		testPodName    = "testPod"
	)
	ec := &fakeEtcdClient{make(map[string]string)}
	k2s := newKube2Sky(ec)

	// Create a pod without an IP address yet; no record should be written.
	pod := newPod(testNamespace, testPodName, "")
	k2s.handlePodCreate(&pod)
	assert.Empty(t, ec.writes)

	// Create the pod with an IP address.
	pod = newPod(testNamespace, testPodName, testPodIP)
	k2s.handlePodCreate(&pod)
	assertDnsPodEntryInEtcd(t, ec, sanitizedPodIP, testNamespace)

	// Update the pod with the same IP.
	newPod := pod
	newPod.Status.PodIP = testPodIP
	k2s.handlePodUpdate(&pod, &newPod)
	assertDnsPodEntryInEtcd(t, ec, sanitizedPodIP, testNamespace)

	// Update the pod with a different IP; the old entry should be removed.
	newPod = pod
	newPod.Status.PodIP = "4.3.2.1"
	k2s.handlePodUpdate(&pod, &newPod)
	assertDnsPodEntryInEtcd(t, ec, "4-3-2-1", testNamespace)
	assertDnsPodEntryNotInEtcd(t, ec, "1-2-3-4", testNamespace)

	// Delete the pod.
	k2s.handlePodDelete(&newPod)
	assert.Empty(t, ec.writes)
}

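// santizeIP rewrites the dots in an IP address to dashes so it can be used
// as a single DNS label (e.g. "1.2.3.4" -> "1-2-3-4").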
func TestSanitizeIP(t *testing.T) {
	expectedIP := "1-2-3-4"
	assert.Equal(t, expectedIP, santizeIP("1.2.3.4"))
}

@@ -1,130 +0,0 @@
# This file should be kept in sync with cluster/images/hyperkube/dns-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: kube-dns-v11
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    version: v11
    kubernetes.io/cluster-service: "true"
spec:
  replicas: {{ pillar['dns_replicas'] }}
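  # The {{ pillar[...] }} placeholders (dns_replicas, dns_domain, dns_server)
  # are filled in by Salt when this template is rendered.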
  selector:
    k8s-app: kube-dns
    version: v11
  template:
    metadata:
      labels:
        k8s-app: kube-dns
        version: v11
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: etcd
        image: gcr.io/google_containers/etcd-amd64:2.2.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            cpu: 100m
            memory: 500Mi
          requests:
            cpu: 100m
            memory: 50Mi
        command:
        - /usr/local/bin/etcd
        - -data-dir
        - /var/etcd/data
        - -listen-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -advertise-client-urls
        - http://127.0.0.1:2379,http://127.0.0.1:4001
        - -initial-cluster-token
        - skydns-etcd
        volumeMounts:
        - name: etcd-storage
          mountPath: /var/etcd/data
      - name: kube2sky
        image: gcr.io/google_containers/kube2sky-amd64:1.15
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            cpu: 100m
            # Kube2sky watches all pods.
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # We poll on pod startup for the Kubernetes master service and
          # only set up the /readiness HTTP server once that's available.
          initialDelaySeconds: 30
          timeoutSeconds: 5
        args:
        # command = "/kube2sky"
        - --domain={{ pillar['dns_domain'] }}
      - name: skydns
        image: gcr.io/google_containers/skydns:2015-10-13-8c72f8c
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't back off from restarting it.
          limits:
            cpu: 100m
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 50Mi
        args:
        # command = "/skydns"
        - -machines=http://127.0.0.1:4001
        - -addr=0.0.0.0:53
        - -ns-rotate=false
        - -domain={{ pillar['dns_domain'] }}.
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
      - name: healthz
        image: gcr.io/google_containers/exechealthz:1.0
        resources:
          # Keep request = limit to keep this container in guaranteed class.
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
        args:
        - -cmd=nslookup kubernetes.default.svc.{{ pillar['dns_domain'] }} 127.0.0.1 >/dev/null
        - -port=8080
        ports:
        - containerPort: 8080
          protocol: TCP
      volumes:
      - name: etcd-storage
        emptyDir: {}
      dnsPolicy: Default  # Don't use cluster DNS.

@@ -1,21 +0,0 @@
# This file should be kept in sync with cluster/images/hyperkube/dns-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: {{ pillar['dns_server'] }}
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

@@ -1,18 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

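# BASEIMAGE is a placeholder: the accompanying Makefile substitutes the real
# per-architecture base image (e.g. busybox) with sed before building.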
FROM BASEIMAGE
MAINTAINER Tim Hockin <thockin@google.com>
ADD skydns /
ENTRYPOINT ["/skydns"]

@@ -1,78 +0,0 @@
# Copyright 2016 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Build skydns
#
# Usage:
#   [ARCH=amd64] [TAG=1.0] [REGISTRY=gcr.io/google_containers] [BASEIMAGE=busybox] make (build|push)

# Default registry and arch. These can be overridden by arguments to make.
ARCH?=amd64

# Version of this image, not the version of skydns.
TAG=1.0
REGISTRY?=gcr.io/google_containers
GOLANG_VERSION=1.6
GOARM=6
TEMP_DIR:=$(shell mktemp -d)

ifeq ($(ARCH),amd64)
BASEIMAGE?=busybox
GOBIN_DIR=/go/bin/
else
# If GOARCH is not the host arch, go puts the binaries in this directory instead.
GOBIN_DIR=/go/bin/linux_$(ARCH)
endif

ifeq ($(ARCH),arm)
BASEIMAGE?=armel/busybox
endif
ifeq ($(ARCH),arm64)
BASEIMAGE?=aarch64/busybox
endif
ifeq ($(ARCH),ppc64le)
BASEIMAGE?=ppc64le/busybox
endif

# Do not change this default value.
all: skydns

skydns:
	# Only build skydns. This requires go in PATH.
	CGO_ENABLED=0 GOARCH=$(ARCH) GOARM=$(GOARM) go get -a -installsuffix cgo --ldflags '-w' github.com/skynetservices/skydns

container:
	# Copy the content in this dir to the temp dir.
	cp ./* $(TEMP_DIR)

	# Build skydns in a container. We mount the temporary dir.
	docker run -it -v $(TEMP_DIR):$(GOBIN_DIR) golang:$(GOLANG_VERSION) /bin/bash -c "make -C $(GOBIN_DIR) skydns ARCH=$(ARCH)"

	# Replace BASEIMAGE with the real base image.
	cd $(TEMP_DIR) && sed -i "s|BASEIMAGE|$(BASEIMAGE)|g" Dockerfile

	# And build the image.
	docker build -t $(REGISTRY)/skydns-$(ARCH):$(TAG) $(TEMP_DIR)

push: container
	gcloud docker push $(REGISTRY)/skydns-$(ARCH):$(TAG)
ifeq ($(ARCH),amd64)
	# Backward compatibility. TODO: deprecate this image tag.
	docker tag -f $(REGISTRY)/skydns-$(ARCH):$(TAG) $(REGISTRY)/skydns:$(TAG)
	gcloud docker push $(REGISTRY)/skydns:$(TAG)
endif

clean:
	rm -f skydns

@@ -1,30 +0,0 @@
# skydns for kubernetes

This container only exists until skydns itself is reduced in some way. At the
time of this writing, it is over 600 MB in size.

#### How to release

This image is compiled for multiple architectures.
If you're rebuilding the image, please bump the `TAG` in the Makefile.

```console
# Build for linux/amd64 (default)
$ make push ARCH=amd64
# ---> gcr.io/google_containers/skydns-amd64:TAG
# ---> gcr.io/google_containers/skydns:TAG (image with backwards-compatible naming)

$ make push ARCH=arm
# ---> gcr.io/google_containers/skydns-arm:TAG

$ make push ARCH=arm64
# ---> gcr.io/google_containers/skydns-arm64:TAG

$ make push ARCH=ppc64le
# ---> gcr.io/google_containers/skydns-ppc64le:TAG
```
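
The Makefile's usage line also allows the registry to be overridden at
invocation time; a minimal sketch (the `gcr.io/my-project` registry here is
hypothetical):

```console
$ make push ARCH=amd64 REGISTRY=gcr.io/my-project
# ---> gcr.io/my-project/skydns-amd64:TAG
```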

If you don't want to push the images, run `make` or `make build` instead.