mirror of
https://github.com/k3s-io/kubernetes.git
synced 2025-07-31 15:25:57 +00:00
Elasticsearch Discovery with Kubernetes
This commit is contained in:
parent
9939f92731
commit
9c72a56056
18
examples/elasticsearch/Dockerfile
Normal file
@@ -0,0 +1,18 @@
FROM java:7-jre

RUN apt-get update && \
    apt-get install -y curl && \
    apt-get clean

RUN cd / && \
    curl -O https://download.elastic.co/elasticsearch/elasticsearch/elasticsearch-1.5.2.tar.gz && \
    tar xf elasticsearch-1.5.2.tar.gz && \
    rm elasticsearch-1.5.2.tar.gz

COPY elasticsearch.yml /elasticsearch-1.5.2/config/elasticsearch.yml
COPY run.sh /
COPY elasticsearch_discovery /

EXPOSE 9200 9300

CMD ["/run.sh"]
14
examples/elasticsearch/Makefile
Normal file
@@ -0,0 +1,14 @@
.PHONY: elasticsearch_discovery build push all

TAG = 1.0

build:
	docker build -t kubernetes/elasticsearch:$(TAG) .

push:
	docker push kubernetes/elasticsearch:$(TAG)

elasticsearch_discovery:
	go build elasticsearch_discovery.go

all: elasticsearch_discovery build push
319
examples/elasticsearch/README.md
Normal file
@@ -0,0 +1,319 @@
# Elasticsearch for Kubernetes

This directory contains the source for a Docker image that creates an instance
of [Elasticsearch](https://www.elastic.co/products/elasticsearch) 1.5.2 which can
automatically form clusters when used with replication controllers. The stock
library Elasticsearch image will not work for this, because multicast discovery
cannot find the other pod IPs needed to form a cluster. This image instead
detects other Elasticsearch pods running in a specified namespace with a given
label selector. The detected instances are used to form a list of peer hosts
which are used as part of the unicast discovery mechanism for Elasticsearch. The
detection of the peer nodes is done by a program which communicates with the
Kubernetes API server to get a list of matching Elasticsearch pods. To enable
authenticated communication, this image needs a secret mounted at
`/etc/apiserver-secret` containing the bearer token used to talk to the API
server.
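The peer-list handoff described above amounts to turning the discovered pod IPs into a single `discovery.zen.ping.unicast.hosts` configuration line. A minimal sketch of that formatting step (the helper name is ours; the quoting and joining mirror what the `elasticsearch_discovery` program in this directory prints):

```go
package main

import (
	"fmt"
	"strings"
)

// unicastHostsLine renders discovered pod IPs as the
// discovery.zen.ping.unicast.hosts setting that is appended to the
// Elasticsearch configuration at container startup.
func unicastHostsLine(podIPs []string) string {
	quoted := make([]string, 0, len(podIPs))
	for _, ip := range podIPs {
		// Each host entry is double-quoted, e.g. "10.244.2.48".
		quoted = append(quoted, fmt.Sprintf("%q", ip))
	}
	return fmt.Sprintf("discovery.zen.ping.unicast.hosts: [%s]", strings.Join(quoted, ", "))
}

func main() {
	fmt.Println(unicastHostsLine([]string{"10.244.2.48", "10.244.0.24"}))
}
```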
Here is an example replication controller specification, found in the file
[music-rc.yaml](music-rc.yaml), that creates 4 instances of Elasticsearch:
```
apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: music-db
  namespace: mytunes
  name: music-db
spec:
  replicas: 4
  selector:
    name: music-db
  template:
    metadata:
      labels:
        name: music-db
    spec:
      containers:
      - name: es
        image: kubernetes/elasticsearch:1.0
        env:
        - name: "CLUSTER_NAME"
          value: "mytunes-db"
        - name: "SELECTOR"
          value: "name=music-db"
        - name: "NAMESPACE"
          value: "mytunes"
        ports:
        - name: es
          containerPort: 9200
        - name: es-transport
          containerPort: 9300
        volumeMounts:
        - name: apiserver-secret
          mountPath: /etc/apiserver-secret
          readOnly: true
      volumes:
      - name: apiserver-secret
        secret:
          secretName: apiserver-secret
```
The `CLUSTER_NAME` variable names the cluster and allows multiple separate
clusters to exist in the same namespace.
The `SELECTOR` variable should be set to a label query that identifies the
Elasticsearch nodes that should participate in this cluster. For our example we
specify `name=music-db` to match all pods that have the label `name` set to the
value `music-db`.
The `NAMESPACE` variable identifies the namespace to be searched for
Elasticsearch pods; it should be the same as the namespace specified for the
replication controller (in this case `mytunes`).
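The `SELECTOR` query uses equality-based matching: a pod qualifies when every `key=value` clause in the selector equals the corresponding pod label. A simplified sketch of that rule (the real Kubernetes label-selector parser supports more syntax; `matches` is a hypothetical helper for illustration only):

```go
package main

import (
	"fmt"
	"strings"
)

// matches reports whether a pod's labels satisfy a simple
// equality-based selector such as "name=music-db" or "a=b,c=d".
func matches(selector string, podLabels map[string]string) bool {
	for _, clause := range strings.Split(selector, ",") {
		parts := strings.SplitN(strings.TrimSpace(clause), "=", 2)
		if len(parts) != 2 || podLabels[parts[0]] != parts[1] {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(matches("name=music-db", map[string]string{"name": "music-db"}))
}
```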
Before creating pods with the replication controller, a secret containing the
bearer authentication token should be set up. A template is provided in the file
[apiserver-secret.yaml](apiserver-secret.yaml):
```
apiVersion: v1beta3
kind: Secret
metadata:
  name: apiserver-secret
  namespace: NAMESPACE
data:
  token: "TOKEN"
```
Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the
base64-encoded version of the bearer token reported by `kubectl config view`,
e.g.
```
$ kubectl config view
...
- name: kubernetes-logging_kubernetes-basic-auth
...
    token: yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2
...
$ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64
eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=
```
resulting in the file:
```
apiVersion: v1beta3
kind: Secret
metadata:
  name: apiserver-secret
  namespace: mytunes
data:
  token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK="
```
which can be used to create the secret in your namespace:
```
$ kubectl create -f apiserver-secret.yaml --namespace=mytunes
secrets/apiserver-secret
```
Now you are ready to create the replication controller, which will then create the pods:
```
$ kubectl create -f music-rc.yaml --namespace=mytunes
replicationcontrollers/music-db
```
It's also useful to have a service with an external load balancer for accessing
the Elasticsearch cluster; one can be found in the file
[music-service.yaml](music-service.yaml):
```
apiVersion: v1beta3
kind: Service
metadata:
  name: music-server
  namespace: mytunes
  labels:
    name: music-db
spec:
  selector:
    name: music-db
  ports:
  - name: db
    port: 9200
    targetPort: es
  createExternalLoadBalancer: true
```
Let's create the service with an external load balancer:
```
$ kubectl create -f music-service.yaml --namespace=mytunes
services/music-server
```
Let's see what we've got:
```
$ kubectl get pods,rc,services,secrets --namespace=mytunes

POD              IP            CONTAINER(S)   IMAGE(S)                       HOST                                     LABELS          STATUS    CREATED      MESSAGE
music-db-0fwsu   10.244.2.48                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   29 seconds
music-db-5pc2e   10.244.0.24                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   6 minutes
music-db-bjqmv   10.244.3.31                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   19 seconds
music-db-swtrs   10.244.1.37                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   6 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   6 minutes
CONTROLLER   CONTAINER(S)   IMAGE(S)                       SELECTOR        REPLICAS
music-db     es             kubernetes/elasticsearch:1.0   name=music-db   4
NAME           LABELS          SELECTOR        IP(S)            PORT(S)
music-server   name=music-db   name=music-db   10.0.138.61      9200/TCP
                                               104.197.12.157
NAME               TYPE      DATA
apiserver-secret   Opaque    2
```
This shows 4 instances of Elasticsearch running. After making sure that port 9200
is accessible for this cluster (e.g. using a firewall rule on GCE), we can make
queries via the service, which will be fielded by the matching Elasticsearch pods:
```
$ curl 104.197.12.157:9200
{
  "status" : 200,
  "name" : "Warpath",
  "cluster_name" : "mytunes-db",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
$ curl 104.197.12.157:9200
{
  "status" : 200,
  "name" : "Callisto",
  "cluster_name" : "mytunes-db",
  "version" : {
    "number" : "1.5.2",
    "build_hash" : "62ff9868b4c8a0c45860bebb259e21980778ab1c",
    "build_timestamp" : "2015-04-27T09:21:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```
We can query the nodes to confirm that an Elasticsearch cluster has been formed:
```
$ curl 104.197.12.157:9200/_nodes?pretty=true
{
  "cluster_name" : "mytunes-db",
  "nodes" : {
    "u-KrvywFQmyaH5BulSclsA" : {
      "name" : "Jonas Harrow",
...
      "discovery" : {
        "zen" : {
          "ping" : {
            "unicast" : {
              "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
            },
...
      "name" : "Warpath",
...
      "discovery" : {
        "zen" : {
          "ping" : {
            "unicast" : {
              "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
            },
...
      "name" : "Callisto",
...
      "discovery" : {
        "zen" : {
          "ping" : {
            "unicast" : {
              "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
            },
...
      "name" : "Vapor",
...
      "discovery" : {
        "zen" : {
          "ping" : {
            "unicast" : {
              "hosts" : [ "10.244.2.48", "10.244.0.24", "10.244.3.31", "10.244.1.37" ]
...
```
Let's ramp up the number of Elasticsearch nodes from 4 to 10:
```
$ kubectl resize --replicas=10 replicationcontrollers music-db --namespace=mytunes
resized
$ kubectl get pods --namespace=mytunes
POD              IP            CONTAINER(S)   IMAGE(S)                       HOST                                     LABELS          STATUS    CREATED      MESSAGE
music-db-0fwsu   10.244.2.48                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   26 minutes
music-db-2erje   10.244.2.50                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-5pc2e   10.244.0.24                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   32 minutes
music-db-8rkvp   10.244.3.33                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-bjqmv   10.244.3.31                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   26 minutes
music-db-efc46   10.244.2.49                                                 kubernetes-minion-m49b/104.197.35.221    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-fhqyg   10.244.0.25                                                 kubernetes-minion-3c8c/146.148.41.184    name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   47 seconds
music-db-guxe4   10.244.3.32                                                 kubernetes-minion-zey5/104.154.59.10     name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   46 seconds
music-db-pbiq1   10.244.1.38                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   48 seconds
                               es             kubernetes/elasticsearch:1.0                                                            Running   47 seconds
music-db-swtrs   10.244.1.37                                                 kubernetes-minion-f9dw/130.211.159.230   name=music-db   Running   33 minutes
                               es             kubernetes/elasticsearch:1.0                                                            Running   32 minutes
```
Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster:
```
$ curl 104.197.12.157:9200/_nodes?pretty=true | grep name
  "cluster_name" : "mytunes-db",
      "name" : "Killraven",
        "name" : "Killraven",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Tefral the Surveyor",
        "name" : "Tefral the Surveyor",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Jonas Harrow",
        "name" : "Jonas Harrow",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Warpath",
        "name" : "Warpath",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Brute I",
        "name" : "Brute I",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Callisto",
        "name" : "Callisto",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Vapor",
        "name" : "Vapor",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Timeslip",
        "name" : "Timeslip",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Magik",
        "name" : "Magik",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
      "name" : "Brother Voodoo",
        "name" : "Brother Voodoo",
          "name" : "mytunes-db"
        "vm_name" : "OpenJDK 64-Bit Server VM",
            "name" : "eth0",
```
8
examples/elasticsearch/apiserver-secret.yaml
Normal file
@@ -0,0 +1,8 @@
apiVersion: v1beta3
kind: Secret
metadata:
  name: apiserver-secret
  namespace: NAMESPACE
data:
  token: "TOKEN"
385
examples/elasticsearch/elasticsearch.yml
Normal file
@@ -0,0 +1,385 @@
##################### Elasticsearch Configuration Example #####################

# This file contains an overview of various configuration settings,
# targeted at operations staff. Application developers should
# consult the guide at <http://elasticsearch.org/guide>.
#
# The installation procedure is covered at
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup.html>.
#
# Elasticsearch comes with reasonable defaults for most settings,
# so you can try it out without bothering with configuration.
#
# Most of the time, these defaults are just fine for running a production
# cluster. If you're fine-tuning your cluster, or wondering about the
# effect of certain configuration option, please _do ask_ on the
# mailing list or IRC channel [http://elasticsearch.org/community].

# Any element in the configuration can be replaced with environment variables
# by placing them in ${...} notation. For example:
#
#node.rack: ${RACK_ENV_VAR}

# For information on supported formats and syntax for the config file, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/setup-configuration.html>


################################### Cluster ###################################

# Cluster name identifies your cluster for auto-discovery. If you're running
# multiple clusters on the same network, make sure you're using unique names.
#
cluster.name: ${CLUSTER_NAME}


#################################### Node #####################################

# Node names are generated dynamically on startup, so you're relieved
# from configuring them manually. You can tie this node to a specific name:
#
#node.name: "Franz Kafka"

# Every node can be configured to allow or deny being eligible as the master,
# and to allow or deny to store the data.
#
# Allow this node to be eligible as a master node (enabled by default):
#
node.master: ${NODE_MASTER}
#
# Allow this node to store data (enabled by default):
#
node.data: ${NODE_DATA}

# You can exploit these settings to design advanced cluster topologies.
#
# 1. You want this node to never become a master node, only to hold data.
#    This will be the "workhorse" of your cluster.
#
#node.master: false
#node.data: true
#
# 2. You want this node to only serve as a master: to not store any data and
#    to have free resources. This will be the "coordinator" of your cluster.
#
#node.master: true
#node.data: false
#
# 3. You want this node to be neither master nor data node, but
#    to act as a "search load balancer" (fetching data from nodes,
#    aggregating results, etc.)
#
#node.master: false
#node.data: false

# Use the Cluster Health API [http://localhost:9200/_cluster/health], the
# Node Info API [http://localhost:9200/_nodes] or GUI tools
# such as <http://www.elasticsearch.org/overview/marvel/>,
# <http://github.com/karmi/elasticsearch-paramedic>,
# <http://github.com/lukas-vlcek/bigdesk> and
# <http://mobz.github.com/elasticsearch-head> to inspect the cluster state.

# A node can have generic attributes associated with it, which can later be used
# for customized shard allocation filtering, or allocation awareness. An attribute
# is a simple key value pair, similar to node.key: value, here is an example:
#
#node.rack: rack314

# By default, multiple nodes are allowed to start from the same installation location
# to disable it, set the following:
#node.max_local_storage_nodes: 1


#################################### Index ####################################

# You can set a number of options (such as shard/replica options, mapping
# or analyzer definitions, translog settings, ...) for indices globally,
# in this file.
#
# Note, that it makes more sense to configure index settings specifically for
# a certain index, either when creating it or by using the index templates API.
#
# See <http://elasticsearch.org/guide/en/elasticsearch/reference/current/index-modules.html> and
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/indices-create-index.html>
# for more information.

# Set the number of shards (splits) of an index (5 by default):
#
#index.number_of_shards: 5

# Set the number of replicas (additional copies) of an index (1 by default):
#
#index.number_of_replicas: 1

# Note, that for development on a local machine, with small indices, it usually
# makes sense to "disable" the distributed features:
#
#index.number_of_shards: 1
#index.number_of_replicas: 0

# These settings directly affect the performance of index and search operations
# in your cluster. Assuming you have enough machines to hold shards and
# replicas, the rule of thumb is:
#
# 1. Having more *shards* enhances the _indexing_ performance and allows to
#    _distribute_ a big index across machines.
# 2. Having more *replicas* enhances the _search_ performance and improves the
#    cluster _availability_.
#
# The "number_of_shards" is a one-time setting for an index.
#
# The "number_of_replicas" can be increased or decreased anytime,
# by using the Index Update Settings API.
#
# Elasticsearch takes care about load balancing, relocating, gathering the
# results from nodes, etc. Experiment with different settings to fine-tune
# your setup.

# Use the Index Status API (<http://localhost:9200/A/_status>) to inspect
# the index status.


#################################### Paths ####################################

# Path to directory containing configuration (this file and logging.yml):
#
#path.conf: /path/to/conf

# Path to directory where to store index data allocated for this node.
#
#path.data: /path/to/data
#
# Can optionally include more than one location, causing data to be striped across
# the locations (a la RAID 0) on a file level, favouring locations with most free
# space on creation. For example:
#
#path.data: /path/to/data1,/path/to/data2

# Path to temporary files:
#
#path.work: /path/to/work

# Path to log files:
#
#path.logs: /path/to/logs

# Path to where plugins are installed:
#
#path.plugins: /path/to/plugins


#################################### Plugin ###################################

# If a plugin listed here is not installed for current node, the node will not start.
#
#plugin.mandatory: mapper-attachments,lang-groovy


################################### Memory ####################################

# Elasticsearch performs poorly when JVM starts swapping: you should ensure that
# it _never_ swaps.
#
# Set this property to true to lock the memory:
#
#bootstrap.mlockall: true

# Make sure that the ES_MIN_MEM and ES_MAX_MEM environment variables are set
# to the same value, and that the machine has enough memory to allocate
# for Elasticsearch, leaving enough memory for the operating system itself.
#
# You should also make sure that the Elasticsearch process is allowed to lock
# the memory, eg. by using `ulimit -l unlimited`.


############################## Network And HTTP ###############################

# Elasticsearch, by default, binds itself to the 0.0.0.0 address, and listens
# on port [9200-9300] for HTTP traffic and on port [9300-9400] for node-to-node
# communication. (the range means that if the port is busy, it will automatically
# try the next port).

# Set the bind address specifically (IPv4 or IPv6):
#
#network.bind_host: 192.168.0.1

# Set the address other nodes will use to communicate with this node. If not
# set, it is automatically derived. It must point to an actual IP address.
#
#network.publish_host: 192.168.0.1

# Set both 'bind_host' and 'publish_host':
#
#network.host: 192.168.0.1

# Set a custom port for the node to node communication (9300 by default):
#
transport.tcp.port: ${TRANSPORT_PORT}

# Enable compression for all communication between nodes (disabled by default):
#
#transport.tcp.compress: true

# Set a custom port to listen for HTTP traffic:
#
http.port: ${HTTP_PORT}

# Set a custom allowed content length:
#
#http.max_content_length: 100mb

# Disable HTTP completely:
#
#http.enabled: false


################################### Gateway ###################################

# The gateway allows for persisting the cluster state between full cluster
# restarts. Every change to the state (such as adding an index) will be stored
# in the gateway, and when the cluster starts up for the first time,
# it will read its state from the gateway.

# There are several types of gateway implementations. For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-gateway.html>.

# The default gateway type is the "local" gateway (recommended):
#
#gateway.type: local

# Settings below control how and when to start the initial recovery process on
# a full cluster restart (to reuse as much local data as possible when using shared
# gateway).

# Allow recovery process after N nodes in a cluster are up:
#
#gateway.recover_after_nodes: 1

# Set the timeout to initiate the recovery process, once the N nodes
# from previous setting are up (accepts time value):
#
#gateway.recover_after_time: 5m

# Set how many nodes are expected in this cluster. Once these N nodes
# are up (and recover_after_nodes is met), begin recovery process immediately
# (without waiting for recover_after_time to expire):
#
#gateway.expected_nodes: 2


############################# Recovery Throttling #############################

# These settings allow to control the process of shards allocation between
# nodes during initial recovery, replica allocation, rebalancing,
# or when adding and removing nodes.

# Set the number of concurrent recoveries happening on a node:
#
# 1. During the initial recovery
#
#cluster.routing.allocation.node_initial_primaries_recoveries: 4
#
# 2. During adding/removing nodes, rebalancing, etc
#
#cluster.routing.allocation.node_concurrent_recoveries: 2

# Set to throttle throughput when recovering (eg. 100mb, by default 20mb):
#
#indices.recovery.max_bytes_per_sec: 20mb

# Set to limit the number of open concurrent streams when
# recovering a shard from a peer:
#
#indices.recovery.concurrent_streams: 5


################################## Discovery ##################################

# Discovery infrastructure ensures nodes can be found within a cluster
# and master node is elected. Multicast discovery is the default.

# Set to ensure a node sees N other master eligible nodes to be considered
# operational within the cluster. This should be set to a quorum/majority of
# the master-eligible nodes in the cluster.
#
#discovery.zen.minimum_master_nodes: 1

# Set the time to wait for ping responses from other nodes when discovering.
# Set this option to a higher value on a slow or congested network
# to minimize discovery failures:
#
#discovery.zen.ping.timeout: 3s

# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-zen.html>

# Unicast discovery allows to explicitly control which nodes will be used
# to discover the cluster. It can be used when multicast is not present,
# or to restrict the cluster communication-wise.
#
# 1. Disable multicast discovery (enabled by default):
#
discovery.zen.ping.multicast.enabled: ${MULTICAST}
#
# 2. Configure an initial list of master nodes in the cluster
#    to perform discovery when new nodes (master or data) are started:
#
#discovery.zen.ping.unicast.hosts: ${UNICAST_HOSTS}

# EC2 discovery allows to use AWS EC2 API in order to perform discovery.
#
# You have to install the cloud-aws plugin for enabling the EC2 discovery.
#
# For more information, see
# <http://elasticsearch.org/guide/en/elasticsearch/reference/current/modules-discovery-ec2.html>
#
# See <http://elasticsearch.org/tutorials/elasticsearch-on-ec2/>
# for a step-by-step tutorial.

# GCE discovery allows to use Google Compute Engine API in order to perform discovery.
#
# You have to install the cloud-gce plugin for enabling the GCE discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-gce>.

# Azure discovery allows to use Azure API in order to perform discovery.
#
# You have to install the cloud-azure plugin for enabling the Azure discovery.
#
# For more information, see <https://github.com/elasticsearch/elasticsearch-cloud-azure>.

################################## Slow Log ##################################

# Shard level query and fetch threshold logging.

#index.search.slowlog.threshold.query.warn: 10s
#index.search.slowlog.threshold.query.info: 5s
#index.search.slowlog.threshold.query.debug: 2s
#index.search.slowlog.threshold.query.trace: 500ms

#index.search.slowlog.threshold.fetch.warn: 1s
#index.search.slowlog.threshold.fetch.info: 800ms
#index.search.slowlog.threshold.fetch.debug: 500ms
#index.search.slowlog.threshold.fetch.trace: 200ms

#index.indexing.slowlog.threshold.index.warn: 10s
#index.indexing.slowlog.threshold.index.info: 5s
#index.indexing.slowlog.threshold.index.debug: 2s
#index.indexing.slowlog.threshold.index.trace: 500ms

################################## GC Logging ################################

#monitor.jvm.gc.young.warn: 1000ms
#monitor.jvm.gc.young.info: 700ms
#monitor.jvm.gc.young.debug: 400ms

#monitor.jvm.gc.old.warn: 10s
#monitor.jvm.gc.old.info: 5s
#monitor.jvm.gc.old.debug: 2s

################################## Security ################################

# Uncomment if you want to enable JSONP as a valid return transport on the
# http server. With this enabled, it may pose a security risk, so disabling
# it unless you need it is recommended (it is disabled by default).
#
#http.jsonp.enable: true
97
examples/elasticsearch/elasticsearch_discovery.go
Normal file
@@ -0,0 +1,97 @@
/*
Copyright 2015 The Kubernetes Authors All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
	"time"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/api"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/client"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/fields"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/labels"
	"github.com/golang/glog"
)

var (
	token     = flag.String("token", "", "Bearer token for authentication to the API server.")
	server    = flag.String("server", "", "The address and port of the Kubernetes API server")
	namespace = flag.String("namespace", api.NamespaceDefault, "The namespace containing Elasticsearch pods")
	selector  = flag.String("selector", "", "Selector (label query) for selecting Elasticsearch pods")
)

func main() {
	flag.Parse()
	glog.Info("Elasticsearch discovery")
	apiServer := *server
	if apiServer == "" {
		kubernetesService := os.Getenv("KUBERNETES_SERVICE_HOST")
		if kubernetesService == "" {
			glog.Fatalf("Please specify the Kubernetes server with --server")
		}
		apiServer = fmt.Sprintf("https://%s:%s", kubernetesService, os.Getenv("KUBERNETES_SERVICE_PORT"))
	}

	glog.Infof("Server: %s", apiServer)
	glog.Infof("Namespace: %q", *namespace)
	glog.Infof("selector: %q", *selector)

	config := client.Config{
		Host:        apiServer,
		BearerToken: *token,
		Insecure:    true,
	}

	c, err := client.New(&config)
	if err != nil {
		glog.Fatalf("Failed to make client: %v", err)
	}

	l, err := labels.Parse(*selector)
	if err != nil {
		glog.Fatalf("Failed to parse selector %q: %v", *selector, err)
	}
	pods, err := c.Pods(*namespace).List(l, fields.Everything())
	if err != nil {
		glog.Fatalf("Failed to list pods: %v", err)
	}

	glog.Infof("Elasticsearch pods in namespace %s with selector %q", *namespace, *selector)
	podIPs := []string{}
	for i := range pods.Items {
		p := &pods.Items[i]
		for attempt := 0; attempt < 10; attempt++ {
			glog.Infof("%d: %s PodIP: %s", i, p.Name, p.Status.PodIP)
			if p.Status.PodIP != "" {
				podIPs = append(podIPs, fmt.Sprintf(`"%s"`, p.Status.PodIP))
				break
			}
			time.Sleep(1 * time.Second)
			p, err = c.Pods(*namespace).Get(p.Name)
			if err != nil {
				glog.Warningf("Failed to get pod %s: %v", p.Name, err)
			}
		}
		if p.Status.PodIP == "" {
			glog.Warningf("Failed to obtain PodIP for %s", p.Name)
		}
	}
	fmt.Printf("discovery.zen.ping.unicast.hosts: [%s]\n", strings.Join(podIPs, ", "))
}
39
examples/elasticsearch/music-rc.yaml
Normal file
@ -0,0 +1,39 @@
apiVersion: v1beta3
kind: ReplicationController
metadata:
  labels:
    name: music-db
  namespace: mytunes
  name: music-db
spec:
  replicas: 4
  selector:
    name: music-db
  template:
    metadata:
      labels:
        name: music-db
    spec:
      containers:
      - name: es
        image: kubernetes/elasticsearch:1.0
        env:
        - name: "CLUSTER_NAME"
          value: "mytunes-db"
        - name: "SELECTOR"
          value: "name=music-db"
        - name: "NAMESPACE"
          value: "mytunes"
        ports:
        - name: es
          containerPort: 9200
        - name: es-transport
          containerPort: 9300
        volumeMounts:
        - name: apiserver-secret
          mountPath: /etc/apiserver-secret
          readOnly: true
      volumes:
      - name: apiserver-secret
        secret:
          secretName: apiserver-secret
15
examples/elasticsearch/music-service.yaml
Normal file
@ -0,0 +1,15 @@
apiVersion: v1beta3
kind: Service
metadata:
  name: music-server
  namespace: mytunes
  labels:
    name: music-db
spec:
  selector:
    name: music-db
  ports:
  - name: db
    port: 9200
    targetPort: es
  createExternalLoadBalancer: true
25
examples/elasticsearch/run.sh
Executable file
@ -0,0 +1,25 @@
#!/bin/bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

export CLUSTER_NAME=${CLUSTER_NAME:-elasticsearch-default}
export NODE_MASTER=${NODE_MASTER:-true}
export NODE_DATA=${NODE_DATA:-true}
export MULTICAST=${MULTICAST:-false}
readonly TOKEN=$(cat /etc/apiserver-secret/token)
/elasticsearch_discovery --namespace="${NAMESPACE}" --token="${TOKEN}" --selector="${SELECTOR}" >> /elasticsearch-1.5.2/config/elasticsearch.yml
export HTTP_PORT=${HTTP_PORT:-9200}
export TRANSPORT_PORT=${TRANSPORT_PORT:-9300}
/elasticsearch-1.5.2/bin/elasticsearch