diff --git a/cluster/addons/fluentd-elasticsearch/README.md b/cluster/addons/fluentd-elasticsearch/README.md
index 065bc6dd750..59cc5ddfe99 100644
--- a/cluster/addons/fluentd-elasticsearch/README.md
+++ b/cluster/addons/fluentd-elasticsearch/README.md
@@ -1,45 +1,82 @@
# Elasticsearch Add-On
-This add-on consists of a combination of
-[Elasticsearch](https://www.elastic.co/products/elasticsearch), [Fluentd](http://www.fluentd.org/)
-and [Kibana](https://www.elastic.co/products/elasticsearch). Elasticsearch is a search engine
-that is responsible for storing our logs and allowing for them to be queried. Fluentd sends
-log messages from Kubernetes to Elasticsearch, whereas Kibana is a graphical interface for
-viewing and querying the logs stored in Elasticsearch.
+
+This add-on consists of a combination of [Elasticsearch][elasticsearch],
+[Fluentd][fluentd] and [Kibana][kibana]. Elasticsearch is a search engine
+that is responsible for storing our logs and allowing for them to be queried.
+Fluentd sends log messages from Kubernetes to Elasticsearch, whereas Kibana
+is a graphical interface for viewing and querying the logs stored in
+Elasticsearch.
+
+**Note:** this add-on should **not** be used as-is in production. It is an
+example and you should treat it as such. Please see at least the
+[Security](#security) and [Storage](#storage) sections for more
+information.
## Elasticsearch
-Elasticsearch is deployed as a
-[StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), which
-is like a Deployment, but allows for maintaining state on storage volumes.
-### Authentication
-Elasticsearch has basic authentication enabled by default, in our configuration the credentials
-are at their default values, i.e. username 'elastic' and password 'changeme'. In order to change
-them, please read up on [the official documentation](https://www.elastic.co/guide/en/x-pack/current/setting-up-authentication.html#reset-built-in-user-passwords).
+Elasticsearch is deployed as a [StatefulSet][statefulSet], which is like
+a Deployment, but allows for maintaining state on storage volumes.
+
+### Security
+
+Elasticsearch supports authentication and authorization via the
+[X-Pack plugin][xPack]. See the `xpack.security.enabled` configuration
+parameter in the Elasticsearch and Kibana configurations; it can also be set
+via the `XPACK_SECURITY_ENABLED` environment variable. After enabling the
+feature, follow the [official documentation][setupCreds] to set up credentials
+in Elasticsearch and Kibana. Don't forget to also propagate those credentials
+to Fluentd in its [configuration][fluentdCreds], for example using
+[environment variables][fluentdEnvVar]. You can use [ConfigMaps][configMap]
+and [Secrets][secret] to store the credentials in the Kubernetes apiserver.
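+
+As a sketch, the environment variable mentioned above could be set in the
+Elasticsearch container spec like this (the fragment is illustrative, not a
+complete manifest):
+
+```yaml
+# Fragment of the elasticsearch-logging container spec.
+env:
+- name: XPACK_SECURITY_ENABLED
+  value: "true"
+```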
### Initialization
+
The Elasticsearch Statefulset manifest specifies that there shall be an
-[init container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/) executing
-before Elasticsearch containers themselves, in order to ensure that the kernel state variable
-`vm.max_map_count` is at least 262144, since this is a requirement of Elasticsearch.
-You may remove the init container if you know that your host OS meets this requirement.
+[init container][initContainer] executing before Elasticsearch containers
+themselves, in order to ensure that the kernel state variable
+`vm.max_map_count` is at least 262144, since this is a requirement of
+Elasticsearch. You may remove the init container if you know that your host
+OS meets this requirement.
### Storage
-The Elasticsearch StatefulSet will claim a storage volume 'elasticsearch-logging',
-of the standard
-[StorageClass](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#storageclasses),
-that by default will be 100 Gi per replica. Please adjust this to your needs (including
-possibly choosing a more suitable StorageClass).
+
+The Elasticsearch StatefulSet uses an [EmptyDir][emptyDir] volume to store
+data. An EmptyDir volume is erased when the pod terminates, so it is used
+here for testing purposes only. **Important:** please change the storage to a
+persistent volume claim before actually using this StatefulSet in your setup!
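+
+For example, a persistent setup could replace the EmptyDir with a volume
+claim template along these lines (the size and the standard StorageClass are
+illustrative; adjust them to your needs):
+
+```yaml
+# Sketch: add to the StatefulSet spec in place of the emptyDir volume.
+volumeClaimTemplates:
+- metadata:
+    name: elasticsearch-logging
+  spec:
+    accessModes: ["ReadWriteOnce"]
+    resources:
+      requests:
+        storage: 100Gi
+```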
## Fluentd
-Fluentd is deployed as a
-[DaemonSet](https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/) which spawns a
-pod on each node that reads logs, generated by kubelet, container runtime and containers and
-sends them to Elasticsearch.
-*Please note that for Fluentd to work, every Kubernetes node must be labeled*
-`beta.kubernetes.io/fluentd-ds-ready=true`, as otherwise Fluentd will ignore them.
+Fluentd is deployed as a [DaemonSet][daemonSet], which spawns a pod on each
+node that reads logs generated by the kubelet, the container runtime, and
+containers, and sends them to Elasticsearch.
-Learn more at: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana
+**Note:** in order for Fluentd to work, every Kubernetes node must be labeled
+with `beta.kubernetes.io/fluentd-ds-ready=true`; otherwise the Fluentd
+DaemonSet will ignore it.
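+
+For instance, a node can be labeled like this (the node name is a
+placeholder):
+
+```shell
+kubectl label node <your-node-name> beta.kubernetes.io/fluentd-ds-ready=true
+```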
+
+Learn more in the [official Kubernetes documentation][k8sElasticsearchDocs].
+
+### Known problems
+
+Since Fluentd talks to the Elasticsearch service inside the cluster, Fluentd
+instances on master nodes won't work, because masters run no kube-proxy. To
+keep Fluentd pods off the masters, either don't apply the label mentioned in
+the previous paragraph to them, or add a taint to them.
+
+[fluentd]: http://www.fluentd.org/
+[elasticsearch]: https://www.elastic.co/products/elasticsearch
+[kibana]: https://www.elastic.co/products/kibana
+[xPack]: https://www.elastic.co/products/x-pack
+[setupCreds]: https://www.elastic.co/guide/en/x-pack/current/setting-up-authentication.html#reset-built-in-user-passwords
+[fluentdCreds]: https://github.com/uken/fluent-plugin-elasticsearch#user-password-path-scheme-ssl_verify
+[fluentdEnvVar]: https://docs.fluentd.org/v0.12/articles/faq#how-can-i-use-environment-variables-to-configure-parameters-dynamically
+[configMap]: https://kubernetes.io/docs/tasks/configure-pod-container/configmap/
+[secret]: https://kubernetes.io/docs/concepts/configuration/secret/
+[statefulSet]: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset
+[initContainer]: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
+[emptyDir]: https://kubernetes.io/docs/concepts/storage/volumes#emptydir
+[daemonSet]: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
+[k8sElasticsearchDocs]: https://kubernetes.io/docs/tasks/debug-application-cluster/logging-elasticsearch-kibana
[]()
-
diff --git a/cluster/addons/fluentd-elasticsearch/env-configmap.yaml b/cluster/addons/fluentd-elasticsearch/env-configmap.yaml
deleted file mode 100644
index 0ab1075fc85..00000000000
--- a/cluster/addons/fluentd-elasticsearch/env-configmap.yaml
+++ /dev/null
@@ -1,7 +0,0 @@
-apiVersion: v1
-kind: ConfigMap
-metadata:
- name: environment
- namespace: kube-system
-data:
- elasticsearch-user: elastic
diff --git a/cluster/addons/fluentd-elasticsearch/env-secret.yaml b/cluster/addons/fluentd-elasticsearch/env-secret.yaml
deleted file mode 100644
index 9e045135463..00000000000
--- a/cluster/addons/fluentd-elasticsearch/env-secret.yaml
+++ /dev/null
@@ -1,8 +0,0 @@
-apiVersion: v1
-kind: Secret
-metadata:
- name: environment
- namespace: kube-system
-type: Opaque
-data:
- elasticsearch-password: Y2hhbmdlbWU=
diff --git a/cluster/addons/fluentd-elasticsearch/es-clusterrole.yaml b/cluster/addons/fluentd-elasticsearch/es-clusterrole.yaml
deleted file mode 100644
index e77f51cd2d2..00000000000
--- a/cluster/addons/fluentd-elasticsearch/es-clusterrole.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch-logging
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
-rules:
-- apiGroups:
- - ""
- resources:
- - "services"
- - "namespaces"
- - "endpoints"
- verbs:
- - "get"
diff --git a/cluster/addons/fluentd-elasticsearch/es-clusterrolebinding.yaml b/cluster/addons/fluentd-elasticsearch/es-clusterrolebinding.yaml
deleted file mode 100644
index ee3847bb814..00000000000
--- a/cluster/addons/fluentd-elasticsearch/es-clusterrolebinding.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- namespace: kube-system
- name: elasticsearch-logging
- labels:
- k8s-app: elasticsearch-logging
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
-subjects:
-- kind: ServiceAccount
- name: elasticsearch-logging
- namespace: kube-system
- apiGroup: ""
-roleRef:
- kind: ClusterRole
- name: elasticsearch-logging
- apiGroup: ""
diff --git a/cluster/addons/fluentd-elasticsearch/es-image/Dockerfile b/cluster/addons/fluentd-elasticsearch/es-image/Dockerfile
index a9f2c56cffc..28e3c96d3d3 100644
--- a/cluster/addons/fluentd-elasticsearch/es-image/Dockerfile
+++ b/cluster/addons/fluentd-elasticsearch/es-image/Dockerfile
@@ -14,22 +14,12 @@
FROM docker.elastic.co/elasticsearch/elasticsearch:5.5.1
-USER root
-
-RUN mkdir /data
-RUN chown -R elasticsearch:elasticsearch /data
-
-WORKDIR /usr/share/elasticsearch
-
VOLUME ["/data"]
EXPOSE 9200 9300
-USER elasticsearch
-COPY elasticsearch_logging_discovery bin/
-COPY config/elasticsearch.yml config/
-COPY config/log4j2.properties config/
-COPY run.sh bin/
+COPY elasticsearch_logging_discovery run.sh bin/
+COPY config/elasticsearch.yml config/log4j2.properties config/
USER root
-RUN chown -R elasticsearch:elasticsearch config
+RUN chown -R elasticsearch:elasticsearch ./
CMD ["bin/run.sh"]
diff --git a/cluster/addons/fluentd-elasticsearch/es-image/Makefile b/cluster/addons/fluentd-elasticsearch/es-image/Makefile
index 753d09f6cd0..32d2c46ddf5 100755
--- a/cluster/addons/fluentd-elasticsearch/es-image/Makefile
+++ b/cluster/addons/fluentd-elasticsearch/es-image/Makefile
@@ -12,19 +12,19 @@
# See the License for the specific language governing permissions and
# limitations under the License.
-.PHONY: elasticsearch_logging_discovery build push
+.PHONY: binary build push
-# The current value of the tag to be used for building and
-# pushing an image to gcr.io
-TAG = v5.5.1
+PREFIX = gcr.io/google-containers
+IMAGE = elasticsearch
+TAG = v5.5.1-1
-build: elasticsearch_logging_discovery
- docker build --pull -t gcr.io/google_containers/elasticsearch:$(TAG) .
+build:
+ docker build --pull -t $(PREFIX)/$(IMAGE):$(TAG) .
push:
- gcloud docker -- push gcr.io/google_containers/elasticsearch:$(TAG)
+ gcloud docker -- push $(PREFIX)/$(IMAGE):$(TAG)
-elasticsearch_logging_discovery:
+binary:
CGO_ENABLED=0 GOOS=linux go build -a -ldflags "-w" elasticsearch_logging_discovery.go
clean:
diff --git a/cluster/addons/fluentd-elasticsearch/es-image/config/elasticsearch.yml b/cluster/addons/fluentd-elasticsearch/es-image/config/elasticsearch.yml
index f4ffee74a7f..23c59c53b79 100644
--- a/cluster/addons/fluentd-elasticsearch/es-image/config/elasticsearch.yml
+++ b/cluster/addons/fluentd-elasticsearch/es-image/config/elasticsearch.yml
@@ -12,3 +12,6 @@ path.data: /data
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: ${MINIMUM_MASTER_NODES}
+
+xpack.security.enabled: false
+xpack.monitoring.enabled: false
diff --git a/cluster/addons/fluentd-elasticsearch/es-serviceaccount.yaml b/cluster/addons/fluentd-elasticsearch/es-serviceaccount.yaml
deleted file mode 100644
index 6f4ede424e1..00000000000
--- a/cluster/addons/fluentd-elasticsearch/es-serviceaccount.yaml
+++ /dev/null
@@ -1,10 +0,0 @@
-apiVersion: v1
-kind: ServiceAccount
-metadata:
- name: elasticsearch-logging
- namespace: kube-system
- labels:
- k8s-app: elasticsearch-logging
- version: v1
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
diff --git a/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml b/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
index 85256106794..43505ea476e 100644
--- a/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
+++ b/cluster/addons/fluentd-elasticsearch/es-statefulset.yaml
@@ -1,11 +1,60 @@
-apiVersion: apps/v1beta1
-kind: StatefulSet
+# RBAC authn and authz
+apiVersion: v1
+kind: ServiceAccount
metadata:
- name: elasticsearch-logging-v1
+ name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
- version: v1
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+---
+kind: ClusterRole
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch-logging
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+rules:
+- apiGroups:
+ - ""
+ resources:
+ - "services"
+ - "namespaces"
+ - "endpoints"
+ verbs:
+ - "get"
+---
+kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1beta1
+metadata:
+ namespace: kube-system
+ name: elasticsearch-logging
+ labels:
+ k8s-app: elasticsearch-logging
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: Reconcile
+subjects:
+- kind: ServiceAccount
+ name: elasticsearch-logging
+ namespace: kube-system
+ apiGroup: ""
+roleRef:
+ kind: ClusterRole
+ name: elasticsearch-logging
+ apiGroup: ""
+---
+# Elasticsearch deployment itself
+apiVersion: apps/v1beta1
+kind: StatefulSet
+metadata:
+ name: elasticsearch-logging
+ namespace: kube-system
+ labels:
+ k8s-app: elasticsearch-logging
+ version: v5.5.1
kubernetes.io/cluster-service: "true"
addonmanager.kubernetes.io/mode: Reconcile
spec:
@@ -14,17 +63,17 @@ spec:
selector:
matchLabels:
k8s-app: elasticsearch-logging
- version: v1
+ version: v5.5.1
template:
metadata:
labels:
k8s-app: elasticsearch-logging
- version: v1
+ version: v5.5.1
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: elasticsearch-logging
containers:
- - image: gcr.io/google_containers/elasticsearch:v5.5.1
+ - image: gcr.io/google-containers/elasticsearch:v5.5.1-1
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
@@ -47,17 +96,15 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
+ volumes:
+ - name: elasticsearch-logging
+ emptyDir: {}
+ # Elasticsearch requires vm.max_map_count to be at least 262144.
+ # If your OS already sets up this number to a higher value, feel free
+ # to remove this init container.
initContainers:
- image: alpine:3.6
command: ["/sbin/sysctl", "-w", "vm.max_map_count=262144"]
name: elasticsearch-logging-init
securityContext:
privileged: true
- volumeClaimTemplates:
- - metadata:
- name: elasticsearch-logging
- spec:
- accessModes: ["ReadWriteOnce"]
- resources:
- requests:
- storage: 100Gi
diff --git a/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrole.yaml b/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrole.yaml
deleted file mode 100644
index 354956471ec..00000000000
--- a/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrole.yaml
+++ /dev/null
@@ -1,18 +0,0 @@
-kind: ClusterRole
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- name: fluentd-es
- labels:
- k8s-app: fluentd-es
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
-rules:
-- apiGroups:
- - ""
- resources:
- - "namespaces"
- - "pods"
- verbs:
- - "get"
- - "watch"
- - "list"
diff --git a/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrolebinding.yaml b/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrolebinding.yaml
deleted file mode 100644
index 24ff206ee03..00000000000
--- a/cluster/addons/fluentd-elasticsearch/fluentd-es-clusterrolebinding.yaml
+++ /dev/null
@@ -1,17 +0,0 @@
-kind: ClusterRoleBinding
-apiVersion: rbac.authorization.k8s.io/v1beta1
-metadata:
- name: fluentd-es
- labels:
- k8s-app: fluentd-es
- kubernetes.io/cluster-service: "true"
- addonmanager.kubernetes.io/mode: Reconcile
-subjects:
-- kind: ServiceAccount
- name: fluentd-es
- namespace: kube-system
- apiGroup: ""
-roleRef:
- kind: ClusterRole
- name: fluentd-es
- apiGroup: ""
diff --git a/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml b/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
new file mode 100644
index 00000000000..3fe62d8ae59
--- /dev/null
+++ b/cluster/addons/fluentd-elasticsearch/fluentd-es-configmap.yaml
@@ -0,0 +1,362 @@
+kind: ConfigMap
+apiVersion: v1
+data:
+ containers.input.conf: |-
+ # This configuration file for Fluentd / td-agent is used
+ # to watch changes to Docker log files. The kubelet creates symlinks that
+ # capture the pod name, namespace, container name & Docker container ID
+ # to the docker logs for pods in the /var/log/containers directory on the host.
+ # If running this fluentd configuration in a Docker container, the /var/log
+ # directory should be mounted in the container.
+ #
+ # These logs are then submitted to Elasticsearch which assumes the
+ # installation of the fluent-plugin-elasticsearch & the
+ # fluent-plugin-kubernetes_metadata_filter plugins.
+ # See https://github.com/uken/fluent-plugin-elasticsearch &
+ # https://github.com/fabric8io/fluent-plugin-kubernetes_metadata_filter for
+ # more information about the plugins.
+ #
+ # Example
+ # =======
+ # A line in the Docker log file might look like this JSON:
+ #
+ # {"log":"2014/09/25 21:15:03 Got request with path wombat\n",
+ # "stream":"stderr",
+ # "time":"2014-09-25T21:15:03.499185026Z"}
+ #
+ # The time_format specification below makes sure we properly
+ # parse the time format produced by Docker. This will be
+ # submitted to Elasticsearch and should appear like:
+ # $ curl 'http://elasticsearch-logging:9200/_search?pretty'
+ # ...
+ # {
+ # "_index" : "logstash-2014.09.25",
+ # "_type" : "fluentd",
+ # "_id" : "VBrbor2QTuGpsQyTCdfzqA",
+ # "_score" : 1.0,
+ # "_source":{"log":"2014/09/25 22:45:50 Got request with path wombat\n",
+ # "stream":"stderr","tag":"docker.container.all",
+ # "@timestamp":"2014-09-25T22:45:50+00:00"}
+ # },
+ # ...
+ #
+ # The Kubernetes fluentd plugin is used to write the Kubernetes metadata to the log
+ # record & add labels to the log record if properly configured. This enables users
+ # to filter & search logs on any metadata.
+ # For example a Docker container's logs might be in the directory:
+ #
+ # /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b
+ #
+ # and in the file:
+ #
+ # 997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
+ #
+ # where 997599971ee6... is the Docker ID of the running container.
+ # The Kubernetes kubelet makes a symbolic link to this file on the host machine
+ # in the /var/log/containers directory which includes the pod name and the Kubernetes
+ # container name:
+ #
+ # synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
+ # ->
+ # /var/lib/docker/containers/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b/997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b-json.log
+ #
+ # The /var/log directory on the host is mapped to the /var/log directory in the container
+ # running this instance of Fluentd and we end up collecting the file:
+ #
+ # /var/log/containers/synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
+ #
+ # This results in the tag:
+ #
+ # var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
+ #
+ # The Kubernetes fluentd plugin is used to extract the namespace, pod name & container name
+ # which are added to the log message as a kubernetes field object & the Docker container ID
+ # is also added under the docker field object.
+ # The final tag is:
+ #
+ # kubernetes.var.log.containers.synthetic-logger-0.25lps-pod_default_synth-lgr-997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b.log
+ #
+ # And the final log record looks like:
+ #
+ # {
+ # "log":"2014/09/25 21:15:03 Got request with path wombat\n",
+ # "stream":"stderr",
+ # "time":"2014-09-25T21:15:03.499185026Z",
+ # "kubernetes": {
+ # "namespace": "default",
+ # "pod_name": "synthetic-logger-0.25lps-pod",
+ # "container_name": "synth-lgr"
+ # },
+ # "docker": {
+ # "container_id": "997599971ee6366d4a5920d25b79286ad45ff37a74494f262e3bc98d909d0a7b"
+ # }
+ # }
+ #
+ # This makes it easier for users to search for logs by pod name or by
+ # the name of the Kubernetes container regardless of how many times the
+ # Kubernetes pod has been restarted (resulting in several Docker container IDs).
+
+ # Example:
+ # {"log":"[info:2016-02-16T16:04:05.930-08:00] Some log text here\n","stream":"stdout","time":"2016-02-17T00:04:05.931087621Z"}
+
+ <source>
+   type tail
+   path /var/log/containers/*.log
+   pos_file /var/log/es-containers.log.pos
+   time_format %Y-%m-%dT%H:%M:%S.%NZ
+   tag kubernetes.*
+   format json
+   read_from_head true
+ </source>
+ system.input.conf: |-
+ # Example:
+ # 2015-12-21 23:17:22,066 [salt.state ][INFO ] Completed state [net.ipv4.ip_forward] at time 23:17:22.066081
+
+ <source>
+   type tail
+   format /^(?<time>[^ ]* [^ ,]*)[^\[]*\[[^\]]*\]\[(?<severity>[^ \]]*) *\] (?<message>.*)$/
+   time_format %Y-%m-%d %H:%M:%S
+   path /var/log/salt/minion
+   pos_file /var/log/es-salt.pos
+   tag salt
+ </source>
+
+ # Example:
+ # Dec 21 23:17:22 gke-foo-1-1-4b5cbd14-node-4eoj startupscript: Finished running startup script /var/run/google.startup.script
+
+ <source>
+   type tail
+   format syslog
+   path /var/log/startupscript.log
+   pos_file /var/log/es-startupscript.log.pos
+   tag startupscript
+ </source>
+
+
+ # Examples:
+ # time="2016-02-04T06:51:03.053580605Z" level=info msg="GET /containers/json"
+ # time="2016-02-04T07:53:57.505612354Z" level=error msg="HTTP Error" err="No such image: -f" statusCode=404
+
+ <source>
+   type tail
+   format /^time="(?<time>[^)]*)" level=(?<severity>[^ ]*) msg="(?<message>[^"]*)"( err="(?<error>[^"]*)")?( statusCode=(?<status_code>\d+))?/
+   path /var/log/docker.log
+   pos_file /var/log/es-docker.log.pos
+   tag docker
+ </source>
+
+ # Example:
+ # 2016/02/04 06:52:38 filePurge: successfully removed file /var/etcd/data/member/wal/00000000000006d0-00000000010a23d1.wal
+
+ <source>
+   type tail
+   # Not parsing this, because it doesn't have anything particularly useful to
+   # parse out of it (like severities).
+   format none
+   path /var/log/etcd.log
+   pos_file /var/log/es-etcd.log.pos
+   tag etcd
+ </source>
+
+
+ # Multi-line parsing is required for all the kube logs because very large log
+ # statements, such as those that include entire object bodies, get split into
+ # multiple lines by glog.
+
+ # Example:
+ # I0204 07:32:30.020537 3368 server.go:1048] POST /stats/container/: (13.972191ms) 200 [[Go-http-client/1.1] 10.244.1.3:40537]
+
+ <source>
+   type tail
+   format multiline
+   multiline_flush_interval 5s
+   format_firstline /^\w\d{4}/
+   format1 /^(?<severity>\w)(?<time>\d{4} [^\s]*)\s+(?<pid>\d+)\s+(?<source>[^ \]]+)\] (?<message>.*)/