mirror of
https://github.com/k3s-io/kubernetes.git
synced 2025-09-06 03:33:26 +00:00
Move the logging-related directories to where I think they belong.
1. Move fluentd-gcp to be a core cluster addon, rather than a contrib.
2. Get rid of the synthetic logger under contrib, since the exact same synthetic logger was also included in the logging-demo.
3. Move the logging-demo to examples, since it's effectively an example.

We should also consider adding a GCP section to the logging-demo example :)
@@ -1,34 +0,0 @@
# Makefile for launching synthetic logging sources (any platform)
# and for reporting the forwarding rules for the
# Elasticsearch and Kibana pods for the GCE platform.

.PHONY: up down logger-up logger-down logger10-up logger10-down get net

KUBECTL=../../../kubectl.sh

up: logger-up logger10-up

down: logger-down logger10-down

logger-up:
	-${KUBECTL} create -f synthetic_0_25lps.yaml

logger-down:
	-${KUBECTL} delete pods synthetic-logger-0.25lps-pod

logger10-up:
	-${KUBECTL} create -f synthetic_10lps.yaml

logger10-down:
	-${KUBECTL} delete pods synthetic-logger-10lps-pod

get:
	${KUBECTL} get pods
	${KUBECTL} get replicationControllers
	${KUBECTL} get services

net:
	${KUBECTL} get services elasticsearch-logging -o json
	${KUBECTL} get services kibana-logging -o json
@@ -1,164 +0,0 @@
# Elasticsearch/Kibana Logging Demonstration

This directory contains two pod specifications which can be used as synthetic
logging sources. The pod specification in [synthetic_0_25lps.yaml](synthetic_0_25lps.yaml)
describes a pod that emits a log message once every 4 seconds:

```
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count that
# increments by one on each iteration (to help notice missing log entries) and
# the date in a long format (RFC 3339) with nanosecond precision. This program
# logs at a frequency of 0.25 lines per second. The shell script is given
# directly to bash as the -c argument and could have been written out as:
#   i="0"
#   while true
#   do
#     echo -n "`hostname`: $i: "
#     date --rfc-3339 ns
#     sleep 4
#     i=$[$i+1]
#   done

apiVersion: v1beta1
kind: Pod
id: synthetic-logger-0.25lps-pod
desiredState:
  manifest:
    version: v1beta1
    id: synth-logger-0.25lps
    containers:
      - name: synth-lgr
        image: ubuntu:14.04
        command: ["bash", "-c", "i=\"0\"; while true; do echo -n \"`hostname`: $i: \"; date --rfc-3339 ns; sleep 4; i=$[$i+1]; done"]
labels:
  name: synth-logging-source
```
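The loop from the comment above can be tried directly in a local shell to see the log format it produces. This is a sketch, not part of the pod spec: it caps the loop at three iterations and shortens the sleep so it terminates quickly, and it assumes GNU `date` (for `--rfc-3339`):

```shell
# Standalone run of the synthetic logger loop, capped at 3 iterations.
# The pod version loops forever and sleeps 4 seconds between lines.
i=0
while [ "$i" -lt 3 ]
do
  echo -n "`hostname`: $i: "
  date --rfc-3339=ns
  sleep 0.1
  i=$((i+1))
done
```

Each emitted line has the shape `<hostname>: <count>: <RFC 3339 timestamp>`, which is what shows up in Kibana later in this demo.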
The other YAML file, [synthetic_10lps.yaml](synthetic_10lps.yaml), specifies a similar synthetic logger that emits 10 log messages every second. To run both synthetic loggers:
```
$ make up
../../../kubectl.sh create -f synthetic_0_25lps.yaml
Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_0_25lps.yaml
synthetic-logger-0.25lps-pod
../../../kubectl.sh create -f synthetic_10lps.yaml
Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl create -f synthetic_10lps.yaml
synthetic-logger-10lps-pod
```

Visiting the Kibana dashboard should make it clear that logs are being collected from the two synthetic loggers.

You can report the running pods, replication controllers and services with another Makefile rule:
```
$ make get
../../../kubectl.sh get pods
Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get pods
POD                                    CONTAINER(S)            IMAGE(S)                           HOST                                                            LABELS                       STATUS
7e1c7ce6-9764-11e4-898c-42010af03582   kibana-logging          kubernetes/kibana                  kubernetes-minion-3.c.kubernetes-elk.internal/130.211.129.169   name=kibana-logging          Running
synthetic-logger-0.25lps-pod           synth-lgr               ubuntu:14.04                       kubernetes-minion-2.c.kubernetes-elk.internal/146.148.41.87     name=synth-logging-source    Running
synthetic-logger-10lps-pod             synth-lgr               ubuntu:14.04                       kubernetes-minion-1.c.kubernetes-elk.internal/146.148.42.44     name=synth-logging-source    Running
influx-grafana                         influxdb                kubernetes/heapster_influxdb       kubernetes-minion-3.c.kubernetes-elk.internal/130.211.129.169   name=influxdb                Running
                                       grafana                 kubernetes/heapster_grafana
                                       elasticsearch           elasticsearch
heapster                               heapster                kubernetes/heapster                kubernetes-minion-2.c.kubernetes-elk.internal/146.148.41.87     name=heapster                Running
67cfcb1f-9764-11e4-898c-42010af03582   etcd                    quay.io/coreos/etcd:latest         kubernetes-minion-3.c.kubernetes-elk.internal/130.211.129.169   k8s-app=skydns               Running
                                       kube2sky                kubernetes/kube2sky:1.0
                                       skydns                  kubernetes/skydns:2014-12-23-001
6ba20338-9764-11e4-898c-42010af03582   elasticsearch-logging   elasticsearch                      kubernetes-minion-3.c.kubernetes-elk.internal/130.211.129.169   name=elasticsearch-logging   Running
../../../kubectl.sh get replicationControllers
Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get replicationControllers
CONTROLLER                         CONTAINER(S)            IMAGE(S)                           SELECTOR                     REPLICAS
skydns                             etcd                    quay.io/coreos/etcd:latest         k8s-app=skydns               1
                                   kube2sky                kubernetes/kube2sky:1.0
                                   skydns                  kubernetes/skydns:2014-12-23-001
elasticsearch-logging-controller   elasticsearch-logging   elasticsearch                      name=elasticsearch-logging   1
kibana-logging-controller          kibana-logging          kubernetes/kibana                  name=kibana-logging          1
../../../kubectl.sh get services
Running: ../../../cluster/../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get services
NAME                    LABELS                                    SELECTOR                     IP             PORT
kubernetes-ro           component=apiserver,provider=kubernetes   <none>                       10.0.83.3      80
kubernetes              component=apiserver,provider=kubernetes   <none>                       10.0.79.4      443
influx-master           <none>                                    name=influxdb                10.0.232.223   8085
skydns                  k8s-app=skydns                            k8s-app=skydns               10.0.0.10      53
elasticsearch-logging   <none>                                    name=elasticsearch-logging   10.0.25.103    9200
kibana-logging          <none>                                    name=kibana-logging          10.0.208.114   5601
```
The `net` rule in the Makefile reports information about the Elasticsearch and Kibana services, including the public IP address of each service.
```
$ make net
../../../kubectl.sh get services elasticsearch-logging -o json
current-context: "kubernetes-satnam_kubernetes"
Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get services elasticsearch-logging -o json
{
    "kind": "Service",
    "id": "elasticsearch-logging",
    "uid": "e5bf0a51-b87f-11e4-bd62-42010af01267",
    "creationTimestamp": "2015-02-19T21:40:18Z",
    "selfLink": "/api/v1beta1/services/elasticsearch-logging?namespace=default",
    "resourceVersion": 68,
    "apiVersion": "v1beta1",
    "namespace": "default",
    "port": 9200,
    "protocol": "TCP",
    "labels": {
        "name": "elasticsearch-logging"
    },
    "selector": {
        "name": "elasticsearch-logging"
    },
    "createExternalLoadBalancer": true,
    "publicIPs": [
        "104.154.81.135"
    ],
    "containerPort": 9200,
    "portalIP": "10.0.58.62",
    "sessionAffinity": "None"
}
../../../kubectl.sh get services kibana-logging -o json
current-context: "kubernetes-satnam_kubernetes"
Running: ../../../../cluster/gce/../../_output/dockerized/bin/linux/amd64/kubectl get services kibana-logging -o json
{
    "kind": "Service",
    "id": "kibana-logging",
    "uid": "e5bd4617-b87f-11e4-bd62-42010af01267",
    "creationTimestamp": "2015-02-19T21:40:18Z",
    "selfLink": "/api/v1beta1/services/kibana-logging?namespace=default",
    "resourceVersion": 67,
    "apiVersion": "v1beta1",
    "namespace": "default",
    "port": 5601,
    "protocol": "TCP",
    "labels": {
        "name": "kibana-logging"
    },
    "selector": {
        "name": "kibana-logging"
    },
    "createExternalLoadBalancer": true,
    "publicIPs": [
        "104.154.91.224"
    ],
    "containerPort": 80,
    "portalIP": "10.0.124.153",
    "sessionAffinity": "None"
}
```
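The public IP can also be pulled out of this JSON programmatically rather than read by eye. A sketch (not part of the Makefile) using `python3`, which is an assumption here; any JSON tool such as `jq` works equally well. It runs against a saved, trimmed copy of the service description in the v1beta1 shape shown above:

```shell
# Save a trimmed copy of the v1beta1 service JSON, then extract the
# first entry of its publicIPs list.
cat > /tmp/es-svc.json <<'EOF'
{"id": "elasticsearch-logging", "port": 9200, "publicIPs": ["104.154.81.135"]}
EOF
python3 -c 'import json; print(json.load(open("/tmp/es-svc.json"))["publicIPs"][0])'
# prints 104.154.81.135
```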
For this example the Elasticsearch service is running at `http://104.154.81.135:9200`.
```
$ curl http://104.154.81.135:9200
{
  "status" : 200,
  "name" : "Wombat",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.4",
    "build_hash" : "c88f77ffc81301dfa9dfd81ca2232f09588bd512",
    "build_timestamp" : "2015-02-19T13:05:36Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.3"
  },
  "tagline" : "You Know, for Search"
}
```
Visiting the URL `http://104.154.91.224:5601` should show the Kibana viewer for the logging information stored in the Elasticsearch service running at `http://104.154.81.135:9200`.
Binary file not shown (image, 87 KiB).
@@ -1,29 +0,0 @@
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count that
# increments by one on each iteration (to help notice missing log entries) and
# the date in a long format (RFC 3339) with nanosecond precision. This program
# logs at a frequency of 0.25 lines per second. The shell script is given
# directly to bash as the -c argument and could have been written out as:
#   i="0"
#   while true
#   do
#     echo -n "`hostname`: $i: "
#     date --rfc-3339 ns
#     sleep 4
#     i=$[$i+1]
#   done

apiVersion: v1beta1
kind: Pod
id: synthetic-logger-0.25lps-pod
desiredState:
  manifest:
    version: v1beta1
    id: synth-logger-0.25lps
    containers:
      - name: synth-lgr
        image: ubuntu:14.04
        command: ["bash", "-c", "i=\"0\"; while true; do echo -n \"`hostname`: $i: \"; date --rfc-3339 ns; sleep 4; i=$[$i+1]; done"]
labels:
  name: synth-logging-source
@@ -1,29 +0,0 @@
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count that
# increments by one on each iteration (to help notice missing log entries) and
# the date in a long format (RFC 3339) with nanosecond precision. This program
# logs at a frequency of 10 lines per second. The shell script is given
# directly to bash as the -c argument and could have been written out as:
#   i="0"
#   while true
#   do
#     echo -n "`hostname`: $i: "
#     date --rfc-3339 ns
#     sleep 0.1
#     i=$[$i+1]
#   done

apiVersion: v1beta1
kind: Pod
id: synthetic-logger-10lps-pod
desiredState:
  manifest:
    version: v1beta1
    id: synth-logger-10lps
    containers:
      - name: synth-lgr
        image: ubuntu:14.04
        command: ["bash", "-c", "i=\"0\"; while true; do echo -n \"`hostname`: $i: \"; date --rfc-3339 ns; sleep 0.1; i=$[$i+1]; done"]
labels:
  name: synth-logging-source
25 cluster/addons/fluentd-gcp/fluentd-gcp-image/Dockerfile (new file)
@@ -0,0 +1,25 @@
# This Dockerfile builds an image that is configured
# to use Fluentd to collect all Docker container log files
# and then ingest them using the Google Cloud
# Logging API. This configuration assumes that the host performing
# the collection is a VM that has been created with a logging.write
# scope and that the Logging API has been enabled for the project
# in the Google Developers Console.

FROM ubuntu:14.04
MAINTAINER Satnam Singh "satnam@google.com"

# Disable prompts from apt.
ENV DEBIAN_FRONTEND noninteractive
ENV OPTS_APT -y --force-yes --no-install-recommends

RUN apt-get -q update && \
    apt-get -y install curl && \
    apt-get clean && \
    curl -s https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh | sudo bash

# Copy the Fluentd configuration file for logging Docker container logs.
COPY google-fluentd.conf /etc/google-fluentd/google-fluentd.conf

# Start Fluentd to pick up our config that watches Docker container logs.
CMD /usr/sbin/google-fluentd -qq > /var/log/google-fluentd.log
16 cluster/addons/fluentd-gcp/fluentd-gcp-image/Makefile (new file)
@@ -0,0 +1,16 @@
# The build rule builds a Docker image that ships all Docker container logs to
# Google Cloud Platform using the Cloud Logging API. The push rule pushes
# the image to Google Container Registry.
# Satnam Singh (satnam@google.com)

.PHONY: build push

TAG = 1.2

build:
	docker build -t gcr.io/google_containers/fluentd-gcp:$(TAG) .

push:
	gcloud preview docker push gcr.io/google_containers/fluentd-gcp:$(TAG)
8 cluster/addons/fluentd-gcp/fluentd-gcp-image/README.md (new file)
@@ -0,0 +1,8 @@
# Collecting Docker Log Files with Fluentd and Sending Them to GCP

This directory contains the source files needed to make a Docker image
that collects Docker container log files using [Fluentd](http://www.fluentd.org/)
and sends them to GCP.

This image is designed to be used as part of the [Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes)
cluster bring-up process. The image resides at DockerHub under the name
[kubernetes/fluentd-gcp](https://registry.hub.docker.com/u/kubernetes/fluentd-gcp/).
@@ -0,0 +1,51 @@
# This Fluentd configuration file specifies the collection
# of all Docker container log files under /var/lib/docker/containers/...
# followed by ingestion using the Google Cloud Logging API.
# This configuration assumes the correct installation of the
# Google fluentd plug-in. Currently the collector uses a text format
# rather than JSON (which is the format used to store the Docker
# log files). When the fluentd plug-in can accept JSON, this
# configuration file should be changed by specifying:
#   format json
# in the source section.
# This configuration file assumes that the VM host running
# this configuration has been created with a logging.write scope.
# Maintainer: Satnam Singh (satnam@google.com)

<source>
  type tail
  format none
  time_key time
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/lib/docker/containers/gcp-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S
  tag docker.*
  read_from_head true
</source>

<match docker.**>
  type google_cloud
  flush_interval 5s
  # Never wait longer than 5 minutes between retries.
  max_retry_wait 300
  # Disable the limit on the number of retries (retry forever).
  disable_retry_limit
</match>

<source>
  type tail
  format none
  time_key time
  path /varlog/kubelet.log
  pos_file /varlog/gcp-kubelet.log.pos
  tag kubelet
</source>

<match kubelet>
  type google_cloud
  flush_interval 5s
  # Never wait longer than 5 minutes between retries.
  max_retry_wait 300
  # Disable the limit on the number of retries (retry forever).
  disable_retry_limit
</match>
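The header comments anticipate switching the container-log source from `format none` to JSON parsing once the Google fluentd plug-in accepts it. A hypothetical sketch of what that source section would then look like (not part of this commit, and untested against the plug-in):

```
# Hypothetical variant of the container-log source once the Google
# fluentd plug-in can ingest JSON directly, as the comments anticipate.
<source>
  type tail
  format json
  time_key time
  path /var/lib/docker/containers/*/*-json.log
  pos_file /var/lib/docker/containers/gcp-containers.log.pos
  time_format %Y-%m-%dT%H:%M:%S
  tag docker.*
  read_from_head true
</source>
```

With `format json` each Docker log record would be parsed into structured fields instead of being shipped as a single opaque text line.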