Move the logging-related directories to where I think they belong.

1. Move fluentd-gcp to be a core cluster addon, rather than a contrib.
2. Get rid of the synthetic logger under contrib, since the exact same
synthetic logger was also included in the logging-demo.
3. Move the logging-demo to examples, since it's effectively an example.

We should also consider adding a GCP section to the logging-demo
example :)
This commit is contained in:
Alex Robinson
2015-04-17 23:38:12 +00:00
parent 2775b9e0de
commit 059a8c92bd
11 changed files with 0 additions and 51 deletions


@@ -1,25 +0,0 @@
# This Dockerfile will build an image that is configured
# to use Fluentd to collect all Docker container log files
# and then cause them to be ingested using the Google Cloud
# Logging API. This configuration assumes that the host performing
# the collection is a VM that has been created with a logging.write
# scope and that the Logging API has been enabled for the project
# in the Google Developer Console.
FROM ubuntu:14.04
MAINTAINER Satnam Singh "satnam@google.com"
# Disable prompts from apt.
ENV DEBIAN_FRONTEND noninteractive
ENV OPTS_APT -y --force-yes --no-install-recommends
RUN apt-get -q update && \
apt-get -y install curl && \
apt-get clean && \
curl -s https://storage.googleapis.com/signals-agents/logging/google-fluentd-install.sh | sudo bash
# Copy the Fluentd configuration file for logging Docker container logs.
COPY google-fluentd.conf /etc/google-fluentd/google-fluentd.conf
# Start Fluentd to pick up our config that watches Docker container logs.
CMD /usr/sbin/google-fluentd -qq > /var/log/google-fluentd.log


@@ -1,16 +0,0 @@
# The build rule builds a Docker image that sends all Docker container logs to
# Google Cloud Platform using the Cloud Logging API. The push rule pushes
# the image to DockerHub.
# Satnam Singh (satnam@google.com)
.PHONY: build push
TAG = 1.2
build:
docker build -t gcr.io/google_containers/fluentd-gcp:$(TAG) .
push:
gcloud preview docker push gcr.io/google_containers/fluentd-gcp:$(TAG)


@@ -1,8 +0,0 @@
# Collecting Docker Log Files with Fluentd and sending to GCP.
This directory contains the source files needed to make a Docker image
that collects Docker container log files using [Fluentd](http://www.fluentd.org/)
and sends them to GCP.
This image is designed to be used as part of the [Kubernetes](https://github.com/GoogleCloudPlatform/kubernetes)
cluster bring up process. The image resides at DockerHub under the name
[kubernetes/fluentd-gcp](https://registry.hub.docker.com/u/kubernetes/fluentd-gcp/).


@@ -1,51 +0,0 @@
# This Fluentd configuration file specifies the collection
# of all Docker container log files under /var/lib/docker/containers/...
# followed by ingestion using the Google Cloud Logging API.
# This configuration assumes the correct installation of the
# Google fluentd plug-in. Currently the collector uses a text format
# rather than JSON (which is the format used to store the Docker
# log files). When the fluentd plug-in can accept JSON this
# configuration file should be changed by specifying:
# format json
# in the source section.
# This configuration file assumes that the VM host running
# this configuration has been created with a logging.write scope.
# Maintainer: Satnam Singh (satnam@google.com)
<source>
type tail
format none
time_key time
path /var/lib/docker/containers/*/*-json.log
pos_file /var/lib/docker/containers/gcp-containers.log.pos
time_format %Y-%m-%dT%H:%M:%S
tag docker.*
read_from_head true
</source>
<match docker.**>
type google_cloud
flush_interval 5s
# Never wait longer than 5 minutes between retries.
max_retry_wait 300
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
</match>
<source>
type tail
format none
time_key time
path /varlog/kubelet.log
pos_file /varlog/gcp-kubelet.log.pos
tag kubelet
</source>
<match kubelet>
type google_cloud
flush_interval 5s
# Never wait longer than 5 minutes between retries.
max_retry_wait 300
# Disable the limit on the number of retries (retry forever).
disable_retry_limit
</match>
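
The header comment above notes that Docker stores container logs as JSON while this configuration reads them with `format none`. A small shell sketch of the difference (the sample log line and the `sed` extraction below are illustrative only, not part of the configuration):

```shell
# A line as Docker writes it to /var/lib/docker/containers/<id>/<id>-json.log
# (sample content; the real files hold one such JSON object per line).
line='{"log":"hello from the container\n","stream":"stdout","time":"2015-04-17T23:38:12Z"}'

# With `format none`, Fluentd forwards the whole JSON string as the message.
# With `format json`, it would parse the object and could emit just the "log"
# field -- roughly what this sed expression recovers:
msg=$(printf '%s' "$line" | sed -n 's/.*"log":"\([^"]*\)\\n".*/\1/p')
echo "$msg"
```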


@@ -1,22 +0,0 @@
# Makefile for a synthetic logger whose output is
# collected by GCP. The cluster must have been created with
# the variable LOGGING_DESTINATION=GCP.
.PHONY: up down logger-up logger-down get
KUBECTL=kubectl.sh
up: logger-up
down: logger-down
logger-up:
-${KUBECTL} create -f synthetic_0_25lps.yml
logger-down:
-${KUBECTL} delete pods synthetic-logger-0.25lps-pod
get:
${KUBECTL} get pods


@@ -1,29 +0,0 @@
# This pod specification creates an instance of a synthetic logger. The logger
# is simply a program that writes out the hostname of the pod, a count which increments
# by one on each iteration (to help notice missing log entries) and the date using
# a long format (RFC 3339) to nanosecond precision. This program logs at a frequency
# of 0.25 lines per second. The shell script is given directly to bash as the -c argument
# and could have been written out as:
# i="0"
# while true
# do
# echo -n "`hostname`: $i: "
# date --rfc-3339 ns
# sleep 4
# i=$[$i+1]
# done
apiVersion: v1beta1
kind: Pod
id: synthetic-logger-0.25lps-pod
desiredState:
manifest:
version: v1beta1
id: synth-logger-0.25lps
containers:
- name: synth-lgr
image: ubuntu:14.04
command: ["bash", "-c", "i=\"0\"; while true; do echo -n \"`hostname`: $i: \"; date --rfc-3339 ns; sleep 4; i=$[$i+1]; done"]
labels:
name: synth-logging-source
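
For reference, the loop from the pod's command can be run standalone. This sketch bounds the iteration count and drops the 4-second sleep so it finishes quickly; `ITERATIONS` is a demonstration knob, not part of the original pod spec:

```shell
#!/bin/sh
# Bounded variant of the synthetic logger loop from the pod spec above.
# The real pod loops forever and sleeps 4s per line (0.25 lines per second);
# ITERATIONS here is a demonstration knob, not part of the original spec.
ITERATIONS=3
i=0
while [ "$i" -lt "$ITERATIONS" ]; do
    # hostname, counter, then an RFC 3339 timestamp to nanosecond precision
    printf '%s: %s: %s\n' "$(hostname)" "$i" "$(date --rfc-3339=ns)"
    i=$((i + 1))
done
```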