Merge pull request #10259 from satnam6502/logging-doc

Rename Google Cloud Logging doc appropriately
This commit is contained in:
Jeff Lowdermilk 2015-06-23 16:50:17 -07:00
commit 3c0cab7dfe


@@ -1,4 +1,4 @@
-# Logging
+# Cluster Level Logging to Google Cloud Logging
A Kubernetes cluster will typically be humming along running many system and application pods. How does the system administrator collect, manage and query the logs of the system pods? How does a user query the logs of their application which is composed of many pods which may be restarted or automatically generated by the Kubernetes system? These questions are addressed by the Kubernetes **cluster level logging** services.
@@ -193,24 +193,4 @@ This page has touched briefly on the underlying mechanisms that support gatherin
Some of the material in this section also appears in the blog article [Cluster Level Logging with Kubernetes](http://blog.kubernetes.io/2015/06/cluster-level-logging-with-kubernetes.html).
## Logging with Fluentd and Elasticsearch
To enable Elasticsearch-based logging of the stdout and stderr output of every Docker container in
a Kubernetes cluster, set the shell environment variables
``KUBE_ENABLE_NODE_LOGGING`` to ``true`` and ``KUBE_LOGGING_DESTINATION`` to ``elasticsearch``,
for example in bash:
```
# Run a Fluentd log collector on every node
export KUBE_ENABLE_NODE_LOGGING=true
# Send the collected logs to Elasticsearch instead of Google Cloud Logging
export KUBE_LOGGING_DESTINATION=elasticsearch
```
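These variables take effect when the cluster is created, so export them before bringing the cluster up. A minimal sketch, assuming the standard ``cluster/kube-up.sh`` bring-up script from a Kubernetes checkout:
```
# Sketch only: with the variables above already exported in the same shell,
# bring the cluster up so the logging settings take effect at creation time.
cluster/kube-up.sh
```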
This will instantiate a [Fluentd](http://www.fluentd.org/) instance on each node which will
collect all the Docker container log files. The collected logs are sent to an
[Elasticsearch](http://www.elasticsearch.org/) instance that is assumed to be running on the
local node and accepting log data on port 9200. This can be accomplished
by writing a pod specification and a service specification to define an
Elasticsearch service (more information to follow shortly in the contrib directory).
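Once logs are flowing, one way to sanity-check the pipeline is to query the local Elasticsearch endpoint directly. This is only a sketch: it assumes Elasticsearch is reachable on ``localhost:9200`` as described above, and that the Fluentd output plugin uses its usual ``logstash-*`` index naming, which may differ in your setup.
```
# Sketch only: verify that the local Elasticsearch instance is up and that
# indices holding the shipped container logs exist.
curl http://localhost:9200/                 # basic node/cluster information
curl http://localhost:9200/_cat/indices?v   # list indices (typically logstash-*)
```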
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/getting-started-guides/logging.md?pixel)]()