Merge pull request #25168 from mikebrow/devel-tree-80col-updates-D

Automatic merge from submit-queue

devel/ tree more minor edits

Address line wrap issue #1488. Also cleans up other minor editing issues in the docs/devel/* tree such as spelling errors, links, content tables...

Signed-off-by: Mike Brown <brownwm@us.ibm.com>
k8s-merge-robot 2016-05-10 09:35:34 -07:00
commit f6eefd4762
4 changed files with 254 additions and 114 deletions


@@ -34,69 +34,76 @@ Documentation for other releases can be found at
# Measuring Node Performance
This document outlines the issues and pitfalls of measuring Node performance, as
well as the tools available.
## Cluster Set-up
There are lots of factors which can affect node performance numbers, so care
must be taken in setting up the cluster to make the intended measurements. In
addition to taking the following steps into consideration, it is important to
document precisely which setup was used. For example, performance can vary
wildly from commit-to-commit, so it is very important to **document which commit
or version** of Kubernetes was used, which Docker version was used, etc.
### Addon pods
Be aware of which addon pods are running on which nodes. By default Kubernetes
runs 8 addon pods, plus another 2 per node (`fluentd-elasticsearch` and
`kube-proxy`) in the `kube-system` namespace. The addon pods can be disabled for
more consistent results, but doing so can also have performance implications.
For example, Heapster polls each node regularly to collect stats data. Disabling
Heapster will hide the performance cost of serving those stats in the Kubelet.
#### Disabling Add-ons
Disabling addons is simple. Just ssh into the Kubernetes master and move the
addon from `/etc/kubernetes/addons/` to a backup location. More details
[here](../../cluster/addons/).
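
For example, to disable the monitoring addon (a sketch; the exact manifest name
under `/etc/kubernetes/addons/` varies by cluster and version):

```console
$ ssh ${KUBE_MASTER}
$ sudo mkdir -p /etc/kubernetes/addons-backup
$ sudo mv /etc/kubernetes/addons/cluster-monitoring /etc/kubernetes/addons-backup/
```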
### Which / how many pods?
Performance will vary a lot between a node with 0 pods and a node with 100 pods.
In many cases you'll want to make measurements with several different amounts of
pods. On a single node cluster scaling a replication controller makes this easy,
just make sure the system reaches a steady-state before starting the
measurement. E.g. `kubectl scale replicationcontroller pause --replicas=100`
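
A minimal sketch of that workflow, assuming a replication controller named
`pause` already exists:

```console
$ kubectl scale replicationcontroller pause --replicas=100
$ # Wait for steady state: DESIRED and CURRENT should match before measuring.
$ kubectl get replicationcontroller pause
```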
In most cases pause pods will yield the most consistent measurements since the
system will not be affected by pod load. However, in some special cases
Kubernetes has been tuned to optimize pods that are not doing anything, such as
the cAdvisor housekeeping (stats gathering). In these cases, performing a very
light task (such as a simple network ping) can make a difference.
Finally, you should also consider which features your pods should be using. For
example, if you want to measure performance with probing, you should obviously
use pods with liveness or readiness probes configured. Likewise for volumes,
number of containers, etc.
### Other Tips
**Number of nodes** - On the one hand, it can be easier to manage logs, pods,
environment etc. with a single node to worry about. On the other hand, having
multiple nodes will let you gather more data in parallel for more robust
sampling.
## E2E Performance Test
There is an end-to-end test for collecting overall resource usage of node
components: [kubelet_perf.go](../../test/e2e/kubelet_perf.go). To
run the test, simply make sure you have an e2e cluster running (`go run
hack/e2e.go -up`) and [set up](#cluster-set-up) correctly.
Run the test with `go run hack/e2e.go -v -test
--test_args="--ginkgo.focus=resource\susage\stracking"`. You may also wish to
customise the number of pods or other parameters of the test (remember to rerun
`make WHAT=test/e2e/e2e.test` after you do).
## Profiling
Kubelet installs the [go pprof handlers](https://golang.org/pkg/net/http/pprof/),
which can be queried for CPU profiles:
```console
$ kubectl proxy &
@@ -109,13 +116,15 @@ $ go tool pprof -web $KUBELET_BIN $OUTPUT
`pprof` can also provide heap usage, from the `/debug/pprof/heap` endpoint
(e.g. `http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap`).
More information on go profiling can be found
[here](http://blog.golang.org/profiling-go-programs).
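
The heap endpoint can be queried the same way, for example (a sketch, with
`$NODE` and `$KUBELET_BIN` set as for the CPU profile above):

```console
$ kubectl proxy &
$ curl -o heap.out "http://localhost:8001/api/v1/proxy/nodes/${NODE}:10250/debug/pprof/heap"
$ go tool pprof -web $KUBELET_BIN heap.out
```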
## Benchmarks
Before jumping through all the hoops to measure a live Kubernetes node in a real
cluster, it is worth considering whether the data you need can be gathered
through a Benchmark test. Go provides a really simple benchmarking mechanism,
just add a unit test of the form:
```go
// In foo_test.go
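// A minimal, self-contained sketch: `go test` runs any function of the form
// BenchmarkXxx(*testing.B). Here fmt.Sprintf stands in for the code under
// test (an assumption; substitute the operation you want to measure).
package foo

import (
	"fmt"
	"testing"
)

func BenchmarkSprintf(b *testing.B) {
	for i := 0; i < b.N; i++ {
		// The operation being measured; b.N is chosen by the framework.
		_ = fmt.Sprintf("%d", i)
	}
}

// Run with `go test -bench=.` in the package directory.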


@@ -31,79 +31,155 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Kubernetes "Github and Build-cop" Rotation
### Prerequisites
* Ensure you have [write access to http://github.com/kubernetes/kubernetes](https://github.com/orgs/kubernetes/teams/kubernetes-maintainers)
* Test your admin access by e.g. adding a label to an issue.
### Traffic sources and responsibilities
* GitHub Kubernetes [issues](https://github.com/kubernetes/kubernetes/issues)
and [pulls](https://github.com/kubernetes/kubernetes/pulls): Your job is to be
the first responder to all new issues and PRs. If you are not equipped to do
this (which is fine!), it is your job to seek guidance!
* Support issues should be closed and redirected to Stackoverflow (see example
response below).
* All incoming issues should be tagged with a team label
(team/{api,ux,control-plane,node,cluster,csi,redhat,mesosphere,gke,release-infra,test-infra,none});
for issues that overlap teams, you can use multiple team labels
* There is a related concept of "Github teams" which allow you to @ mention
a set of people; feel free to @ mention a Github team if you wish, but this is
not a substitute for adding a team/* label, which is required
* [Google teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=goog-)
* [Redhat teams](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=rh-)
* [SIGs](https://github.com/orgs/kubernetes/teams?utf8=%E2%9C%93&query=sig-)
* If the issue is reporting broken builds, broken e2e tests, or other
obvious P0 issues, label the issue with priority/P0 and assign it to someone.
This is the only situation in which you should add a priority/* label
* non-P0 issues do not need a reviewer assigned initially
* Assign any issues related to Vagrant to @derekwaynecarr (and @mention him
in the issue)
* All incoming PRs should be assigned a reviewer.
* unless it is a WIP (Work in Progress), RFC (Request for Comments), or design proposal.
* An auto-assigner [should do this for you](https://github.com/kubernetes/kubernetes/pull/12365/files)
* When in doubt, choose a TL or team maintainer of the most relevant team; they can delegate
* Keep in mind that you can @ mention people in an issue/PR to bring it to
their attention without assigning it to them. You can also @ mention github
teams, such as @kubernetes/goog-ux or @kubernetes/kubectl
* If you need help triaging an issue or PR, consult with (or assign it to)
@brendandburns, @thockin, @bgrant0607, @quinton-hoole, @davidopp, @dchen1107,
@lavalamp (all U.S. Pacific Time) or @fgrzadkowski (Central European Time).
* At the beginning of your shift, please add team/* labels to any issues that
have fallen through the cracks and don't have one. Likewise, be fair to the next
person in rotation: try to ensure that every issue that gets filed while you are
on duty is handled. The Github query to find issues with no team/* label is:
[here](https://github.com/kubernetes/kubernetes/issues?utf8=%E2%9C%93&q=is%3Aopen+is%3Aissue+-label%3Ateam%2Fcontrol-plane+-label%3Ateam%2Fmesosphere+-label%3Ateam%2Fredhat+-label%3Ateam%2Frelease-infra+-label%3Ateam%2Fnone+-label%3Ateam%2Fnode+-label%3Ateam%2Fcluster+-label%3Ateam%2Fux+-label%3Ateam%2Fapi+-label%3Ateam%2Ftest-infra+-label%3Ateam%2Fgke+-label%3A"team%2FCSI-API+Machinery+SIG"+-label%3Ateam%2Fhuawei+-label%3Ateam%2Fsig-aws).
Example response for support issues:
```code
Please re-post your question to
[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
We are trying to consolidate the channels to which questions for help/support
are posted so that we can improve our efficiency in responding to your requests,
and to make it easier for you to find answers to frequently asked questions and
how to address common use cases.
We regularly see messages posted in multiple forums, with the full response
thread only in one place or, worse, spread across multiple forums. Also, the
large volume of support issues on github is making it difficult for us to use
issues to identify real bugs.
The Kubernetes team scans stackoverflow on a regular basis, and will try to
ensure your questions don't go unanswered.
Before posting a new question, please search stackoverflow for answers to
similar questions, and also familiarize yourself with:
* [user guide](http://kubernetes.io/v1.0/)
* [troubleshooting guide](http://kubernetes.io/v1.0/docs/troubleshooting.html)
Again, thanks for using Kubernetes.
The Kubernetes Team
```
### Build-copping
* The [merge-bot submit queue](http://submit-queue.k8s.io/)
([source](https://github.com/kubernetes/contrib/tree/master/mungegithub/mungers/submit-queue.go))
should auto-merge all eligible PRs for you once they've passed all the relevant
checks mentioned below and all
[critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
are passing. If the merge-bot has been disabled for some reason, or tests are
failing, you might need to do some manual merging to get things back on track.
* Once a day or so, look at the
[flaky test builds](https://goto.google.com/k8s-test/view/Flaky/); if they are
timing out, clusters are failing to start, or tests are consistently failing
(instead of just flaking), file an issue to get things back on track.
* Jobs that are not in [critical e2e tests](https://goto.google.com/k8s-test/view/Critical%20Builds/)
or [flaky test builds](https://goto.google.com/k8s-test/view/Flaky/) are not
your responsibility to monitor. The `Test owner:` in the job description will be
automatically emailed if the job is failing.
* If you are a weekday oncall, ensure that PRs conforming to the following
pre-requisites are being merged at a reasonable rate:
* [Have been LGTMd](https://github.com/kubernetes/kubernetes/labels/lgtm)
* Pass Travis and Jenkins per-PR tests.
* Author has signed CLA if applicable.
* If you are a weekend oncall, [never merge PRs manually](collab.md); instead,
add the label "lgtm" to the PRs once they have been LGTMd and passed Travis;
this will cause merge-bot to merge them automatically (or make them easy to find
by the next oncall, who will merge them).
* When the build is broken, roll back the PRs responsible ASAP
* When E2E tests are unstable, a "merge freeze" may be instituted. During a
merge freeze:
* Oncall should slowly merge LGTMd changes throughout the day while monitoring
E2E to ensure stability.
* Ideally the E2E run should be green, but some tests are flaky and can fail
randomly (not as a result of a particular change).
* If a large number of tests fail, or tests that normally pass fail, that
is an indication that one or more of the PR(s) in that build might be
problematic (and should be reverted).
* Use the Test Results Analyzer to see individual test history over time.
* Flake mitigation
* Tests that flake (fail a small percentage of the time) need an issue filed
against them. Please read [this](flaky-tests.md#filing-issues-for-flaky-tests);
the build cop is expected to file issues for any flaky tests they encounter.
* It's reasonable to manually merge PRs that fix a flake or otherwise mitigate it.
### Contact information
[@k8s-oncall](https://github.com/k8s-oncall) will reach the current person on
call.
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/devel/on-call-build-cop.md?pixel)]()


@@ -31,23 +31,43 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Kubernetes On-Call Rotations
### Kubernetes "first responder" rotations
Kubernetes has generated a lot of public traffic: email, pull-requests, bugs,
etc. So much traffic that it's becoming impossible to keep up with it all! This
is a fantastic problem to have. In order to be sure that SOMEONE, but not
EVERYONE on the team is paying attention to public traffic, we have instituted
two "first responder" rotations, listed below. Please read this page before
proceeding to the pages linked below, which are specific to each rotation.
Please also read our [notes on OSS collaboration](collab.md), particularly the
bits about hours. Specifically, each rotation is expected to be active primarily
during work hours, less so off hours.
During regular workday work hours of your shift, your primary responsibility is
to monitor the traffic sources specific to your rotation. You can check traffic
in the evenings if you feel so inclined, but it is not expected to be as highly
focused as work hours. For weekends, you should check traffic very occasionally
(e.g. once or twice a day). Again, it is not expected to be as highly focused as
workdays. It is assumed that over time, everyone will get weekday and weekend
shifts, so the workload will balance out.
If you cannot serve your shift, and you know this ahead of time, it is your
responsibility to find someone to cover and to change the rotation. If you have
an emergency, your responsibilities fall on the primary of the other rotation,
who acts as your secondary. If you need help to cover all of the tasks, reach
out to partners with oncall rotations (e.g.,
[Redhat](https://github.com/orgs/kubernetes/teams/rh-oncall)).
If you are not on duty you DO NOT need to do these things. You are free to focus
on "real work".
Note that Kubernetes will occasionally enter code slush/freeze, prior to
milestones. When it does, there might be changes in the instructions (assigning
milestones, for instance).
* [Github and Build Cop Rotation](on-call-build-cop.md)
* [User Support Rotation](on-call-user-support.md)


@@ -31,55 +31,90 @@ Documentation for other releases can be found at
<!-- END STRIP_FOR_RELEASE -->
<!-- END MUNGE: UNVERSIONED_WARNING -->
## Kubernetes "User Support" Rotation
### Traffic sources and responsibilities
* [StackOverflow](http://stackoverflow.com/questions/tagged/kubernetes) and
[ServerFault](http://serverfault.com/questions/tagged/google-kubernetes):
Respond to any thread that has no responses and is more than 6 hours old (over
time we will lengthen this timeout to allow community responses). If you are not
equipped to respond, it is your job to redirect to someone who can.
* [Query for unanswered Kubernetes StackOverflow questions](http://stackoverflow.com/search?q=%5Bkubernetes%5D+answers%3A0)
* [Query for unanswered Kubernetes ServerFault questions](http://serverfault.com/questions/tagged/google-kubernetes?sort=unanswered&pageSize=15)
* Direct poorly formulated questions to [stackoverflow's tips about how to ask](http://stackoverflow.com/help/how-to-ask)
* Direct off-topic questions to [stackoverflow's policy](http://stackoverflow.com/help/on-topic)
* [Slack](https://kubernetes.slack.com) ([registration](http://slack.k8s.io)):
Your job is to be on Slack, watching for questions and answering or redirecting
as needed. Also check out the [Slack Archive](http://kubernetes.slackarchive.io/).
* [Email/Groups](https://groups.google.com/forum/#!forum/google-containers):
Respond to any thread that has no responses and is more than 6 hours old (over
time we will lengthen this timeout to allow community responses). If you are not
equipped to respond, it is your job to redirect to someone who can.
* [Legacy] [IRC](irc://irc.freenode.net/#google-containers)
(irc.freenode.net #google-containers): watch IRC for questions and try to
redirect users to Slack. Also check out the
[IRC logs](https://botbot.me/freenode/google-containers/).
In general, try to direct support questions to:
1. Documentation, such as the [user guide](../user-guide/README.md) and
[troubleshooting guide](../troubleshooting.md)
2. Stackoverflow
If you see questions on a forum other than Stackoverflow, try to redirect them
to Stackoverflow. Example response:
```code
Please re-post your question to
[stackoverflow](http://stackoverflow.com/questions/tagged/kubernetes).
We are trying to consolidate the channels to which questions for help/support
are posted so that we can improve our efficiency in responding to your requests,
and to make it easier for you to find answers to frequently asked questions and
how to address common use cases.
We regularly see messages posted in multiple forums, with the full response
thread only in one place or, worse, spread across multiple forums. Also, the
large volume of support issues on github is making it difficult for us to use
issues to identify real bugs.
The Kubernetes team scans stackoverflow on a regular basis, and will try to
ensure your questions don't go unanswered.
Before posting a new question, please search stackoverflow for answers to
similar questions, and also familiarize yourself with:
* [user guide](http://kubernetes.io/v1.1/)
* [troubleshooting guide](http://kubernetes.io/v1.1/docs/troubleshooting.html)
Again, thanks for using Kubernetes.
The Kubernetes Team
```
If you answer a question (in any of the above forums) that you think might be
useful for someone else in the future, *please add it to one of the FAQs in the
wiki*:
* [User FAQ](https://github.com/kubernetes/kubernetes/wiki/User-FAQ)
* [Developer FAQ](https://github.com/kubernetes/kubernetes/wiki/Developer-FAQ)
* [Debugging FAQ](https://github.com/kubernetes/kubernetes/wiki/Debugging-FAQ).
Getting it into the FAQ is more important than polish. Please indicate the date
it was added, so people can judge the likelihood that it is out-of-date (and
please correct any FAQ entries that you see contain out-of-date information).
### Contact information
[@k8s-support-oncall](https://github.com/k8s-support-oncall) will reach the
current person on call.