Generally, try to wave away folks who see a particular event stream and feel tempted to extrapolate, building tooling that expects the same underlying resource transition chain to keep producing a similar event stream as the underlying components evolve and are updated. New controllers should not be constrained to be backwards-compatible with previous versions with regard to Event emission. This is distinct from the Event type itself, which carries the usual Kubernetes API compatibility commitments for versioned types.

The EventTTL default has been 1h since 7e258b85bd (Reduce TTL for events in etcd from 48hrs to 1hr, 2015-03-11, #5315), and remains so today:

```console
$ git --no-pager log -1 --format='%h %s' origin/master
8e5c02255c Merge pull request #90942 from ii/ii-create-pod%2Bpodstatus-resource-lifecycle-test
$ git --no-pager grep EventTTL: 8e5c02255c cmd/kube-apiserver/app/options/options.go
8e5c02255c:cmd/kube-apiserver/app/options/options.go:		EventTTL: 1 * time.Hour,
```

In this space [1,2]:

> To avoid filling up master's disk, a retention policy is enforced: events are removed one hour after the last occurrence. To provide longer history and aggregation capabilities, a third party solution should be installed to capture events.
>
> ...
>
> Note: It is not guaranteed that all events happening in a cluster will be exported to Stackdriver. One possible scenario when events will not be exported is when event exporter is not running (e.g. during restart or upgrade). In most cases it's fine to use events for purposes like setting up metrics and alerts, but you should be aware of the potential inaccuracy.
>
> ...
>
> To prevent disturbing your workloads, event exporter does not have resources set and is in the best effort QOS class, which means that it will be the first to be killed in the case of resource starvation.

Although that's talking more about export from etcd -> external storage, and not about cluster components submitting events to etcd.

[1]: https://kubernetes.io/docs/tasks/debug-application-cluster/events-stackdriver/
[2]: https://github.com/kubernetes/website/pull/4155/files#diff-d8eb69c5436aa38b396d4f3ed75e4792R10
Kubernetes

Kubernetes is an open source system for managing containerized applications across multiple hosts. It provides basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.
Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If your company wants to help shape the evolution of technologies that are container-packaged, dynamically scheduled, and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.
To start using Kubernetes

- See our documentation on kubernetes.io.
- Try our interactive tutorial.
- Take a free course on Scalable Microservices with Kubernetes.
To use Kubernetes code as a library in other applications, see the list of published components.
Use of the `k8s.io/kubernetes` module or `k8s.io/kubernetes/...` packages as libraries is not supported.
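Instead, depend on the published staging repositories. As a sketch, a minimal `go.mod` for an application using the client libraries might look like this (the module name and versions are illustrative, not prescribed by this document):

```
module example.com/myapp

go 1.20

require (
	k8s.io/api v0.28.0
	k8s.io/apimachinery v0.28.0
	k8s.io/client-go v0.28.0
)
```

These staging modules are versioned and published specifically for consumption outside the main repository.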
To start developing Kubernetes
The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.
If you want to build Kubernetes right away there are two options:
You have a working Go environment.

```
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make
```

You have a working Docker environment.

```
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
make quick-release
```
For the full story, head over to the developer's documentation.
Support
If you need support, start with the troubleshooting guide, and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another.