Automatic merge from submit-queue

Uses container/heap for DelayingQueue

The current implementation of DelayingQueue doesn't perform very well when a large number of items (at random delays) are inserted. The original authors seemed to be aware of this and noted it in a `TODO` comment. This is my attempt at switching the implementation to a priority queue based on `container/heap`.

Benchmarks from before the change:

```
╰─ go test -bench=. -benchmem | tee /tmp/before.txt
BenchmarkDelayingQueue_AddAfter-8   300000   256824 ns/op   520 B/op   3 allocs/op
PASS
ok  k8s.io/kubernetes/staging/src/k8s.io/client-go/util/workqueue  77.237s
```

After:

```
╰─ go test -bench=. -benchmem | tee /tmp/after.txt
BenchmarkDelayingQueue_AddAfter-8   500000   3519 ns/op   406 B/op   4 allocs/op
PASS
ok  k8s.io/kubernetes/staging/src/k8s.io/client-go/util/workqueue  2.969s
```

Comparison:

```
╰─ benchcmp /tmp/before.txt /tmp/after.txt
benchmark                            old ns/op   new ns/op   delta
BenchmarkDelayingQueue_AddAfter-8    256824      3519        -98.63%

benchmark                            old allocs   new allocs   delta
BenchmarkDelayingQueue_AddAfter-8    3            4            +33.33%

benchmark                            old bytes   new bytes   delta
BenchmarkDelayingQueue_AddAfter-8    520         406         -21.92%
```

I also find the `container/heap`-based code a bit easier to understand. The implementation of the priority queue is based on the documentation for `container/heap`.

Feedback is definitely welcome. This is one of my first contributions.

```release-note
NONE
```
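For readers unfamiliar with the approach, here is a minimal, self-contained sketch of the idea behind the change: a min-heap keyed by ready time, built on `container/heap`, so the queue only ever needs to look at the root item to know how long to wait before the next item becomes available. The type and field names below (`waitItem`, `waitQueue`, `readyAt`) are illustrative and do not match the actual client-go identifiers.

```go
package main

import (
	"container/heap"
	"fmt"
	"time"
)

// waitItem pairs a queued value with the time it becomes ready.
// (Illustrative type; not the actual client-go struct.)
type waitItem struct {
	data    interface{}
	readyAt time.Time
	index   int // index in the heap, maintained by the heap.Interface methods
}

// waitQueue is a min-heap of *waitItem ordered by readyAt, so the item
// with the earliest deadline is always at the root.
type waitQueue []*waitItem

func (q waitQueue) Len() int           { return len(q) }
func (q waitQueue) Less(i, j int) bool { return q[i].readyAt.Before(q[j].readyAt) }
func (q waitQueue) Swap(i, j int) {
	q[i], q[j] = q[j], q[i]
	q[i].index = i
	q[j].index = j
}

// Push and Pop are called by the heap package; callers use heap.Push/heap.Pop.
func (q *waitQueue) Push(x interface{}) {
	item := x.(*waitItem)
	item.index = len(*q)
	*q = append(*q, item)
}

func (q *waitQueue) Pop() interface{} {
	old := *q
	n := len(old)
	item := old[n-1]
	*q = old[:n-1]
	return item
}

func main() {
	q := &waitQueue{}
	heap.Init(q)

	now := time.Now()
	heap.Push(q, &waitItem{data: "late", readyAt: now.Add(500 * time.Millisecond)})
	heap.Push(q, &waitItem{data: "soon", readyAt: now.Add(10 * time.Millisecond)})

	// Peek at the root to learn how long to sleep before the next item is
	// ready, then pop items as their deadlines pass.
	for q.Len() > 0 {
		next := (*q)[0]
		if wait := time.Until(next.readyAt); wait > 0 {
			time.Sleep(wait)
		}
		item := heap.Pop(q).(*waitItem)
		fmt.Println(item.data)
	}
}
```

In a sketch like this, both push and pop are O(log n), which is why AddAfter stays cheap even when many items with scattered delays are queued.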
# Kubernetes

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.
Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.
## To start using Kubernetes
See our documentation on kubernetes.io.
Try our interactive tutorial.
Take a free course on Scalable Microservices with Kubernetes.
## To start developing Kubernetes
The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.
If you want to build Kubernetes right away there are two options:

You have a working Go environment.

```sh
$ go get -d k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ make
```

You have a working Docker environment.

```sh
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
```
If you are less impatient, head over to the developer's documentation.
## Support
If you need support, start with the troubleshooting guide and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another.