Automatic merge from submit-queue

Reduce memory allocations in kube-proxy

Memory allocation (and Go garbage collection) is one of the most expensive operations in kube-proxy — I've seen profiles where it accounted for more than 50%. The commits are largely independent of each other, and all of them focus on reusing already allocated memory. This PR reduces memory allocation by roughly 5x (results below are from a 100-node load test).

before:

```
(pprof) top
38.64GB of 39.11GB total (98.79%)
Dropped 249 nodes (cum <= 0.20GB)
Showing top 10 nodes out of 61 (cum >= 0.20GB)
      flat  flat%   sum%        cum   cum%
   15.10GB 38.62% 38.62%    15.10GB 38.62%  bytes.makeSlice
    9.48GB 24.25% 62.87%     9.48GB 24.25%  runtime.rawstringtmp
    8.30GB 21.21% 84.07%    32.47GB 83.02%  k8s.io/kubernetes/pkg/proxy/iptables.(*Proxier).syncProxyRules
    2.08GB  5.31% 89.38%     2.08GB  5.31%  fmt.(*fmt).padString
    1.90GB  4.86% 94.24%     3.82GB  9.77%  strings.Join
    0.67GB  1.72% 95.96%     0.67GB  1.72%  runtime.hashGrow
    0.36GB  0.92% 96.88%     0.36GB  0.92%  runtime.stringtoslicebyte
    0.31GB  0.79% 97.67%     0.62GB  1.58%  encoding/base32.(*Encoding).EncodeToString
    0.24GB  0.62% 98.29%     0.24GB  0.62%  strings.genSplit
    0.20GB   0.5% 98.79%     0.20GB   0.5%  runtime.convT2E
```

after:

```
7.94GB of 8.13GB total (97.75%)
Dropped 311 nodes (cum <= 0.04GB)
Showing top 10 nodes out of 65 (cum >= 0.11GB)
      flat  flat%   sum%        cum   cum%
    3.32GB 40.87% 40.87%     8.05GB 99.05%  k8s.io/kubernetes/pkg/proxy/iptables.(*Proxier).syncProxyRules
    2.85GB 35.09% 75.95%     2.85GB 35.09%  runtime.rawstringtmp
    0.60GB  7.41% 83.37%     0.60GB  7.41%  runtime.hashGrow
    0.31GB  3.76% 87.13%     0.31GB  3.76%  runtime.stringtoslicebyte
    0.28GB  3.43% 90.56%     0.55GB  6.80%  encoding/base32.(*Encoding).EncodeToString
    0.19GB  2.29% 92.85%     0.19GB  2.29%  strings.genSplit
    0.18GB  2.17% 95.03%     0.18GB  2.17%  runtime.convT2E
    0.10GB  1.28% 96.31%     0.71GB  8.71%  runtime.mapassign
    0.10GB  1.21% 97.51%     0.10GB  1.21%  syscall.ByteSliceFromString
    0.02GB  0.23% 97.75%     0.11GB  1.38%  syscall.SlicePtrFromStrings
```
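The dominant entries in the "before" profile — bytes.makeSlice, runtime.rawstringtmp, and strings.Join under syncProxyRules — come from buffers and strings being rebuilt from scratch on every sync pass. Below is a minimal sketch of the reuse pattern this PR relies on; the type and function names (ruleWriter, writeLine, sync) are hypothetical and not the actual kube-proxy code, they only illustrate keeping a bytes.Buffer across iterations instead of joining strings each time.

```go
package main

import (
	"bytes"
	"fmt"
)

// ruleWriter keeps its buffers across sync passes so that repeated syncs
// reuse the memory grown on earlier iterations instead of re-allocating it.
type ruleWriter struct {
	filterRules bytes.Buffer
	natRules    bytes.Buffer
}

// writeLine appends a single rule line word by word, avoiding the
// intermediate allocations of strings.Join / fmt.Sprintf.
func writeLine(buf *bytes.Buffer, words ...string) {
	for i, word := range words {
		if i > 0 {
			buf.WriteByte(' ')
		}
		buf.WriteString(word)
	}
	buf.WriteByte('\n')
}

func (w *ruleWriter) sync(chains []string) []byte {
	// Reset keeps the underlying arrays; only the lengths go back to zero.
	w.filterRules.Reset()
	w.natRules.Reset()
	for _, chain := range chains {
		writeLine(&w.natRules, "-A", chain, "-j", "ACCEPT")
	}
	return w.natRules.Bytes()
}

func main() {
	w := &ruleWriter{}
	// Repeated syncs amortize the buffer growth paid on the first pass.
	for i := 0; i < 3; i++ {
		out := w.sync([]string{"KUBE-SERVICES", "KUBE-NODEPORTS"})
		fmt.Printf("pass %d: %d bytes of rules\n", i, len(out))
	}
}
```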
Kubernetes

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.
Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale using a system called Borg, combined with best-of-breed ideas and practices from the community.
Kubernetes is hosted by the Cloud Native Computing Foundation (CNCF). If you are a company that wants to help shape the evolution of technologies that are container-packaged, dynamically-scheduled and microservices-oriented, consider joining the CNCF. For details about who's involved and how Kubernetes plays a role, read the CNCF announcement.
To start using Kubernetes
See our documentation on kubernetes.io.
Try our interactive tutorial.
Take a free course on Scalable Microservices with Kubernetes.
To start developing Kubernetes
The community repository hosts all information about building Kubernetes from source, how to contribute code and documentation, who to contact about what, etc.
If you want to build Kubernetes right away, there are two options:

If you have a working Go environment:

```sh
$ go get -d k8s.io/kubernetes
$ cd $GOPATH/src/k8s.io/kubernetes
$ make
```

If you have a working Docker environment:

```sh
$ git clone https://github.com/kubernetes/kubernetes
$ cd kubernetes
$ make quick-release
```
If you are less impatient, head over to the developer's documentation.
Support
If you need support, start with the troubleshooting guide and work your way through the process that we've outlined.
That said, if you have questions, reach out to us one way or another.