scheduler performance test suite: README.md docs
This commit is contained in:
parent 161b106082
commit 9704222cf3

test/component/scheduler/perf/README.md (new file, 75 lines)

@@ -0,0 +1,75 @@
Scheduler Performance Test
======

Motivation
------

We already have a performance testing system -- Kubemark. However, Kubemark requires setting up and bootstrapping a whole cluster, which takes a lot of time.

We want a standard way to reproduce scheduling latency metrics and to benchmark the scheduler as simply and quickly as possible. We have the following goals:
- Save time on testing
  - The test and benchmark can be run in a single box.
    We only set up the components necessary for scheduling, without booting up a whole cluster.
- Profile runtime metrics to find bottlenecks
  - Write scheduler integration tests that focus on performance measurement.
    Take advantage of the Go profiling tools to collect fine-grained metrics,
    such as CPU, memory, and block profiles.
- Reproduce test results easily
  - We want a known place to run performance-related tests for the scheduler.
    Developers should only have to run one script to collect all the information they need.

Currently the test suite has the following:
- density test (by adding a new Go test)
  - schedule 30k pods on 1000 (fake) nodes and 3k pods on 100 (fake) nodes
  - print out the scheduling rate every second
  - show how the rate changes as the number of scheduled pods grows
- benchmark
  - make use of `go test -bench` and report nanoseconds per operation (ns/op)
  - schedule b.N pods when the cluster already has N nodes and P scheduled pods; since one round takes a relatively long time to finish, b.N is kept small: 10 - 100 (see the sketch after this list)
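
For reference, the benchmark can in principle be invoked directly through the Go tooling instead of the wrapper script. This is a sketch under the assumption that a plain `go test` run works for this package; it uses only the standard `-run`/`-bench`/`-timeout` flags and selects all benchmarks with `-bench .`:

```
# Sketch only: assumes a direct `go test` invocation works in this package.
cd kubernetes/test/component/scheduler/perf
# Skip regular tests, run only the benchmarks, and report ns/op:
go test -run '^$' -bench . -timeout 0
```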

How To Run
------
```
cd kubernetes/test/component/scheduler/perf
./test-performance.sh
```
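
The profiling goal above (CPU, memory, and block profiles) maps onto the standard `go test` profiling flags. The following is a sketch, not something taken from `test-performance.sh`, and again assumes a direct `go test` run works for this package:

```
# Sketch only: collect profiles with the standard Go test profiling flags.
cd kubernetes/test/component/scheduler/perf
go test -run '^$' -bench . -timeout 0 \
  -cpuprofile cpu.out -memprofile mem.out -blockprofile block.out
# The profiling flags also leave the compiled test binary (perf.test) behind;
# feed it to pprof together with a profile to inspect hot paths:
go tool pprof perf.test cpu.out
```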