Merge pull request #88528 from ingvagabund/doc-how-to-extend-scheduler-perf-tests

[doc] scheduler_perf: describe suite configuration in more detail
Kubernetes Prow Robot 2020-03-23 17:10:47 -07:00 committed by GitHub
commit 2ab6357df0


@@ -68,6 +68,26 @@ To produce a cpu profile:
```
make test-integration WHAT=./test/integration/scheduler_perf KUBE_TIMEOUT="-timeout=3600s" KUBE_TEST_VMODULE="''" KUBE_TEST_ARGS="-alsologtostderr=false -logtostderr=false -run=^$$ -benchtime=1ns -bench=BenchmarkPerfScheduling -cpuprofile ~/cpu-profile.out"
```
### How to configure benchmark tests
The configuration file located under config/performance-config.yaml contains a list of templates.
Each template allows you to set:
- the node manifest
- manifests for the initial and testing pods
- the number of nodes and the number of initial and testing pods
- templates for PVs and PVCs
- feature gates

See the `simpleTestCases` data type implementation for the configuration options available in each template.

Initial pods create the state of the cluster before the scheduler performance measurement begins.
The testing pods are then the subject of the performance measurement.
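For orientation, a template entry might look roughly like the sketch below. The field names here are illustrative assumptions rather than the verbatim schema; the `simpleTestCases` data type in the test code is the source of truth.

```yaml
# Hypothetical sketch of one template entry; consult the simpleTestCases
# data type for the authoritative field names.
- template:
    desc: SchedulingBasic
    nodeTemplatePath: config/node-default.yaml     # node manifest
    initPods:
      - podTemplatePath: config/pod-default.yaml   # manifest for initial pods
    podsToSchedule:
      podTemplatePath: config/pod-default.yaml     # manifest for testing pods
  params:
    - numNodes: 500             # cluster size
      numInitPods: [500]        # pods created before measurement starts
      numPodsToSchedule: 1000   # pods whose scheduling is measured
```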
The default list of templates in config/performance-config.yaml covers various scenarios. If you want
to add your own scenario, extend the list with a new template.
It's also possible to extend the `simpleTestCases` data type, or its underlying data types,
to support configuration options that the existing test cases do not cover.
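As a hedged illustration of such an extension (the actual type and field names in the scheduler_perf package may differ), adding a new knob could look roughly like this:

```go
// Hypothetical sketch; the real definitions live in the scheduler_perf
// test code and may have different names and fields.
package benchmark

// simpleTestCase mirrors one entry of config/performance-config.yaml.
type simpleTestCase struct {
	Template testTemplate `json:"template"`
	Params   []testParams `json:"params"`
}

// testTemplate describes the manifests used by a test case.
type testTemplate struct {
	Desc             string `json:"desc"`
	NodeTemplatePath string `json:"nodeTemplatePath"`
}

// testParams holds the sizing parameters of a test case. Adding a field
// here (plus the code that acts on it) extends what a template can configure.
type testParams struct {
	NumNodes          int   `json:"numNodes"`
	NumInitPods       []int `json:"numInitPods"`
	NumPodsToSchedule int   `json:"numPodsToSchedule"`
	// Example extension, a hypothetical new knob:
	// NumNamespaces int `json:"numNamespaces"`
}
```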
<!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
[![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/test/component/scheduler/perf/README.md?pixel)]()
<!-- END MUNGE: GENERATED_ANALYTICS -->