Add a slow e2e test to monitor kubelet resource usage
This test tracks kubelet resource usage over a long period of time (1hr) when running N pods (e.g., N=0,50) and prints out the resource usage. This gives us an idea of how much kubelet's management overhead is in a stable cluster.

Some follow-up items:
 * Use a more realistic workload (e.g., including probing).
 * Fail the test if the resource usage is too high.

Caveats:
 * We assume the scheduler does a decent job of distributing the pause pods, but we should double-check.
 * Cluster addon pods could be unevenly distributed and skew the resource usage on nodes.
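Because the test is slow by design, the diff below adds it to the slow/skip lists rather than the default suite; it can still be exercised on its own by focusing Ginkgo on its spec name. A minimal sketch of such an invocation, assuming the standard hack/e2e.go driver and that the regex matches the test description in kubelet_perf.go:

# Sketch: run only the kubelet resource usage tracking test via a
# Ginkgo focus regex. Flag names are assumptions based on the usual
# e2e driver, not taken from this commit.
go run hack/e2e.go -v --test \
  --test_args="--ginkgo.focus=resource\susage\stracking"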
@@ -134,6 +134,7 @@ GCE_FLAKY_TESTS=(
 GCE_SLOW_TESTS=(
     "SchedulerPredicates\svalidates\sMaxPods\slimit " # 8 min, file: scheduler_predicates.go, PR: #13315
     "Nodes\sResize"                                   # 3 min 30 sec, file: resize_nodes.go, issue: #13323
+    "resource\susage\stracking"                       # 1 hour, file: kubelet_perf.go, slow by design
     )

 # Tests which are not able to be run in parallel.

@@ -147,6 +148,7 @@ GCE_PARALLEL_SKIP_TESTS=(
     "SchedulerPredicates"
     "Services.*restarting"
     "Shell.*services"
+    "resource\susage\stracking"
     )

 # Tests which are known to be flaky when run in parallel.
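For context on how these arrays are consumed: the e2e driver typically collapses each list into a single alternation regex and passes it to Ginkgo as a skip filter, so matching specs are excluded from the normal run. A minimal bash sketch of that idea; the helper name is assumed for illustration and is not part of this diff:

# Sketch (helper name assumed): join an array of patterns into one
# "a|b|c" regex and hand it to Ginkgo so those specs are skipped.
join_regex_allow_empty() {
  local IFS="|"
  echo "$*"
}

GINKGO_TEST_ARGS="--ginkgo.skip=$(join_regex_allow_empty "${GCE_SLOW_TESTS[@]}")"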