This changes the test registration so that tags for which the framework has a
dedicated API (features, feature gates, slow, serial, etc.) are expressed
through those APIs.
Arbitrary, custom tags are still left in place for now.
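As an illustration (a minimal sketch, assuming the framework's It wrapper and
With* label helpers; the spec name and body are made up):

// before: tags encoded in the spec text
framework.It("should recreate pods [Slow] [Serial]", func(ctx context.Context) {
	framework.Logf("test body")
})

// after: tags expressed through the dedicated APIs
framework.It("should recreate pods", framework.WithSlow(), framework.WithSerial(),
	func(ctx context.Context) {
		framework.Logf("test body")
	})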
a) Add namespacing to metrics: fixes interference between the `should scale up when one metric is missing (Pod and External metrics)` and `should not scale down when one metric is missing (Container Resource and External Metrics)` specs, which was a cause of flakiness.
b) Replace deployments containing unused exporters (whose metrics are ignored) with deployments without any exporters: a potential fix for frequently hitting a rate limit when creating metric descriptors (429 errors); it also adds clarity.
c) Fix metric types: some external metrics tests used a non-average target type while expecting the value to stay constant regardless of the number of pods. However, the queries resulting from the metric specs don't filter by pod, so the fetched metric value is the sum of the metrics for all pods (https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/#autoscaling-on-metrics-not-related-to-kubernetes-objects). Adding averaging back by the number of pods fixes a couple of specs where the tests were passing for the wrong reason (they wanted different test conditions).
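For example, a per-pod average target for an external metric could look like
this (a minimal sketch using the autoscaling/v2 API; the metric name is
hypothetical):

package autoscaling

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	"k8s.io/apimachinery/pkg/api/resource"
)

// externalMetricSpec targets the per-pod average of an external metric,
// so the expected value stays constant regardless of the replica count.
func externalMetricSpec(target int64) autoscalingv2.MetricSpec {
	return autoscalingv2.MetricSpec{
		Type: autoscalingv2.ExternalMetricSourceType,
		External: &autoscalingv2.ExternalMetricSource{
			Metric: autoscalingv2.MetricIdentifier{
				Name: "custom.googleapis.com|example_metric", // hypothetical metric name
			},
			Target: autoscalingv2.MetricTarget{
				// AverageValue divides the fetched (summed) value by the
				// current number of pods before comparing to the target.
				Type:         autoscalingv2.AverageValueMetricType,
				AverageValue: resource.NewQuantity(target, resource.DecimalSI),
			},
		},
	}
}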
All code must use the context from Ginkgo when doing API calls or polling for a
change; otherwise the code will not return immediately when the test gets
aborted.
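A sketch of the pattern (wait helper from apimachinery; the condition is
illustrative):

package autoscaling

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForReplicas polls until the deployment reaches the desired replica
// count. Because ctx comes from Ginkgo, the poll aborts as soon as the
// test times out or the run gets interrupted.
func waitForReplicas(ctx context.Context, c kubernetes.Interface, ns, name string, want int32) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, 5*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			d, err := c.AppsV1().Deployments(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return d.Status.ReadyReplicas == want, nil
		})
}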
ginkgo.DeferCleanup has multiple advantages:
- The cleanup operation can get registered if and only if needed.
- No need to return a cleanup function that the caller must invoke.
- Automatically determines whether a context is needed, which will
simplify the introduction of context parameters.
- Ginkgo's timeline shows when it executes the cleanup operation.
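For instance (a sketch; client, ns and newConfigMap are assumed to exist in the
surrounding test, imports elided):

ginkgo.It("cleans up a ConfigMap", func(ctx context.Context) {
	cm, err := client.CoreV1().ConfigMaps(ns).Create(ctx, newConfigMap(), metav1.CreateOptions{})
	framework.ExpectNoError(err)
	// Registered only after the create succeeded. Ginkgo passes a
	// context automatically because the callback accepts one, and the
	// cleanup shows up as its own entry in the timeline.
	ginkgo.DeferCleanup(func(ctx context.Context) {
		framework.ExpectNoError(client.CoreV1().ConfigMaps(ns).Delete(ctx, cm.Name, metav1.DeleteOptions{}))
	})
})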
Every Ginkgo callback should return immediately when a timeout occurs or the
test run gets aborted manually with CTRL-C. To do that, the callbacks must take
a ctx parameter and pass it through to all code which might block.
This is a first automated step towards that: the additional parameter got added
with
sed -i 's/\(framework.ConformanceIt\|ginkgo.It\)\(.*\)func() {$/\1\2func(ctx context.Context) {/' \
$(git grep -l -e framework.ConformanceIt -e ginkgo.It )
$GOPATH/bin/goimports -w $(git status | grep modified: | sed -e 's/.* //')
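The rewrite turns a spec like

ginkgo.It("should scale up", func() {

into

ginkgo.It("should scale up", func(ctx context.Context) {

(spec text illustrative); the goimports run then adds the missing context
import.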
log_test.go was left unchanged.
- update all the import statements
- run hack/pin-dependency.sh to change pinned dependency versions
- run hack/update-vendor.sh to update go.mod files and the vendor directory
- update the method signatures for custom reporters
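Assuming the bump crosses the major version boundary to Ginkgo v2 (whose module
path gained a /v2 suffix), the import updates look like:

// before
import "github.com/onsi/ginkgo"

// after
import "github.com/onsi/ginkgo/v2"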
Signed-off-by: Dave Chen <dave.chen@arm.com>
This makes the sub-packages of the e2e autoscaling tests use the log functions
of the core framework instead, to avoid circular dependencies.
Signed-off-by: clarklee92 <clarklee1992@hotmail.com>
This is part of the transition to using framework/log instead
of the Logf inside the framework package. This will help with
import size/cycles when importing the framework or subpackages.
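A sketch of the call-site change, assuming the usual alias for the log
subpackage:

// before: pulls in the whole framework package
framework.Logf("waiting for %d replicas", want)

// after: only the log subpackage is imported
// (import e2elog "k8s.io/kubernetes/test/e2e/framework/log")
e2elog.Logf("waiting for %d replicas", want)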
Tests for scaling down based on an external metric are flaky. I think this
is because they:
- Start with 2 replicas,
- Export metric value == 1/2 target,
- Expect scale down to 1.
Since the expected recommendation is exactly 1, it might flake (and with
scale-down stabilization any recommendation higher than 1 will persist).
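For reference, the HPA computes
desiredReplicas = ceil[currentReplicas * currentMetricValue / desiredMetricValue],
so 2 replicas exporting exactly half the target yield ceil(2 * 0.5) = 1 only
while the measured value stays at or below half the target; any noise pushes
the ratio above 0.5 and the result rounds up to 2.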
Change the expected value of the metric so that the computed recommendation
falls below 1 and reliably rounds up to exactly 1. This should make those
tests less flaky.
- Scale down based on a custom metric was flaking. Increase the target value
of the metric.
- Scale down based on CPU was flaking during stabilization. Increase the
tolerance during stabilization (the flakes were caused by the resource
consumer using more CPU than requested).
Rename the file to note that these are Stackdriver tests, in anticipation of tests with a fake custom metrics apiserver. Refactor the tests to be more structured.