I looked at all runs of all jobs against master or release-1.15,
ignored [Feature:.*] or [Serial] tests, and added [Slow] to any test
whose 50th percentile duration was over 5 minutes.
Misc comments:
- the apimachinery chunking test is the worst offender, at about 15min
- every test case for every driver running the [Testpattern:.*(xfs)]
  patterns was taking longer than 5 minutes, so I got lucky and this
  was an easy call; I'm not sure how to handle drivers that are too
  slow for only some test patterns
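For reference, the tagging rule boils down to a simple percentile check; a rough Go sketch of it (not the actual analysis script, and isSlow is a made-up name):

    package sketch

    import (
        "sort"
        "time"
    )

    // isSlow encodes the rule used for tagging: a test is [Slow] when
    // its 50th-percentile duration across runs exceeds 5 minutes.
    func isSlow(durations []time.Duration) bool {
        if len(durations) == 0 {
            return false
        }
        sort.Slice(durations, func(i, j int) bool { return durations[i] < durations[j] })
        return durations[len(durations)/2] > 5*time.Minute
    }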
The global Prometheus registry comes preloaded with process and Go
metrics. Since these are not under Kubernetes' control, they can't be
considered stable. However, we can make a best effort to maintain
backwards compatibility by preloading the same metrics.
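A minimal sketch of that best effort, assuming a recent prometheus/client_golang (the collectors package): preload the same process and Go collectors into our own registry.

    package sketch

    import (
        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/collectors"
    )

    // newRegistry returns a registry that mirrors the process and Go
    // metrics the global Prometheus registry ships with by default.
    func newRegistry() *prometheus.Registry {
        r := prometheus.NewRegistry()
        r.MustRegister(
            collectors.NewProcessCollector(collectors.ProcessCollectorOpts{}),
            collectors.NewGoCollector(),
        )
        return r
    }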
Only Unstructured objects worked (because Unstructured implicitly
clears the .Object map when Unmarshal is called). We must reset
obj before we attempt to unmarshal into it.
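A sketch of what the reset could look like, zeroing the target via reflection before decoding (decodeInto is an illustrative name, not the real function):

    package sketch

    import (
        "encoding/json"
        "reflect"
    )

    // decodeInto zeroes *obj before unmarshaling so fields left over
    // from a previous decode can't leak into the result. Unstructured
    // didn't need this because it clears its .Object map itself.
    func decodeInto(data []byte, obj interface{}) error {
        v := reflect.ValueOf(obj)
        if v.Kind() == reflect.Ptr && !v.IsNil() {
            v.Elem().Set(reflect.Zero(v.Elem().Type()))
        }
        return json.Unmarshal(data, obj)
    }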
Currently the BaseServiceInfo struct implements the ServicePort interface, but
that interface is only used some of the time. All the fields of BaseServiceInfo
are exported, and they are sometimes accessed through the interface and other
times directly.

I extended the ServicePort interface so that all relevant values can be accessed
through it, and unexported all the fields of BaseServiceInfo.
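The shape of the change, with illustrative method and field names (the real ServicePort carries more accessors than shown here):

    package sketch

    import "net"

    // ServicePort now exposes everything callers need, so the struct's
    // fields can stay unexported.
    type ServicePort interface {
        ClusterIP() net.IP
        Port() int
        Protocol() string
    }

    // BaseServiceInfo hides its fields; all access goes through the
    // ServicePort interface.
    type BaseServiceInfo struct {
        clusterIP net.IP
        port      int
        protocol  string
    }

    func (info *BaseServiceInfo) ClusterIP() net.IP { return info.clusterIP }
    func (info *BaseServiceInfo) Port() int         { return info.port }
    func (info *BaseServiceInfo) Protocol() string  { return info.protocol }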
If there are tags in the test name that describe qualities of the
test that make it ineligible for conformance, raise an error. This
is basically the "skip list" that Heptio's e2e image used to use.
Thankfully all of our existing Conformance tests lack these tags. I
considered adding [Slow] to the list, but let's save that for another
day.
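A sketch of the check; the tag list here is an assumed stand-in for the actual skip list:

    package sketch

    import (
        "fmt"
        "strings"
    )

    // ineligibleTags is illustrative; the real list may differ.
    var ineligibleTags = []string{"[Disruptive]", "[Serial]", "[Flaky]", "[Feature:"}

    // checkEligibility errors out when a test's name carries a tag that
    // makes it ineligible for conformance.
    func checkEligibility(testName string) error {
        for _, tag := range ineligibleTags {
            if strings.Contains(testName, tag) {
                return fmt.Errorf("test %q has tag %s and is ineligible for conformance", testName, tag)
            }
        }
        return nil
    }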
If a service CIDR that overlaps with the cluster CIDR is
specified to kube-controller-manager, then kube-controller-manager
will incorrectly allocate node CIDRs that overlap with the service
CIDR. The fix ensures that kubeadm maps --service-cidr to
--service-cluster-ip-range for use by kube-controller-manager.
As per the docs, --allocate-node-cidrs must be true for
--service-cluster-ip-range to be considered. It does not make
sense for --cluster-cidr to be unspecified but for
--service-cluster-ip-range and --allocate-node-cidrs to be
set, since the purpose of these options is to have the
controller-manager do the per-node CIDR allocation. Also
note that --service-cluster-ip-range is passed to the
API server as well, so the presence of *just*
--service-cluster-ip-range should not imply that
--allocate-node-cidrs should be true.
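A sketch of the resulting decision logic, modeled on kubeadm's Networking config fields (the actual wiring in kubeadm differs):

    package sketch

    // Networking loosely mirrors kubeadm's ClusterConfiguration.Networking.
    type Networking struct {
        PodSubnet     string // becomes --cluster-cidr
        ServiceSubnet string // becomes --service-cluster-ip-range
    }

    // controllerManagerArgs enables node CIDR allocation (and with it
    // the service range) only when a pod subnet was configured.
    func controllerManagerArgs(cfg Networking) map[string]string {
        args := map[string]string{}
        if cfg.PodSubnet != "" {
            args["allocate-node-cidrs"] = "true"
            args["cluster-cidr"] = cfg.PodSubnet
            if cfg.ServiceSubnet != "" {
                args["service-cluster-ip-range"] = cfg.ServiceSubnet
            }
        }
        return args
    }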
Resolves: kubernetes/kubeadm/issues/1591