The kubelet test here is using a one-minute timeout instead of the
normal framework.PodStartTimeout.
The DNS results validation functions pull several images, including
jessie-dnsutils, which is a bit larger than usual.
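A minimal sketch of the intended change, assuming the test follows the
usual e2e framework pattern; the helper name, pod name, and polling
interval are illustrative, and only the timeout swap matters:
```
package dns

import (
	"context"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/kubernetes/test/e2e/framework"
)

// waitForUtilityPod (hypothetical helper) waits for the DNS utility pod
// using the shared framework.PodStartTimeout rather than a hard-coded
// one-minute timeout, which larger images such as jessie-dnsutils can exceed.
func waitForUtilityPod(f *framework.Framework, podName string) error {
	return wait.PollImmediate(2*time.Second, framework.PodStartTimeout, func() (bool, error) {
		pod, err := f.ClientSet.CoreV1().Pods(f.Namespace.Name).Get(context.TODO(), podName, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == v1.PodRunning, nil
	})
}
```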
While it's not happening today, we'll eventually want to also handle
updates to this CHANGELOG list via krel changelog. By collapsing the set
of changelogs into a single list, it becomes a little easier to write
out an updated file for commit: search the directory for files matching
"CHANGELOG-x.y.md" and add them to the list.
Signed-off-by: Stephen Augustus <saugustus@vmware.com>
The types were different, so the diff output is not useful; both
should be pointers:
```
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: I0205 19:44:40.222259 2737 status_manager.go:642] Pod status is inconsistent with cached status for pod "prometheus-k8s-1_openshift-monitoring(0e9137b8-3bd2-4353-b7f5-672749106dc1)", a reconciliation should be triggered:
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: interface{}(
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: - s"&PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-05 19:13:30 +0000 UTC,Reason:,Message:,},PodCondit>
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + v1.PodStatus{
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + Phase: "Running",
Feb 05 19:44:40 ci-ln-6k7l4-w-c-w9wbb.c.openshift-gce-devel-ci.internal hyperkube[2737]: + Conditions: []v1.PodCondition{
```
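A small reproduction of the problem, assuming the diff above comes from
go-cmp comparing a pointer against a value (this is illustrative, not
the kubelet code itself):
```
package main

import (
	"fmt"

	"github.com/google/go-cmp/cmp"
	v1 "k8s.io/api/core/v1"
)

func main() {
	cached := v1.PodStatus{Phase: v1.PodRunning, Message: "cached"}
	current := v1.PodStatus{Phase: v1.PodRunning, Message: "current"}

	// Mixing a pointer and a value: the two arguments have different dynamic
	// types, so they are compared as opaque interface{} values and the whole
	// object shows up as replaced, hiding the per-field differences.
	fmt.Println(cmp.Diff(&cached, current))

	// Comparing like types (both pointers) yields the useful per-field diff.
	fmt.Println(cmp.Diff(&cached, &current))
}
```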
It turns out that having the dual-stack feature enabled doesn't mean
the cluster MUST be dual-stack; it only indicates that the cluster MAY
be dual-stack but CAN still be single-stack.
We should relax the validation to allow single-stack clusters
with dual-stack enabled.
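A minimal sketch of the relaxed rule, taking a list of service CIDRs as
input (the helper name and shape are illustrative, not the actual
validation code):
```
package main

import (
	"fmt"
	"net"
)

// validateServiceCIDRs allows a single CIDR regardless of the feature gate,
// and two CIDRs (one per IP family) only when dual-stack is enabled.
func validateServiceCIDRs(cidrs []string, dualStackEnabled bool) error {
	switch {
	case len(cidrs) == 1:
		// Single-stack is always valid, even with the dual-stack gate on.
	case len(cidrs) == 2 && dualStackEnabled:
		_, c0, err0 := net.ParseCIDR(cidrs[0])
		_, c1, err1 := net.ParseCIDR(cidrs[1])
		if err0 != nil || err1 != nil {
			return fmt.Errorf("invalid CIDR in %v", cidrs)
		}
		// The two CIDRs must be of different IP families.
		if (c0.IP.To4() != nil) == (c1.IP.To4() != nil) {
			return fmt.Errorf("dual-stack CIDRs must be one IPv4 and one IPv6: %v", cidrs)
		}
	default:
		return fmt.Errorf("expected 1 CIDR, or 2 with dual-stack enabled, got %d", len(cidrs))
	}
	return nil
}

func main() {
	fmt.Println(validateServiceCIDRs([]string{"10.0.0.0/16"}, true))               // <nil>
	fmt.Println(validateServiceCIDRs([]string{"10.0.0.0/16", "fd00::/108"}, true)) // <nil>
}
```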
This change removes the audience logic from the oidc authenticator
and collapses it into the same logic used by other audience-unaware
authenticators.
oidc is audience-unaware in the sense that it does not know or
understand the API server's audience. As before, the authenticator
will continue to check that the token audience matches the
configured client ID.
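For illustration, the shape of that per-token check (simplified and
hypothetical, not the Kubernetes implementation; real ID tokens allow
the aud claim to be either a string or a list):
```
package main

import "fmt"

// audMatchesClientID reports whether the token's aud claim contains the
// configured oidc client ID.
func audMatchesClientID(aud []string, clientID string) bool {
	for _, a := range aud {
		if a == clientID {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(audMatchesClientID([]string{"my-oidc-client-id"}, "my-oidc-client-id")) // true
	fmt.Println(audMatchesClientID([]string{"some-other-app"}, "my-oidc-client-id"))    // false
}
```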
The reasoning for this simplification is:
1. The previous code tries to make the client ID on the oidc token
a valid audience. But by not returning any audience, the token is
not valid when used via token review on a server that is configured
to honor audiences (the token works against the Kube API because the
audience check is skipped).
2. It is unclear what functionality would be gained by allowing
token review to check the client ID as a valid audience. It could
serve as a proxy to know that the token was honored by the oidc
authenticator, but that does not seem like a valid use case.
3. It has never been possible to use the client ID as an audience
with token review, as it would have always failed the audience
intersection check (see the sketch after this list). Thus this change
is backwards compatible.
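A minimal sketch of that intersection check, assuming the API server's
audiences and the TokenReview-requested audiences are plain string
lists (names are hypothetical, not the apiserver code):
```
package main

import "fmt"

// intersect returns the audiences present in both lists.
func intersect(serverAuds, requestedAuds []string) []string {
	var out []string
	for _, r := range requestedAuds {
		for _, s := range serverAuds {
			if r == s {
				out = append(out, r)
				break
			}
		}
	}
	return out
}

func main() {
	serverAuds := []string{"https://kubernetes.default.svc"}
	// Asking for the oidc client ID as the audience never overlaps with the
	// API server's own audiences, so the check would always have failed.
	fmt.Println(intersect(serverAuds, []string{"my-oidc-client-id"})) // []
}
```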
It is strange that the oidc authenticator would be considered
audience-unaware when oidc tokens have an audience claim, but from
the perspective of the Kube API (and for backwards compatibility),
these tokens are only valid for the API server's audience.
This change seems to be the least magical and most consistent way to
honor backwards compatibility and to allow oidc tokens to be used
via token review when audience support is enabled.
Signed-off-by: Monis Khan <mok@vmware.com>