Nov 29 22:56:18.559: INFO: Waited 371.058378ms for the sample-apiserver to be ready to handle requests.
Nov 29 22:56:49.503: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: { } Scheduled: Successfully assigned e2e-openapiv3-5878/sample-apiserver-deployment-58dfd44dd-gp8nd to ip-10-0-80-91.us-west-2.compute.internal
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment: {deployment-controller } ScalingReplicaSet: Scaled up replica set sample-apiserver-deployment-58dfd44dd to 1
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd: {replicaset-controller } SuccessfulCreate: Created pod: sample-apiserver-deployment-58dfd44dd-gp8nd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {multus } AddedInterface: Add eth0 [10.129.2.137/23] from ovn-kubernetes
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:53 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulling: Pulling image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao"
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:55:57 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-3-registry-k8s-io-e2e-test-images-sample-apiserver-1-17-7-F6raNs0YQ76APdUD" in 3.068195738s (3.068206328s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Pulled: Successfully pulled image "quay.io/openshift/community-e2e-images:e2e-11-registry-k8s-io-etcd-3-5-7-0-C5nYFPeT0lxaFCao" in 9.1810668s (9.18107711s including waiting)
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Created: Created container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:06 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Started: Started container etcd
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container sample-apiserver
Nov 29 22:56:49.503: INFO: At 2023-11-29 22:56:48 +0000 UTC - event for sample-apiserver-deployment-58dfd44dd-gp8nd: {kubelet ip-10-0-80-91.us-west-2.compute.internal} Killing: Stopping container etcd
Nov 29 22:56:49.570: INFO: POD NODE PHASE GRACE CONDITIONS
Nov 29 22:56:49.570: INFO:
Nov 29 22:56:49.702: INFO: skipping dumping cluster info - cluster too large
k8s.io/kubernetes/test/e2e/apimachinery.glob..func19.3({0x7f470406fbf8, 0xc001c4b530})
k8s.io/kubernetes@v1.27.1/test/e2e/apimachinery/openapiv3.go:161 +0x6bf
fail [runtime/panic.go:260]: Test Panicked: runtime error: invalid memory address or nil pointer dereference
Ginkgo exit error 1: exit with code 1
Signed-off-by: Dan Williams <dcbw@redhat.com>
In the original version of "MinimizeIPTablesRestore", we skipped the
bottom half of the sync loop when we weren't re-syncing a service, so
certain things that couldn't be skipped had to be done in the top
half. But the code was later changed to always run through the whole
loop body (just not necessarily writing out rules in the bottom half),
so we can reorganize things now to put some related bits of code back
together.
(In particular, this also resolves the fact that we were accidentally
adding the endpoint chains to activeNATChains twice.)
Also change activeNATChains to be a proper generic Set type.
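As a hedged illustration of the "proper generic Set type" mentioned above (a minimal, self-contained sketch, not the actual kube-proxy or apimachinery types):

    package main

    import "fmt"

    // Chain stands in for the proxier's iptables chain-name type; it is
    // only here to make the sketch self-contained.
    type Chain string

    // Set is a minimal generic set, similar in spirit to replacing a
    // map[Chain]bool with a typed set for activeNATChains.
    type Set[T comparable] map[T]struct{}

    func New[T comparable](items ...T) Set[T] {
        s := Set[T]{}
        s.Insert(items...)
        return s
    }

    func (s Set[T]) Insert(items ...T) {
        for _, item := range items {
            s[item] = struct{}{}
        }
    }

    func (s Set[T]) Has(item T) bool {
        _, ok := s[item]
        return ok
    }

    func main() {
        activeNATChains := New[Chain]("KUBE-SVC-EXAMPLE")
        // Inserting the same chain twice is harmless: set insertion is
        // idempotent, so a duplicate add leaves the set unchanged.
        activeNATChains.Insert("KUBE-SVC-EXAMPLE")
        fmt.Println(activeNATChains.Has("KUBE-SVC-EXAMPLE"), len(activeNATChains))
    }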
This assumes that any such field is atomic, except:
* OwnerReferences: because it has a `+patchStrategy=merge`, but it
probably needs a `+listMapKey=...`?
* Finalizers: because it has a `+patchStrategy=merge`, but is a
primitive type (string).
* []byte fields, which should not be failing this anyway (fixed
subsequently).
An alternative approach could be just to turn off the API warnings for
these fields, but it felt more correct to declare the semantics.
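For reference, a hedged sketch (illustrative field names and tags, not the real ObjectMeta source) of how these list-semantics markers are typically declared on API struct fields:

    package exampleapi

    // ExampleMeta is illustrative only: it shows the marker style under
    // discussion, not the actual Kubernetes ObjectMeta definition.
    type ExampleMeta struct {
        // A merged ("map"-like) list keyed by a field of the element; this
        // is the shape for which a +listMapKey would be declared.
        // +patchMergeKey=uid
        // +patchStrategy=merge
        // +listType=map
        // +listMapKey=uid
        OwnerReferences []ExampleOwnerReference `json:"ownerReferences,omitempty" patchStrategy:"merge" patchMergeKey:"uid"`

        // A merged list of a primitive type: there is no element field to
        // use as a map key, which is the Finalizers case called out above.
        // +patchStrategy=merge
        // +listType=set
        Finalizers []string `json:"finalizers,omitempty" patchStrategy:"merge"`

        // []byte serializes as a base64 string rather than a JSON list, so
        // list-semantics markers do not apply to it.
        CertificateData []byte `json:"certificateData,omitempty"`
    }

    type ExampleOwnerReference struct {
        UID  string `json:"uid"`
        Name string `json:"name"`
    }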
The notes printed to the user from common.go when
loadConfig fails are outdated and incorrect.
If the config cannot be loaded, the user should not be instructed
to re-upload the config with kubeadm commands. Instead, they
should do it manually with kubectl.
On loadConfig() error, just wrap the error in a simple message
and show it to the user.
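A minimal sketch of what "wrap the error in a simple message" could look like (the loadConfig stub and message wording are placeholders, not the actual kubeadm code):

    package main

    import (
        "errors"
        "fmt"
    )

    // loadConfig stands in for the kubeadm helper discussed above; the
    // real function lives in common.go.
    func loadConfig() error {
        return errors.New("configmap not found")
    }

    func run() error {
        if err := loadConfig(); err != nil {
            // Wrap the underlying error in a short message instead of
            // printing long, outdated instructions to the user.
            return fmt.Errorf("failed to load the kubeadm configuration: %w", err)
        }
        return nil
    }

    func main() {
        if err := run(); err != nil {
            fmt.Println(err)
        }
    }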
The current setup swallows IsNotFound errors for Node objects.
The underlying fetching of the init configuration uses
the Node object to construct an InitConfiguration for this
upgrade process, so if the Node is missing, the kube-config CM
will be reported as missing instead, which is incorrect.
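As a hedged sketch of the pattern being fixed (the helper name is hypothetical, not the actual kubeadm code), the IsNotFound error for the Node can be surfaced directly instead of being masked:

    package upgrade

    import (
        "context"
        "fmt"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // getNodeForUpgrade is a hypothetical helper: if the Node object is
    // missing, report that directly rather than letting a later step
    // misattribute the failure to a missing ConfigMap.
    func getNodeForUpgrade(ctx context.Context, client kubernetes.Interface, nodeName string) error {
        _, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
        if apierrors.IsNotFound(err) {
            return fmt.Errorf("node %q was not found in the cluster: %w", nodeName, err)
        }
        return err
    }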
default_preemption.go has an unnecessary fmt.Sprintf call which
triggers common code checkers. This removes it.
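For illustration only (the exact call site in default_preemption.go is not shown here), this is the kind of simplification such checkers suggest:

    package main

    import "fmt"

    func main() {
        // Before (flagged by linters): fmt.Sprintf with no formatting
        // verbs just copies its single string argument:
        //     msg := fmt.Sprintf("no formatting verbs here")
        // After: use the string literal directly.
        msg := "no formatting verbs here"
        fmt.Println(msg)
    }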
Signed-off-by: Ukri Niemimuukko <ukri.niemimuukko@intel.com>