This test has been part of the Conformance suite since at least
Kubernetes 1.2 (2015-10-xx). Some years later, around 2018-10-xx, we
drafted a rigorous set of rules for tests to follow in order to be
eligible for promotion to Conformance. We explicitly disallowed any
tests that check for specific Events, since they are not an API, and we
make no guarantees about their contents nor their delivery.
Unfortunately, we neglected to go through the existing corpus of
Conformance tests with a fine-toothed comb after drafting these rules.
The very thing this test attempts to exercise and verify is specific
Events and their delivery, which makes it ineligible for Conformance.
We should have caught and demoted this test back then.
Better late than never?
* De-share the Handler struct in core API
An upcoming PR adds a handler that only applies to one of these paths.
Having fields that don't work on the other path seems bad.
This never should have been shared. Lifecycle hooks are like a "write"
while probes are more like a "read". HTTPGet and TCPSocket don't really
make sense as lifecycle hooks (but I can't take that back). When we add
gRPC, it is EXPLICITLY a health check (defined by gRPC) not an arbitrary
RPC - so a probe makes sense but a hook does not.
In the future I can also see adding lifecycle hooks that don't make
sense as probes. E.g. 'sleep' is a common lifecycle request. The only
option is `exec`, which requires having a sleep binary in your image.
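As a rough sketch of the direction (not the exact upstream types), de-sharing could look like two structs so that probe-only actions such as a gRPC health check, or hook-only actions such as a built-in sleep, never leak into the other path. All type names and fields below are illustrative stand-ins:

```go
// Illustrative stand-ins only; the real core API action types carry more fields.
package api

type ExecAction struct{ Command []string }

type HTTPGetAction struct {
	Path string
	Port int
}

type TCPSocketAction struct {
	Host string
	Port int
}

type GRPCAction struct {
	Port    int32
	Service string
}

// ProbeHandler holds actions that make sense as health checks ("reads").
type ProbeHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
	GRPC      *GRPCAction // a health check fits probes, never lifecycle hooks
}

// LifecycleHandler holds actions that make sense as hooks ("writes").
// Hook-only actions (e.g. a built-in sleep) could be added here later
// without ever appearing on probes.
type LifecycleHandler struct {
	Exec      *ExecAction
	HTTPGet   *HTTPGetAction
	TCPSocket *TCPSocketAction
}
```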
* Run update scripts
This test case requires a special test-handler setup which is only done
for GCE clusters created by the kube-up scripts. Let's skip the test when
run under other providers.
We now use a host-local exec instead of SSH commands to simplify the
test and make the result more robust.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
This assumes that SSH via bastion works if the `KUBE_SSH_BASTION`
environment variable is set, which is the case for
`pull-kubernetes-e2e-gce-correctness`.
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
A number of race conditions exist when pods are terminated early in
their lifecycle because components in the kubelet need to know "no
running containers" or "containers can't be started from now on" but
were relying on outdated state.
Only the pod worker knows whether containers are being started for
a given pod, which is required to know when a pod is "terminated"
(no running containers, none coming). Move that responsibility and
podKiller function into the pod workers, and have everything that
was killing the pod go into the UpdatePod loop. Split syncPod into
three phases - setup, terminate containers, and cleanup pod - and
have transitions between those methods be visible to other
components. After this change, to kill a pod you tell the pod worker
to UpdatePod({UpdateType: SyncPodKill, Pod: pod}).
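A minimal sketch of that entry point, assuming a simplified pod worker interface (the real kubelet types in pkg/kubelet differ in detail):

```go
// Hypothetical, simplified shape of the pod worker entry point.
package kubelet

import v1 "k8s.io/api/core/v1"

type SyncPodType int

const (
	SyncPodCreate SyncPodType = iota
	SyncPodUpdate
	SyncPodKill
)

// UpdatePodOptions carries a single state-transition request for one pod.
type UpdatePodOptions struct {
	UpdateType SyncPodType
	Pod        *v1.Pod
}

// PodWorkers is the single entry point: creation, updates, and kills all
// flow through the same per-pod worker loop.
type PodWorkers interface {
	UpdatePod(options UpdatePodOptions)
}

// killPod asks the pod worker to drive the pod through its terminating and
// terminated phases instead of killing containers directly.
func killPod(w PodWorkers, pod *v1.Pod) {
	w.UpdatePod(UpdatePodOptions{UpdateType: SyncPodKill, Pod: pod})
}
```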
Several places in the kubelet were incorrect about whether they
were handling terminating (should stop running, might have
containers) or terminated (no running containers) pods. The pod worker
exposes methods that allow other loops to know when to set up or tear
down resources based on the state of the pod - these methods remove
the possibility of race conditions by ensuring a single component is
responsible for knowing each pod's allowed state and other components
simply delegate to checking whether they are in the window by UID.
Removing containers no longer blocks final pod deletion in the
API server; container removal is now handled as background cleanup.
Node shutdown no longer marks pods as failed, as they can be restarted
in the next step.
See https://docs.google.com/document/d/1Pic5TPntdJnYfIpBeZndDelM-AbS4FN9H2GTLFhoJ04/edit# for details
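A hypothetical sketch of the per-UID query surface other kubelet loops could delegate to; the method names here are illustrative, not a promise about the upstream interface:

```go
// Illustrative per-UID query surface; method names are stand-ins.
package kubelet

import "k8s.io/apimachinery/pkg/types"

type PodStateProvider interface {
	// CouldHaveRunningContainers reports whether the pod worker still
	// allows containers for this UID (setup or terminating phases).
	CouldHaveRunningContainers(uid types.UID) bool
	// ShouldPodContentBeRemoved reports whether cleanup (volumes, cgroups,
	// pod directories) may proceed because the pod is fully terminated.
	ShouldPodContentBeRemoved(uid types.UID) bool
}
```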
The e2e test that creates/deletes pods rapidly and verifies their status
was reporting a very long timing:
timings total=12.211347385s t=491ms run=2s execute=450402h8m25s
in a few scenarios. Add error checks that clarify when this happens
and why. Report p50/75/90/99 latencies on teardown, as observed from
the test, as a baseline for future changes.
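For reference, a minimal sketch of the kind of percentile reporting described, assuming the latencies are collected as time.Duration samples:

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the nearest-rank q-th percentile (0 < q <= 100) of the
// sorted samples: the smallest value with at least q% of samples at or below it.
func percentile(sorted []time.Duration, q float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	rank := int(math.Ceil(q / 100.0 * float64(len(sorted))))
	if rank < 1 {
		rank = 1
	}
	return sorted[rank-1]
}

func main() {
	// Example pod create/delete latencies observed by the test.
	samples := []time.Duration{
		491 * time.Millisecond, 750 * time.Millisecond,
		1200 * time.Millisecond, 2 * time.Second, 12 * time.Second,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	for _, q := range []float64{50, 75, 90, 99} {
		fmt.Printf("p%.0f=%v\n", q, percentile(samples, q))
	}
}
```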
Before assuming that a certain host runs an SSH server, we now test its
`SSHPort` for connectivity. This means that the test `should be able to
run crictl on the node` can now be more failure-proof by checking only
hosts where SSH actually runs. Besides that, we can also test all hosts
and not only the first one.
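A minimal sketch of such a connectivity probe, assuming the host address and SSH port are known (port 22 and the address in the example are assumptions):

```go
package main

import (
	"fmt"
	"net"
	"strconv"
	"time"
)

// canSSH reports whether something is listening on the host's SSH port.
// It only checks TCP connectivity, which is enough to skip hosts that do
// not run an SSH server at all.
func canSSH(host string, port int, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, strconv.Itoa(port)), timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println(canSSH("10.0.0.1", 22, 3*time.Second))
}
```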
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>
This replaces the check of mount propagation to/from the host OS mount
namespace with a similar check against the mount namespace where the
kubelet is running (which may or may not be the same mount namespace as
the host OS).
This addresses issue #100259
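A hedged sketch of one way to tell whether the current process shares the host OS mount namespace, by comparing /proc namespace links (requires privilege to read both links):

```go
package main

import (
	"fmt"
	"os"
)

// sameMountNamespaceAsHost reports whether the current process shares a
// mount namespace with pid 1. Comparing the /proc/<pid>/ns/mnt links is a
// common heuristic, not the only possible check.
func sameMountNamespaceAsHost() (bool, error) {
	self, err := os.Readlink("/proc/self/ns/mnt")
	if err != nil {
		return false, err
	}
	host, err := os.Readlink("/proc/1/ns/mnt")
	if err != nil {
		return false, err
	}
	return self == host, nil
}

func main() {
	same, err := sameMountNamespaceAsHost()
	fmt.Println(same, err)
}
```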
A previous commit created a few agnhost-related functions that create
agnhost pods/containers for general purposes.
This refactors tests to use those functions.
The kubelet would attempt to create a new sandbox for a pod whose
RestartPolicy is OnFailure even after all containers succeeded. This caused
unnecessary CRI and CNI calls, confusing logs, and conflicts between the
routine that creates the new sandbox and the routine that kills the Pod.
This patch checks the containers to start and stops creating the sandbox if
no container is supposed to start.
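A simplified sketch of that decision; the real logic lives in the kuberuntime manager and also accounts for init containers and restart policy, so the types below are stand-ins:

```go
package main

import "fmt"

// podActions is a stand-in for the kubelet's computed actions for one sync.
type podActions struct {
	CreateSandbox     bool
	ContainersToStart []int // indexes of containers that should be (re)started
}

// pruneSandboxCreation clears CreateSandbox when nothing would run in it,
// avoiding pointless CRI/CNI calls for a pod whose containers all
// succeeded under RestartPolicy: OnFailure.
func pruneSandboxCreation(a podActions) podActions {
	if a.CreateSandbox && len(a.ContainersToStart) == 0 {
		a.CreateSandbox = false
	}
	return a
}

func main() {
	a := podActions{CreateSandbox: true}
	fmt.Printf("%+v\n", pruneSandboxCreation(a)) // CreateSandbox becomes false
}
```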
e2e/framework is a place to keep common functions for e2e tests; it is
not a place to keep the e2e tests themselves. recreate_node.go is an e2e
test for node.
This moves recreate_node.go to e2e/node.
These changes allow setting the FQDN as the hostname of pods that set
the new PodSpec field setHostnameAsFQDN to true. The new PodSpec field
was added in a related PR.
This is PART2 (last) of the changes to enable KEP #1797 and addresses #91036
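For illustration, a pod spec using the new field via the typed Go API; the subdomain, image, and default cluster.local domain below are example assumptions:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	setFQDN := true
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "fqdn-demo", Namespace: "default"},
		Spec: corev1.PodSpec{
			Hostname:          "demo",
			Subdomain:         "sub", // assumes a matching headless Service named "sub"
			SetHostnameAsFQDN: &setFQDN,
			Containers: []corev1.Container{
				{Name: "c", Image: "k8s.gcr.io/pause:3.2"}, // example image
			},
		},
	}
	// With the default cluster domain, the pod's hostname becomes the FQDN:
	fmt.Printf("%s.%s.%s.svc.cluster.local\n",
		pod.Spec.Hostname, pod.Spec.Subdomain, pod.Namespace)
}
```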
The main purpose of this PR is to handle the framework core package's
dependency on the pod subpackage. The wait wrappers made an invalid
dependency from the core framework to a sub e2e framework, so we can use
e2epod.WaitTimeoutForPodReadyInNamespace to remove the invalid dependency.
WaitForPod*() are just wrapper functions for the e2epod package, and they
made an invalid dependency from the core framework to a sub e2e framework.
So this replaces WaitForPodRunning() with the e2epod function.
This is gross, but because NewDeleteOptions is used by various parts of
storage that still pass around pointers, the return type can't be
changed without significant refactoring within the apiserver. I think
this would be good to clean up, but I want to minimize apiserver-side
changes as much as possible in the client signature refactor.
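A small illustration of the friction, assuming the refactored client signatures that take DeleteOptions by value while NewDeleteOptions keeps returning a pointer:

```go
package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deletePod shows the mismatch: NewDeleteOptions still returns a pointer
// (storage code passes pointers around), while the refactored client
// signatures take the options by value, so callers dereference.
func deletePod(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	opts := metav1.NewDeleteOptions(30) // *metav1.DeleteOptions, 30s grace period
	return cs.CoreV1().Pods(ns).Delete(ctx, name, *opts)
}
```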
The condition was not part of the message and so would not
match:
OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"rootfs_linux.go:58: mounting \\\"/var/lib/kubelet/pods/128aea1f-bde3-43d5-8b5f-dd86b9a5ef33/volumes/kubernetes.io~secret/default-token-v55hm\\\" to rootfs \\\"/var/lib/docker/overlay2/813487ba91d534ded546ae34f2a05e7d94c26bd015d356f9b2641522d8f0d6da/merged\\\" at \\\"/var/run/secrets/kubernetes.io/serviceaccount\\\" caused \\\"stat /var/lib/kubelet/pods/128aea1f-bde3-43d5-8b5f-dd86b9a5ef33/volumes/kubernetes.io~secret/default-token-v55hm: no such file or directory\\\"\"": unknown
Updated the check and regex.
The kubelet can race when a pod is deleted and report that a container succeeded
when it instead failed, and thus the pod is reported as succeeded. Create an e2e
test that demonstrates this failure.
1. move the integration test of TaintBasedEvictions to test/integration/node
2. move the e2e test of TaintBasedEvictions to test/e2e/node
3. modify the conformance file to reflect the moved TaintBasedEvictions test
Similar functionality is required across e2e tests for RuntimeClass.
Let's create the runtimeclass helpers as part of the framework/node package.
Signed-off-by: Eric Ernst <eric.ernst@intel.com>
Commit 69a473be3 broke the test case (which was checking
that a file from a volume can't be read) by adding a
(wrong) assumption that the error should be nil.
Fix the assumption (we do expect an error here).
Note that we were checking the file contents before; now that
we check the error from the read, checking the contents is
redundant.
The other issue with the test is that SELinux should be in enforcing
mode for this to work, so let's check that first to avoid
false positives.
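A minimal sketch of such an enforcing-mode check, assuming the usual selinuxfs mount point:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// selinuxEnforcing reports whether SELinux is enforcing by reading
// /sys/fs/selinux/enforce ("1" when enforcing). The path is the usual
// selinuxfs mount point; it is absent when SELinux is disabled.
func selinuxEnforcing() bool {
	data, err := os.ReadFile("/sys/fs/selinux/enforce")
	if err != nil {
		return false
	}
	return strings.TrimSpace(string(data)) == "1"
}

func main() {
	if !selinuxEnforcing() {
		fmt.Println("skipping: SELinux is not in enforcing mode")
		return
	}
	fmt.Println("SELinux enforcing; the denied-read check is meaningful")
}
```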
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
This is currently the top flake against PRs, so I'm tagging it
as [Flaky]. Flaky tests can't be conformance tests, so I'm
removing it from [Conformance] as well until this is resolved.