Add an (admittedly pretty crude) CPU allocatable check.
A more thorough refactoring is needed, but we need
to unbreak CI first, so this seems like the minimal, decently clean fix.
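A minimal sketch of what such a check might look like (the helper name
and threshold here are assumptions, not the actual code):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// skipIfLowAllocatableCPU checks whether the node exposes at least the
// required amount of allocatable CPU before the test creates any pods.
func skipIfLowAllocatableCPU(node *v1.Node, required resource.Quantity) bool {
	allocatable := node.Status.Allocatable[v1.ResourceCPU]
	if allocatable.Cmp(required) < 0 {
		fmt.Printf("skipping: allocatable CPU %s < required %s\n",
			allocatable.String(), required.String())
		return true
	}
	return false
}

func main() {
	node := &v1.Node{}
	node.Status.Allocatable = v1.ResourceList{v1.ResourceCPU: resource.MustParse("1")}
	skipIfLowAllocatableCPU(node, resource.MustParse("2"))
}
```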
Signed-off-by: Francesco Romani <fromani@redhat.com>
A real SELinuxOptionsToFileLabel function needs access to the host's
/etc/selinux to read the defaults. This is not possible in
kube-controller-manager, which often runs in a container and does not have
access to /etc on the host. Even if it had such access, it could be running
on a different Linux distro than the worker nodes.
Therefore implement a custom SELinuxOptionsToFileLabel that does not
default fields in SELinuxOptions and uses just the fields provided by the Pod.
Since the controller cannot default empty SELinux label components,
treat them as incomparable.
Example: "system_u:system_r:container_t:s0:c1,c2" *does not* conflict with ":::s0:c1,c2",
because the node that will run such a Pod may expand ":::s0:c1,c2" to "system_u:system_r:container_t:s0:c1,c2".
However, "system_u:system_r:container_t:s0:c1,c2" *does* conflict with ":::s0:c98,c99".
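As an illustrative sketch of this comparison rule (not the controller's
actual code; the function name and component splitting are assumptions):

```go
package main

import (
	"fmt"
	"strings"
)

// conflicts reports whether two SELinux labels provably differ.
// Empty components are incomparable (the node may still default them),
// so they never count as a conflict.
func conflicts(a, b string) bool {
	// user:role:type:level -- the level may itself contain ':'.
	partsA := strings.SplitN(a, ":", 4)
	partsB := strings.SplitN(b, ":", 4)
	for i := 0; i < len(partsA) && i < len(partsB); i++ {
		if partsA[i] == "" || partsB[i] == "" {
			continue // incomparable: the node may fill in this component
		}
		if partsA[i] != partsB[i] {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(conflicts("system_u:system_r:container_t:s0:c1,c2", ":::s0:c1,c2"))   // false
	fmt.Println(conflicts("system_u:system_r:container_t:s0:c1,c2", ":::s0:c98,c99")) // true
}
```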
kubeproxy_conntrack_reconciler_deleted_entries_total can be used
to track the total number of entries deleted during conntrack reconciliation.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
kube_proxy_conntrack_reconciler_sync_duration_seconds can be used
to track the latency of conntrack flow reconciliation.
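As a rough sketch of how these two metrics could be declared (using plain
client_golang for brevity; kube-proxy actually registers metrics through
k8s.io/component-base/metrics, and the reconcile body here is made up):

```go
package main

import (
	"time"

	"github.com/prometheus/client_golang/prometheus"
)

var (
	deletedEntries = prometheus.NewCounter(prometheus.CounterOpts{
		Name: "kubeproxy_conntrack_reconciler_deleted_entries_total",
		Help: "Total number of conntrack entries deleted by the reconciler.",
	})
	syncDuration = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name: "kube_proxy_conntrack_reconciler_sync_duration_seconds",
		Help: "Duration of one round of conntrack flow reconciliation.",
	})
)

func reconcile() {
	start := time.Now()
	// ... list and delete stale conntrack flows here (omitted) ...
	deletedEntries.Add(3) // e.g. three stale entries removed this round
	syncDuration.Observe(time.Since(start).Seconds())
}

func main() {
	prometheus.MustRegister(deletedEntries, syncDuration)
	reconcile()
}
```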
Signed-off-by: Daman Arora <aroradaman@gmail.com>
The test `Pods should support retrieving logs from the container
over websockets` flakes because it doesn't always wait until the
container is running and able to produce the expected output.
Waiting for the pod to reach the `Running` phase is not enough,
as it doesn't mean that the container is running.
Waiting for the container to be in the `Running` state should fix
the test.
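A sketch of the kind of check this implies (the helper name is
hypothetical; the actual fix uses the e2e framework's wait helpers):

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// containerRunning reports whether the named container is in the Running
// state. A pod in the Running phase may still have containers being
// created, so the container status itself must be checked.
func containerRunning(pod *v1.Pod, name string) bool {
	for _, st := range pod.Status.ContainerStatuses {
		if st.Name == name && st.State.Running != nil {
			return true
		}
	}
	return false
}

func main() {
	pod := &v1.Pod{
		Status: v1.PodStatus{
			Phase: v1.PodRunning,
			ContainerStatuses: []v1.ContainerStatus{{
				Name:  "main",
				State: v1.ContainerState{Running: &v1.ContainerStateRunning{}},
			}},
		},
	}
	fmt.Println(containerRunning(pod, "main")) // true
}
```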
IIUC, before the translator handler was used, ping data could be delivered from
the client to the runtime side, since kube-apiserver does not parse any client
data. With WebSocket, however, the server responds with a pong to the client
without forwarding the data to the runtime side. If a proxy is present, it may
close the connection due to inactivity. SPDY's PingPeriod can help address this
issue.
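A keepalive loop in the spirit of SPDY's PingPeriod could look roughly like
this (a sketch against gorilla/websocket; the function and parameter names
are assumptions, not apiserver code):

```go
// Package keepalive sketches a WebSocket ping loop; illustrative only.
package keepalive

import (
	"time"

	"github.com/gorilla/websocket"
)

// startPings sends a WebSocket ping on a fixed period so intermediate
// proxies see traffic even when the stream is otherwise idle.
func startPings(conn *websocket.Conn, period time.Duration, stop <-chan struct{}) {
	go func() {
		ticker := time.NewTicker(period)
		defer ticker.Stop()
		for {
			select {
			case <-ticker.C:
				deadline := time.Now().Add(period)
				if err := conn.WriteControl(websocket.PingMessage, nil, deadline); err != nil {
					return // connection is gone; stop pinging
				}
			case <-stop:
				return
			}
		}
	}()
}
```

A caller would start this right after upgrading the connection and close the
stop channel on teardown.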
Signed-off-by: Wei Fu <fuweid89@gmail.com>
Co-authored-by: Antonio Ojea <aojea@google.com>
Our CI machines happen to have 1 fully allocatable CPU for test workloads.
This is really, really the minimal amount, but it should still be sufficient
for the tests to run. The CFS quota test, however, creates a series of pods
(at the time of writing, 6) and does the cleanup only at the very end.
This means pods requiring resources accumulate on the CI machine node.
The fix implemented here is to just clean up after each subcase.
Doing so, the test's CPU footprint equals the largest single requirement
(say, 1000 millicores) rather than the sum of all the subcases' requirements.
This doesn't change the test behavior, and it makes it possible
to run the test on very barebones machines.
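As a toy illustration of the footprint arithmetic (made-up numbers, not the
actual e2e code): with per-subcase cleanup, peak CPU in use is the largest
single request instead of the running sum.

```go
package main

import "fmt"

type subcase struct {
	name      string
	cpuMillis int
}

func main() {
	cases := []subcase{{"case-1", 1000}, {"case-2", 500}, {"case-3", 250}}
	inUse := 0
	for _, tc := range cases {
		inUse += tc.cpuMillis // create this subcase's pod(s)
		fmt.Printf("%s: %dm CPU in use\n", tc.name, inUse)
		inUse -= tc.cpuMillis // clean up right away, not only at the very end
	}
}
```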
Kernels may have `kernel.dmesg_restrict = 1` set, which requires root
access to read dmesg. We're now using `sudo` to mitigate that.
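A minimal sketch of the idea in Go (the actual change just prefixes the
dmesg invocation with sudo):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Run dmesg through sudo so it works even when kernel.dmesg_restrict=1
	// hides the kernel ring buffer from unprivileged users.
	out, err := exec.Command("sudo", "dmesg").CombinedOutput()
	if err != nil {
		fmt.Println("dmesg failed:", err)
		return
	}
	fmt.Printf("%s", out)
}
```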
Signed-off-by: Sascha Grunert <sgrunert@redhat.com>