- as soon as a request is received by the apiserver, determine the
timeout of the request and set a new request context with that deadline.
- the timeout filter that times out non-long-running requests should
use the request context instead of the fixed 60s wait used today.
- the admission and storage layers use the same request context with the
deadline specified (see the sketch after this list).
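A minimal sketch of the idea, assuming a plain net/http filter chain; the function name withRequestDeadline and the way the timeout is computed are illustrative, not the actual apiserver code:

```go
package main

import (
	"context"
	"net/http"
	"time"
)

// withRequestDeadline derives a per-request context carrying the computed
// deadline, so the timeout filter, admission, and the storage layer all
// observe the same timeout instead of a separately hardcoded 60s wait.
func withRequestDeadline(handler http.Handler, timeout time.Duration) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, req *http.Request) {
		ctx, cancel := context.WithTimeout(req.Context(), timeout)
		defer cancel()
		handler.ServeHTTP(w, req.WithContext(ctx))
	})
}
```

Downstream handlers can then honor ctx.Done() / ctx.Err() rather than carrying their own fixed timeouts.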
The test code generation for the create subresource was missing the resource
name parameter in the template, which caused a compile error in the
sample projects using the scale subresource. This only occurs with
non-namespaced resource kinds.
Also re-ran the code generation after applying the fix.
* Mixed protocol support for Services with type=LoadBalancer
KEP: https://github.com/kubernetes/enhancements/blob/master/keps/sig-network/20200103-mixed-protocol-lb.md
Add new feature gate to control the support of mixed protocols in Services with type=LoadBalancer
Add new fields to the ServiceStatus
Add Ports to the LoadBalancerIngress, so cloud provider implementations can report the status of the requested load balancer ports
Add ServiceCondition to the ServiceStatus so Service controllers can indicate the conditions of the Service (a sketch of a mixed-protocol Service follows below)
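As a rough illustration of what the feature enables (the Service name and ports below are made up, and the required feature gate is the one described in the KEP), a type=LoadBalancer Service exposing the same port over both TCP and UDP could be built with the core/v1 types like this:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical mixed-protocol Service: the same front-end port served
	// over both TCP and UDP behind a single load balancer.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns"},
		Spec: corev1.ServiceSpec{
			Type: corev1.ServiceTypeLoadBalancer,
			Ports: []corev1.ServicePort{
				{Name: "dns-tcp", Protocol: corev1.ProtocolTCP, Port: 53},
				{Name: "dns-udp", Protocol: corev1.ProtocolUDP, Port: 53},
			},
		},
	}
	fmt.Printf("service %q ports: %+v\n", svc.Name, svc.Spec.Ports)
}
```

Without the feature gate enabled, apiserver validation is expected to keep rejecting specs that mix protocols under type=LoadBalancer.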
* regenerate conflicting stuff
Instead of hardcoding fedora:latest, use one of our e2e images as the
source image inside the created pods. This will allow users who test
with this data outside of integration environments to reference a real
image and avoid spurious errors.
Relaxes matching of the pod_memory_working_set_bytes metric so that we won't
error due to the presence of other pods.
Signed-off-by: hasheddan <georgedanielmangum@gmail.com>
The e2e topology manager tests verify resource alignment using
devices, and the easiest devices to use at the moment are SRIOV
devices.
The resource alignment test cases are run for each supported policy,
in a loop.
The tests manage the SRIOV device plugin; up until now, the plugin
was set up and torn down in each loop iteration.
There is no real need for that. Each loop must reconfigure (and thus
restart) the kubelet, but the device plugin can be set up and torn
down just once for all the policies.
The kubelet can reconnect just fine to a running device plugin.
This way, we greatly reduce the interactions and the complexity of the
test environment, making it easier to understand and more robust, and
we trim a few minutes off the execution time.
However, this patch also hides (but does not solve) a test flake we
observed in some environments. The issue is hard to reproduce and not
well understood, but it seems to be caused by doing the SRIOV device
plugin setup/teardown in each policy testing loop.
Investigation so far suggests that the kubelet sometimes has stale
state after the sriovdp teardown/setup cycle, leading to flakes and
false negatives.
We tried to address this in https://github.com/kubernetes/kubernetes/pull/95611
with no conclusive results yet.
This patch was posted because we believe that, overall, its gains
exceed the drawback of hiding the aforementioned flake, and because
understanding the potential interaction issues between the sriovdp
and the kubelet deserves a separate test.
Signed-off-by: Francesco Romani <fromani@redhat.com>
A device plugin which implements the v1beta1 interface can return nil in
the Topology field.
For example, the nvidia-gpu-deviceplugin:
3520254b75/nvidia.go (L147)
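A minimal sketch of the defensive handling this implies, assuming the v1beta1 device plugin API types (the helper numaNodesOf is hypothetical, not kubelet code, and the import path may differ by Kubernetes version):

```go
package main

import (
	"fmt"

	pluginapi "k8s.io/kubelet/pkg/apis/deviceplugin/v1beta1"
)

// numaNodesOf returns the NUMA node IDs advertised by a device, tolerating
// plugins (such as nvidia-gpu-deviceplugin) that leave Topology unset.
func numaNodesOf(dev *pluginapi.Device) []int64 {
	if dev == nil || dev.Topology == nil {
		// No topology hint reported; never dereference Topology blindly.
		return nil
	}
	ids := make([]int64, 0, len(dev.Topology.Nodes))
	for _, node := range dev.Topology.Nodes {
		ids = append(ids, node.ID)
	}
	return ids
}

func main() {
	dev := &pluginapi.Device{ID: "gpu-0", Health: pluginapi.Healthy, Topology: nil}
	fmt.Println(numaNodesOf(dev)) // empty result instead of a nil-pointer panic
}
```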
Signed-off-by: Alexey Perevalov <alexey.perevalov@huawei.com>