1. Use pod-level resource when feature is enabled and resources are set at pod-level
2. Edge case handling: When a pod defines only CPU or memory limits at pod-level (but not both), and container-level requests/limits are unset, the pod-level request stays empty for the resource that has no pod-level limit. The container's request for that resource is then set to the default request value from schedutil.
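As a rough illustration of that edge case, here is a minimal Go sketch with plain integers standing in for the real resource quantities; the default constants are placeholders for schedutil's actual defaults, not the real values:

```go
package main

import "fmt"

// Placeholder defaults standing in for schedutil's default request values.
const (
	defaultMilliCPURequest int64 = 100
	defaultMemoryRequest   int64 = 200 * 1024 * 1024
)

// containerRequestFallback sketches the edge case: when a resource has no
// pod-level limit (so no pod-level request could be derived) and the
// container does not set its own request, the container request falls back
// to the scheduler default for that resource.
func containerRequestFallback(podLevelLimitSet bool, containerRequest, schedulerDefault int64) int64 {
	if podLevelLimitSet || containerRequest != 0 {
		return containerRequest // nothing to default in this sketch
	}
	return schedulerDefault
}

func main() {
	// Pod sets only a CPU limit at pod level; memory is unset at both levels.
	cpu := containerRequestFallback(true, 0, defaultMilliCPURequest) // left unset; the pod-level CPU limit covers it in this sketch
	mem := containerRequestFallback(false, 0, defaultMemoryRequest)  // falls back to the default memory request
	fmt.Printf("container cpu request: %d, container memory request: %d\n", cpu, mem)
}
```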
1. The effective container requests cannot be greater than pod-level requests
2. Individual container limits cannot be greater than pod-level limits
3. Only CPU & Memory are supported at pod-level
4. In-place container resource updates are not supported if pod-level resources are set
Note: the rule that effective container requests cannot be greater than pod-level limits follows by transitivity: effective container requests <= pod-level requests && pod-level requests <= pod-level limits; therefore effective container requests <= pod-level limits
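A minimal sketch of these validation rules, assuming simplified integer quantities (e.g. CPU in millicores) rather than the real resource.Quantity types:

```go
package main

import (
	"errors"
	"fmt"
)

// validatePodLevelResources sketches the rules above for a single resource.
func validatePodLevelResources(podRequest, podLimit int64, containerRequests, containerLimits []int64) error {
	// Pod-level requests must not exceed pod-level limits (the transitivity
	// argument in the note relies on this).
	if podLimit > 0 && podRequest > podLimit {
		return errors.New("pod-level requests exceed pod-level limits")
	}
	var totalContainerRequests int64
	for _, r := range containerRequests {
		totalContainerRequests += r
	}
	// Rule 1: effective container requests cannot be greater than pod-level requests.
	if totalContainerRequests > podRequest {
		return errors.New("aggregate container requests exceed pod-level requests")
	}
	// Rule 2: individual container limits cannot be greater than pod-level limits.
	for _, l := range containerLimits {
		if podLimit > 0 && l > podLimit {
			return errors.New("container limit exceeds pod-level limit")
		}
	}
	// Container requests <= pod-level limits then follows by transitivity;
	// no separate check is needed.
	return nil
}

func main() {
	fmt.Println(validatePodLevelResources(1000, 1000, []int64{400, 400}, []int64{600, 600})) // <nil>
	fmt.Println(validatePodLevelResources(1000, 1000, []int64{800, 800}, []int64{600, 600})) // error
}
```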
Signed-off-by: ndixita <ndixita@google.com>
1. If pod-level limit is set, pod-level request is unset and container-level request is set: derive pod-level request from container-level requests
2. If pod-level limit is set, pod-level request is unset and container-level request is unset: set pod-level request equal to pod-level limit
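A minimal sketch of these two defaulting rules for a single resource, again with integers standing in for the real quantities:

```go
package main

import "fmt"

// derivePodRequest sketches the defaulting described above: the pod-level
// limit is set but the pod-level request is not.
func derivePodRequest(podLimit int64, containerRequests []int64) int64 {
	var sum int64
	for _, r := range containerRequests {
		sum += r
	}
	if sum > 0 {
		// Rule 1: derive the pod-level request from container-level requests.
		return sum
	}
	// Rule 2: no container-level requests either; fall back to the pod-level limit.
	return podLimit
}

func main() {
	fmt.Println(derivePodRequest(2000, []int64{500, 300})) // 800, from container requests
	fmt.Println(derivePodRequest(2000, nil))               // 2000, from the pod-level limit
}
```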
1. Add support for pod level resources in kubectl
2. Reuse the existing method to describe container resources and generalize it to describe both pod and container level resources
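One possible shape for such a generalized describer (a sketch only; the actual kubectl helper works on the real API types and differs in naming):

```go
package main

import (
	"fmt"
	"io"
	"os"
)

// describeResources is a sketch of a helper that prints a limits/requests
// pair, so the same code path can describe pod-level and container-level
// resources. The real kubectl code works with corev1.ResourceRequirements.
func describeResources(w io.Writer, scope string, limits, requests map[string]string) {
	fmt.Fprintf(w, "%s Limits:\n", scope)
	for name, qty := range limits {
		fmt.Fprintf(w, "  %s: %s\n", name, qty)
	}
	fmt.Fprintf(w, "%s Requests:\n", scope)
	for name, qty := range requests {
		fmt.Fprintf(w, "  %s: %s\n", name, qty)
	}
}

func main() {
	// The same helper serves both levels.
	describeResources(os.Stdout, "Pod",
		map[string]string{"cpu": "2", "memory": "1Gi"},
		map[string]string{"cpu": "1", "memory": "512Mi"})
	describeResources(os.Stdout, "Container",
		map[string]string{"cpu": "500m"},
		map[string]string{"cpu": "250m"})
}
```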
1. Add Resources struct to PodSpec struct in both external and internal API packages
2. Add feature gate and logic for dropping disabled fields for Pod Level Resources
KEP: enhancements/keps/sig-node/2837-pod-level-resource-spec
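A rough sketch of the shape of this change; field and gate handling follow the KEP, but the snippet is illustrative rather than the actual API source:

```go
package main

// Illustrative stand-ins only: a PodSpec-like struct gaining a pod-level
// Resources field, plus the "drop disabled fields" pattern used when the
// feature gate is off.

type ResourceRequirements struct {
	Limits   map[string]string
	Requests map[string]string
}

type PodSpec struct {
	// Resources holds the pod-level requests and limits (CPU and memory only).
	Resources *ResourceRequirements
	// ... other fields elided ...
}

// dropDisabledPodLevelResources clears the new field when the feature gate is
// disabled and the old object did not already use it, mirroring the usual
// dropDisabledFields pattern in the API strategy code.
func dropDisabledPodLevelResources(newSpec, oldSpec *PodSpec, gateEnabled bool) {
	if gateEnabled {
		return
	}
	if oldSpec != nil && oldSpec.Resources != nil {
		return // keep the field for objects that already use it
	}
	newSpec.Resources = nil
}

func main() {
	spec := &PodSpec{Resources: &ResourceRequirements{Requests: map[string]string{"cpu": "1"}}}
	dropDisabledPodLevelResources(spec, nil, false)
	_ = spec
}
```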
This commit introduces:
1. Cleanups in port-forwarding error handling code, ensuring that
error text is always compared in lowercase.
2. An E2E test verifying that when a pod is removed, its port-forward
is stopped.
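The lowercase comparison in item 1 amounts to something like the following sketch (not the actual port-forward code):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// containsIgnoreCase sketches the idea of always comparing lowercased error
// text, so matching does not depend on the capitalization of the message.
func containsIgnoreCase(err error, fragment string) bool {
	if err == nil {
		return false
	}
	return strings.Contains(strings.ToLower(err.Error()), strings.ToLower(fragment))
}

func main() {
	err := errors.New("error forwarding port 8080: Pod Not Found")
	fmt.Println(containsIgnoreCase(err, "pod not found")) // true
}
```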
Signed-off-by: Maciej Szulik <soltysh@gmail.com>
Since the GA graduation of memory manager in https://github.com/kubernetes/kubernetes/pull/128517
we are sharing the initial container map across managers.
The intention of this sharing was not to actually share a data
structure, but to
1. save the relatively expensive relisting from the runtime
2. have all the managers share a consistent view - even though the
chance of misalignment tends to be tiny.
The unwanted side effect, though, is that all the managers now race
to modify a shared, non-thread-safe data structure.
The fix is to clone (deepcopy) the computed map when passing it
to each manager. This restores the old semantics of the code.
This raises the question of the managers possibly going out of sync,
since each of them maintains a private view of the world.
This risk is real, yet this is how the code worked for most of its
lifetime, so the plan is to revisit this and evaluate possible
improvements later on.
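A minimal sketch of the cloning pattern, with a simplified map type standing in for the real container map handed to the CPU, memory, and topology managers:

```go
package main

import "fmt"

// containerMap is a simplified stand-in for the runtime-derived map that the
// resource managers receive at startup.
type containerMap map[string]string // containerID -> podUID (simplified)

// clone returns a deep copy, so each manager gets its own private map instead
// of all of them mutating one shared, non-thread-safe structure.
func (m containerMap) clone() containerMap {
	out := make(containerMap, len(m))
	for k, v := range m {
		out[k] = v
	}
	return out
}

type manager interface {
	Start(initial containerMap)
}

// startManagers relists from the runtime once (expensive), then hands every
// manager its own copy of the result.
func startManagers(fromRuntime containerMap, managers ...manager) {
	for _, m := range managers {
		m.Start(fromRuntime.clone())
	}
}

type fakeManager struct{ name string }

func (f *fakeManager) Start(initial containerMap) {
	// Each manager can now add or remove entries without racing the others.
	initial["new-container"] = "pod-" + f.name
	fmt.Println(f.name, "sees", len(initial), "containers")
}

func main() {
	initial := containerMap{"c1": "pod-a", "c2": "pod-b"}
	startManagers(initial, &fakeManager{"cpu"}, &fakeManager{"memory"}, &fakeManager{"topology"})
	fmt.Println("original still has", len(initial), "containers")
}
```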
Signed-off-by: Francesco Romani <fromani@redhat.com>