Since the GA graduation of the memory manager in https://github.com/kubernetes/kubernetes/pull/128517
we have been sharing the initial container map across managers.
The intention of this sharing was not to actually share a data
structure, but to
1. save the relatively expensive relisting from the runtime
2. have all the managers share a consistent view - even though the
chance of misalignment tends to be tiny.
The unwanted side effect, however, is that all the managers now race
to modify a shared, non-thread-safe data structure.
The fix is to clone (deepcopy) the computed map when passing it
to each manager. This restores the old semantics of the code.
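
For illustration, a minimal sketch of the idea using a hypothetical
containerMap type (not the real containermap package), showing why each
manager should receive its own deep copy:

    package main

    import "fmt"

    // containerMap is a stand-in for the map built from the runtime
    // relisting; the real structure is richer than this.
    type containerMap map[string]string // containerID -> pod UID (illustrative)

    // clone returns an independent deep copy so that mutation by one
    // manager cannot be observed by the others.
    func (m containerMap) clone() containerMap {
        out := make(containerMap, len(m))
        for k, v := range m {
            out[k] = v
        }
        return out
    }

    func main() {
        initial := containerMap{"c1": "pod-a"}

        // Each manager gets its own private copy instead of the shared map.
        forCPUManager := initial.clone()
        forMemoryManager := initial.clone()

        forCPUManager["c2"] = "pod-b"      // does not leak into other views
        fmt.Println(len(forMemoryManager)) // still 1
    }
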
This issue raises the topic of the managers possibly going out of sync,
since each of them maintains a private view of the world.
This risk is real, yet this is how the code has worked for
most of its lifetime, so the plan is to look at this and evaluate
possible improvements later on.
Signed-off-by: Francesco Romani <fromani@redhat.com>
Clients must be able to use CBOR without a guarantee that all apiservers support it. The apiserver
aggregation layer avoids changing in any way that would require aggregated apiservers to be
updated. This end-to-end test verifies that a client's content negotiation behaviors continue to
work over time when communicating with a 1.17 sample-apiserver.
With the ClientsAllowCBOR client-go feature gate enabled, a 415 response to a CBOR-encoded REST
request causes all subsequent requests from the client to fall back to a JSON request encoding.
This mechanism had only worked as intended when CBOR was explicitly configured in the
ClientContentConfig. When both ClientsAllowCBOR and ClientsPreferCBOR are enabled, an
unconfigured (empty) content type defaults to CBOR instead of JSON. Both ways of configuring a
client to use the CBOR request encoding are now subject to the same fallback mechanism.
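
For illustration, a rough sketch of the intended fallback behavior, using
hypothetical types rather than the actual client-go implementation:

    package cborfallback

    import (
        "net/http"
        "sync/atomic"
    )

    // encodingSelector remembers whether the server has rejected CBOR.
    // Once a 415 is seen for a CBOR-encoded request, every later request
    // from this client falls back to JSON.
    type encodingSelector struct {
        sawUnsupportedCBOR atomic.Bool
        preferCBOR         bool // CBOR configured explicitly, or chosen as
        // the default (ClientsAllowCBOR + ClientsPreferCBOR)
    }

    func (s *encodingSelector) requestContentType() string {
        if s.preferCBOR && !s.sawUnsupportedCBOR.Load() {
            return "application/cbor"
        }
        return "application/json"
    }

    func (s *encodingSelector) observeResponse(sentContentType string, statusCode int) {
        if sentContentType == "application/cbor" && statusCode == http.StatusUnsupportedMediaType {
            s.sawUnsupportedCBOR.Store(true)
        }
    }
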
We are still only calling NodeExpand after the volume is mounted.
Avoid depending on the ASW (actual state of the world) in dswp.findAndAddNewPods(). It is odd to determine desired state based on actual state.
* Test feature-gate enabled/disabled for validation
* Test pkg/registry/resource/resourceclaim
* Add Data and NetworkData to integration test
Signed-off-by: Lionel Jouin <lionel.jouin@est.tech>
* Add status
* Add validation to check that fields are correct (Network field, device
has been allocated)
* Add feature-gate
* Drop the field if the feature-gate is not set (see the sketch after this list)
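
For illustration, a minimal sketch of the usual "drop disabled fields"
pattern, with hypothetical type and field names standing in for the real
ResourceClaim status types and feature gate:

    package resourceclaim

    // networkDeviceData is a stand-in for the gated status field.
    type networkDeviceData struct {
        InterfaceName string
    }

    type allocatedDeviceStatus struct {
        NetworkData *networkDeviceData
    }

    type resourceClaimStatus struct {
        Devices []allocatedDeviceStatus
    }

    // dropDisabledFields clears the gated field when the feature gate is
    // disabled and the old object did not already use it, so writes from
    // clients cannot persist data for a disabled feature.
    func dropDisabledFields(newStatus, oldStatus *resourceClaimStatus, gateEnabled bool) {
        if gateEnabled || statusUsesNetworkData(oldStatus) {
            return
        }
        for i := range newStatus.Devices {
            newStatus.Devices[i].NetworkData = nil
        }
    }

    func statusUsesNetworkData(status *resourceClaimStatus) bool {
        if status == nil {
            return false
        }
        for _, d := range status.Devices {
            if d.NetworkData != nil {
                return true
            }
        }
        return false
    }
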
Signed-off-by: Lionel Jouin <lionel.jouin@est.tech>
This fixes an issue in
TestSchedulerPerf/SteadyStateClusterResourceClaimTemplate:
scheduler_perf.go:1542: FATAL ERROR: op 7: delete scheduled pods: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
That occurs when the test is almost done but hasn't observed all scheduled
pods yet. The previous attempt to address this error wasn't fully correct:
it covered the case where the context has already been canceled, but not
this particular "will reach deadline soon" case.
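
For illustration, a small self-contained reproduction of the failure mode
using golang.org/x/time/rate directly (the client-go rate limiter behaves
similarly); the deadline check at the end is just one possible way to treat
"will reach deadline soon" like an already-canceled context:

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // One request every 10 seconds: a Wait issued just before the
        // deadline cannot possibly be satisfied in time.
        limiter := rate.NewLimiter(rate.Every(10*time.Second), 1)
        limiter.Allow() // consume the single burst token

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()

        // Even though ctx is not canceled yet, Wait fails immediately with
        // "rate: Wait(n=1) would exceed context deadline".
        if err := limiter.Wait(ctx); err != nil {
            fmt.Println("error:", err)
        }

        // A guard that also covers the "will reach deadline soon" case,
        // not only an already-canceled context.
        if deadline, ok := ctx.Deadline(); ok && time.Until(deadline) < 10*time.Second {
            fmt.Println("skipping call: not enough time left before the deadline")
        }
    }
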