The generated ResourceClaim name and the names of the ResourceClaimTemplate and
ResourceClaim referenced by a pod must be valid according to the resource API;
otherwise the pod cannot start.
This check was removed from the original implementation out of concern about
validating fields in core against limitations imposed by a separate, alpha API.
Because the issue was raised again in
https://github.com/kubernetes/kubernetes/pull/116254#discussion_r1134010324
the check is added back.
The same strings that worked before still work now. In particular, the
constraints on spec.resourceClaim.name are unchanged: it must be a DNS label.
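
As a rough illustration of what "valid according to the resource API" means for
these names, here is a minimal sketch using the apimachinery DNS-label validator.
validateClaimName is a hypothetical helper for illustration, not the actual
validation code:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

// validateClaimName is an illustrative helper: a pod-level resource claim
// name (and the generated ResourceClaim name derived from it) must be a
// valid DNS label, otherwise the pod cannot start.
func validateClaimName(name string) error {
	if msgs := validation.IsDNS1123Label(name); len(msgs) > 0 {
		return fmt.Errorf("invalid resource claim name %q: %v", name, msgs)
	}
	return nil
}

func main() {
	fmt.Println(validateClaimName("my-claim"))      // <nil>
	fmt.Println(validateClaimName("Not_A_Label!!")) // error: not a DNS label
}
```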
The name "PodScheduling" was unusual because in contrast to most other names,
it was impossible to put an article in front of it. Now PodSchedulingContext is
used instead.
* apiserver: add latency tracker for priority & fairness queue wait time
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
* apiserver: exclude priority & fairness wait times from SLO/SLI latency metrics
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
* apiserver: update TestLatencyTrackersFrom to check latency from PriorityAndFairnessTracker
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
* flowcontrol: add helper function observeQueueWaitTime to consolidate metric and latency tracker calls
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
* flowcontrol: replace time.Now() / time.Since() with clock.Now() / clock.Since() for better testability
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
* flowcontrol: add unit test TestQueueWaitTimeLatencyTracker to validate queue wait times recorded by latency tracker
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
---------
Signed-off-by: Andrew Sy Kim <andrewsy@google.com>
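
The flowcontrol commits above consolidate the queue-wait observation into one
helper and switch from time.Now()/time.Since() to an injected clock so a fake
clock can drive the unit test. A minimal sketch of that pattern, assuming
k8s.io/utils/clock; the waiter type and observe callback are illustrative
stand-ins, not the actual flowcontrol code:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/utils/clock"
	testingclock "k8s.io/utils/clock/testing"
)

// waiter records how long a request waited in the queue. Injecting the clock
// (instead of calling time.Now()/time.Since() directly) lets tests advance
// time deterministically.
type waiter struct {
	clock   clock.PassiveClock
	observe func(d time.Duration) // hypothetical sink: metric + latency tracker
}

func (w *waiter) run(doWait func()) {
	start := w.clock.Now()
	doWait()
	w.observe(w.clock.Since(start)) // single place that records queue wait time
}

func main() {
	fake := testingclock.NewFakeClock(time.Now())
	w := &waiter{
		clock:   fake,
		observe: func(d time.Duration) { fmt.Println("queue wait:", d) },
	}
	// The fake clock is stepped while the request "waits", so the recorded
	// duration is exact and the test does not need to sleep for real.
	w.run(func() { fake.Step(250 * time.Millisecond) })
}
```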
Annotation
As part of this change, kube-proxy accepts any value for either
annotation that is not "disabled".
Change-Id: Idfc26eb4cc97ff062649dc52ed29823a64fc59a4
Note that this fixes a bug in the existing `toBytes` implementation,
which did not correctly set the capacity on the returned slice.
Signed-off-by: Monis Khan <mok@microsoft.com>
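
To illustrate the class of bug being described (the functions below are
stand-ins, not the actual `toBytes`): when the returned slice's capacity
extends beyond its length, a later append by the caller can silently overwrite
adjacent bytes in the shared backing array, whereas a full slice expression
pins the capacity to the length:

```go
package main

import "fmt"

// toBytesLeaky returns a sub-slice whose capacity still covers the rest of
// the backing array, so an append by the caller can clobber adjacent data.
func toBytesLeaky(buf []byte, n int) []byte {
	return buf[:n]
}

// toBytesCapped uses a full slice expression so cap == len; an append by
// the caller forces a copy instead of writing into the shared array.
func toBytesCapped(buf []byte, n int) []byte {
	return buf[:n:n]
}

func main() {
	backing := []byte("headerpayload")

	leaky := toBytesLeaky(backing, 6)
	_ = append(leaky, 'X')       // overwrites backing[6]
	fmt.Println(string(backing)) // "headerXayload"

	backing = []byte("headerpayload")
	capped := toBytesCapped(backing, 6)
	_ = append(capped, 'X')      // reallocates; backing untouched
	fmt.Println(string(backing)) // "headerpayload"
}
```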
SyncKnownPods began triggering UpdatePod() for pods that have been
orphaned by the desired config, to ensure those pods run to termination. This
test reads a mutex-protected value while pod workers are running
in the background and, as a consequence, triggers a data race.
Wait for the workers to stabilize before reading the value. Other
tests validate that the correct sync events are triggered (see
kubelet_pods_test.go#TestKubelet_HandlePodCleanups for full
verification of this behavior).
It is slightly concerning that I was unable to reproduce the race
locally even under stress testing, but I cannot identify why.
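
A generic sketch of the "wait for the workers to stabilize" pattern, with
hypothetical names (workers, Idle, waitForStableWorkers) rather than the
kubelet's real helpers; the point is to poll until the background goroutines
go quiet before performing the race-sensitive read:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// workers is a stand-in for the pod workers: background goroutines that
// update a mutex-protected map while the test is still running.
type workers struct {
	mu      sync.Mutex
	pending int
	synced  map[string]bool
}

func (w *workers) Idle() bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	return w.pending == 0
}

func (w *workers) podsSynced() map[string]bool {
	w.mu.Lock()
	defer w.mu.Unlock()
	out := make(map[string]bool, len(w.synced))
	for k, v := range w.synced {
		out[k] = v
	}
	return out
}

// waitForStableWorkers polls until no work is in flight, so the read below
// does not touch state that background goroutines are still mutating.
func waitForStableWorkers(w *workers, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for !w.Idle() {
		if time.Now().After(deadline) {
			return fmt.Errorf("workers did not stabilize within %v", timeout)
		}
		time.Sleep(10 * time.Millisecond)
	}
	return nil
}

func main() {
	w := &workers{synced: map[string]bool{"orphaned-pod": true}}
	if err := waitForStableWorkers(w, time.Second); err != nil {
		panic(err)
	}
	fmt.Println(w.podsSynced()) // safe to read: no workers running
}
```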