When NodeResourceSlice was renamed to ResourceSlice, the embedded
[Node]ResourceModel should also have been renamed.
Kubernetes-commit: a0add8d2c7578cd9f94fc302d6212f9f7d16175b
Runtime classes are an apiserver concept, while runtime handlers are a kubelet concept.
For NodeStatus, it makes more sense to report the latter here.
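A minimal sketch of the shape this adds to NodeStatus; the type and field names below are assumptions inferred from this change, not the verbatim API:
```go
package v1sketch // hypothetical placement for the sketch

// NodeRuntimeHandlerFeatures lists features supported by one handler.
type NodeRuntimeHandlerFeatures struct {
	// RecursiveReadOnlyMounts is true if the runtime handler supports
	// recursive read-only (RRO) mounts, which the e2e test below exercises.
	RecursiveReadOnlyMounts *bool `json:"recursiveReadOnlyMounts,omitempty"`
}

// NodeRuntimeHandler is one runtime handler reported in NodeStatus.
type NodeRuntimeHandler struct {
	// Name of the handler as the container runtime knows it;
	// an empty name denotes the default handler.
	Name string `json:"name,omitempty"`
	// Features supported by this handler.
	Features *NodeRuntimeHandlerFeatures `json:"features,omitempty"`
}
```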
This commit modifies the following files:
- pkg/apis/core/types.go
- staging/src/k8s.io/api/core/v1/types.go
- pkg/kubelet/nodestatus/setters.go
- pkg/kubelet/kubelet_node_status.go
- pkg/registry/core/node/strategy.go
- test/e2e_node/mount_rro_linux_test.go
Other changes were auto-generated by running `make update`.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Kubernetes-commit: 1dc05009fe7f4e1d139b0c8394683edb54f8d082
This commit modifies the following files:
- pkg/apis/core/types.go
- staging/src/k8s.io/api/core/v1/types.go
Other changes were auto-generated by running `make update`.
Signed-off-by: Akihiro Suda <akihiro.suda.cz@hco.ntt.co.jp>
Kubernetes-commit: d940886d0a4ee9aa8a7ca075fee175b002baf883
The default queue implementation is mostly FIFO, and it cannot be
swapped out without implementing the whole `workqueue.Interface`, which
is undesirable because it duplicates a lot of code. One attempt was
[kubernetes/kubernetes#109349][1], which tried to implement a priority
queue. That is really useful, and [knative/pkg][2] implemented something
similar called a two-lane queue. The two-lane queue is great, but it
isn't perfect: a full slow queue can still slow down items in the fast
queue.
This change makes the queue implementation swappable without adding
extra maintenance effort for the Kubernetes community. We are happy to
maintain our own queue implementation (similar to the two-lane queue)
downstream.
[1]: https://github.com/kubernetes/kubernetes/pull/109349
[2]: https://github.com/knative/pkg/blob/main/controller/two_lane_queue.go
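For illustration, a custom queue could then be plugged in roughly as follows. This is a sketch under assumptions: the pluggable queue shape (Touch/Push/Len/Pop) and the `QueueConfig.Queue` field are inferred from this change and may differ between client-go versions.
```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// lifo is a toy replacement queue: the newest item is popped first.
type lifo struct{ items []interface{} }

func (q *lifo) Touch(item interface{}) {} // no-op: queued items are never reordered
func (q *lifo) Push(item interface{})  { q.items = append(q.items, item) }
func (q *lifo) Len() int               { return len(q.items) }
func (q *lifo) Pop() interface{} {
	item := q.items[len(q.items)-1]
	q.items = q.items[:len(q.items)-1]
	return item
}

func main() {
	// QueueConfig.Queue is the assumed injection point for the custom queue.
	q := workqueue.NewWithConfig(workqueue.QueueConfig{
		Name:  "example",
		Queue: &lifo{},
	})
	q.Add("a")
	q.Add("b")
	item, _ := q.Get() // "b": LIFO order instead of the default FIFO
	fmt.Println(item)
	q.Done(item)
}
```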
Kubernetes-commit: 87b4279e07349b3c68f16f69a349a02bddd12f25
While currently those objects only get published by the kubelet for node-local
resources, this could change once we also support network-attached
resources. Dropping the "Node" prefix enables such a future extension.
The NodeName in ResourceSlice and StructuredResourceHandle then becomes
optional. The kubelet still needs to provide one, and it must match the
kubelet's own node name; otherwise the kubelet does not have permission to
access ResourceSlice objects.
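A hedged sketch of the resulting API shape; the names, tags, and the `ResourceModel` stub are assumptions:
```go
package resourcev1alpha2 // hypothetical placement for the sketch

import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// ResourceModel is a stub standing in for the model union.
type ResourceModel struct{}

// ResourceSlice, without the former "Node" prefix.
type ResourceSlice struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// NodeName may be empty for resources that are not node-local.
	// When the kubelet publishes a slice, it must set NodeName to its
	// own node name; otherwise it does not have permission to access
	// the object.
	NodeName string `json:"nodeName,omitempty"`

	// DriverName identifies the DRA driver providing these resources.
	DriverName string `json:"driverName"`

	ResourceModel `json:",inline"`
}
```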
Kubernetes-commit: 0b6a0d686a060b5d5ff92cea931aacd4eba85adb
This adds support for semantic version comparison to the CEL support in the
"named resources" structured parameter model. For example, it can be used to
check that an instance supports a certain API level.
To minimize the risk, the new "semver" type is only defined in the CEL
environment for DRA expressions, not in the base library. See
https://github.com/kubernetes/kubernetes/pull/123664 for a PR which
adds it to the base library.
Validation of semver strings is done with the regular expression from
semver.org. The actual evaluation at runtime then uses semver/v4.
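Both steps can be illustrated in isolation. This is a sketch, not the actual DRA wiring; it uses the semver.org regular expression and the `github.com/blang/semver/v4` module named above.
```go
package main

import (
	"fmt"
	"regexp"

	"github.com/blang/semver/v4"
)

// The suggested regular expression from semver.org, used for the
// validation step before any parsing happens.
var semverRE = regexp.MustCompile(`^(0|[1-9]\d*)\.(0|[1-9]\d*)\.(0|[1-9]\d*)` +
	`(?:-((?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*)(?:\.(?:0|[1-9]\d*|\d*[a-zA-Z-][0-9a-zA-Z-]*))*))?` +
	`(?:\+([0-9a-zA-Z-]+(?:\.[0-9a-zA-Z-]+)*))?$`)

func main() {
	raw := "1.2.3-beta.1"

	// Validation: reject strings that are not semver before evaluating.
	if !semverRE.MatchString(raw) {
		panic("not a semver string")
	}

	// Evaluation: semver/v4 handles parsing and ordering at runtime.
	v := semver.MustParse(raw)
	minVer := semver.MustParse("1.2.0")
	fmt.Println(v.GTE(minVer)) // true: 1.2.3-beta.1 is at least 1.2.0
}
```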
Kubernetes-commit: 42ee56f093133402ed860d4c5f54b049041386c9
Like the current device plugin interface, a DRA driver using this model
announces a list of resource instances. In contrast to device plugins, this
list is made available to the scheduler together with attributes that can be
used to select suitable instances when they are not all alike.
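Conceptually, the announced data might look like the following sketch; all type and field names are assumptions, and the attribute union is simplified:
```go
package namedresources // hypothetical placement for the sketch

// NamedResourcesResources is what a driver announces for one node.
type NamedResourcesResources struct {
	// Instances lists each individual resource instance by name.
	Instances []NamedResourcesInstance `json:"instances"`
}

// NamedResourcesInstance is one selectable resource instance.
type NamedResourcesInstance struct {
	// Name must be unique among all instances announced by the driver.
	Name string `json:"name"`
	// Attributes let the scheduler pick suitable instances when
	// the instances are not all alike.
	Attributes []NamedResourcesAttribute `json:"attributes,omitempty"`
}

// NamedResourcesAttribute is a named, typed value (assumed union:
// exactly one value field is set).
type NamedResourcesAttribute struct {
	Name        string  `json:"name"`
	StringValue *string `json:"string,omitempty"`
	BoolValue   *bool   `json:"bool,omitempty"`
	IntValue    *int64  `json:"int,omitempty"`
}
```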
Because this is the first structured parameter model, some checks that
previously were not possible, in particular "is one structured parameter field
set", are now enabled. Adding another structured parameter model will be
similar.
The applyconfigs code generator assumes that all types in an API are defined in
a single package. If it weren't for that, it would be possible to place the
"named resources" types in separate packages, which would make their names in
the Go code more natural and would indicate their stability level, because the
package name could include a version.
Kubernetes-commit: d4d5ade7f5be047472f8d9572c7f01f142951a2d
* Support for the managed-by label in Job
* Use managedBy field instead of managed-by label (see the sketch after this list)
* Additional review remarks
* Review remarks 2
* Review remarks 3
* Skip cleanup of finalizers for job with custom managedBy
* Drop the performance optimization
* Improve logs
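Below is a hedged sketch of how a Job opts out of the built-in controller via the new field. `JobSpec.ManagedBy` exists in batch/v1; the controller name is made up, and the spec is trimmed for brevity.
```go
package main

import (
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/utils/ptr"
)

func main() {
	// A Job with spec.managedBy set is skipped by the built-in job
	// controller and left to the named external controller.
	// "example.com/custom-job-controller" is an illustrative value.
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "externally-managed"},
		Spec: batchv1.JobSpec{
			ManagedBy: ptr.To("example.com/custom-job-controller"),
			// PodTemplate and other required fields omitted in this sketch.
		},
	}
	fmt.Println(*job.Spec.ManagedBy)
}
```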
Kubernetes-commit: e568a77a931a1cf4239a4a5fa43e2b05bad3abdf