Codecs is already exported, but in order for tests to construct an alternate CodecFactory for meta's
internal version types, they either need to be able to reference the scheme or to construct a
parallel scheme, and constructing a parallel scheme risks going out of sync with the way the
package-scoped scheme object is initialized.
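As a minimal sketch (assuming the scheme package for meta's internal version types exports a Scheme
variable alongside Codecs; the import path and identifier here are illustrative), a test could then
layer its own CodecFactory over the same scheme instead of rebuilding a parallel one:

    package example

    import (
        metainternalversionscheme "k8s.io/apimachinery/pkg/apis/meta/internalversion/scheme"
        "k8s.io/apimachinery/pkg/runtime/serializer"
    )

    // An alternate CodecFactory constructed over the package-scoped scheme,
    // so it cannot drift from the way that scheme is initialized.
    var testCodecs = serializer.NewCodecFactory(metainternalversionscheme.Scheme)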
As with the apiserver feature gate for CBOR as a serving and storage encoding, the client feature
gates for CBOR are being initially added through a test-only feature gate instance that is not wired
to environment variables or to command-line flags and is intended only to be enabled
programmatically from integration tests. The test-only instance will be removed as part of alpha
graduation and replaced by conventional client feature gating.
To mitigate the risk of introducing a new protocol, integration tests for CBOR will be written using
a test-only feature gate instance that is not wired to runtime options. On alpha graduation, the
test-only feature gate instance will be replaced by a normal feature gate in the existing apiserver
feature gate instance.
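A minimal sketch of such a test-only feature gate instance, assuming the component-base featuregate
helpers; the gate name used here is illustrative, not the real feature name:

    package example

    import (
        "k8s.io/component-base/featuregate"
    )

    // Illustrative test-only feature name; not the actual CBOR gate.
    const testOnlyClientCBOR featuregate.Feature = "TestOnlyClientAllowsCBOR"

    // newTestOnlyFeatureGate returns a gate that is not wired to command-line
    // flags or environment variables; integration tests enable it directly.
    func newTestOnlyFeatureGate() featuregate.MutableFeatureGate {
        gate := featuregate.NewFeatureGate()
        if err := gate.Add(map[featuregate.Feature]featuregate.FeatureSpec{
            testOnlyClientCBOR: {Default: false, PreRelease: featuregate.Alpha},
        }); err != nil {
            panic(err)
        }
        return gate
    }

An integration test would then flip the gate programmatically, for example with
gate.SetFromMap(map[string]bool{string(testOnlyClientCBOR): true}), without any flag or environment
variable plumbing.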
CodecFactory construction uses an unexported struct type named "serializerType" to hold serializer
definitions. There are few differences between it and runtime.SerializerInfo, and the remaining
differences no longer appear to be used. For example, serializerType includes an unused
FileExtensions field, and it has distinct ContentType (singular) and AcceptContentTypes (plural)
fields instead of runtime.SerializerInfo's single MediaType. All remaining uses of serializerType
set AcceptContentTypes to a single-entry slice whose element is equal to its ContentType field.
During construction of a CodecFactory, all serializerType values were already being mechanically
translated into runtime.SerializerInfo values.
Moving to an exported type for serializer definitions makes it easier to expose an option to allow
callers to register their own serializer definitions, which in turn makes it possible to
conditionally include new serializers at runtime (especially behind feature gates).
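For illustration, a serializer definition expressed with the exported runtime.SerializerInfo type
might look like the following (the serializer arguments are placeholders for whatever
runtime.Serializer implementations a caller supplies):

    package example

    import (
        "k8s.io/apimachinery/pkg/runtime"
    )

    // newJSONSerializerInfo builds a definition using the exported
    // runtime.SerializerInfo type; note the single MediaType field in place of
    // serializerType's ContentType/AcceptContentTypes pair.
    func newJSONSerializerInfo(s, pretty, strict runtime.Serializer) runtime.SerializerInfo {
        return runtime.SerializerInfo{
            MediaType:        "application/json",
            MediaTypeType:    "application",
            MediaTypeSubType: "json",
            EncodesAsText:    true,
            Serializer:       s,
            PrettySerializer: pretty,
            StrictSerializer: strict,
        }
    }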
Add metrics about the sizing of the cpu pools.
Currently the cpumanager maintains 2 cpu pools:
- shared pool: this is where all pods with non-exclusive
cpu allocation run
- exclusive pool: this is the union of the set of exclusive
cpus allocated to containers, if any (requires static policy in use).
By reporting the size of the pools, users (humans or machines)
can get better insight and more feedback about how resources are
actually allocated to the workload and how the node resources are used.
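A sketch of what such metrics could look like with the component-base metrics helpers; the metric
names and help strings here are assumptions, not the final exported names:

    package example

    import (
        "k8s.io/component-base/metrics"
        "k8s.io/component-base/metrics/legacyregistry"
    )

    // Illustrative gauges for the two pools; the cpumanager would update them
    // whenever its CPU assignments change.
    var (
        sharedPoolSizeMilliCores = metrics.NewGauge(&metrics.GaugeOpts{
            Subsystem:      "kubelet",
            Name:           "cpu_manager_shared_pool_size_millicores",
            Help:           "Size of the shared CPU pool in millicores.",
            StabilityLevel: metrics.ALPHA,
        })
        exclusiveCPUAllocationCount = metrics.NewGauge(&metrics.GaugeOpts{
            Subsystem:      "kubelet",
            Name:           "cpu_manager_exclusive_cpu_allocation_count",
            Help:           "Number of CPUs exclusively allocated to containers (static policy only).",
            StabilityLevel: metrics.ALPHA,
        })
    )

    func init() {
        legacyregistry.MustRegister(sharedPoolSizeMilliCores, exclusiveCPUAllocationCount)
    }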
As pointed out during code review, the CEL cost estimates are not considered
perfectly reliable. Therefore it is better to also do runtime checks.
Some downstream users might decide to allow CEL expressions to run
longer. Therefore the cost limit is now part of an Options struct.
kube-scheduler uses the default cost limit defined in the resource.k8s.io API,
which is the same cost limit that the apiserver also uses during validation.
Expression evaluation is benchmarked in all scenarios where compilation
succeeds. A pending optimization in another PR caches compiled expressions, so
compilation time will become less important. What matters is the actual
evaluation.
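A rough sketch of the runtime check with cel-go, assuming a hypothetical Options struct whose
CostLimit field is threaded through to the compiled program (the struct, field names, variable
declaration, and limit value are all illustrative):

    package main

    import (
        "fmt"

        "github.com/google/cel-go/cel"
    )

    // Options carries tunables that downstream users may want to override;
    // the struct and field names here are assumptions for illustration.
    type Options struct {
        CostLimit uint64 // maximum actual cost allowed during evaluation
    }

    func main() {
        env, err := cel.NewEnv(
            cel.Variable("attributes", cel.MapType(cel.StringType, cel.DynType)),
        )
        if err != nil {
            panic(err)
        }

        ast, iss := env.Compile(`attributes["model"] == "a100"`)
        if iss != nil && iss.Err() != nil {
            panic(iss.Err())
        }

        // The estimate produced at compile time is advisory; cel.CostLimit
        // enforces a hard stop once the tracked cost exceeds the limit at
        // evaluation time.
        opts := Options{CostLimit: 1000} // illustrative, not the real default
        prg, err := env.Program(ast, cel.CostLimit(opts.CostLimit))
        if err != nil {
            panic(err)
        }

        out, _, err := prg.Eval(map[string]any{
            "attributes": map[string]any{"model": "a100"},
        })
        fmt.Println(out, err)
    }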
In DRA, the cost check is done only at validation time. At runtime, any
expression that passed validation gets executed without being interrupted. The
advantage is that it becomes easier to change the limit because stored
expressions do not suddenly fail after an up- or downgrade. The limit could
even become a configuration parameter of the apiserver because that is the only
place where the limit gets checked.