* add quantity library to CEL
* add more tests to quantity
* use 1.29 env for quantity
* set CEL default env to 1.28 for 1.28 release
* add compare function
* docs and arith lib
* fixup addInt and subInt overload, add docs
* more tests
* cleanup docs
* remove old comments
* remove unnecessary cast
* add isInteger
* add overflow tests
* boilerplate
* refactor expectedResult for tests
* doc typo fix
* returns bool
* add docs link
* different docs link
* add isInteger true case
* expand iff
* add quantity back to 1.28 version, and revert change to DefaultCompatibilityVersion
* formatting
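The helpers above are built on k8s.io/apimachinery's resource.Quantity type. As a rough illustration of the underlying semantics only (not the CEL bindings themselves, and with arbitrary example values), comparison, add/sub arithmetic, and an isInteger-style check map to Quantity calls like these:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	a := resource.MustParse("1500Mi")
	b := resource.MustParse("1Gi")

	// Comparison: Cmp returns -1, 0, or 1, which is what compare/greater-than
	// style helpers are built on.
	fmt.Println("a.Cmp(b) =", a.Cmp(b))

	// Arithmetic: Add/Sub mutate the receiver, so work on a copy.
	sum := a.DeepCopy()
	sum.Add(b)
	fmt.Println("a + b =", sum.String())

	// Integer check: AsInt64 reports whether the value fits a whole int64,
	// the same kind of question an isInteger helper answers.
	if v, ok := a.AsInt64(); ok {
		fmt.Println("a as integer:", v)
	} else {
		fmt.Println("a is not representable as int64")
	}
}
```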
* [API REVIEW] ValidatingAdmissionPolicyStatusController config: worker count.
* ValidatingAdmissionPolicyStatus controller.
* remove CEL typechecking from API server.
* fix initializer tests.
* remove type checking integration tests from API server integration tests.
* validatingadmissionpolicy-status options.
* grant access to VAP controller.
* add defaulting unit test.
* generated: ./hack/update-codegen.sh
* add OWNERS for VAP status controller.
* type checking test case.
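Since type checking moves off the API server's request path and into this controller, the sync loop's shape is roughly what the hypothetical sketch below shows: per policy, run the type checker over its CEL expressions and publish the resulting warnings via the policy status. Every name in the sketch (policyStatusSyncer, typeCheck, update, the field names) is made up for illustration and is not the controller's actual code.

```go
package main

import "fmt"

// expressionWarning mirrors the idea of a per-expression type-checking
// warning recorded in the policy status; field names are assumptions.
type expressionWarning struct {
	FieldRef string
	Warning  string
}

// policyStatusSyncer is a hypothetical stand-in for the status controller:
// it type-checks a policy's CEL expressions and publishes the warnings.
type policyStatusSyncer struct {
	typeCheck func(expr string) []expressionWarning
	update    func(policy string, warnings []expressionWarning) error
}

func (s *policyStatusSyncer) syncPolicy(policy string, expressions []string) error {
	var warnings []expressionWarning
	for _, expr := range expressions {
		warnings = append(warnings, s.typeCheck(expr)...)
	}
	return s.update(policy, warnings)
}

func main() {
	s := &policyStatusSyncer{
		// Fake type checker that flags every expression, just to show the flow.
		typeCheck: func(expr string) []expressionWarning {
			return []expressionWarning{{
				FieldRef: "spec.validations[0].expression",
				Warning:  "no matching overload for " + expr,
			}}
		},
		update: func(policy string, warnings []expressionWarning) error {
			fmt.Println(policy, warnings)
			return nil
		},
	}
	_ = s.syncPolicy("example-policy", []string{"object.spec.replicas > 1"})
}
```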
When someone decides that a Pod should definitely run on a specific node, they
can create the Pod with spec.nodeName already set. Some custom scheduler might
do that. The kubelet then starts checking the pod and (if DRA is enabled) will
refuse to run it, either because the claims are still waiting for their first
consumer or because the pod wasn't added to reservedFor. Both are things the
scheduler normally does.
A pod can also reach the same state if it got scheduled while the DRA feature
was turned off in the kube-scheduler.
The resource claim controller can handle these two cases by taking over for the
kube-scheduler when nodeName is set. Triggering an allocation is simpler than
in the scheduler because all it takes is creating the right
PodSchedulingContext with spec.selectedNode set. There's no need to list nodes
because that choice was already made, permanently. Adding the pod to
reservedFor also isn't hard.
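As a rough illustration of that takeover, the sketch below creates a PodSchedulingContext whose spec.selectedNode is the pod's spec.nodeName, using the resource.k8s.io/v1alpha2 types DRA used at the time. The helper name and client wiring are assumptions, and the owner reference that ties the object to the pod is omitted.

```go
package resourceclaim

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensurePodSchedulingContext (name assumed) creates a PodSchedulingContext for
// a pod that already has spec.nodeName set, so DRA drivers know which node was
// chosen without involving the kube-scheduler.
func ensurePodSchedulingContext(ctx context.Context, client kubernetes.Interface, pod *corev1.Pod) error {
	scheduling := &resourcev1alpha2.PodSchedulingContext{
		ObjectMeta: metav1.ObjectMeta{
			Name:      pod.Name,
			Namespace: pod.Namespace,
		},
		Spec: resourcev1alpha2.PodSchedulingContextSpec{
			// The node was chosen by whoever set spec.nodeName, so there is
			// no list of potential nodes, only the selected one.
			SelectedNode: pod.Spec.NodeName,
		},
	}
	_, err := client.ResourceV1alpha2().PodSchedulingContexts(pod.Namespace).Create(ctx, scheduling, metav1.CreateOptions{})
	return err
}
```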
What's currently missing is triggering deallocation of claims to re-allocate
them for the desired node. This is not important for claims that get created
for the pod from a template and then only get used once, but it might be
worthwhile to add deallocation in the future.
The allocation mode is relevant when clearing the reservedFor list: for delayed
allocation, deallocation gets requested; for immediate allocation it is not.
Both cases should get tested.
All pre-defined claims now use delayed allocation, just as they would if
created normally.
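A minimal sketch of that distinction, assuming the v1alpha2 ResourceClaim types and a made-up helper name: clearing a reservation only requests deallocation for claims that use delayed (WaitForFirstConsumer) allocation.

```go
package resourceclaim

import (
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
)

// clearReservation (name assumed) removes the reservedFor entries from a
// claim's status and, for delayed allocation only, asks the driver to
// deallocate so the claim can later be re-allocated for the desired node.
// Immediate allocations stay allocated.
func clearReservation(claim *resourcev1alpha2.ResourceClaim) {
	claim.Status.ReservedFor = nil
	if claim.Spec.AllocationMode == resourcev1alpha2.AllocationModeWaitForFirstConsumer {
		claim.Status.DeallocationRequested = true
	}
}
```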
Enabling logging is useful to track what the code is doing.
There are some functional changes:
- The pod handler checks for the existence of claims. This avoids adding pods
  to the work queue in more cases when nothing needs to be done, at the cost of
  making the event handlers a bit slower. This will become more important when
  adding more work to the controller.
- The handler for deleted ResourceClaims did not check for
  cache.DeletedFinalStateUnknown; see the sketch below for that tombstone
  handling.
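The tombstone handling mentioned in the last item follows the usual client-go informer pattern; the sketch below is a generic version with assumed function and parameter names, not the controller's actual handler.

```go
package resourceclaim

import (
	resourcev1alpha2 "k8s.io/api/resource/v1alpha2"
	"k8s.io/client-go/tools/cache"
)

// handleClaimDelete copes with both a plain deleted object and the
// cache.DeletedFinalStateUnknown tombstone that informers deliver when the
// final state of a deleted object was missed.
func handleClaimDelete(obj interface{}, enqueue func(*resourcev1alpha2.ResourceClaim)) {
	claim, ok := obj.(*resourcev1alpha2.ResourceClaim)
	if !ok {
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			return // neither a ResourceClaim nor a tombstone; ignore
		}
		claim, ok = tombstone.Obj.(*resourcev1alpha2.ResourceClaim)
		if !ok {
			return
		}
	}
	// Hand the claim to the work queue logic (details assumed).
	enqueue(claim)
}
```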
* Fix lifecycle generator to check the version correctly
* Fix file header
---------
Co-authored-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>