* feat: add func for generating register code
* refactor: remove unused local variable
* fix: make the function name singular
* fix: precisely match the comment tag for register-gen
---------
Signed-off-by: Lin Yang <reaver@flomesh.io>
When replacing the deterministic ResourceClaim name with a generated one,
this particular piece of the original validation was incorrectly left in
place. It is no longer required that "<pod name>-<claim name in pod spec>"
be a valid ResourceClaim name.
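For context, a minimal sketch of how a claim can be created with a generated
name (the field usage here is an assumption, not the actual controller code):

import (
    resourceapi "k8s.io/api/resource/v1alpha2"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// The concatenation is only a GenerateName prefix now: the apiserver
// truncates overlong prefixes and appends a random suffix, so the result
// no longer has to be a valid ResourceClaim name on its own.
claim := &resourceapi.ResourceClaim{
    ObjectMeta: metav1.ObjectMeta{
        Namespace:    pod.Namespace,
        GenerateName: pod.Name + "-" + podClaim.Name + "-",
    },
}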
Currently, some unit tests fail on Windows for various reasons:
- IPVS proxy mode is not supported on Windows.
- pkg/kubelet/cri/remote was moved to cri-client.
This makes the API nicer:
resourceClaims:
- name: with-template
  resourceClaimTemplateName: test-inline-claim-template
- name: with-claim
  resourceClaimName: test-shared-claim
Previously, this was:
resourceClaims:
- name: with-template
  source:
    resourceClaimTemplateName: test-inline-claim-template
- name: with-claim
  source:
    resourceClaimName: test-shared-claim
A longer-term benefit is that other, future alternatives
might not fit under the "source" umbrella.
This is a breaking change. It's justified because DRA is still
alpha and will have several other API breaks in 1.31.
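Rendered as Go types, the change looks roughly like this (a sketch derived
from the YAML above, not copied from the actual API source):

// Before: the two alternatives were wrapped in a "source" struct.
//
//    type PodResourceClaim struct {
//        Name   string
//        Source ClaimSource
//    }
//
//    type ClaimSource struct {
//        ResourceClaimName         *string
//        ResourceClaimTemplateName *string
//    }

// After: the two mutually exclusive fields are inlined directly.
type PodResourceClaim struct {
    Name                      string
    ResourceClaimName         *string
    ResourceClaimTemplateName *string
}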
The JSON patch approach works, but it is complex. A retry loop is easier to
understand (detect conflict, get new claim, try again). There is one additional
API call (the get), but in practice this scenario is unlikely.
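A minimal sketch of such a loop with client-go (the v1alpha2 clientset group
and the surrounding variables are assumptions):

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/util/retry"
)

// Each attempt fetches the latest claim (the one additional API call
// mentioned above) and retries the update whenever it hits a 409 Conflict.
err := retry.RetryOnConflict(retry.DefaultRetry, func() error {
    claim, err := clientset.ResourceV1alpha2().ResourceClaims(namespace).Get(
        ctx, claimName, metav1.GetOptions{})
    if err != nil {
        return err
    }
    claim.Status.ReservedFor = append(claim.Status.ReservedFor, podRef)
    _, err = clientset.ResourceV1alpha2().ResourceClaims(namespace).UpdateStatus(
        ctx, claim, metav1.UpdateOptions{})
    return err
})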
There was a race caused by having to update claim finalizer and status in two
different operations:
- The resource claim controller removes the allocation but does not yet
  get to remove the finalizer.
- The scheduler prepares an allocation without adding the finalizer,
  because it is already present.
- The controller removes the finalizer.
- The scheduler adds the allocation.
This is an invalid state. Automatic checking found this during the execution of
the "with translated parameters on single node.*supports sharing a claim
sequentially" E2E test, but only when run stand-alone. When running in
parallel (as in the CI), the bad outcome of the race did not occur.
The fix is to check that the finalizer is still set when adding the
allocation. The apiserver doesn't check that because it doesn't know which
finalizer goes with the allocation result. It could check for "some finalizer",
but that is not guaranteed to be correct (could be some unrelated one).
Checking the finalizer can only be done with a JSON patch. Despite the
complications, having the ability to add multiple pods concurrently to
ReservedFor seems worth it (avoids expensive rescheduling or a local retry
loop).
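For illustration, the guarded patch can look roughly like this (the finalizer
name, its index, and allocationJSON are placeholders, not the real values):

import (
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
)

// The "test" op makes the entire patch fail if the finalizer was removed in
// the meantime. Because a patch carries no ResourceVersion, concurrent
// patches that only touch different parts of the status do not conflict.
patch := []byte(`[
  {"op": "test", "path": "/metadata/finalizers/0",
   "value": "example.com/delete-protection"},
  {"op": "add", "path": "/status/allocation", "value": ` + string(allocationJSON) + `}
]`)
claim, err := clientset.ResourceV1alpha2().ResourceClaims(namespace).Patch(
    ctx, claimName, types.JSONPatchType, patch, metav1.PatchOptions{}, "status")

Reserving a claim for an additional pod works the same way, with an add op
that appends to /status/reservedFor/-.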
The resource claim controller doesn't need this; it can do a normal update,
which implicitly checks the ResourceVersion.
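By contrast, a plain status update suffices on the controller side (again a
sketch with assumed variables):

// claim still carries the ResourceVersion from when it was read. If anything
// modified the claim since then, the apiserver rejects this update with a
// 409 Conflict and the controller simply retries with a fresh object.
claim.Status.Allocation = nil
_, err := clientset.ResourceV1alpha2().ResourceClaims(namespace).UpdateStatus(
    ctx, claim, metav1.UpdateOptions{})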