The current dry-run client implementation is suboptimal
and incomplete. It has the following problems:
- When an object CREATE or UPDATE reaches the default dry-run client,
the operation is a NO-OP, which means subsequent GET calls must
fully emulate the object that would exist in the store (see the
sketch after this list).
- There are multiple implementations of the DryRunGetter interface,
such as the one in init_dryrun.go, but there are no implementations
for reset, upgrade and join.
- There is a specific DryRunGetter backed by a real client
in clientbacked_dryrun.go, but it is used only for upgrade
and does not work in conjunction with a fake client.
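For illustration, a minimal sketch of that failure mode using
client-go's fake clientset and a write-swallowing reactor (this is
not the removed kubeadm code; the ConfigMap and names are
illustrative):

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
	clienttesting "k8s.io/client-go/testing"
)

func main() {
	client := fake.NewSimpleClientset()
	// A CREATE handled as a NO-OP never reaches the fake object tracker.
	client.PrependReactor("create", "*",
		func(clienttesting.Action) (bool, runtime.Object, error) {
			return true, nil, nil // NO-OP: swallow the write
		})
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "kubeadm-config"}}
	_, _ = client.CoreV1().ConfigMaps("kube-system").Create(context.TODO(), cm, metav1.CreateOptions{})
	// This GET fails with NotFound, so the caller has to emulate the
	// object that a real CREATE would have stored.
	_, err := client.CoreV1().ConfigMaps("kube-system").Get(context.TODO(), cm.Name, metav1.GetOptions{})
	_ = err
}
```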
This commit makes the following changes:
- Removes all existing *dryrun*.go implementations.
- Adds a new DryRun implementation in dryrun.go that manages
three clients: a fake clientset, a real clientset and a real dynamic client.
- The DryRun object uses the method chaining pattern.
- Allows the user to opt in to real clients only if needed, by passing
a real kubeconfig. By default, only a fake client is constructed.
- The default reactor chain for the fake client always logs the
object action first; for GET or LIST actions it then attempts to
fetch the object with the real dynamic client. If a real object does
not exist, it falls back to the fake object store (see the sketch
after this list).
- The user can prepend or append reactors to the chain.
- All reactors known to be needed during init, join,
reset and upgrade are added as methods on the DryRun struct.
- Adds detailed unit tests for the DryRun struct and its methods,
including the reactors.
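As an illustration of the default chain described above, here is a
minimal sketch built directly on client-go's testing reactors; the
actual DryRun type in dryrun.go wraps this differently, and the
conversion of the fetched object is elided:

```go
package dryrunsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/kubernetes/fake"
	clienttesting "k8s.io/client-go/testing"
	"k8s.io/klog/v2"
)

// Sketch only: reactors run in order, and returning handled=false falls
// through to the next reactor and finally to the fake object tracker.
func newFakeClientWithFallthrough(dyn dynamic.Interface) *fake.Clientset {
	client := fake.NewSimpleClientset()
	client.PrependReactor("get", "*",
		func(action clienttesting.Action) (bool, runtime.Object, error) {
			get := action.(clienttesting.GetAction)
			// Always log the action first.
			klog.InfoS("dry-run GET", "resource", get.GetResource().Resource,
				"namespace", get.GetNamespace(), "name", get.GetName())
			if dyn != nil { // real clients were opted into
				obj, err := dyn.Resource(get.GetResource()).Namespace(get.GetNamespace()).
					Get(context.TODO(), get.GetName(), metav1.GetOptions{})
				if err == nil {
					// NOTE: real code must convert this unstructured object
					// to the typed object the caller expects.
					return true, obj, nil
				}
			}
			return false, nil, nil // fall back to the fake object store
		})
	return client
}
```

Construction of the DryRun object itself would chain in the obvious
way, e.g. something of the shape
dryrun.NewDryRun().WithKubeConfigFile(path).PrependReactor(r); those
method names are hypothetical, not the actual API.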
Additional changes:
- Use the new DryRun implementation in all command workflows:
init, join, reset and upgrade.
- Ensure that --dry-run works even if there is no active cluster,
by returning faked objects. For join, a faked cluster-info
ConfigMap with a fake bootstrap token and CA is used (see the
sketch after this list).
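For reference, a sketch of the shape of such a faked cluster-info
object (the values and the fakeKubeconfigWithCA variable are
placeholders, not the actual kubeadm construction):

```go
// cluster-info lives in the kube-public namespace and carries a
// kubeconfig that embeds the cluster CA; kubeadm also signs that
// payload per bootstrap token. Here both the CA and token are fakes.
clusterInfo := &corev1.ConfigMap{
	ObjectMeta: metav1.ObjectMeta{
		Name:      "cluster-info",
		Namespace: metav1.NamespacePublic, // "kube-public"
	},
	Data: map[string]string{
		"kubeconfig": fakeKubeconfigWithCA, // placeholder: kubeconfig with a fake CA
	},
}
```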
Fix the error message when availablePhysicalCPUs == 0.
Without this change, the logic mistakenly emitted
the old error message, which is confusing when troubleshooting.
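A sketch of the intended ordering of the checks (variable names and
message text are illustrative, not the exact kubelet code):

```go
// Report "no free physical CPUs" explicitly instead of falling through
// to the generic alignment error, which pointed at the wrong cause.
if availablePhysicalCPUs == 0 {
	return fmt.Errorf("not enough free physical CPUs: requested=%d, available=0", numCPUs)
}
// otherwise the pre-existing error paths still apply
```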
Plus, a tiny quality-of-life improvement:
the cpumanager static policy wants to use `cpuGroupSize` in multiple
places. The value represents how many vCPUs per pCPU the machine has.
So, let's cache (and log!) the value in the policy data; see the
sketch below. We don't support dynamic updates of the HW topology anyway.
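A sketch of the caching, assuming the topology object exposes the
vCPUs-per-pCPU ratio via a CPUsPerCore()-style accessor; the field
and function names here are illustrative:

```go
type staticPolicy struct {
	// ...existing fields elided...
	cpuGroupSize int // cached vCPUs-per-pCPU ratio; the HW topology is static
}

func newStaticPolicy(topo *topology.CPUTopology) *staticPolicy {
	cpuGroupSize := topo.CPUsPerCore()
	// Log once at construction so the value shows up when troubleshooting.
	klog.InfoS("static policy created", "cpuGroupSize", cpuGroupSize)
	return &staticPolicy{cpuGroupSize: cpuGroupSize}
}
```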
Signed-off-by: Francesco Romani <fromani@redhat.com>
* A pod with a restartable init container that exits with
a non-zero code is marked with the Succeeded pod phase
(a spec sketch for such pods follows this list)
* A pod with restartable init containers that exit with
a non-zero code via the PreStop hook is marked with the Succeeded pod phase
* A pod with a regular container that exceeds its termination grace
period seconds is marked with the Failed pod phase
* A pod with restartable init containers that exceed their termination
grace period seconds is marked with the Succeeded pod phase
* A pod with a regular container that exceeds its termination grace
period seconds via the PreStop hook is marked with the Failed pod phase
* A pod with restartable init containers that exceed their termination
grace period seconds via the PreStop hook is marked with the Succeeded pod phase
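For context, a restartable init container is an init container with
restartPolicy: Always (the sidecar pattern). A minimal sketch of such
a pod spec; the names and image are placeholders, not the actual test
fixtures:

```go
restartAlways := corev1.ContainerRestartPolicyAlways
grace := int64(5)
pod := &corev1.Pod{
	ObjectMeta: metav1.ObjectMeta{Name: "pod-with-restartable-init"},
	Spec: corev1.PodSpec{
		RestartPolicy:                 corev1.RestartPolicyNever,
		TerminationGracePeriodSeconds: &grace,
		InitContainers: []corev1.Container{{
			Name:          "restartable-init",
			Image:         "busybox", // placeholder
			RestartPolicy: &restartAlways, // this makes the init container restartable
		}},
		Containers: []corev1.Container{{
			Name:  "regular",
			Image: "busybox", // placeholder
		}},
	},
}
```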
Signed-off-by: Tsubasa Nagasawa <toversus2357@gmail.com>