Some code owners might want this for specific packages, like cmd/kubeadm.
This cannot be enabled for everything because:
- a lot of existing code doesn't pass (-> can't be in base config)
- a lot of packages don't need it (-> shouldn't even be a hint)
Today, the DRA manager does not call the plugin's NodePrepareResource
for claims that it previously handled successfully, that is, for
claims that are present in the cache (checkpoint), even if the node
has rebooted.
After a node reboot, the DRA plugin must be called again for
resource claims so that plugins can prepare them anew, in case the
resources don't persist across a reboot.
To achieve that, once kubelet has started, we call the DRA plugins
for a claim once more if a pod sandbox needs to be created during
PodSync.
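A minimal sketch of the intended flow, assuming an illustrative
claimInfo-style cache and plugin interface (the real kubelet types
differ):

```go
package dra

import "fmt"

// Illustrative types only; the real kubelet DRA manager restores its
// claimInfo cache from a checkpoint file on startup.
type claim struct{ namespace, name string }

type entry struct {
	// prepared is true only if NodePrepareResource succeeded during
	// *this* kubelet run; entries restored from the checkpoint after
	// a reboot start out with prepared=false.
	prepared bool
}

type plugin interface {
	NodePrepareResource(c claim) error
}

type manager struct {
	cache  map[claim]*entry
	plugin plugin
}

// prepareResources runs when a pod sandbox has to be created during
// PodSync, so after a node reboot every cached claim is prepared again.
func (m *manager) prepareResources(claims []claim) error {
	for _, c := range claims {
		e, ok := m.cache[c]
		if ok && e.prepared {
			continue // already prepared during this kubelet run
		}
		if err := m.plugin.NodePrepareResource(c); err != nil {
			return fmt.Errorf("preparing claim %s/%s: %w", c.namespace, c.name, err)
		}
		if !ok {
			e = &entry{}
			m.cache[c] = e
		}
		e.prepared = true
	}
	return nil
}
```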
Signed-off-by: adrianc <adrianc@nvidia.com>
- Add the new file name super-admin.conf and a function that returns
  its default path: GetSuperAdminKubeConfigPath() (sketched below).
- Add the ClusterAdminsGroupAndClusterRoleBinding object name.
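A sketch of what such a helper typically looks like in kubeadm's
constants package; the directory constant and lower-case names here
are assumptions, not the actual source:

```go
package constants

import "path/filepath"

const (
	// kubernetesDir is assumed here; kubeadm keeps its kubeconfig
	// files under a well-known directory.
	kubernetesDir = "/etc/kubernetes"
	// superAdminKubeConfigFileName is the new file name.
	superAdminKubeConfigFileName = "super-admin.conf"
)

// GetSuperAdminKubeConfigPath returns the default path to
// super-admin.conf, mirroring the existing helpers for admin.conf.
func GetSuperAdminKubeConfigPath() string {
	return filepath.Join(kubernetesDir, superAdminKubeConfigFileName)
}
```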
Use a device-mountable volume to make it impossible to share the same
global mount with different SELinux contexts.
And fix pod2Name to actually refer to pod2.
volume_manager_selinux_volume_context_mismatch_warnings_total should be
counted only once per volume + pod. The previous location is evaluated
periodically, so bump the metric only when a new pod is added to a
volume.
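A sketch of the once-per-(volume, pod) accounting, with illustrative
types and a callback standing in for the metric bump:

```go
package cache

// volume is an illustrative stand-in for the volume manager's
// actual-state-of-world entry; pods must be initialized before use.
type volume struct {
	seLinuxContext string          // context used for the global mount
	pods           map[string]bool // pods already added to this volume
}

// addPod bumps the mismatch metric only on the transition where the
// pod is first added to the volume, so a periodically re-evaluated
// code path can no longer inflate the counter.
func (v *volume) addPod(podName, podContext string, bumpMismatch func()) {
	if v.pods[podName] {
		return // already tracked: never double-count
	}
	v.pods[podName] = true
	if podContext != v.seLinuxContext {
		bumpMismatch() // counted exactly once per volume + pod
	}
}
```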
The in-tree configs use a relative path to find logcheck.so. This is useful
because then the invocation of golangci-lint also works outside of the script.
But when running with a containerized build, GOBIN points somewhere else. For
that case, a temporary copy of the configuration has to be created with an
absolute path.
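A rough sketch of the idea in Go (the actual logic lives in a shell
script; the relative plugin path below is a placeholder, not the real
config value):

```go
package lintconfig

import (
	"os"
	"path/filepath"
	"strings"
)

// relPluginPath is a placeholder for the relative logcheck.so path
// used by the in-tree golangci-lint configs.
const relPluginPath = "../_output/local/bin/logcheck.so"

// absoluteConfig writes a temporary copy of the config in which the
// relative plugin path is replaced with an absolute one under GOBIN,
// for use in containerized builds.
func absoluteConfig(configPath, gobin string) (string, error) {
	data, err := os.ReadFile(configPath)
	if err != nil {
		return "", err
	}
	patched := strings.ReplaceAll(string(data), relPluginPath,
		filepath.Join(gobin, "logcheck.so"))
	tmp, err := os.CreateTemp("", "golangci-*.yaml")
	if err != nil {
		return "", err
	}
	defer tmp.Close()
	if _, err := tmp.WriteString(patched); err != nil {
		return "", err
	}
	return tmp.Name(), nil
}
```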
Avoids starting informers or the config-consuming controller when
--enable-priority-and-fairness=false. For kube-apiserver, the config-producing controller runs if
and only if flowcontrol API storage is enabled.
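Sketched gating logic with illustrative names (the actual wiring lives
in the apiserver setup code):

```go
package apiserver

// config is an illustrative stand-in for the server's completed config.
type config struct {
	enablePriorityAndFairness bool // --enable-priority-and-fairness
	flowcontrolStorageEnabled bool // flowcontrol API storage on/off
}

func (c *config) start(startInformers, runConsumer, runProducer func()) {
	if c.enablePriorityAndFairness {
		// The informers are only needed by the config-consuming
		// controller, so neither starts when APF is disabled.
		startInformers()
		runConsumer()
	}
	if c.flowcontrolStorageEnabled {
		// kube-apiserver runs the config-producing controller iff
		// flowcontrol API storage is enabled.
		runProducer()
	}
}
```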
Currently, the downward API tests flake on Windows with a failure
to allocate memory when starting the agnhost binary used in these
tests. The tests are spawning pods with a memory limit of 64MB,
which is a bit on the low side for a Windows Pod, even if it's
a nanoserver-based image.
Increases the memory limit to 128MB; the primary goal of the tests
is not to enforce and test the limits, but to check whether these
details are projected into the Pod.
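Roughly what the bumped pod resources look like in the e2e test (the
request value and helper name here are illustrative):

```go
package e2e

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// downwardAPIResources returns the test pod's resources after the
// bump; the limit exists so its value can be projected into the pod,
// not to exercise memory enforcement.
func downwardAPIResources() v1.ResourceRequirements {
	return v1.ResourceRequirements{
		Requests: v1.ResourceList{
			v1.ResourceMemory: resource.MustParse("64Mi"), // illustrative
		},
		Limits: v1.ResourceList{
			v1.ResourceMemory: resource.MustParse("128Mi"),
		},
	}
}
```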
* cleanup: refactor pod replacement policy integration test into staged assertion
* cleanup: fix typo in job_test.go
* refactor PodReplacementPolicy test and remove test for defaulting the policy
* fix issue with missing update in job controller for terminating status and refactor pod replacement policy integration test
* use t.Cleanup instead of defer in PodReplacementPolicy integration tests
* revert t.Cleanup to defer for resetting the feature flag in PodReplacementPolicy integration tests (see the sketch after this list)
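For the last item, a sketch of the defer-based reset, assuming the
component-base helper still returns a restore func (as it did at the
time); one property of defer that matters here is that deferred calls
run before any t.Cleanup callbacks:

```go
package job_test

import (
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestPodReplacementPolicy(t *testing.T) {
	// The returned restore func runs when this test function returns,
	// before any t.Cleanup callbacks registered by shared fixtures.
	defer featuregatetesting.SetFeatureGateDuringTest(t,
		utilfeature.DefaultFeatureGate, features.JobPodReplacementPolicy, true)()

	// ... staged assertions of the integration test go here ...
}
```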