* migrated pod-security-admission to contextual logging (see the sketch after this list)
Signed-off-by: Naman <namanlakhwani@gmail.com>
* updating test files for contextual logging
Signed-off-by: Naman <namanlakhwani@gmail.com>
* small nit
Signed-off-by: Naman <namanlakhwani@gmail.com>
* doing inline if
Signed-off-by: Naman <namanlakhwani@gmail.com>
---------
Signed-off-by: Naman <namanlakhwani@gmail.com>
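For reference, a minimal sketch of the contextual logging pattern this migration applies; the function and call site below are hypothetical, only the `klog.FromContext` pattern is the point:

```go
package podsecurity

import (
	"context"

	"k8s.io/klog/v2"
)

// validateNamespace is a hypothetical call site; the actual functions in
// pod-security-admission differ. Instead of calling the global klog
// functions, the logger is taken from the context so callers (and tests)
// can inject names and key/value pairs.
func validateNamespace(ctx context.Context, namespace string) {
	logger := klog.FromContext(ctx)
	logger.V(4).Info("Validating namespace", "namespace", namespace)
}
```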
This moves the hack/ directory and scripts to the examples dir, which is
a distinct module. This avoids some Go unpleasantness around module
boundaries and just makes more sense.
When running this script more than once on Debian and Ubuntu, we fail to
chown -R the CERT_DIR because this file is owned by root while the CERT_DIR
is owned by the unprivileged user running the script.
Let's remove the file, which we can always do, before generating the certs.
This fixes the problem on Debian and Ubuntu local setups.
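The actual change is in the shell script; as a rough Go sketch of the same idea (the file name below is a placeholder, not the real file the script removes):

```go
package main

import (
	"os"
	"path/filepath"
)

func main() {
	certDir := os.Getenv("CERT_DIR")
	// "stale-cert-file" is a placeholder name. Removing the previously
	// generated root-owned file before regenerating certs lets the later
	// chown -R of CERT_DIR by the unprivileged user succeed.
	stale := filepath.Join(certDir, "stale-cert-file")
	if err := os.Remove(stale); err != nil && !os.IsNotExist(err) {
		panic(err)
	}
}
```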
Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
By generating the unique name in advance, the label also can be set to a
matching value directly in the Create request. This makes test startup in
test/integration/scheduler_perf a bit faster because the extra patching can be
avoided.
It also leads to a better label because previously, the unique label value
didn't match the node name. This is required for simulating dynamic resource
allocation, which relies on the label to track where an allocated claim is
available.
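Roughly, the creation now looks like this (a sketch with client-go; the name prefix and label key are placeholders, not the ones scheduler_perf actually uses):

```go
package benchmark

import (
	"context"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createNodeWithLabel generates the unique node name up front so the matching
// label can be set directly in the Create request, instead of creating the
// node with a generated name and patching the label afterwards.
func createNodeWithLabel(ctx context.Context, client kubernetes.Interface, index int) (*v1.Node, error) {
	name := fmt.Sprintf("perf-node-%d", index)
	node := &v1.Node{
		ObjectMeta: metav1.ObjectMeta{
			Name:   name,
			// The label value now matches the node name exactly, which is
			// what the DRA simulation relies on to track claim availability.
			Labels: map[string]string{"example.com/node-instance": name},
		},
	}
	return client.CoreV1().Nodes().Create(ctx, node, metav1.CreateOptions{})
}
```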
The pod worker may receive a new pod which is marked as terminal in the
runtime cache. This can occur if a pod is marked as terminal and the
kubelet is restarted.
The kubelet needs to drive these pods through the termination state
machine. If, upon restart, the kubelet receives a pod which is terminal
based on the runtime cache, it indicates that the pod finished
`SyncTerminatingPod` but did not complete `SyncTerminatedPod`. The
pod worker needs to ensure that `SyncTerminatedPod` will run on these pods.
To accomplish this, set `finished=false` on the pod sync status to
drive the pod through the rest of the state machine.
This will ensure that the status manager and other kubelet subcomponents
(e.g. the volume manager) are aware of this pod and properly clean up
all of the pod's resources after the kubelet is restarted.
While making this change, also update the comments to provide a bit more
background on why the kubelet needs to read the runtime pod cache
for newly synced terminal pods.
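A simplified sketch of the idea; the type and method below only approximate the pod worker's real state tracking:

```go
package kubeletpodworkers

// podSyncStatus is a trimmed-down stand-in for the pod worker's per-pod
// state; the real struct has many more fields.
type podSyncStatus struct {
	// finished is true once SyncTerminatedPod has completed for this pod.
	finished bool
}

// markPendingTerminatedWork captures the idea: if the runtime cache already
// reports the pod as terminal when the worker first sees it (e.g. right
// after a kubelet restart), SyncTerminatingPod already ran, but
// SyncTerminatedPod has not. Keeping finished=false drives the pod through
// the remaining steps of the state machine so the status manager, volume
// manager, etc. clean up its resources.
func (s *podSyncStatus) markPendingTerminatedWork(terminalInRuntimeCache bool) {
	if terminalInRuntimeCache {
		s.finished = false
	}
}
```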
Signed-off-by: David Porter <david@porter.me>
Add a regression test for https://issues.k8s.io/116925. The test
exercises the following:
1) Start a restart-never pod which will exit with the
`v1.PodSucceeded` phase.
2) Start a graceful deletion of the pod (set a deletion timestamp).
3) Restart the kubelet as soon as the kubelet reports the pod is
terminal (but before the pod is deleted).
4) Verify that after kubelet restart, the pod is deleted.
As of v1.27, there is a delay between the pod being marked with a terminal
phase and the status manager deleting the pod. If the kubelet is
restarted in the middle of that window, then after starting up again it
needs to ensure the pod is deleted from the API server.
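Step 4 amounts to a wait like the one below (a sketch using client-go; the actual test uses the e2e node framework helpers and restarts the kubelet out of band):

```go
package e2enode

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodDeleted covers step 4: after the kubelet has been restarted
// (steps 1-3 are driven by the test harness), the pod must eventually be
// gone from the API server.
func waitForPodDeleted(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			_, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if apierrors.IsNotFound(err) {
				return true, nil
			}
			return false, err
		})
}
```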
Signed-off-by: David Porter <david@porter.me>
It makes sense to define a test where, depending on the parameters, some
operation creates zero pods, namespaces, or nodes. The validation didn't allow
that previously because of how it was implemented, although the underlying
code works fine with a count of zero.
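The relaxed check is essentially the following (illustrative names, not the actual scheduler_perf code):

```go
package benchmark

import "fmt"

// validateCount now only rejects negative values: zero is a valid way for a
// test case to say that an operation creates no pods, namespaces, or nodes.
func validateCount(field string, count int) error {
	if count < 0 {
		return fmt.Errorf("%s: count must not be negative, got %d", field, count)
	}
	return nil
}
```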
collector.collect got called without ensuring that collector.run had
terminated, so collector.run could add another sample while
collector.collect was still reading them.
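A minimal sketch of the needed synchronization, using a WaitGroup as one possible way to ensure run has returned before collect reads the samples (the real collector's types differ):

```go
package benchmark

import "sync"

// collector is a simplified stand-in for the perf collector.
type collector struct {
	wg      sync.WaitGroup
	samples []int64
}

// start launches run in the background and records it in the WaitGroup.
func (c *collector) start() {
	c.wg.Add(1)
	go func() {
		defer c.wg.Done()
		c.run()
	}()
}

// run appends samples until it is told to stop (elided here).
func (c *collector) run() {
	c.samples = append(c.samples, 0)
}

// collect waits for run to terminate before reading the samples, so run can
// no longer add a sample while collect is iterating over them.
func (c *collector) collect() []int64 {
	c.wg.Wait()
	return c.samples
}
```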
This behavior surprises some users (see kubectl issues #1110 and #1385), who expect that an unlimited container results in an unlimited pod. That is not how PodLimits() works: it ignores any containers that do not specify limits when calculating the pod limits.
This commit adds unit tests that confirm this behavior.
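In simplified form, the aggregation behaves like this (not the actual helper, just an illustration of the documented behavior):

```go
package resourcehelpers

import v1 "k8s.io/api/core/v1"

// podLimits mirrors the documented behavior: limits are summed per resource
// across containers, and a container that sets no limits simply contributes
// nothing, so one unlimited container does not make the whole pod unlimited.
func podLimits(pod *v1.Pod) v1.ResourceList {
	limits := v1.ResourceList{}
	for _, container := range pod.Spec.Containers {
		for name, quantity := range container.Resources.Limits {
			total := limits[name]
			total.Add(quantity)
			limits[name] = total
		}
	}
	return limits
}
```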
When a size limit is specified, subsequent invocations will fail because
ibytes is changed to -1 and stored internally in quotaSizeMap during the
first call. Later invocations will see that the requested size doesn't
match the stored value and will fail.
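A simplified illustration of the failure mode and the intended behavior (stand-in types and names, not the actual fsquota code):

```go
package fsquota

import "fmt"

// quotaSizeMap remembers the size requested for each quota ID.
var quotaSizeMap = map[int32]int64{}

// assignQuota illustrates the problem: if the requested size is mutated
// (e.g. to -1) before being cached, a second call with the same original
// size no longer matches the cached value and is rejected. Caching the size
// as originally requested keeps repeated invocations succeeding.
func assignQuota(id int32, ibytes int64) error {
	if cached, ok := quotaSizeMap[id]; ok {
		if cached != ibytes {
			return fmt.Errorf("quota %d: requested size %d does not match stored size %d", id, ibytes, cached)
		}
		return nil
	}
	quotaSizeMap[id] = ibytes
	return nil
}
```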
Signed-off-by: Alexandru Matei <alexandru.matei@uipath.com>
Followup to https://github.com/kubernetes/kubernetes/pull/107826. The
referenced method doesn't exist.
This leads to confusing lint errors with 1.27. I would recommend a backport
to 1.27, but I'm not sure whether that aligns with the release schedule.
test-e2e-node for AWS is out-of-tree so that we won't need to vendor
in AWS-related packages. For this to work, some of the scripts/golang
code needs to know where the k8s tree is git cloned.
So let's add an option to look up the env var, so that we can then
change directory to the specified directory to run some make commands.
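A rough Go sketch of the lookup (the env var name and make target are placeholders):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// KUBERNETES_ROOT is a placeholder name; the real option/env var used by
	// the out-of-tree e2e tooling may be named differently.
	kubeRoot := os.Getenv("KUBERNETES_ROOT")
	if kubeRoot == "" {
		fmt.Fprintln(os.Stderr, "KUBERNETES_ROOT must point at the cloned k8s tree")
		os.Exit(1)
	}

	// Run the make target from inside the cloned Kubernetes tree.
	cmd := exec.Command("make", "WHAT=test/e2e_node/e2e_node.test")
	cmd.Dir = kubeRoot
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```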
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
Align metrics-server metrics-resolution with the upstream manifests so
that scalability tests run a configuration of metrics-server similar to
the one we are running in the e2e tests.
Signed-off-by: Damien Grisonnet <dgrisonn@redhat.com>