This test needs to observe a watch event. Previously, the test chose a
well-known `Service` object, fetched it, and did arithmetic on the
returned `resourceVersion` in order to start a watch that was guaranteed
to see an event. It is not valid to parse a `resourceVersion` as an
integer or to do arithmetic on it, so to make the test use the API
appropriately, it now:
- creates a namespace
- fetches the current `resourceVersion`
- creates an object
- watches from the previous `resourceVersion` that was read
This still ensures that the watch sees an event, but uses only the
publicly supported API.
`ConfigMap`s are used instead of `Service`s because they do not require
a valid `spec` on creation, which keeps the test terser.
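A minimal sketch of the new flow using client-go (the function and object
names here are illustrative, not the exact test code; a clientset and a
freshly created namespace are assumed):

```go
package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchSeesCreate records the current resourceVersion (an opaque string),
// creates a ConfigMap, and then starts a watch from the recorded
// resourceVersion so the ADDED event is guaranteed to be delivered.
func watchSeesCreate(ctx context.Context, client kubernetes.Interface, namespace string) error {
	list, err := client.CoreV1().ConfigMaps(namespace).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	rv := list.ResourceVersion // never parsed or incremented

	if _, err := client.CoreV1().ConfigMaps(namespace).Create(ctx, &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "watch-target"},
	}, metav1.CreateOptions{}); err != nil {
		return err
	}

	w, err := client.CoreV1().ConfigMaps(namespace).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	<-w.ResultChan() // the first event should be the ADDED event for "watch-target"
	return nil
}
```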
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
For the YAML examples, make the indentation consistent by starting each
line with a space followed by a TAB.
Also adjust the indentation of some fields to place them under the
correct YAML parent field - e.g. ignorePreflightErrors belongs under
nodeRegistration.
Previously, callers of `Exists()` could not know why a cgroup did or did
not exist. At one call-site in particular, the `kubelet` would fail to
start entirely if cgroup validation did not succeed. In these cases we
MUST explain what went wrong and pass that information clearly to the
caller. Previously, some (but not all) of the reasons for invalidation
were only logged at a low log-level, which led to poor UX.
The original method was retained on the interface to keep this diff
small.
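A self-contained sketch of the shape of the change (the type and method
names are simplified stand-ins for the kubelet's cgroup manager, not the
exact code): the boolean check stays on the interface, and a new method
returns the reason a cgroup is considered invalid so callers can surface
it.

```go
package cgroups

import "fmt"

// CgroupName stands in for the kubelet's cgroup name type in this sketch.
type CgroupName []string

// Manager mirrors the shape of the change: the boolean check is retained,
// and a new method reports why validation failed so callers (for example
// kubelet startup) can propagate the reason instead of failing opaquely.
type Manager interface {
	Exists(name CgroupName) bool
	Validate(name CgroupName) error
}

type manager struct {
	// missing simulates the per-controller path check the real manager does.
	missing func(name CgroupName) []string
}

func (m *manager) Validate(name CgroupName) error {
	if paths := m.missing(name); len(paths) > 0 {
		return fmt.Errorf("cgroup %v has some missing paths: %v", name, paths)
	}
	return nil
}

// Exists keeps its old signature by delegating to Validate.
func (m *manager) Exists(name CgroupName) bool {
	return m.Validate(name) == nil
}
```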
Signed-off-by: Steve Kuznetsov <skuznets@redhat.com>
* kubectl debug: print container messages
This provides feedback to the user, for example when the server is
unable to pull the debug container image (see the sketch below).
* Label debug container updates as warnings
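A rough sketch of the idea, not the actual kubectl implementation (the
package and function names are made up for illustration): surface the
waiting reason/message of the debug container so failures such as image
pull errors are visible to the user as warnings.

```go
package debug

import (
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
)

// printDebugContainerWarnings writes the waiting reason/message of the named
// ephemeral debug container so that, e.g., image pull failures are surfaced.
func printDebugContainerWarnings(pod *corev1.Pod, debugContainer string) {
	for _, s := range pod.Status.EphemeralContainerStatuses {
		if s.Name != debugContainer || s.State.Waiting == nil {
			continue
		}
		if msg := s.State.Waiting.Message; msg != "" {
			fmt.Fprintf(os.Stderr, "warning: container %s: %s: %s\n",
				s.Name, s.State.Waiting.Reason, msg)
		}
	}
}
```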
Co-authored-by: Eddie Zaneski <eddiezane@gmail.com>
At present, the spec.csi.secretRef name has to be in DNS1035 label
format, and validation should fail if a DNSSubdomain-format name is used
in the secretReference field of the CSI spec. The newly added test cases
validate this behaviour in the validation tests for the
controllerPublish, nodePublish, and nodeStage secretRef formats.
Additionally, the csiExpansionEnabled struct field has been removed from
the validation function.
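A small, self-contained illustration of the distinction the new cases
exercise (this is not the added test code itself): a DNS1035 label must
start with a letter, so a name that is a valid DNS subdomain can still
be rejected as a secretRef name.

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	name := "0-valid-subdomain-but-not-a-dns1035-label"

	// An empty slice means the name passes the check.
	fmt.Println(validation.IsDNS1123Subdomain(name)) // [] - passes
	fmt.Println(validation.IsDNS1035Label(name))     // non-empty - fails
}
```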
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
Instead of doing (almost) the same thing in three different methods
(Create, Update, Destroy), move the functionality to libctCgroupConfig,
replacing updateSystemdCgroupInfo.
The needResources bool exists because resources are not needed during
Destroy, so the unneeded resource conversion is skipped.
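A simplified sketch of the consolidation (the helper name follows this
commit; the types and fields are illustrative stand-ins, not the exact
kubelet/libcontainer definitions):

```go
package cgroups

// Simplified stand-ins for the kubelet and libcontainer types.
type ResourceConfig struct{ /* cpu, memory, hugetlb, pids, ... */ }

type CgroupConfig struct {
	Name               string
	ResourceParameters *ResourceConfig
}

type libctResources struct{ /* converted resource settings */ }

type libctCgroup struct {
	Name      string
	Resources *libctResources
}

func toResources(rc *ResourceConfig) *libctResources {
	if rc == nil {
		return nil
	}
	return &libctResources{} // the real code converts cpu, memory, hugetlb, pids, ...
}

// libctCgroupConfig is shared by Create, Update, and Destroy, replacing
// updateSystemdCgroupInfo. needResources is false on the Destroy path, so
// the resource conversion is skipped when its result would be thrown away.
func libctCgroupConfig(in *CgroupConfig, needResources bool) *libctCgroup {
	out := &libctCgroup{Name: in.Name}
	if needResources {
		out.Resources = toResources(in.ResourceParameters)
	}
	return out
}
```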
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Commit 79be8be10e made hugetlb settings optional if cgroup v2 is used and
hugetlb is not available, fixing issue 92933. Note at that time this was only
needed for v2, because for v1 the resources were set one-by-one, and only for
supported resources.
Commit d312ef7eb6 switched the code to using Set from runc/libcontainer
cgroups manager, and expanded the check to cgroup v1 as well.
Move this check earlier, to inside m.toResources, so that instead of
converting all hugetlb resources from ResourceConfig to libcontainer's
Resources.HugetlbLimit and then setting it to nil, we can skip the
conversion entirely when hugetlb is not supported, avoiding work that
is not needed.
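A simplified, self-contained sketch of the idea (the type names and the
support check are illustrative, not the real kubelet code):

```go
package cgroups

import "fmt"

type resourceConfig struct {
	HugePageLimit map[int64]int64 // page size in bytes -> byte limit
}

type hugepageLimit struct {
	Pagesize string
	Limit    uint64
}

type resources struct {
	HugetlbLimit []*hugepageLimit
}

// toResources checks hugetlb support up front: when the controller is absent,
// no HugetlbLimit entries are built at all, instead of converting everything
// and having the caller discard the result afterwards.
func toResources(rc *resourceConfig, hugetlbSupported bool) *resources {
	out := &resources{}
	if !hugetlbSupported {
		return out
	}
	for size, limit := range rc.HugePageLimit {
		out.HugetlbLimit = append(out.HugetlbLimit, &hugepageLimit{
			Pagesize: fmt.Sprintf("%dB", size), // stand-in for the real size formatting (e.g. "2MB")
			Limit:    uint64(limit),
		})
	}
	return out
}
```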
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
Commit ecd6361f added setting PidsLimit to Create and Update.
Commit bce9d5f2 added setting PidsLimit to m.toResources.
Now, PidsLimit is assigned twice.
Remove the duplicate.
Fixes: bce9d5f2
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
There's no need to call m.Update (which creates another instance of the
libcontainer cgroup manager, converts all the resources, and then sets
them). All of that is already done here, except for Set().
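A rough, generic sketch of the pattern (the types and signatures are
simplified and do not match the real libcontainer interface exactly):

```go
package cgroups

type resources struct{ /* converted cgroup resource settings */ }

// libctManager stands in for the libcontainer cgroups manager in this sketch.
type libctManager interface {
	Apply(pid int) error
	Set(r *resources) error
}

// create shows the simplified path: the converted resources and the manager
// are already in hand, so Set is called on them directly rather than going
// through m.Update, which would build a second manager and re-convert
// every resource.
func create(manager libctManager, converted *resources) error {
	if err := manager.Apply(-1); err != nil {
		return err
	}
	return manager.Set(converted)
}
```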
Signed-off-by: Kir Kolyshkin <kolyshkin@gmail.com>
- Also update test-cmd.sh to pass a signing CA to the
  kube-controller-manager, so that CSRs work properly in integration
  tests.
Signed-off-by: Margo Crawford <margaretc@vmware.com>