Make sure to clean up after setting RelaxPolicyForUserNamespacePods (see
the sketch below).
Set up test variables to be a little more terse and similar between tests.
Clean up the Allowed checking.
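As an illustration of the cleanup pattern, here is a minimal, hypothetical
sketch; the real RelaxPolicyForUserNamespacePods lives elsewhere and may
have a different signature, so a stand-in toggle is used below:

    package sketch

    import "testing"

    // relaxPolicyForUserNamespacePods stands in for the real package-level
    // toggle referenced above; the actual function and its signature may
    // differ.
    var relaxed bool

    func relaxPolicyForUserNamespacePods(v bool) { relaxed = v }

    func TestWithRelaxedPolicy(t *testing.T) {
        relaxPolicyForUserNamespacePods(true)
        // Restore the default once the test finishes so the leaked global
        // state cannot influence later tests.
        t.Cleanup(func() { relaxPolicyForUserNamespacePods(false) })

        // ... test body exercising the relaxed baseline checks ...
    }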
Signed-off-by: Peter Hunt <pehunt@redhat.com>
A masked proc mount has traditionally been used to prevent untrusted
containers from accessing leaky kernel APIs. However, within a user
namespace, the usual ID checks protect better than a masked proc. Further,
allowing an unmasked proc together with a user namespace lets a container
mount sub-procs, which opens avenues for container-in-container use cases.
Update the baseline Pod Security Standard (PSS) to allow a container to
access an unmasked /proc if it is in a user namespace and the
UserNamespacesPodSecurityStandards feature gate is enabled.
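For illustration, a simplified sketch of the relaxed baseline check; this
is not the actual pod-security-admission code, and the function name and
relaxation flag below are assumptions:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // procMountAllowed sketches the relaxed rule: a non-default (unmasked)
    // proc mount is tolerated only when the pod opts out of the host user
    // namespace (hostUsers: false) and the relaxation is enabled (i.e. the
    // UserNamespacesPodSecurityStandards feature gate is on).
    func procMountAllowed(spec *corev1.PodSpec, c *corev1.Container, relaxForUserNS bool) bool {
        procMount := corev1.DefaultProcMount
        if c.SecurityContext != nil && c.SecurityContext.ProcMount != nil {
            procMount = *c.SecurityContext.ProcMount
        }
        if procMount == corev1.DefaultProcMount {
            // A masked proc is always allowed by baseline.
            return true
        }
        return relaxForUserNS && spec.HostUsers != nil && !*spec.HostUsers
    }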
Signed-off-by: Peter Hunt <pehunt@redhat.com>
Re-sync only changed service and endpoint objects.
Change the order of operations to stop the current iteration early if no
changes to the service chains are needed.
Bump the periodic syncProxy interval to 1 hour.
In a test kind cluster, creating 10K services with 2 endpoints each takes
~25m before the fix and ~9m after. Maximum memory usage during creation is
~650MiB and ~260MiB respectively.
Another important metric is the time it takes to create 1 new service when
10K services already exist: it used to take ~8m before the fix, while with
partialSync it takes ~141ms.
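A minimal sketch of the partial-sync idea, with made-up names rather than
the real proxier code:

    package sketch

    // partialSync regenerates chains only for services recorded as changed
    // since the last sync and bails out of the iteration entirely when
    // nothing changed; the hourly syncProxy run still repairs any drift.
    func partialSync(services []string, changed map[string]bool) []string {
        if len(changed) == 0 {
            return nil // nothing to do this round
        }
        var regenerated []string
        for _, svc := range services {
            if !changed[svc] {
                continue // leave this service's chains untouched
            }
            regenerated = append(regenerated, svc)
        }
        return regenerated
    }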
Signed-off-by: Nadia Pinaeva <n.m.pinaeva@gmail.com>
Remove PortRange from the internal configuration of kube-proxy, adhering
to the v1alpha2 version specification as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Refactor ClusterCIDR in the internal configuration of kube-proxy, adhering
to the v1alpha2 version specification as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Consolidate SyncPeriod and MinSyncPeriod in the internal configuration of
kube-proxy, adhering to the v1alpha2 version specification as detailed in
https://kep.k8s.io/784.
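For illustration only, one hypothetical shape for such a consolidated
section; the type and field layout below are assumptions and not the
actual v1alpha2 KubeProxyConfiguration:

    package sketch

    import metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

    // SyncSettings groups the two durations in a single, backend-agnostic
    // place (illustrative only).
    type SyncSettings struct {
        // SyncPeriod is an upper bound on how long kube-proxy waits between
        // full resyncs of its rules.
        SyncPeriod metav1.Duration `json:"syncPeriod"`
        // MinSyncPeriod rate-limits how often rules are rewritten in
        // response to service/endpoint churn.
        MinSyncPeriod metav1.Duration `json:"minSyncPeriod"`
    }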
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Split running the Wardle aggregated API server into a preparation phase
and a running phase. This allows reusing the prepared options and makes it
possible to introduce additional hooks into the server's authorization
flow.
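A minimal sketch of the prepare/run split, with hypothetical names rather
than the actual Wardle test server helpers:

    package main

    import "fmt"

    // Options stands in for the Wardle server options (illustrative only).
    type Options struct{ Authorizer string }

    // PreparedServer holds everything needed to run; callers can inspect
    // or adjust it between preparation and running.
    type PreparedServer struct{ opts Options }

    // Prepare completes and validates the options without starting the
    // server, so the result can be reused or modified by the caller.
    func Prepare(o Options) (*PreparedServer, error) {
        if o.Authorizer == "" {
            o.Authorizer = "AlwaysAllow"
        }
        return &PreparedServer{opts: o}, nil
    }

    // Run starts the prepared server (here it only reports the config).
    func (p *PreparedServer) Run() error {
        fmt.Println("running with authorizer:", p.opts.Authorizer)
        return nil
    }

    func main() {
        p, err := Prepare(Options{})
        if err != nil {
            panic(err)
        }
        // An authorization hook could be injected here, between the
        // preparation and running phases.
        _ = p.Run()
    }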
Some of the E2E node tests were flaky. Their timeout was apparently chosen
under the assumption that kubelet would retry immediately after a failed
gRPC call, with a factor of 2 as a safety margin. But according to
0449cef8fd,
kubelet has a different, higher retry period of 90 seconds, which was
exactly the test timeout. The test timeout has to be higher than that.
As the tests no longer use the gRPC call timeout, it can be made private.
While at it, its name and documentation get updated.
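Illustrative only, assuming the 90-second retry period described above;
the constant names below are made up, not the real test code:

    package sketch

    import "time"

    const (
        // kubeletRetryPeriod mirrors the 90s retry period referenced above.
        kubeletRetryPeriod = 90 * time.Second
        safetyFactor       = 2
        // pluginTestTimeout is derived from the retry period plus a safety
        // margin (3m in total), instead of from the gRPC call timeout.
        pluginTestTimeout = safetyFactor * kubeletRetryPeriod
    )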