Dynamic resource allocation is similar to storage in the sense that users
create ResourceClaim objects to request resources, just as they create
PersistentVolumeClaim objects for storage. The actual resource usage is only
known when claims get allocated, but some limits can already be enforced at
admission time (a sketch of such a quota object follows the list below):
- "count/resourceclaims.resource.k8s.io" limits the number of ResourceClaim objects in
a namespace; this is a generic feature that is already supported also without
this commit.
- "resourceclaims" is *not* an alias - use "count/resourceclaims.resource.k8s.io"
instead.
- <device-class-name>.deviceclass.resource.k8s.io/devices limits the number of
  ResourceClaim objects in a namespace such that the total number of devices
  requested through those objects with that class does not exceed the limit.
  A single request may cause the allocation of multiple devices. For exact
  counts, the sum of those exact counts is charged against the quota. For
  requests asking for "all" matching devices, the maximum number of allocated
  devices per claim is used as a worst-case upper bound.
  Requests asking for "admin access" contribute to the quota.
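As a concrete illustration, the following minimal sketch builds a quota object
that uses both names; the namespace, quota name, device class
"gpu.example.com", and the limits are made up for the example:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Quota limits: at most 10 ResourceClaim objects, and at most 4 devices
        // of the hypothetical class "gpu.example.com" requested through them.
        hard := corev1.ResourceList{
            "count/resourceclaims.resource.k8s.io":                resource.MustParse("10"),
            "gpu.example.com.deviceclass.resource.k8s.io/devices": resource.MustParse("4"),
        }
        quota := corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "dra-quota", Namespace: "team-a"},
            Spec:       corev1.ResourceQuotaSpec{Hard: hard},
        }
        fmt.Println(quota.Spec.Hard)
    }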
DRA quota: remove admin mode exception
The names aren't actually special for validation. They are acceptable with and
without the feature gate; the only difference is that they don't do anything
when the feature is disabled.
make sure to clean up after setting RelaxPolicyForUserNamespacePods
set up test variables to be a little more terse and similar between tests
clean up Allowed checking
Signed-off-by: Peter Hunt <pehunt@redhat.com>
A masked proc mount has traditionally been used to prevent untrusted containers from accessing leaky kernel APIs.
However, within a user namespace, the usual ID checks protect better than a masked proc. Further, allowing an unmasked proc
together with a user namespace lets a container mount sub-procs, which opens avenues for container-in-container use cases.
Update the baseline Pod Security Standard to allow a container to access an unmasked /proc if it runs in a user namespace and the UserNamespacesPodSecurityStandards feature gate is enabled.
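For reference, a minimal sketch of a pod that combines the two settings; the
pod name, container name, and image are placeholders:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/utils/ptr"
    )

    func main() {
        // Container with an unmasked /proc; under the updated baseline policy this
        // is only acceptable together with hostUsers: false and the
        // UserNamespacesPodSecurityStandards feature gate.
        ctr := corev1.Container{
            Name:  "main",
            Image: "registry.example.com/app:latest",
            SecurityContext: &corev1.SecurityContext{
                ProcMount: ptr.To(corev1.UnmaskedProcMount),
            },
        }
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "userns-unmasked-proc"},
            Spec: corev1.PodSpec{
                // Run the pod in its own user namespace instead of the host's.
                HostUsers:  ptr.To(false),
                Containers: []corev1.Container{ctr},
            },
        }
        fmt.Println(*pod.Spec.Containers[0].SecurityContext.ProcMount)
    }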
Signed-off-by: Peter Hunt <pehunt@redhat.com>
objects.
Change the order of operations to stop the current iteration early if no
changes to the service chains are needed.
Bump the syncProxy interval to 1 hour.
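As a rough sketch of that early-exit pattern (the type, field, and message
names below are invented for illustration and are not the actual proxier
code):

    package main

    import "fmt"

    // proxier is a heavily simplified stand-in: track which services changed
    // and bail out of an iteration early when the service chains need no update.
    type proxier struct {
        needFullSync   bool
        pendingChanges map[string]bool // changed services since the last sync
    }

    func (p *proxier) syncProxyRules() {
        // Early exit: nothing changed and no full resync requested, so stop
        // this iteration instead of regenerating every chain.
        if !p.needFullSync && len(p.pendingChanges) == 0 {
            fmt.Println("no changes, skipping sync")
            return
        }
        if p.needFullSync {
            fmt.Println("full resync of all service chains (hourly timer)")
        } else {
            fmt.Printf("partial sync for %d changed service(s)\n", len(p.pendingChanges))
        }
        p.pendingChanges = map[string]bool{}
        p.needFullSync = false
    }

    func main() {
        p := &proxier{pendingChanges: map[string]bool{"default/my-svc": true}}
        p.syncProxyRules() // partial sync for 1 changed service
        p.syncProxyRules() // no changes, skipping sync
    }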
In a test kind cluster, creating 10K services with 2 endpoints each
takes ~25min before the fix and ~9min after. Maximum memory usage
during creation is ~650MiB and ~260MiB respectively.
Another important metric is the time it takes to create 1 new service
when 10K services already exist: it used to take ~8min before the fix;
with partialSync it takes ~141ms.
Signed-off-by: Nadia Pinaeva <n.m.pinaeva@gmail.com>
Remove PortRange from the internal configuration of kube-proxy,
adhering to the v1alpha2 specification as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Refactor ClusterCIDR in the internal configuration of kube-proxy,
adhering to the v1alpha2 specification as detailed in
https://kep.k8s.io/784.
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Consolidate SyncPeriod and MinSyncPeriod in the internal configuration
of kube-proxy, adhering to the v1alpha2 specification as
detailed in https://kep.k8s.io/784.
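For illustration only, a sketch of what the consolidation means for the shape
of the configuration; these structs are invented for the example and do not
reproduce the real KubeProxyConfiguration types defined by KEP-784:

    package main

    import (
        "fmt"
        "time"
    )

    // Before: each backend carried its own copy of the sync settings.
    type backendConfig struct {
        SyncPeriod    time.Duration
        MinSyncPeriod time.Duration
    }

    type oldConfig struct {
        IPTables backendConfig
        IPVS     backendConfig
    }

    // After: a single pair of fields applies to whichever backend is in use.
    type newConfig struct {
        SyncPeriod    time.Duration
        MinSyncPeriod time.Duration
    }

    func main() {
        old := oldConfig{
            IPTables: backendConfig{SyncPeriod: time.Hour, MinSyncPeriod: time.Second},
            IPVS:     backendConfig{SyncPeriod: time.Hour, MinSyncPeriod: time.Second},
        }
        // The duplicated per-backend values collapse into one setting.
        cfg := newConfig{SyncPeriod: old.IPTables.SyncPeriod, MinSyncPeriod: old.IPTables.MinSyncPeriod}
        fmt.Println(cfg)
    }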
Signed-off-by: Daman Arora <aroradaman@gmail.com>
Split running the Wardle aggregated API into a preparation phase and a
running phase. This allows reusing the prepared options and
makes it possible for us to introduce additional hooks into
the server's authorization flow.
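A sketch of the resulting shape of the test helper, assuming hypothetical
names for the preparation result and its methods (the real test code may
differ):

    package main

    import (
        "context"
        "fmt"
    )

    // preparedWardleServer is a hypothetical stand-in for the state produced by
    // the preparation phase: resolved options, listeners, hooks, etc.
    type preparedWardleServer struct {
        options map[string]string
    }

    // prepareWardleServer performs the one-time setup. Because it is separated
    // from running, callers can reuse the prepared options and register extra
    // hooks (for example into the authorization flow) before starting.
    func prepareWardleServer() (*preparedWardleServer, error) {
        return &preparedWardleServer{options: map[string]string{"secure-port": "0"}}, nil
    }

    // run starts the prepared server and blocks until the context is cancelled.
    func (s *preparedWardleServer) run(ctx context.Context) error {
        fmt.Println("running wardle aggregated API with options:", s.options)
        <-ctx.Done()
        return ctx.Err()
    }

    func main() {
        srv, err := prepareWardleServer()
        if err != nil {
            panic(err)
        }
        ctx, cancel := context.WithCancel(context.Background())
        cancel() // in a real test the server would run until the test finishes
        _ = srv.run(ctx)
    }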