Commit Graph

128041 Commits

Author SHA1 Message Date
Kubernetes Prow Robot
20e1944f88
Merge pull request #129439 from serathius/refactor-delegate-2
Refactor shouldDelegateList
2025-01-10 09:20:31 -08:00
Daman Arora
64aac665fd pkg/proxy/healthcheck: bug fix for last updated time
The lastUpdated time returned by the healthz call should be the latest
lastUpdated time among the proxiers. Prior to this commit, if a proxier
was unhealthy, the returned lastUpdated time was the lastUpdated time
of the unhealthy proxier.

Signed-off-by: Daman Arora <aroradaman@gmail.com>
2025-01-10 21:28:39 +05:30
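A minimal sketch of the fix described in the commit above, assuming a hypothetical proxierHealth type rather than the actual pkg/proxy/healthcheck types: the aggregated healthz response reports the latest lastUpdated time across all proxiers, even when one of them is unhealthy.

```go
package main

import (
	"fmt"
	"time"
)

// proxierHealth is a hypothetical stand-in for per-proxier health state.
type proxierHealth struct {
	healthy     bool
	lastUpdated time.Time
}

// aggregate returns overall health plus the most recent lastUpdated time
// across all proxiers, not the timestamp of whichever proxier is unhealthy.
func aggregate(proxiers []proxierHealth) (bool, time.Time) {
	healthy := true
	var latest time.Time
	for _, p := range proxiers {
		if !p.healthy {
			healthy = false
		}
		if p.lastUpdated.After(latest) {
			latest = p.lastUpdated
		}
	}
	return healthy, latest
}

func main() {
	now := time.Now()
	proxiers := []proxierHealth{
		{healthy: false, lastUpdated: now.Add(-time.Minute)}, // stale, unhealthy proxier
		{healthy: true, lastUpdated: now},                    // fresh, healthy proxier
	}
	ok, last := aggregate(proxiers)
	fmt.Println(ok, last) // unhealthy overall, but lastUpdated is the newer timestamp
}
```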
Kubernetes Prow Robot
db1da72bee
Merge pull request #129543 from pohly/dra-reserved-for-limit
DRA API: bump maximum size of ReservedFor to 256
2025-01-10 06:52:37 -08:00
Kubernetes Prow Robot
b733c4a620
Merge pull request #128926 from bzsuni/bz/coredns/update/1.12.0
Update coredns to 1.12.0
2025-01-10 06:52:31 -08:00
Marek Siarkowicz
4a4fc9da80 Extract and unify cache bypass logic by creating a CacheProxy struct 2025-01-10 14:15:31 +01:00
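A hedged sketch of what such an extraction could look like; the CacheProxy name comes from the commit title, but the Lister interface, ListOptions fields, and the shouldDelegateList stand-in below are illustrative, not the actual apiserver storage code.

```go
package cacheproxy

import "context"

// Illustrative types only; not the real apiserver storage interfaces.
type ListOptions struct {
	ResourceVersion string
	Continue        string
}

type Lister interface {
	List(ctx context.Context, opts ListOptions) ([]string, error)
}

// CacheProxy unifies the "serve from watch cache vs. delegate to etcd"
// decision in one place.
type CacheProxy struct {
	cache Lister // watch-cache-backed storage
	etcd  Lister // etcd-backed storage
}

// shouldDelegateList is a stand-in predicate: continuation requests bypass
// the cache here; the real predicate also considers resource version
// semantics, match policy, and feature gates.
func shouldDelegateList(opts ListOptions) bool {
	return opts.Continue != ""
}

func (c *CacheProxy) List(ctx context.Context, opts ListOptions) ([]string, error) {
	if shouldDelegateList(opts) {
		return c.etcd.List(ctx, opts) // bypass the cache
	}
	return c.cache.List(ctx, opts)
}
```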
Kubernetes Prow Robot
fc7520b32f
Merge pull request #129377 from carlory/deflake-subpath-tests
e2e: deflake subpath tests
2025-01-10 03:26:30 -08:00
Maciej Skoczeń
2d82687114 Improve topologyToMatchedTermCount map in InterPodAffinity PreFilter 2025-01-10 10:55:49 +00:00
carlory
1b7ddfe6bb e2e: deflake subpath tests
Signed-off-by: carlory <baofa.fan@daocloud.io>
2025-01-10 18:11:17 +08:00
Marek Siarkowicz
1b2bacda5b Only test requests that pass validation 2025-01-10 09:55:51 +01:00
Patrick Ohly
7226a3084e DRA e2e: adapt to increased ReservedFor limit
We want to be sure that the maximum number of pods per claim is actually
scheduled concurrently. Previously the test just made sure that they ran
eventually.

Running 256 pods only works on more than 2 nodes, so network-attached resources
have to be used. This is what the increased limit is meant for anyway. Because
of the tightened validation of node selectors in 1.32, the E2E test has to
use MatchExpressions, which allow listing node names.
2025-01-10 09:49:55 +01:00
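For illustration, here is a node selector that lists node names through MatchExpressions on the kubernetes.io/hostname label, which is the kind of selector the commit above refers to; the helper name is hypothetical and this is not the actual e2e test code.

```go
package example

import (
	v1 "k8s.io/api/core/v1"
)

// nodeSelectorFor builds a NodeSelector that restricts scheduling to the
// given node names via MatchExpressions, which the tightened 1.32 node
// selector validation still permits.
func nodeSelectorFor(nodeNames []string) *v1.NodeSelector {
	return &v1.NodeSelector{
		NodeSelectorTerms: []v1.NodeSelectorTerm{{
			MatchExpressions: []v1.NodeSelectorRequirement{{
				Key:      "kubernetes.io/hostname",
				Operator: v1.NodeSelectorOpIn,
				Values:   nodeNames, // e.g. the nodes backing a network-attached resource
			}},
		}},
	}
}
```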
Kevin Hannon
0899cf892d add documentation that a 0s duration will be overwritten to 5m 2025-01-09 14:19:56 -05:00
Kubernetes Prow Robot
e319c541f1
Merge pull request #129298 from omerap12/fix-discovery-controller-panic
apiextensions: replace panic with error handling in DiscoveryController
2025-01-09 09:04:31 -08:00
Kubernetes Prow Robot
732fc190d0
Merge pull request #129522 from carlory/fix-129520
Fix service's nodePort already allocated
2025-01-09 07:48:43 -08:00
Kubernetes Prow Robot
2331c028c2
Merge pull request #129139 from tklauser/client-setconfigdefaults-noerror
Remove always-`nil` `setConfigDefaults` error return value in generated clients
2025-01-09 07:48:32 -08:00
Joel Speed
ba816967a0
Simplify status subresource ratcheting testing 2025-01-09 15:46:31 +00:00
Madhu Rajanna
8d79998058 remove workaround for vgs testing
Remove the extra kubectl apply for the volume snapshot CRD in vgs tests.

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2025-01-09 15:24:44 +01:00
Madhu Rajanna
c6f19d3c2a update snapshot CRDs to v1 in cluster addons
Updating the snapshot CRDs to v1 in the cluster
addons to keep them in sync with the external-snapshotter.
2025-01-09 15:24:24 +01:00
Marek Siarkowicz
e5a3bdb3a7 Refactor shouldDelegateList 2025-01-09 15:19:08 +01:00
andyzhangx
bdd0f5dd23 test: add Junction file type test on Windows 2025-01-09 13:59:51 +00:00
Patrick Ohly
1cee3682da DRA API: bump maximum size of ReservedFor to 256
The original limit of 32 seemed sufficient for a single GPU on a node. But for
shared non-local resources it is too low. For example, a ResourceClaim might be
used to allocate an interconnect channel that connects all pods of a workload
running on several different nodes, in which case the number of pods can be
considerably larger.

256 is high enough for currently planned systems. If we need something even
higher in the future, an alternative approach might be needed to avoid
scalability problems.

Normally, increasing such a limit would have to be done incrementally over two
releases. In this case we decided on
Slack (https://kubernetes.slack.com/archives/CJUQN3E4T/p1734593174791519) to
make an exception and apply this change to current master for 1.33 and backport
it to the next 1.32.x patch release for production usage.

This breaks downgrades to a 1.32 release without this change if there are
ResourceClaims with a number of consumers > 32 in ReservedFor. In practice,
this breakage is very unlikely because there are no workloads yet which need so
many consumers and such downgrades to a previous patch release are also
unlikely. Downgrades to 1.31 already weren't supported when using DRA v1beta1.
2025-01-09 14:26:01 +01:00
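A sketch of the shape of this change, with hypothetical type and constant names; the real limit is enforced in the resource.k8s.io API validation code.

```go
package validation

import "fmt"

// ConsumerReference is a hypothetical stand-in for the DRA API's
// ReservedFor entry type.
type ConsumerReference struct {
	Resource string
	Name     string
	UID      string
}

// Raised from 32 to 256, as described in the commit above.
const resourceClaimReservedForMaxSize = 256

// validateReservedFor rejects claims whose ReservedFor list exceeds the limit.
func validateReservedFor(consumers []ConsumerReference) error {
	if len(consumers) > resourceClaimReservedForMaxSize {
		return fmt.Errorf("reservedFor must have at most %d entries, got %d",
			resourceClaimReservedForMaxSize, len(consumers))
	}
	return nil
}
```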
Kubernetes Prow Robot
75531ccc9c
Merge pull request #129540 from serathius/test-list-cache-bypass
Test all possible combinations of input for shouldDelegateList
2025-01-09 05:20:45 -08:00
Kubernetes Prow Robot
f34d791b13
Merge pull request #125901 from jralmaraz/kubelet_prober
Report event for the cases when probe returned Unknown result
2025-01-09 05:20:33 -08:00
Kubernetes Prow Robot
30de989fb5
Merge pull request #129542 from serathius/watchcache-benchmark-namespace
Add benchmarking of namespace index
2025-01-09 03:22:32 -08:00
Marek Siarkowicz
fe895563d9 Test all possible combinations of input for shouldDelegateList 2025-01-09 11:50:39 +01:00
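As a sketch of the exhaustive-enumeration approach, the test below loops over every combination of a few boolean inputs via a bit mask; the listOpts struct and the shouldDelegateList stand-in are hypothetical, since the real predicate takes the apiserver's list options and the real test checks each combination against the expected delegation behaviour.

```go
package cacher_test

import "testing"

// Hypothetical option set and predicate, for illustration only.
type listOpts struct {
	Continue        bool
	Exact           bool
	ConsistentReads bool
}

func shouldDelegateList(o listOpts) bool {
	return o.Continue || o.Exact || !o.ConsistentReads
}

func TestShouldDelegateListAllCombinations(t *testing.T) {
	// Enumerate all 2^3 input combinations.
	for i := 0; i < 1<<3; i++ {
		o := listOpts{
			Continue:        i&1 != 0,
			Exact:           i&2 != 0,
			ConsistentReads: i&4 != 0,
		}
		got := shouldDelegateList(o)
		want := o.Continue || o.Exact || !o.ConsistentReads // expected rule for the stand-in
		if got != want {
			t.Errorf("%+v: got %v, want %v", o, got, want)
		}
	}
}
```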
Marek Siarkowicz
13a21d5854 Add benchmarking of namespace index 2025-01-09 11:08:17 +01:00
carlory
8eb31f8aa1 Fix service's nodePort already allocated
Signed-off-by: carlory <baofa.fan@daocloud.io>
2025-01-09 16:15:07 +08:00
Kubernetes Prow Robot
a32f69590a
Merge pull request #129536 from pacoxu/fix-lint
fix dra test lint
2025-01-08 19:58:30 -08:00
Zhonghu Xu
a2a0a75210 Cleanup: only initiate http2 server options when http2 is not disabled 2025-01-09 11:28:51 +08:00
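A minimal sketch of that pattern using the standard golang.org/x/net/http2 package; the disableHTTP2 parameter and option values are assumptions, not the kube-apiserver code.

```go
package main

import (
	"net/http"

	"golang.org/x/net/http2"
)

// newServer only builds and applies HTTP/2 server options when HTTP/2
// serving is enabled, skipping the work entirely otherwise.
func newServer(disableHTTP2 bool) (*http.Server, error) {
	srv := &http.Server{Addr: ":8443"}
	if !disableHTTP2 {
		// Only initialize HTTP/2 options when HTTP/2 is not disabled.
		if err := http2.ConfigureServer(srv, &http2.Server{MaxConcurrentStreams: 250}); err != nil {
			return nil, err
		}
	}
	return srv, nil
}
```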
Paco Xu
2653caa248 fix dra test lint 2025-01-09 10:42:40 +08:00
Kubernetes Prow Robot
8a1db6e8a5
Merge pull request #129512 from dims/lower-verbosity-for-topologycache-messages
Lower verbosity for topologycache messages
2025-01-08 14:50:30 -08:00
Kubernetes Prow Robot
37a8516e5e
Merge pull request #129530 from nojnhuh/dra-vap-message
Add namespace to DRA adminAccess ValidatingAdmissionPolicy message
2025-01-08 11:30:31 -08:00
Jon Huhn
5b2c1dde79 Add namespace to DRA adminAccess ValidatingAdmissionPolicy message 2025-01-08 11:06:36 -06:00
Kubernetes Prow Robot
d9441212d3
Merge pull request #129525 from cpanato/update-rules
update publishing rules to use go1.22.10 for some active release branches
2025-01-08 07:34:37 -08:00
Kubernetes Prow Robot
2832f70801
Merge pull request #129343 from pohly/log-client-go-v1-event
client-go event: add WithContext expansion methods
2025-01-08 07:34:30 -08:00
Abhishek Kr Srivastav
41f805b476 Added check for multipath device mapper
Addressed review comments
2025-01-08 19:44:11 +05:30
Kubernetes Prow Robot
b56d38e7d4
Merge pull request #129441 from serathius/watchcache-benchmark
Improve benchmark to handle multiple dimensions
2025-01-08 05:40:42 -08:00
Kubernetes Prow Robot
10fb206f70
Merge pull request #129201 from tnqn/fix-ns-controller-permission
Add watch permission to namespace-controller for WatchListClient feature
2025-01-08 05:40:31 -08:00
cpanato
a6c7d22f44
update publishing rules to use go1.22.10 for some active release branches
Signed-off-by: cpanato <ctadeu@gmail.com>
2025-01-08 14:10:51 +01:00
Daman Arora
0645f0e50e pkg/proxy/healthcheck: file rename
Signed-off-by: Daman Arora <aroradaman@gmail.com>
2025-01-08 17:40:42 +05:30
Daman Arora
d6c575532a pkg/proxy/healthcheck: rename 'proxier' to 'proxy'
KubeProxy operates with a single health server and two proxies,
one for each IP family. The use of the term 'proxier' in the
types and functions within pkg/proxy/healthcheck can be
misleading, as it may suggest the existence of two health
servers, one for each IP family.

Signed-off-by: Daman Arora <aroradaman@gmail.com>
2025-01-08 17:26:47 +05:30
Kubernetes Prow Robot
90a45563ae
Merge pull request #129517 from googs1025/feature/remove/dra_resourceslice_qhint
feature(scheduler): remove dra plugin resourceslice QueueingHintFn
2025-01-08 03:22:30 -08:00
Marek Siarkowicz
4a0578e3de Improve benchmark to handle multiple dimensions 2025-01-08 12:01:33 +01:00
Joel Speed
091fa29390
Fix status subresource ratcheting 2025-01-08 10:49:45 +00:00
Patrick Ohly
f1834f06f4 client-go event: add WithContext expansion methods
Only the v1 API should be in use. The v1beta1 API therefore doesn't get updated
and doesn't need the context.TODO anymore.
2025-01-08 10:13:49 +01:00
Patrick Ohly
e681a79058 apimachinery wait: support contextual logging
By passing the context, if available, down into the actual wait loop it becomes
possible to use contextual logging when handling a crash. That then logs more
information about which component encountered the problem (WithName e.g. in
kube-controller-manager) and what it was doing at the time (WithValues e.g. in
kube-scheduler).
2025-01-08 10:07:34 +01:00
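A small example of the calling pattern this enables, using klog and apimachinery's wait package; the component name and key/value pairs are made up.

```go
package main

import (
	"context"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/klog/v2"
)

func main() {
	// Attach a named logger to the context so anything inside the wait loop,
	// including crash handling, can log with that component context.
	logger := klog.Background().WithName("kube-controller-manager").WithValues("controller", "example")
	ctx := klog.NewContext(context.Background(), logger)

	_ = wait.PollUntilContextTimeout(ctx, time.Second, 10*time.Second, true,
		func(ctx context.Context) (bool, error) {
			// Contextual logging inside the loop picks up the logger from ctx.
			klog.FromContext(ctx).V(4).Info("polling")
			return true, nil
		})
}
```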
Kubernetes Prow Robot
4ab6035925
Merge pull request #129369 from carlory/fix-118037-2
e2e: deflake volume tests
2025-01-08 00:34:36 -08:00
Kubernetes Prow Robot
26442a6a85
Merge pull request #129359 from andyzhangx/fix-pv-deletion-timeout
test: fix pv deletion timeout
2025-01-08 00:34:29 -08:00
googs1025
77eae7c34f feature(scheduler): remove dra plugin resourceslice QueueingHintFn 2025-01-08 16:24:28 +08:00
bzsuni
fb47caa689 Update coredns to 1.12.0
Signed-off-by: bzsuni <bingzhe.sun@daocloud.io>
2025-01-08 03:34:41 +00:00
Davanum Srinivas
cad12e5a41
Lower verbosity for topologycache messages
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2025-01-07 21:19:27 -05:00