Commit Graph

127530 Commits

Davanum Srinivas
0d8a8fe306
Update to latest kustomize/v5.6.0
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2025-01-14 13:12:48 -05:00
Kubernetes Prow Robot
1a9feed0cd
Merge pull request #129615 from pohly/log-client-go-tools-cache-apis-fix
client-go/tools/cache: fix TestAddWhileActive
2025-01-14 09:32:40 -08:00
Kubernetes Prow Robot
3d84276707
Merge pull request #129595 from aravindhp/nlq-env-vars
kubelet: use env vars in node log query PS command
2025-01-14 09:32:33 -08:00
Kubernetes Prow Robot
c9f695138b
Merge pull request #129591 from liggitt/node-binding-ga
KEP-4193: Promote ServiceAccountTokenNodeBinding to GA
2025-01-14 08:02:32 -08:00
Jordan Liggitt
59850b5823
Promote ServiceAccountTokenNodeBinding to GA 2025-01-14 09:48:35 -05:00
Patrick Ohly
d66ced5730 client-go/tools/cache: fix TestAddWhileActive
Commit 4638ba9716 added tracking of the goroutine which
executes informer.Run. In TestAddWhileActive, the original `go
informer.Run()` was left in place, causing a data race between the two
`informer.Run` instances:

==================
WARNING: DATA RACE
Read at 0x00c000262398 by goroutine 5302:
  k8s.io/client-go/tools/cache.(*controller).RunWithContext()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/controller.go:162 +0x1ad
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go:584 +0x6c5
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go:527 +0x48
  k8s.io/client-go/tools/cache.TestAddWhileActive.gowrap1()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer_test.go:1080 +0x17

Previous write at 0x00c000262398 by goroutine 5301:
  k8s.io/client-go/tools/cache.New()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/controller.go:142 +0x9de
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext.func1()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go:562 +0xa78
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).RunWithContext()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go:565 +0x119
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer.go:527 +0x44
  k8s.io/client-go/tools/cache.(*sharedIndexInformer).Run-fm()
      <autogenerated>:1 +0x17
  k8s.io/client-go/tools/cache.TestAddWhileActive.(*Group).StartWithChannel.func2()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:55 +0x38
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:72 +0x86

Goroutine 5302 (running) created at:
  k8s.io/client-go/tools/cache.TestAddWhileActive()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer_test.go:1080 +0x93e
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /usr/local/go/src/testing/testing.go:1743 +0x44

Goroutine 5301 (running) created at:
  k8s.io/apimachinery/pkg/util/wait.(*Group).Start()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:70 +0xe4
  k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/apimachinery/pkg/util/wait/wait.go:54 +0x7e6
  k8s.io/client-go/tools/cache.TestAddWhileActive()
      /home/prow/go/src/k8s.io/kubernetes/staging/src/k8s.io/client-go/tools/cache/shared_informer_test.go:1074 +0x6a1
  testing.tRunner()
      /usr/local/go/src/testing/testing.go:1690 +0x226
  testing.(*T).Run.gowrap1()
      /usr/local/go/src/testing/testing.go:1743 +0x44
==================
2025-01-14 14:14:08 +01:00
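
A minimal sketch of the single-start pattern the fix above restores (fake
clientset setup assumed; this is not the actual test code). The informer must
be started exactly once, via the tracked wait.Group goroutine; the leftover
duplicate `go informer.Run()` is what raced against it:

```go
package main

import (
	"time"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes/fake"
)

func main() {
	// Assumed setup: a fake clientset and a shared pod informer stand in
	// for the fixture in shared_informer_test.go.
	client := fake.NewSimpleClientset(&v1.Pod{})
	factory := informers.NewSharedInformerFactory(client, time.Minute)
	informer := factory.Core().V1().Pods().Informer()

	stop := make(chan struct{})
	var wg wait.Group
	// One tracked Run goroutine, as introduced by 4638ba9716.
	wg.StartWithChannel(stop, informer.Run)
	// go informer.Run(stop) // leftover duplicate start -> WARNING: DATA RACE

	close(stop)
	wg.Wait()
}
```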
Kubernetes Prow Robot
f3cbd79db7
Merge pull request #129590 from wojtek-t/cleanup_feature_gates
Remove WatchBookmark feature gate
2025-01-14 00:42:32 -08:00
Wojciech Tyczyński
a7937f5391 Remove WatchBookmark feature gate 2025-01-14 08:31:23 +01:00
Kubernetes Prow Robot
e384893030
Merge pull request #129594 from neolit123/1.33-fix-preflight-pull-sandbox-error
kubeadm: remove misplaced error during image pull
2025-01-13 21:02:38 -08:00
Kubernetes Prow Robot
a318576817
Merge pull request #129589 from neolit123/1.33-remove-etcd-learner-fg
kubeadm: remove the GA EtcdLearnerMode FG
2025-01-13 21:02:31 -08:00
Kubernetes Prow Robot
ccd2b4e8a7
Merge pull request #129101 from xigang/controller_log
Adjust the log level of the FilterActivePods method
2025-01-13 12:20:32 -08:00
Aravindh Puthiyaparambil
12345a14c3
kubelet: use env vars in node log query PS command
- Use environment variables to pass string arguments in the node log
  query PS command
- Split getLoggingCmd into getLoggingCmdEnv and getLoggingCmdArgs
  for better modularization
2025-01-13 11:43:04 -08:00
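
A hedged sketch of the split described above. The two function names come from
the commit, but the signatures, env-var names, and PowerShell snippet are
assumptions for illustration; the point is that user-supplied strings travel
via the process environment instead of being interpolated into the command
line:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// getLoggingCmdEnv returns environment entries carrying the string
// arguments (variable names are hypothetical).
func getLoggingCmdEnv(logFile, pattern string) []string {
	return append(os.Environ(),
		"kubelet_log_file="+logFile,
		"kubelet_log_pattern="+pattern,
	)
}

// getLoggingCmdArgs returns a fixed argument list; the script reads its
// inputs from the environment, so no user string is spliced into the command.
func getLoggingCmdArgs() []string {
	return []string{
		"-NonInteractive", "-ExecutionPolicy", "Bypass", "-Command",
		"Get-Content $env:kubelet_log_file | Select-String $env:kubelet_log_pattern",
	}
}

func main() {
	cmd := exec.Command("powershell", getLoggingCmdArgs()...)
	cmd.Env = getLoggingCmdEnv(`C:\var\log\kubelet.log`, "error")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Print(string(out))
}
```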
Lubomir I. Ivanov
2f4bd13fe5 kubeadm: remove misplaced error during image pull
During preflight, when an image is pulled, an error returned by the
sandbox image check would later also block the actual image pull.
2025-01-13 19:29:39 +02:00
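
A minimal sketch of the control-flow fix, with stubbed helpers and
hypothetical names standing in for kubeadm's runtime code: the error from the
sandbox-image check is reported as a warning where it occurs instead of being
returned, so it can no longer abort the unrelated pull that follows:

```go
package main

import "fmt"

// Stand-ins for kubeadm's container-runtime helpers (names hypothetical).
func sandboxImage() (string, error) { return "", fmt.Errorf("runtime query failed") }

func pullImage(img string) error { fmt.Println("pulling", img); return nil }

// pull mirrors the fixed control flow: a failing sandbox-image check is
// only warned about, so it can no longer block the actual image pull.
func pull(img string) error {
	if s, err := sandboxImage(); err != nil {
		fmt.Println("warning: could not check sandbox image:", err)
	} else if s != img {
		fmt.Printf("warning: sandbox image %q differs from %q\n", s, img)
	}
	return pullImage(img) // proceeds regardless of the check above
}

func main() { _ = pull("registry.k8s.io/pause:3.10") }
```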
Kubernetes Prow Robot
8a5cf7b66f
Merge pull request #129488 from Madhu-1/vs-v1
Update snapshot CRDs to v1 in cluster addons
2025-01-13 08:34:33 -08:00
Lubomir I. Ivanov
a92297f1a7 kubeadm: remove the GA EtcdLearnerMode FG 2025-01-13 16:44:56 +02:00
Kubernetes Prow Robot
728a4d2a48
Merge pull request #129506 from JoelSpeed/fix-status-ratcheting
Fix CRD status subresource ratcheting
2025-01-13 05:06:32 -08:00
Joel Speed
aa1d79c370
Use DeepCopyJSON to copy testcase input
Unstructured types must use int64, so all integers in the test case
must be explicitly cast to int64.
2025-01-13 11:45:54 +00:00
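
For context, a small runnable illustration of the constraint named above:
`runtime.DeepCopyJSON` accepts only JSON-compatible values, so integers inside
unstructured content must be int64 (a plain int makes the deep copy panic).
The field names are placeholders:

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime"
)

func main() {
	// Unstructured content may only hold JSON-compatible types, so
	// integers must be int64; a plain int would make DeepCopyJSON panic.
	obj := map[string]interface{}{
		"spec": map[string]interface{}{
			"replicas": int64(3), // int64, not int
		},
	}
	copied := runtime.DeepCopyJSON(obj)
	fmt.Println(copied["spec"].(map[string]interface{})["replicas"])
}
```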
Joel Speed
a2b12ba406
Simplify schema sentinel subresource logic 2025-01-13 11:28:33 +00:00
Kubernetes Prow Robot
35d6959ace
Merge pull request #129586 from soltysh/expand_error_conditions
e2e: expand error conditions when testing port-forward
2025-01-13 03:28:32 -08:00
Maciej Szulik
f886f3b7f1
e2e: expand error conditions when testing port-forward
Signed-off-by: Maciej Szulik <soltysh@gmail.com>
2025-01-13 11:15:26 +01:00
Kubernetes Prow Robot
5e220e4041
Merge pull request #129582 from aojea/service_nodeport
e2e services: avoid panic on service creation retry
2025-01-13 01:22:33 -08:00
Antonio Ojea
17030f19b6 e2e services: avoid panic on service creation retry
The test was reusing the same variable for the service on each
iteration. The problem is that when the service creation fails, the
variable is cleared out, and the next iteration panics because it
tries to access a field on that same, now-nil variable.
2025-01-13 08:02:45 +00:00
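
A reduced sketch of that bug pattern; the type and helper are stand-ins, not
the e2e framework code. Writing the result of a failed create back into the
loop variable leaves it nil, so the next iteration dereferences nil; keeping
the result in a separate variable avoids that:

```go
package main

import "fmt"

// Service and createService are stand-ins for the e2e framework; the stub
// fails on the first attempt to exercise the retry path.
type Service struct{ Name string }

var attempts int

func createService(s *Service) (*Service, error) {
	attempts++
	if attempts == 1 {
		return nil, fmt.Errorf("provided port is already allocated")
	}
	return &Service{Name: s.Name}, nil
}

func main() {
	svc := &Service{Name: "nodeport-test"}
	for i := 0; i < 3; i++ {
		// Buggy pattern: `svc, err = createService(svc)` overwrites svc
		// with nil on failure, so the next iteration panics on svc.Name.
		// Fixed pattern: keep the result separate until the call succeeds.
		created, err := createService(svc)
		if err != nil {
			fmt.Println("retrying after error:", err)
			continue // svc is still intact for the next attempt
		}
		fmt.Println("created service", created.Name)
		break
	}
}
```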
Kubernetes Prow Robot
36d316ebc5
Merge pull request #124087 from krzysdabro/tests-apiserver-options-kms
apiserver: decrease timeout for TestKMSHealthzEndpoint
2025-01-12 23:22:31 -08:00
Kubernetes Prow Robot
d8093cc403
Merge pull request #129053 from stlaz/e2e_ctb_parallel
e2e: ctb: make it possible to run the tests in parallel
2025-01-11 20:52:31 -08:00
Kubernetes Prow Robot
afc4647816
Merge pull request #129561 from mozillazg/patch-1
kubeadm: fix a wrong comment
2025-01-11 06:38:32 -08:00
Kubernetes Prow Robot
64276979cd
Merge pull request #129570 from aojea/netpolv0.7.0
bump kube-network-policies to v0.7.0
2025-01-11 04:30:31 -08:00
Huang Huang
018ee41e6f kubeadm: fix a wrong comment
apply commit suggestion

Co-authored-by: Lubomir I. Ivanov <neolit123@gmail.com>
2025-01-11 11:28:45 +00:00
Antonio Ojea
729deef454 bump kube-network-policies to v0.7.0 2025-01-11 09:32:20 +00:00
xigang
0e55e47cff Remove unnecessary logging in FilterActivePods
Signed-off-by: xigang <wangxigang2014@gmail.com>
2025-01-11 10:44:00 +08:00
Kubernetes Prow Robot
a38edf3a47
Merge pull request #129444 from adrianmoisey/podautoscaler_deprecations
Remove use of deprecated functions
2025-01-10 10:40:39 -08:00
Kubernetes Prow Robot
0b789d7cca
Merge pull request #129427 from macsko/improve_map_in_interpodaffinity_prefilter
Improve topologyToMatchedTermCount map in InterPodAffinity PreFilter
2025-01-10 10:40:33 -08:00
Kubernetes Prow Robot
cace64ab7e
Merge pull request #129443 from serathius/watchcache-proxy
Watchcache proxy
2025-01-10 09:20:38 -08:00
Kubernetes Prow Robot
20e1944f88
Merge pull request #129439 from serathius/refactor-delegate-2
Refactor shouldDelegateList
2025-01-10 09:20:31 -08:00
Kubernetes Prow Robot
db1da72bee
Merge pull request #129543 from pohly/dra-reserved-for-limit
DRA API: bump maximum size of ReservedFor to 256
2025-01-10 06:52:37 -08:00
Kubernetes Prow Robot
b733c4a620
Merge pull request #128926 from bzsuni/bz/coredns/update/1.12.0
Update coredns to 1.12.0
2025-01-10 06:52:31 -08:00
Marek Siarkowicz
4a4fc9da80 Extract and unify cache bypass logic by creating a CacheProxy struct 2025-01-10 14:15:31 +01:00
Kubernetes Prow Robot
fc7520b32f
Merge pull request #129377 from carlory/deflake-subpath-tests
e2e: deflake subpath tests
2025-01-10 03:26:30 -08:00
Maciej Skoczeń
2d82687114 Improve topologyToMatchedTermCount map in InterPodAffinity PreFilter 2025-01-10 10:55:49 +00:00
carlory
1b7ddfe6bb e2e: deflake subpath tests
Signed-off-by: carlory <baofa.fan@daocloud.io>
2025-01-10 18:11:17 +08:00
Patrick Ohly
7226a3084e DRA e2e: adapt to increased ReservedFor limit
We want to be sure that the maximum number of pods per claim can actually
be scheduled concurrently. Previously the test just made sure that they
ran eventually.

Running 256 pods only works on more than 2 nodes, so network-attached resources
have to be used. This is what the increased limit is meant for anyway. Because
of the tightened validation of node selectors in 1.32, the E2E test has to
use MatchExpressions because they allow listing node names.
2025-01-10 09:49:55 +01:00
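
A hedged sketch of the selector shape that last point refers to (node names
are placeholders): with the tightened validation, listing several node names
goes through a MatchExpressions `In` requirement on the well-known hostname
label:

```go
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	// One NodeSelectorTerm whose MatchExpressions enumerates node names
	// via the hostname label; the values are placeholders.
	selector := v1.NodeSelector{
		NodeSelectorTerms: []v1.NodeSelectorTerm{{
			MatchExpressions: []v1.NodeSelectorRequirement{{
				Key:      "kubernetes.io/hostname",
				Operator: v1.NodeSelectorOpIn,
				Values:   []string{"node-1", "node-2", "node-3"},
			}},
		}},
	}
	fmt.Printf("%+v\n", selector)
}
```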
Kubernetes Prow Robot
e319c541f1
Merge pull request #129298 from omerap12/fix-discovery-controller-panic
apiextensions: replace panic with error handling in DiscoveryController
2025-01-09 09:04:31 -08:00
Kubernetes Prow Robot
732fc190d0
Merge pull request #129522 from carlory/fix-129520
Fix service's nodePort already allocated
2025-01-09 07:48:43 -08:00
Kubernetes Prow Robot
2331c028c2
Merge pull request #129139 from tklauser/client-setconfigdefaults-noerror
Remove always-`nil` `setConfigDefaults` error return value in generated clients
2025-01-09 07:48:32 -08:00
Joel Speed
ba816967a0
Simplify status subresource ratcheting testing 2025-01-09 15:46:31 +00:00
Madhu Rajanna
8d79998058 remove workaround for vgs testing
Remove the extra kubectl apply for the
volume snapshot CRD in vgs tests

Signed-off-by: Madhu Rajanna <madhupr007@gmail.com>
2025-01-09 15:24:44 +01:00
Madhu Rajanna
c6f19d3c2a update snapshot CRDs to v1 in cluster addons
Updating the snapshot CRDs to v1 in the cluster
addons to keep them in sync with the external-snapshotter
2025-01-09 15:24:24 +01:00
Marek Siarkowicz
e5a3bdb3a7 Refactor shouldDelegateList 2025-01-09 15:19:08 +01:00
Patrick Ohly
1cee3682da DRA API: bump maximum size of ReservedFor to 256
The original limit of 32 seemed sufficient for a single GPU on a node. But for
shared non-local resources it is too low. For example, a ResourceClaim might be
used to allocate an interconnect channel that connects all pods of a workload
running on several different nodes, in which case the number of pods can be
considerably larger.

256 is high enough for currently planned systems. If we need something even
higher in the future, an alternative approach might be needed to avoid
scalability problems.

Normally, increasing such a limit would have to be done incrementally over two
releases. In this case we decided on
Slack (https://kubernetes.slack.com/archives/CJUQN3E4T/p1734593174791519) to
make an exception and apply this change to current master for 1.33 and backport
it to the next 1.32.x patch release for production usage.

This breaks downgrades to a 1.32 release without this change if there are
ResourceClaims with a number of consumers > 32 in ReservedFor. In practice,
this breakage is very unlikely because there are no workloads yet which need so
many consumers and such downgrades to a previous patch release are also
unlikely. Downgrades to 1.31 already weren't supported when using DRA v1beta1.
2025-01-09 14:26:01 +01:00
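
To make the limit concrete, a sketch of a claim whose ReservedFor lists one
consumer per pod, now allowed to grow to 256 entries. The API group/version
assumes 1.32's resource.k8s.io/v1beta1, and all names are placeholders:

```go
package main

import (
	"fmt"

	resourceapi "k8s.io/api/resource/v1beta1"
	"k8s.io/apimachinery/pkg/types"
)

func main() {
	var claim resourceapi.ResourceClaim
	// Each consumer (typically a pod) occupies one ReservedFor entry; the
	// commit raises the cap on this list from 32 to 256.
	for i := 0; i < 256; i++ {
		claim.Status.ReservedFor = append(claim.Status.ReservedFor,
			resourceapi.ResourceClaimConsumerReference{
				Resource: "pods",
				Name:     fmt.Sprintf("worker-%d", i),
				UID:      types.UID(fmt.Sprintf("uid-%d", i)),
			})
	}
	fmt.Println("consumers in ReservedFor:", len(claim.Status.ReservedFor))
}
```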
Kubernetes Prow Robot
75531ccc9c
Merge pull request #129540 from serathius/test-list-cache-bypass
Test all possible combinations of input for shouldDelegateList
2025-01-09 05:20:45 -08:00
Kubernetes Prow Robot
f34d791b13
Merge pull request #125901 from jralmaraz/kubelet_prober
Report an event for cases where the probe returned an Unknown result
2025-01-09 05:20:33 -08:00