Commit Graph

893 Commits

Author SHA1 Message Date
vinay kulkarni
f66e8848ee Fix pod object update that may cause data race 2023-03-17 08:50:52 +00:00
Michal Wozniak
3d68f362c3 Give terminal phase correctly to all pods that will not be restarted 2023-03-16 21:25:29 +01:00
Kubernetes Prow Robot
c8f001d798
Merge pull request #114504 from vrutkovs/tracing-kubelet-toplevel
kubelet: create top-level traces for pod sync and GC
2023-03-14 03:12:16 -07:00
Kubernetes Prow Robot
3106a5c553
Merge pull request #116301 from andyzhangx/remove-azuredisk-code
Remove Azure disk in-tree storage plugin
2023-03-13 10:38:48 -07:00
Vadim Rutkovsky
556d774945 kubelet: create top-level traces for pod sync and GC
This starts a new top-level OpenTelemetry span each time syncPod or image/container GC is invoked
2023-03-11 10:42:14 +01:00
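
A minimal sketch of the pattern, not the actual kubelet change, assuming the
standard go.opentelemetry.io/otel API (the tracer, span, and attribute names
here are illustrative):

    package tracing

    import (
        "context"

        "go.opentelemetry.io/otel"
        "go.opentelemetry.io/otel/attribute"
        "go.opentelemetry.io/otel/trace"
    )

    // syncPodTraced runs one sync step inside its own top-level span, so
    // each invocation of syncPod shows up as a separate trace.
    func syncPodTraced(ctx context.Context, podUID string, sync func(context.Context) error) error {
        ctx, span := otel.Tracer("kubelet").Start(ctx, "syncPod",
            trace.WithAttributes(attribute.String("k8s.pod.uid", podUID)))
        defer span.End()
        return sync(ctx)
    }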
vinay kulkarni
01b96e7704 Rename ContainerStatus.ResourcesAllocated to ContainerStatus.AllocatedResources 2023-03-10 14:49:26 +00:00
Kubernetes Prow Robot
e57d968323
Merge pull request #116015 from SataQiu/clean-kubelet-20230223
kubelet: remove the deprecated --master-service-namespace flag
2023-03-09 22:43:34 -08:00
Kubernetes Prow Robot
45b96eae98
Merge pull request #113145 from smarterclayton/zombie_terminating_pods
kubelet: Force deleted pods can fail to move out of terminating
2023-03-09 15:32:30 -08:00
andyzhangx
5d0a54dcb5 remove Azure Disk in-tree driver code
fix
2023-03-09 13:24:08 +00:00
Clayton Coleman
6b9a381185
kubelet: Force deleted pods can fail to move out of terminating
If a CRI error occurs during the terminating phase after a pod is
force deleted (API or static), then the housekeeping loop will not
deliver updates to the pod worker, which prevents the pod's state
machine from progressing. The pod will remain in the terminating
phase, but no further attempts to terminate or clean up will occur
until the kubelet is restarted.

The pod worker now maintains a store of the pod state that it is
attempting to reconcile, and uses that to resync unknown pods when
SyncKnownPods() is invoked, so that failures in sync methods for
unknown pods no longer hang forever.

The pod worker's store tracks desired updates and the last update
applied on podSyncStatuses. Each goroutine now synchronizes to
acquire the next work item, context, and whether the pod can start.
This synchronization moves the pending update to the stored last
update, which will ensure third parties accessing pod worker state
don't see updates before the pod worker begins synchronizing them.

As a consequence, the update channel becomes a simple notifier
(struct{}) so that SyncKnownPods can coordinate with the pod worker
to create a synthetic pending update for unknown pods (i.e. no one
besides the pod worker has data about those pods). Otherwise the
pending update info would be hidden inside the channel.
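
A minimal sketch of that pattern, with stand-in types rather than the
kubelet's real podSyncStatuses: the channel only wakes the worker goroutine,
while the update itself lives behind a mutex:

    package podworker

    import "sync"

    // update is a stand-in for the kubelet's pod update options.
    type update struct{ kill bool }

    // podSyncStatus holds the desired update and the last update the
    // worker actually began synchronizing.
    type podSyncStatus struct {
        mu            sync.Mutex
        pendingUpdate *update
        activeUpdate  *update
    }

    type worker struct {
        status podSyncStatus
        notify chan struct{} // pure notifier; buffer of 1 queues one wakeup
    }

    // UpdatePod records the desired update, then pokes the goroutine.
    func (w *worker) UpdatePod(u update) {
        w.status.mu.Lock()
        w.status.pendingUpdate = &u
        w.status.mu.Unlock()
        select {
        case w.notify <- struct{}{}:
        default: // a wakeup is already queued
        }
    }

    // next moves pending to active under the lock, so observers never
    // see an update before the worker has begun synchronizing it.
    func (w *worker) next() *update {
        <-w.notify
        w.status.mu.Lock()
        defer w.status.mu.Unlock()
        u := w.status.pendingUpdate
        w.status.pendingUpdate = nil
        w.status.activeUpdate = u
        return u
    }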

In order to properly track pending updates, we have to be very
careful not to mix RunningPods (which are calculated from the
container runtime and are missing all spec info) and config-
sourced pods. Update the pod worker to avoid using ToAPIPod()
and instead require the pod worker to directly use
update.Options.Pod or update.Options.RunningPod for the
correct methods. Add a new SyncTerminatingRuntimePod to prevent
sync methods from being accidentally invoked with runtime-only
pod data.

Finally, fix SyncKnownPods to replay the last valid update for
undesired pods which drives the pod state machine towards
termination, and alter HandlePodCleanups to:

- terminate runtime pods that aren't known to the pod worker
- launch admitted pods that aren't known to the pod worker

Any started pods receive a replay until they reach the finished
state, and then are removed from the pod worker. When a desired
pod is detected as not being in the worker, the usual cause is
that the pod was deleted and recreated with the same UID (almost
always a static pod since API UID reuse is statistically
unlikely). This simplifies the previous restartable pod support.
We are careful to filter for active pods (excluding those that are
already terminal or were previously rejected by admission). We also
force a refresh of the runtime cache to ensure we don't see an older
version of the state.

Future changes will allow other components that need to view the
pod worker's actual state (not the desired state the podManager
represents) to retrieve that info from the pod worker.

Several bugs in pod lifecycle have been undetectable at runtime
because the kubelet does not clearly describe the number of pods
in use. To better report, add the following metrics:

  kubelet_desired_pods: Pods the pod manager sees
  kubelet_active_pods: "Admitted" pods that gate new pods
  kubelet_mirror_pods: Mirror pods the kubelet is tracking
  kubelet_working_pods: Breakdown of pods from the last sync in
    each phase, orphaned state, and static or not
  kubelet_restarted_pods_total: A counter for pods that saw a
    CREATE before the previous pod with the same UID was finished
  kubelet_orphaned_runtime_pods_total: A counter for pods detected
    at runtime that were not known to the kubelet. Will be
    populated at Kubelet startup and should never be incremented
    after.
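
A hedged sketch of registering two of these with the Prometheus Go client
(the kubelet itself uses k8s.io/component-base/metrics, and the real metrics
carry extra labels; only the metric names come from the list above):

    package metrics

    import "github.com/prometheus/client_golang/prometheus"

    var (
        // kubelet_desired_pods: pods the pod manager sees.
        desiredPods = prometheus.NewGauge(prometheus.GaugeOpts{
            Namespace: "kubelet",
            Name:      "desired_pods",
            Help:      "The number of pods the pod manager sees.",
        })
        // kubelet_restarted_pods_total: a CREATE seen before the
        // previous pod with the same UID finished.
        restartedPodsTotal = prometheus.NewCounter(prometheus.CounterOpts{
            Namespace: "kubelet",
            Name:      "restarted_pods_total",
            Help:      "Pods created before the previous pod with the same UID finished.",
        })
    )

    func init() {
        prometheus.MustRegister(desiredPods, restartedPodsTotal)
    }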

Add a metric check to our e2e tests that verifies the values are
captured correctly during a serial test, and then verify them in
detail in unit tests.

Adds 23 series to the kubelet /metrics endpoint.
2023-03-08 22:03:51 -06:00
torredil
6aebda9b1e Remove AWS legacy cloud provider + EBS in-tree storage plugin
Signed-off-by: torredil <torredil@amazon.com>
2023-03-06 14:01:15 +00:00
SataQiu
91089ce65b kubelet: remove the deprecated --master-service-namespace flag 2023-03-01 18:44:59 +08:00
Vinay Kulkarni
f2bd94a0de In-place Pod Vertical Scaling - core implementation
1. Core Kubelet changes to implement In-place Pod Vertical Scaling.
2. E2E tests for In-place Pod Vertical Scaling.
3. Refactor kubelet code and add missing tests (Derek's kubelet review)
4. Add a new hash over container fields without the Resources field, to allow feature gate toggling without restarting containers that don't use the feature (see the sketch after this entry).
5. Fix corner-case where resize A->B->A gets ignored
6. Add cgroup v2 support to pod resize E2E test.
KEP: /enhancements/keps/sig-node/1287-in-place-update-pod-resources

Co-authored-by: Chen Wang <Chen.Wang1@ibm.com>
2023-02-24 18:21:21 +00:00
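
A minimal sketch of item 4's idea, not the kubelet's real container-hash
helper, assuming v1.Container from k8s.io/api/core/v1:

    package kuberuntime

    import (
        "encoding/json"
        "hash/fnv"

        v1 "k8s.io/api/core/v1"
    )

    // hashContainerWithoutResources hashes every container field except
    // Resources, so toggling the resize feature gate does not change the
    // hash and therefore does not restart unrelated containers.
    func hashContainerWithoutResources(c *v1.Container) (uint64, error) {
        clone := c.DeepCopy()
        clone.Resources = v1.ResourceRequirements{} // excluded from the hash
        data, err := json.Marshal(clone)            // deterministic field order
        if err != nil {
            return 0, err
        }
        h := fnv.New64a()
        h.Write(data)
        return h.Sum64(), nil
    }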
arrowfeng
6a57404e28 kubelet: cleanup secretManager and configManager in podManager
Signed-off-by: arrowfeng <289716347@qq.com>
2022-11-14 23:05:32 +08:00
Kubernetes Prow Robot
70263d55b2
Merge pull request #113501 from pacoxu/fix-startReflector
kubelet: fix nil pointer in startReflector for standalone mode
2022-11-09 03:50:12 -08:00
Harshal Patil
86284d42f8
Add support for Evented PLEG
Signed-off-by: Harshal Patil <harpatil@redhat.com>
Co-authored-by: Swarup Ghosh <swghosh@redhat.com>
2022-11-08 20:06:16 +05:30
David Ashpole
64af1adace
Second attempt: Plumb context to Kubelet CRI calls (#113591)
* plumb context from CRI calls through kubelet

* clean up extra timeouts

* try fixing incorrectly cancelled context
2022-11-05 06:02:13 -07:00
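
A minimal sketch of the plumbing, with a hypothetical stand-in for the CRI
client rather than the real runtime service interface:

    package gc

    import (
        "context"
        "fmt"
        "time"
    )

    // RuntimeService is a stand-in for a context-aware CRI client.
    type RuntimeService interface {
        ListContainers(ctx context.Context) ([]string, error)
    }

    // gcContainers passes the caller's context down to the runtime call,
    // so cancellation and deadlines propagate instead of being dropped.
    func gcContainers(ctx context.Context, rs RuntimeService) error {
        ctx, cancel := context.WithTimeout(ctx, 2*time.Minute) // bound the RPC
        defer cancel()
        containers, err := rs.ListContainers(ctx)
        if err != nil {
            return fmt.Errorf("listing containers: %w", err)
        }
        fmt.Printf("considering %d containers for GC\n", len(containers))
        return nil
    }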
Paco Xu
89e4836dde add ut for kubelet standalone mode 2022-11-04 18:17:51 +08:00
Antonio Ojea
9c2b333925 Revert "plumb context from CRI calls through kubelet"
This reverts commit f43b4f1b95.
2022-11-02 13:37:23 +00:00
David Ashpole
f43b4f1b95
plumb context from CRI calls through kubelet 2022-10-28 02:55:28 +00:00
Artur Żyliński
9f31669a53 New histogram: Pod start SLI duration 2022-10-26 11:28:17 +02:00
Kubernetes Prow Robot
127f33f63d
Merge pull request #111221 from inosato/remove-ioutil-from-kubelet
Remove ioutil in kubelet/kubeadm and its tests
2022-09-17 21:56:28 -07:00
Artur Żyliński
15566d3d89 Cleanup: Remove unused lastContainerStartedTime time.Cache lru 2022-08-19 14:57:29 +02:00
jinxu
0064010cdd Promote Local storage capacity isolation feature to GA
This change is to promote local storage capacity isolation feature to GA

At the same time, to allow rootless systems to disable this feature
(since they are unable to get the root fs), this change introduces a
new kubelet config "localStorageCapacityIsolation". By default it is
set to true. Rootless systems can set this configuration to false to
disable the feature. Once it is disabled, users cannot set
ephemeral-storage requests/limits because capacity and allocatable
will not be set.

Change-Id: I48a52e737c6a09e9131454db6ad31247b56c000a
2022-08-02 23:45:48 -07:00
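
A hedged sketch of that gate; the field name comes from the commit,
everything else is illustrative:

    package stats

    // kubeletConfig is a stand-in for the relevant slice of the kubelet
    // configuration. LocalStorageCapacityIsolation defaults to true.
    type kubeletConfig struct {
        LocalStorageCapacityIsolation bool
    }

    // ephemeralStorageCapacity reports root fs capacity only when the
    // feature is enabled; otherwise capacity and allocatable stay unset,
    // so ephemeral-storage requests/limits cannot be used.
    func ephemeralStorageCapacity(cfg kubeletConfig, statRootfs func() (int64, error)) (int64, bool) {
        if !cfg.LocalStorageCapacityIsolation {
            return 0, false // rootless: the root fs may be unreadable
        }
        capacity, err := statRootfs()
        if err != nil {
            return 0, false
        }
        return capacity, true
    }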
inosato
3b95d3b076 Remove ioutil in kubelet and its tests
Signed-off-by: inosato <si17_21@yahoo.co.jp>
2022-07-30 12:35:26 +09:00
Adrian Reber
fc37a7a990
kubelet: wire checkpoint container support through
This adds the last pieces to wire through the container checkpoint
support in the kubelet.

Signed-off-by: Adrian Reber <areber@redhat.com>
2022-07-14 10:27:41 +00:00
Kubernetes Prow Robot
0dc32b10fe
Merge pull request #110774 from kinvolk/rata/kubelet-short-tests
pkg/kubelet: skip long test on short mode
2022-07-07 20:36:05 -07:00
Rodrigo Campos
466c4d24a9 pkg/kubelet: skip long test on short mode
When adding functionality to the kubelet package and a test file, it
is painful to run unit tests locally today.

We usually can't run just the test file: if xx_test.go and xx.go use
the same package, we need to specify all the dependencies. As soon as
xx.go uses the Kubelet type (we need that to fake a kubelet in the
unit tests), this is completely impossible to do in practice.

So the other options are to run the unit tests for the whole package
or to run only a specific function. Running a single function can work
in some cases, but it is painful when we want to test all the
functions we wrote. On the other hand, running the tests for the whole
package is very slow.

Today some unit tests try to connect to the API server (with
retries), and create and list lots of pods/volumes, etc. This makes
running the unit tests for the kubelet package slow.

This patch tries to make running the unit tests for the whole package
more palatable. It adds a skip when short mode is requested
(go test -short ...), so we don't try to connect to the API server,
and skips other slow tests as well.
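
The mechanism is the standard testing.Short() guard; a minimal sketch
(the test name and body are illustrative):

    package kubelet

    import "testing"

    func TestSyncPodAgainstAPIServer(t *testing.T) {
        if testing.Short() {
            // `go test -short ./pkg/kubelet/...` skips this test
            t.Skip("skipping test that connects to the API server")
        }
        // ... slow test body: connect to the API server, create and
        // list many pods/volumes, etc. ...
    }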

Before this patch, running the unit tests took the following on my
computer (I've run it several times, so compilation is already done):

	$ time go test -v
	real	0m21.303s
	user	0m9.033s
	sys	0m2.052s

With this patch it takes ~1/3 of the time:

	$ time go test -short -v
	real	0m7.825s
	user	0m9.588s
	sys	0m1.723s

Around 8 seconds is a wait I can live with when running the tests :)

Signed-off-by: Rodrigo Campos <rodrigoca@microsoft.com>
2022-06-24 18:00:21 +02:00
Patrick Ohly
65385fec20 kubelet: convert node shutdown manager to contextual logging
This will make output checking easier (done in a separate commit). kubelet
itself still uses the global logger.
2022-06-24 11:20:34 +02:00
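
A minimal sketch of the conversion using klog/v2's contextual helpers (the
function is illustrative, not the shutdown manager's real code):

    package nodeshutdown

    import (
        "context"

        "k8s.io/klog/v2"
    )

    // start takes its logger from the context instead of the global klog
    // state, so tests can inject a logger and assert on the output.
    func start(ctx context.Context) {
        logger := klog.FromContext(ctx)
        logger.Info("Shutdown manager starting")
    }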
Davanum Srinivas
50bea1dad8
Move from k8s.gcr.io to registry.k8s.io
Signed-off-by: Davanum Srinivas <davanum@gmail.com>
2022-05-31 10:16:53 -04:00
AllenZMC
bedd0839a1 Optimize test cases for kubelet 2022-05-05 23:07:09 +08:00
Clayton Coleman
69a3820214
kubelet: Delay writing a terminal phase until the pod is terminated
Other components must know when the Kubelet has released critical
resources for terminal pods. Do not set the phase in the apiserver
to terminal until all containers are stopped and cannot restart.

As a consequence of this change, the Kubelet must explicitly transition
a terminal pod to the terminating state in the pod worker which is
handled by returning a new isTerminal boolean from syncPod.

Finally, if a pod with init containers hasn't been initialized yet,
don't default container statuses or not yet attempted init containers
to the unknown failure state.
2022-03-16 13:15:00 -04:00
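
A hedged sketch of the shape of that change (all names illustrative):
syncPod now reports whether the pod became terminal, and the pod worker uses
that to begin terminating:

    package podworkers

    // syncPodFn is a stand-in for syncPod's signature after this change:
    // it reports whether the pod reached a terminal phase.
    type syncPodFn func(podUID string) (isTerminal bool, err error)

    type worker struct {
        terminating map[string]bool
    }

    // managePod explicitly transitions a terminal pod into terminating
    // instead of leaving it in the ordinary sync loop.
    func (w *worker) managePod(podUID string, syncPod syncPodFn) error {
        isTerminal, err := syncPod(podUID)
        if err != nil {
            return err // retried on the next sync
        }
        if isTerminal {
            // all containers are stopped and cannot restart; only now is
            // it safe to write the terminal phase to the apiserver
            w.terminating[podUID] = true
        }
        return nil
    }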
Kubernetes Prow Robot
06e107081e
Merge pull request #104732 from mengjiao-liu/remove-flag-experimental-check-node-capabilities-before-mount
kubelet: Remove the deprecated flag `--experimental-check-node-capabilities-before-mount`
2022-02-24 07:56:30 -08:00
Ciprian Hacman
0819451ea6 Clean up logic for deprecated flag --container-runtime in kubelet
Signed-off-by: Ciprian Hacman <ciprian@hakman.dev>
2022-02-10 13:26:59 +02:00
Ciprian Hacman
21809043b5 Remove deprecated flag --non-masquerade-cidr in kubelet
Signed-off-by: Ciprian Hacman <ciprian@hakman.dev>
2022-01-19 09:17:26 +02:00
Mengjiao Liu
beda4cafb6 kubelet: Remove the deprecated flag --experimental-check-node-capabilities-before-mount 2022-01-06 11:47:11 +08:00
Kubernetes Prow Robot
f0dbc32ed9
Merge pull request #106853 from gnufied/disable-exp-backoff-volume-not-inuse
When volume is not marked in-use, do not backoff
2021-12-22 19:46:37 -08:00
Hemant Kumar
7989f27044 use node informer to check volumes attachment status before backoff
fix unit tests
2021-12-20 11:57:05 -05:00
Sergey Kanzhelev
a11453efbc remove ReallyCrashForTesting and cleaned up some references to HandleCrash behavior 2021-11-29 20:00:10 +00:00
haoyun
65ac99eef5 fix: npe in kubelet test
Signed-off-by: haoyun <yun.hao@daocloud.io>
Co-authored-by: Antonio Ojea <antonio.ojea.garcia@gmail.com>
2021-11-19 17:44:05 +08:00
ravisantoshgudimetla
696abecada [test][kubelet]: Fix out of bounds in TestSyncLabels unit 2021-11-10 16:53:59 -05:00
ravisantoshgudimetla
02c1bac0b6 [kubelet]: Sync label periodically 2021-11-05 18:47:43 -04:00
Shiming Zhang
b468c24e85 Refactor to use structure to pass parameters 2021-10-15 11:16:21 +08:00
Ryan Phillips
e2e938066d kubelet: add probe termination to graceful shutdowns 2021-09-22 14:13:25 -05:00
Kubernetes Prow Robot
047a6b9f86
Merge pull request #104874 from wojtek-t/migrate_clock_1
Unify towards k8s.io/utils/clock - part 1
2021-09-13 19:09:20 -07:00
wojtekt
53ce79a18a Migrate to k8s.io/utils/clock in pkg/kubelet 2021-09-10 12:20:09 +02:00
Clayton Coleman
17d32ed0b8
kubelet: Rejected pods should be filtered from admission
A pod that has been rejected by admission will have status manager
set the phase to Failed locally, which make take some time to
propagate to the apiserver. The rejected pod will be included in
admission until the apiserver propagates the change back, which
was an unintended regression when checking pod worker state as
authoritative.

A pod that is terminal in the API may still be consuming resources
on the system, so it should still be included in admission.
2021-09-08 10:23:45 -04:00
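
A minimal sketch of the resulting rule (stand-in types, not the kubelet's
actual state): locally rejected pods are filtered out of admission, while
API-terminal pods keep gating it:

    package kubelet

    // podRecord is a stand-in for the state consulted during admission.
    type podRecord struct {
        rejectedByAdmission bool // phase set to Failed locally only
        terminalInAPI       bool // apiserver already saw a terminal phase
    }

    // includeInAdmission reports whether a pod should count against the
    // node when admitting new pods.
    func includeInAdmission(p podRecord) bool {
        if p.rejectedByAdmission {
            return false // never started; waiting for the API is wrong
        }
        // Even a pod that is terminal in the API may still be consuming
        // resources on the node, so it keeps gating admission.
        return true
    }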
Kubernetes Prow Robot
8dbc33d649
Merge pull request #101081 from rphillips/add_graceful_shutdown_event
kubelet: add graceful shutdown events
2021-08-17 22:08:08 -07:00
Clayton Coleman
3eadd1a9ea
Keep pod worker running until pod is truly complete
A number of race conditions exist when pods are terminated early in
their lifecycle because components in the kubelet need to know "no
running containers" or "containers can't be started from now on" but
were relying on outdated state.

Only the pod worker knows whether containers are being started for
a given pod, which is required to know when a pod is "terminated"
(no running containers, none coming). Move that responsibility and
podKiller function into the pod workers, and have everything that
was killing the pod go into the UpdatePod loop. Split syncPod into
three phases - setup, terminate containers, and cleanup pod - and
have transitions between those methods be visible to other
components. After this change, to kill a pod you tell the pod worker
to UpdatePod({UpdateType: SyncPodKill, Pod: pod}).

Several places in the kubelet were incorrect about whether they
were handling terminating (should stop running, might have
containers) or terminated (no running containers) pods. The pod worker
exposes methods that allow other loops to know when to set up or tear
down resources based on the state of the pod - these methods remove
the possibility of race conditions by ensuring a single component is
responsible for knowing each pod's allowed state and other components
simply delegate to checking whether they are in the window by UID.

Removing containers no longer blocks final pod deletion in the
API server and is handled as background cleanup. Node shutdown
no longer marks pods as failed as they can be restarted in the
next step.

See https://docs.google.com/document/d/1Pic5TPntdJnYfIpBeZndDelM-AbS4FN9H2GTLFhoJ04/edit# for details
2021-07-06 15:55:22 -04:00
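
A hedged sketch of the single entry point described above; the type and
constant names mirror the commit text, but the definitions are stand-ins:

    package podworkers

    import v1 "k8s.io/api/core/v1"

    // SyncPodType is a stand-in for the kubelet's pod update types.
    type SyncPodType int

    const (
        SyncPodSync SyncPodType = iota
        SyncPodKill
    )

    // UpdatePodOptions mirrors the options passed to UpdatePod.
    type UpdatePodOptions struct {
        UpdateType SyncPodType
        Pod        *v1.Pod
    }

    // PodWorkers is the single component allowed to drive pod lifecycle.
    type PodWorkers interface {
        UpdatePod(options UpdatePodOptions)
    }

    // killPod asks the pod worker to terminate a pod; every former
    // podKiller call site now funnels through this one loop.
    func killPod(w PodWorkers, pod *v1.Pod) {
        w.UpdatePod(UpdatePodOptions{UpdateType: SyncPodKill, Pod: pod})
    }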
Ryan Phillips
d9be5abc37 kubelet: add shutdown events 2021-06-23 16:44:19 -05:00