From 52b4e8721be3b06f32df5d4c1aa98e5e1266071a Mon Sep 17 00:00:00 2001
From: derekwaynecarr
Date: Tue, 16 Aug 2016 12:43:09 -0400
Subject: [PATCH] Document known issue for kubelet inode exhaustion

---
 docs/proposals/kubelet-eviction.md | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/docs/proposals/kubelet-eviction.md b/docs/proposals/kubelet-eviction.md
index 4ac85765778..02c921ad050 100644
--- a/docs/proposals/kubelet-eviction.md
+++ b/docs/proposals/kubelet-eviction.md
@@ -478,9 +478,19 @@ for eviction. Instead `DaemonSet` should ideally include Guaranteed pods only.
 
 ## Known issues
 
+### kubelet may evict more pods than needed
+
 The pod eviction may evict more pods than needed due to stats collection timing gap. This can be mitigated by adding
 the ability to get root container stats on an on-demand basis (https://github.com/google/cadvisor/issues/1247) in the future.
 
+### How kubelet ranks pods for eviction in response to inode exhaustion
+
+At this time, it is not possible to know how many inodes were consumed by a particular container. If the `kubelet` observes
+inode exhaustion, it will evict pods by ranking them by quality of service. The following issue has been opened in cadvisor
+to track per-container inode consumption (https://github.com/google/cadvisor/issues/1422), which would allow us to rank pods
+by inode consumption. For example, this would let us identify a container that created large numbers of zero-byte files, and evict
+that pod over others.
+
 [![Analytics](https://kubernetes-site.appspot.com/UA-36037335-10/GitHub/docs/proposals/kubelet-eviction.md?pixel)]()
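
For illustration, the sketch below shows what a QoS-only eviction ranking like the one the patch describes might look like in Go. The `QOSClass`, `Pod`, and `rankForEviction` names are hypothetical simplifications invented for this example; they are not the kubelet's actual types or eviction code.

```go
// Illustrative sketch only: with no per-container inode stats available,
// candidate pods for eviction under inode exhaustion can only be ordered
// by quality-of-service class. All types and helpers here are hypothetical
// stand-ins, not the kubelet's real eviction implementation.
package main

import (
	"fmt"
	"sort"
)

// QOSClass models the three Kubernetes quality-of-service classes,
// ordered so that lower values are evicted first.
type QOSClass int

const (
	BestEffort QOSClass = iota // evicted first
	Burstable
	Guaranteed // evicted last
)

func (q QOSClass) String() string {
	return [...]string{"BestEffort", "Burstable", "Guaranteed"}[q]
}

// Pod is a minimal stand-in for the pods the kubelet tracks on a node.
type Pod struct {
	Name string
	QOS  QOSClass
}

// rankForEviction orders pods so that lower-QoS pods come first. Because
// per-container inode usage is unknown, QoS is the only ranking signal.
func rankForEviction(pods []Pod) []Pod {
	ranked := make([]Pod, len(pods))
	copy(ranked, pods)
	sort.SliceStable(ranked, func(i, j int) bool {
		return ranked[i].QOS < ranked[j].QOS
	})
	return ranked
}

func main() {
	pods := []Pod{
		{Name: "db", QOS: Guaranteed},
		{Name: "batch-job", QOS: BestEffort},
		{Name: "web", QOS: Burstable},
	}
	for i, p := range rankForEviction(pods) {
		fmt.Printf("eviction candidate %d: %s (%s)\n", i+1, p.Name, p.QOS)
	}
}
```

Because this ranking sees only the QoS class, a BestEffort pod that wrote very little can still be chosen ahead of a Burstable pod that actually exhausted the inodes; per-container inode accounting (the cadvisor issue referenced in the patch) is what would allow ranking by real consumption instead.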