mirror of https://github.com/kata-containers/kata-containers.git
synced 2025-08-09 11:58:16 +00:00
kata-monitor: increase delay before syncing with the container manager
kata-monitor: increase delay before syncing with the container manager

When we detect a new kata sandbox from the sbs fs, we add it to the sandbox
cache in order to retrieve metrics. We also schedule a sync with the container
manager, which we consider the source of truth: if the kata pod is not yet
ready, the container manager will not report it and we will drop it from our
cache. We will add it back only when we re-sync, i.e., when we get an event
from the sbs fs (which means a kata pod has been terminated or a new one has
been started).

Since we use the sync with the container manager to remove pods from the
cache, we can afford to wait longer before syncing, and so reduce the chance
of missing a kata pod just because it was not ready yet. Let's raise the
waiting time before starting the sync timer.

Fixes: #3550

Signed-off-by: Francesco Giudici <fgiudici@redhat.com>
parent cf5a79cfe1
commit 786c667e60
@@ -24,7 +24,7 @@ const (
 	RuntimeContainerd = "containerd"
 	RuntimeCRIO = "cri-o"
 	fsMonitorRetryDelaySeconds = 60
-	podCacheRefreshDelaySeconds = 5
+	podCacheRefreshDelaySeconds = 60
 )
 
 // SetLogger sets the logger for katamonitor package.
@@ -85,7 +85,7 @@ func (km *KataMonitor) startPodCacheUpdater() {
 			break
 		}
 		// we refresh the pod cache once if we get multiple add/delete pod events in a short time (< podCacheRefreshDelaySeconds)
-		cacheUpdateTimer := time.NewTimer(podCacheRefreshDelaySeconds * time.Second)
+		cacheUpdateTimer := time.NewTimer(5 * time.Second)
 		cacheUpdateTimerWasSet := false
 		for {
 			select {