Merge pull request #111879 from sanwishe/etcd_compact_doc

fix etcd compaction doc
Authored by Kubernetes Prow Robot on 2022-11-08 13:51:12 -08:00; committed by GitHub.
commit 886da71e75


@@ -40,7 +40,7 @@ func init() {
 }
 
 // StartCompactor starts a compactor in the background to compact old version of keys that's not needed.
-// By default, we save the most recent 10 minutes data and compact versions > 10minutes ago.
+// By default, we save the most recent 5 minutes data and compact versions > 5minutes ago.
 // It should be enough for slow watchers and to tolerate burst.
 // TODO: We might keep a longer history (12h) in the future once storage API can take advantage of past version of keys.
 func StartCompactor(ctx context.Context, client *clientv3.Client, compactInterval time.Duration) {
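
For orientation, this hunk only touches the doc comment; the function signature it documents is unchanged. A minimal sketch of how a caller might wire the compactor up against an etcd client is shown below (the import path for the package containing StartCompactor and the etcd endpoint are assumptions for illustration, not taken from this diff):

package main

import (
	"context"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"

	// Assumed import path; StartCompactor lives in the apiserver's etcd3 storage package.
	"k8s.io/apiserver/pkg/storage/etcd3"
)

func main() {
	// The endpoint is illustrative; point it at a real etcd member.
	client, err := clientv3.New(clientv3.Config{Endpoints: []string{"127.0.0.1:2379"}})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Run the background compactor with the documented default of 5 minutes.
	etcd3.StartCompactor(context.Background(), client, 5*time.Minute)

	select {} // block forever so the compactor keeps ticking
}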
@@ -84,7 +84,7 @@ func compactor(ctx context.Context, client *clientv3.Client, interval time.Duration) {
 // Technical details/insights:
 //
 // The protocol here is lease based. If one compactor CAS successfully, the others would know it when they fail in
-// CAS later and would try again in 10 minutes. If an APIServer crashed, another one would "take over" the lease.
+// CAS later and would try again in 5 minutes. If an APIServer crashed, another one would "take over" the lease.
 //
 // For example, in the following diagram, we have a compactor C1 doing compaction in t1, t2. Another compactor C2
 // at t1' (t1 < t1' < t2) would CAS fail, set its known oldRev to rev at t1', and try again in t2' (t2' > t2).
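
The "CAS" in this comment is an etcd transaction on a shared coordination key, not a CPU compare-and-swap. The following is a rough, hypothetical sketch of that idea (the key name, helper name, and error handling are illustrative; this is not the actual Kubernetes implementation):

package compactsketch

import (
	"context"
	"strconv"

	clientv3 "go.etcd.io/etcd/client/v3"
)

// casCompact sketches the protocol described above: every apiserver calls it on its
// interval tick with the key version it last observed, and only the caller whose
// expectation still holds performs the compaction for this round.
func casCompact(ctx context.Context, client *clientv3.Client, expectVersion, compactRev int64) (int64, error) {
	const key = "compact_rev_key" // assumed coordination key name

	resp, err := client.Txn(ctx).If(
		// The CAS: succeed only if nobody has bumped the key since we last looked.
		clientv3.Compare(clientv3.Version(key), "=", expectVersion),
	).Then(
		clientv3.OpPut(key, strconv.FormatInt(compactRev, 10)), // claim this round
	).Else(
		clientv3.OpGet(key), // lost the race: learn the current version for next time
	).Commit()
	if err != nil {
		return expectVersion, err
	}

	if !resp.Succeeded {
		// Another apiserver already compacted in this window; resync and wait for the next tick.
		if kvs := resp.Responses[0].GetResponseRange().Kvs; len(kvs) > 0 {
			return kvs[0].Version, nil
		}
		return expectVersion, nil
	}

	// We won the CAS, so this apiserver performs the actual compaction.
	if _, err := client.Compact(ctx, compactRev); err != nil {
		return expectVersion + 1, err
	}
	return expectVersion + 1, nil
}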
@@ -100,14 +100,14 @@ func compactor(ctx context.Context, client *clientv3.Client, interval time.Duration) {
 // t0   t1   t2
 //
 // We have the guarantees:
-// - in normal cases, the interval is 10 minutes.
-// - in failover, the interval is >10m and <20m
+// - in normal cases, the interval is 5 minutes.
+// - in failover, the interval is >5m and <10m
 //
 // FAQ:
 // - What if time is not accurate? We don't care as long as someone did the compaction. Atomicity is ensured using
 //   etcd API.
 // - What happened under heavy load scenarios? Initially, each apiserver will do only one compaction
-//   every 10 minutes. This is very unlikely affecting or affected w.r.t. server load.
+//   every 5 minutes. This is very unlikely affecting or affected w.r.t. server load.
 
 var compactTime int64
 var rev int64
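
The interval bounds in the guarantees above fall out of the ticker structure: every apiserver attempts the CAS once per interval, only the winner compacts, and if the winner disappears the next successful attempt by some other apiserver lands within one further interval. Continuing the hypothetical sketch above (same package, plus the "time" import), the driving loop might look like:

// runCompactor sketches the driving loop: one CAS attempt per interval per apiserver.
// In steady state the same winner compacts every interval; after a failover the next
// winner's tick lands within the following interval, giving the ">5m and <10m" bound
// for the default 5 minute interval.
func runCompactor(ctx context.Context, client *clientv3.Client, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()

	var version int64 // last observed version of the coordination key
	var prevRev int64 // cluster revision observed at the previous tick
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
		}

		// A cheap read on the coordination key returns the cluster's current revision.
		resp, err := client.Get(ctx, "compact_rev_key")
		if err != nil {
			continue
		}
		curRev := resp.Header.Revision

		if prevRev > 0 {
			// Compact versions older than one interval, keeping the most recent interval's data.
			if v, err := casCompact(ctx, client, version, prevRev); err == nil {
				version = v
			}
		}
		prevRev = curRev
	}
}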