Merge pull request #77991 from mikedanese/ledoc

cleanup some leader election client doc

Kubernetes-commit: 69c90d8cca62d4dbb1acb966608d5100bada5500
Kubernetes Publisher 2019-05-20 17:12:39 -07:00
commit ba257ce558


@@ -19,12 +19,13 @@ limitations under the License.
// election state. This implementation does not guarantee that only one
// client is acting as a leader (a.k.a. fencing).
//
-// A client observes timestamps captured locally to infer the state of the
-// leader election. Thus the implementation is tolerant to arbitrary clock
-// skew, but is not tolerant to arbitrary clock skew rate. Timestamp(renew time)
-// is not meaningful if it was collected on another machine. The implementation
-// of this client only acts on locally collected timestamps and cannot rely on
-// the accuracy of timestamp in the record for correctness.
+// A client only acts on timestamps captured locally to infer the state of the
+// leader election. The client does not consider timestamps in the leader
+// election record to be accurate because these timestamps may not have been
+// produced by a local clock. The implementation does not depend on their
+// accuracy and only uses their change to indicate that another client has
+// renewed the leader lease. Thus the implementation is tolerant to arbitrary
+// clock skew, but is not tolerant to arbitrary clock skew rate.
//
// However the level of tolerance to skew rate can be configured by setting
// RenewDeadline and LeaseDuration appropriately. The tolerance expressed as a
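For readers skimming this doc change, here is a minimal, illustrative sketch of how these knobs are typically wired together with client-go's leaderelection package. The lock name "example-controller", the "kube-system" namespace, and the choice of a Lease lock are assumptions for illustration, not part of this commit; the durations simply mirror the core-component defaults mentioned in the comments below.

```go
// Illustrative only: a minimal leader election setup. Names, namespace, and
// identity are hypothetical; the 15s/10s/2s durations are the documented
// core-component defaults.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// A Lease object acts as the shared leader election record.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // how long non-leaders wait before trying to take over
		RenewDeadline: 10 * time.Second, // how long the leader keeps retrying a renew before giving up
		RetryPeriod:   2 * time.Second,  // pause between acquire/renew attempts
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("started leading; run the controller until ctx is cancelled")
				<-ctx.Done()
			},
			OnStoppedLeading: func() {
				klog.Info("lost leadership")
			},
		},
	})
}
```

RunOrDie blocks: it campaigns for the lock, invokes OnStartedLeading only while this process holds the lease, and calls OnStoppedLeading once the lease can no longer be renewed within RenewDeadline.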
@@ -107,21 +108,27 @@ type LeaderElectionConfig struct {
// LeaseDuration is the duration that non-leader candidates will
// wait to force acquire leadership. This is measured against time of
-// last observed ack. A client needs to wait a full LeaseDuration without
-// observing a change to the record before it can attempt to take over even
-// when a client with a different identity against the record's starts and
-// the renew time in the record is older than LeaseDuration. A.k.a., when
-// all clients are shutdown and after at least a LeaseDuration, clients
-// started with different identities against the record's must wait a full
-// LeaseDuration before acquiring a lock. Thus LeaseDuration should be as
-// short as possible to avoid a possible long waiting. LeaseDuration is 15
-// seconds in core Kubernetes components.
+// last observed ack.
+//
+// A client needs to wait a full LeaseDuration without observing a change to
+// the record before it can attempt to take over. When all clients are
+// shutdown and a new set of clients are started with different names against
+// the same leader record, they must wait the full LeaseDuration before
+// attempting to acquire the lease. Thus LeaseDuration should be as short as
+// possible (within your tolerance for clock skew rate) to avoid possible
+// long waits in this scenario.
+//
+// Core clients default this value to 15 seconds.
LeaseDuration time.Duration
// RenewDeadline is the duration that the acting master will retry
// refreshing leadership before giving up.
+//
+// Core clients default this value to 10 seconds.
RenewDeadline time.Duration
// RetryPeriod is the duration the LeaderElector clients should wait
// between tries of actions.
+//
+// Core clients default this value to 2 seconds.
RetryPeriod time.Duration
// Callbacks are callbacks that are triggered during certain lifecycle
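The three durations documented above are only meaningful relative to each other: RetryPeriod must be comfortably smaller than RenewDeadline, which in turn must be smaller than LeaseDuration. As a rough, standalone illustration of that ordering, the sketch below re-implements the kind of sanity checks the leader election constructor performs; the 1.2 jitter factor is an assumption based on client-go's exported JitterFactor constant and may differ across versions.

```go
// Sketch of the ordering expected between the three leader election durations.
package main

import (
	"fmt"
	"time"
)

// jitterFactor is assumed here; client-go exposes it as leaderelection.JitterFactor.
const jitterFactor = 1.2

func validateTimings(leaseDuration, renewDeadline, retryPeriod time.Duration) error {
	if leaseDuration <= renewDeadline {
		return fmt.Errorf("LeaseDuration (%v) must be greater than RenewDeadline (%v)", leaseDuration, renewDeadline)
	}
	if renewDeadline <= time.Duration(jitterFactor*float64(retryPeriod)) {
		return fmt.Errorf("RenewDeadline (%v) must be greater than RetryPeriod (%v) * jitter", renewDeadline, retryPeriod)
	}
	return nil
}

func main() {
	// The core-component defaults mentioned above: 15s / 10s / 2s.
	if err := validateTimings(15*time.Second, 10*time.Second, 2*time.Second); err != nil {
		fmt.Println("invalid:", err)
		return
	}
	fmt.Println("15s/10s/2s satisfies LeaseDuration > RenewDeadline > RetryPeriod*jitter")
}
```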