Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-09-07 04:03:20 +00:00)
Copy edits for typos (resubmitted)
@@ -60,7 +60,7 @@ the objects (of a given type) without any filtering. The changes delivered from
 etcd will then be stored in a cache in apiserver. This cache is in fact a
 "rolling history window" that will support clients having some amount of latency
 between their list and watch calls. Thus it will have a limited capacity and
-whenever a new change comes from etcd when a cache is full, othe oldest change
+whenever a new change comes from etcd when a cache is full, the oldest change
 will be remove to make place for the new one.
 
 When a client sends a watch request to apiserver, instead of redirecting it to
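
For context on the cache described in this hunk, here is a minimal sketch, not part of the commit or of the apiserver code, of how a fixed-capacity rolling history window could drop its oldest change and replay changes to a slightly lagging client. All names here (watchCache, event, since) are illustrative.

```go
// A minimal sketch of a "rolling history window": a fixed-capacity buffer of
// watch events; when it is full, the oldest change is dropped to make room.
package main

import "fmt"

type event struct {
	resourceVersion uint64
	object          string // placeholder for the stored object
}

type watchCache struct {
	capacity int
	events   []event // ordered oldest -> newest
}

// add appends a change coming from etcd, evicting the oldest one when full.
func (c *watchCache) add(e event) {
	if len(c.events) == c.capacity {
		c.events = c.events[1:]
	}
	c.events = append(c.events, e)
}

// since replays all events newer than the given resourceVersion, or reports
// that the requested version has already fallen out of the window.
func (c *watchCache) since(rv uint64) ([]event, bool) {
	if len(c.events) > 0 && rv+1 < c.events[0].resourceVersion {
		return nil, false // too old: the client has to relist
	}
	var out []event
	for _, e := range c.events {
		if e.resourceVersion > rv {
			out = append(out, e)
		}
	}
	return out, true
}

func main() {
	c := &watchCache{capacity: 3}
	for rv := uint64(1); rv <= 5; rv++ {
		c.add(event{resourceVersion: rv, object: fmt.Sprintf("pod-%d", rv)})
	}
	evs, ok := c.since(3) // client listed at resourceVersion 3, now starts watching
	fmt.Println(ok, evs)  // true [{4 pod-4} {5 pod-5}]
}
```

A client that listed at resourceVersion 3 can start watching from there without another LIST, as long as 3 is still inside the window.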
@@ -159,7 +159,7 @@ necessary. In such case, to avoid LIST requests coming from all watchers at
 the same time, we can introduce an additional etcd event type:
 [EtcdResync](../../pkg/storage/etcd/etcd_watcher.go#L36)
 
-Whenever reslisting will be done to refresh the internal watch to etcd,
+Whenever relisting will be done to refresh the internal watch to etcd,
 EtcdResync event will be send to all the watchers. It will contain the
 full list of all the objects the watcher is interested in (appropriately
 filtered) as the parameter of this watch event.
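
To illustrate the relist behaviour in this hunk, a hedged sketch of delivering a resync-style event that carries each watcher's filtered object list. The types and the broadcastResync helper below are invented for the example and are not taken from etcd_watcher.go.

```go
// Sketch: after the internal watch to etcd has been refreshed via a relist,
// every watcher receives one resync event with its filtered set of objects.
package main

import "fmt"

type eventType string

const (
	Added      eventType = "ADDED"
	Modified   eventType = "MODIFIED"
	Deleted    eventType = "DELETED"
	EtcdResync eventType = "RESYNC" // hypothetical name for the extra event type
)

type object struct{ name, namespace string }

type watchEvent struct {
	kind    eventType
	objects []object // full filtered list for EtcdResync, a single object otherwise
}

type watcher struct {
	filter func(object) bool
	result chan watchEvent
}

// broadcastResync sends each watcher the objects it is interested in,
// appropriately filtered, as the payload of a single resync event.
func broadcastResync(all []object, watchers []*watcher) {
	for _, w := range watchers {
		var filtered []object
		for _, o := range all {
			if w.filter(o) {
				filtered = append(filtered, o)
			}
		}
		w.result <- watchEvent{kind: EtcdResync, objects: filtered}
	}
}

func main() {
	w := &watcher{
		filter: func(o object) bool { return o.namespace == "default" },
		result: make(chan watchEvent, 1),
	}
	all := []object{{"a", "default"}, {"b", "kube-system"}}
	broadcastResync(all, []*watcher{w})
	fmt.Println(<-w.result) // {RESYNC [{a default}]}
}
```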
@@ -518,7 +518,7 @@ thus far:
    approach.
 1. A more monolithic architecture, where a single instance of the
    Kubernetes control plane itself manages a single logical cluster
-   composed of nodes in multiple availablity zones and cloud
+   composed of nodes in multiple availability zones and cloud
    providers.
 
 A very brief, non-exhaustive list of pro's and con's of the two
@@ -563,12 +563,12 @@ prefers the Decoupled Hierarchical model for the reasons stated below).
    largely independently (different sets of developers, different
    release schedules etc).
 1. **Administration complexity:** Again, I think that this could be argued
-   both ways. Superficially it woud seem that administration of a
+   both ways. Superficially it would seem that administration of a
    single Monolithic multi-zone cluster might be simpler by virtue of
    being only "one thing to manage", however in practise each of the
    underlying availability zones (and possibly cloud providers) has
    it's own capacity, pricing, hardware platforms, and possibly
-   bureaucratic boudaries (e.g. "our EMEA IT department manages those
+   bureaucratic boundaries (e.g. "our EMEA IT department manages those
    European clusters"). So explicitly allowing for (but not
    mandating) completely independent administration of each
    underlying Kubernetes cluster, and the Federation system itself,
@@ -56,7 +56,7 @@ We are going to introduce Scale subresource and implement horizontal autoscaling
 Scale subresource will be supported for replication controllers and deployments.
 Scale subresource will be a Virtual Resource (will not be stored in etcd as a separate object).
 It will be only present in API as an interface to accessing replication controller or deployment,
-and the values of Scale fields will be inferred from the corresponing replication controller/deployment object.
+and the values of Scale fields will be inferred from the corresponding replication controller/deployment object.
 HorizontalPodAutoscaler object will be bound with exactly one Scale subresource and will be
 autoscaling associated replication controller/deployment through it.
 The main advantage of such approach is that whenever we introduce another type we want to auto-scale,
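
To make the "Virtual Resource" idea in this hunk concrete, a rough sketch of how Scale values might be inferred from, and written back to, the underlying replication controller. Every type and function name below (scaleForRC, applyScaleToRC, the stub ReplicationController) is an illustrative stand-in, not the proposal's definition.

```go
// Sketch: Scale is never stored in etcd; its values are derived on the fly
// from the underlying object, and writing Spec.Replicas resizes that object.
package main

import "fmt"

// Minimal stand-in for the real replication controller object.
type ReplicationController struct {
	Spec   struct{ Replicas int }
	Status struct{ Replicas int }
}

type ScaleSpec struct {
	Replicas int // desired number of replicas
}

type ScaleStatus struct {
	Replicas int // replicas currently observed on the underlying object
}

type Scale struct {
	Spec   ScaleSpec
	Status ScaleStatus
}

// scaleForRC infers the virtual Scale from the corresponding
// replication controller; nothing extra is read from etcd.
func scaleForRC(rc *ReplicationController) Scale {
	return Scale{
		Spec:   ScaleSpec{Replicas: rc.Spec.Replicas},
		Status: ScaleStatus{Replicas: rc.Status.Replicas},
	}
}

// applyScaleToRC writes the desired replica count back to the underlying
// object, which is how an autoscaler resizes it through the Scale interface.
func applyScaleToRC(rc *ReplicationController, s Scale) {
	rc.Spec.Replicas = s.Spec.Replicas
}

func main() {
	rc := &ReplicationController{}
	rc.Spec.Replicas, rc.Status.Replicas = 3, 3

	s := scaleForRC(rc)
	s.Spec.Replicas = 5 // the autoscaler decides to scale up
	applyScaleToRC(rc, s)
	fmt.Println(rc.Spec.Replicas) // 5
}
```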
@@ -132,7 +132,7 @@ type HorizontalPodAutoscaler struct {
 // HorizontalPodAutoscalerSpec is the specification of a horizontal pod autoscaler.
 type HorizontalPodAutoscalerSpec struct {
     // ScaleRef is a reference to Scale subresource. HorizontalPodAutoscaler will learn the current
-    // resource consumption from its status, and will set the desired number of pods by modyfying its spec.
+    // resource consumption from its status, and will set the desired number of pods by modifying its spec.
     ScaleRef *SubresourceReference
     // MinCount is the lower limit for the number of pods that can be set by the autoscaler.
     MinCount int
@@ -151,7 +151,7 @@ type HorizontalPodAutoscalerStatus struct {
     CurrentReplicas int
 
     // DesiredReplicas is the desired number of replicas of pods managed by this autoscaler.
-    // The number may be different because pod downscaling is someteimes delayed to keep the number
+    // The number may be different because pod downscaling is sometimes delayed to keep the number
     // of pods stable.
     DesiredReplicas int
 
@@ -161,7 +161,7 @@ type HorizontalPodAutoscalerStatus struct {
     CurrentConsumption ResourceConsumption
 
     // LastScaleTimestamp is the last time the HorizontalPodAutoscaler scaled the number of pods.
-    // This is used by the autoscaler to controll how often the number of pods is changed.
+    // This is used by the autoscaler to control how often the number of pods is changed.
     LastScaleTimestamp *util.Time
 }
 
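
Tying together the fields shown in the three hunks above, a hedged sketch of how an autoscaler loop might use MinCount, an assumed MaxCount, and LastScaleTimestamp to clamp and throttle replica changes. The consumption-ratio formula and the ScaleDelay cool-down are illustrative assumptions, not the proposal's algorithm.

```go
// Sketch: LastScaleTimestamp throttles how often the replica count is changed,
// and the desired count is clamped to the configured bounds.
package main

import (
	"fmt"
	"time"
)

type autoscaler struct {
	MinCount, MaxCount int
	CurrentReplicas    int
	LastScaleTimestamp *time.Time
	ScaleDelay         time.Duration // assumed cool-down between scale operations
}

// desiredReplicas derives a target from observed vs. target consumption
// (an illustrative proportional rule) and clamps it to the allowed range.
func (a *autoscaler) desiredReplicas(currentUsage, targetUsage float64) int {
	d := int(float64(a.CurrentReplicas)*currentUsage/targetUsage + 0.5)
	if d < a.MinCount {
		d = a.MinCount
	}
	if d > a.MaxCount {
		d = a.MaxCount
	}
	return d
}

// reconcile changes the replica count only if the cool-down has passed,
// which is what LastScaleTimestamp is recorded for.
func (a *autoscaler) reconcile(now time.Time, currentUsage, targetUsage float64) int {
	desired := a.desiredReplicas(currentUsage, targetUsage)
	if desired == a.CurrentReplicas {
		return a.CurrentReplicas
	}
	if a.LastScaleTimestamp != nil && now.Sub(*a.LastScaleTimestamp) < a.ScaleDelay {
		return a.CurrentReplicas // too soon: keep the number of pods stable
	}
	a.CurrentReplicas = desired
	a.LastScaleTimestamp = &now
	return desired
}

func main() {
	a := &autoscaler{MinCount: 2, MaxCount: 10, CurrentReplicas: 4, ScaleDelay: 5 * time.Minute}
	now := time.Now()
	fmt.Println(a.reconcile(now, 0.9, 0.6))                  // scales up to 6
	fmt.Println(a.reconcile(now.Add(time.Minute), 0.3, 0.6)) // too soon, stays at 6
}
```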
@@ -96,7 +96,7 @@ case, the nodes we move the Pods onto might have been in the system for a long t
 have been added by the cluster auto-scaler specifically to allow the rescheduler to
 rebalance utilization.
 
-A second spreading use case is to separate antagnosits.
+A second spreading use case is to separate antagonists.
 Sometimes the processes running in two different Pods on the same node
 may have unexpected antagonistic
 behavior towards one another. A system component might monitor for such