Spelling fixes inspired by github.com/client9/misspell

```diff
@@ -47,7 +47,7 @@ federated servers.
 developers to expose their APIs as a separate server and enabling the cluster
 admin to use it without any change to the core kubernetes reporsitory, we
 unblock these APIs.
-* Place for staging experimental APIs: New APIs can remain in seperate
+* Place for staging experimental APIs: New APIs can remain in separate
 federated servers until they become stable, at which point, they can be moved
 to the core kubernetes master, if appropriate.
 * Ensure that new APIs follow kubernetes conventions: Without the mechanism
```

```diff
@@ -37,7 +37,7 @@ Full Ubernetes will offer sophisticated federation between multiple kuberentes
 clusters, offering true high-availability, multiple provider support &
 cloud-bursting, multiple region support etc. However, many users have
 expressed a desire for a "reasonably" high-available cluster, that runs in
-multiple zones on GCE or availablity zones in AWS, and can tolerate the failure
+multiple zones on GCE or availability zones in AWS, and can tolerate the failure
 of a single zone without the complexity of running multiple clusters.
 
 Ubernetes-Lite aims to deliver exactly that functionality: to run a single
```

```diff
@@ -88,7 +88,7 @@ The implementation of this will be described in the implementation section.
 
 Note that zone spreading is 'best effort'; zones are just be one of the factors
 in making scheduling decisions, and thus it is not guaranteed that pods will
-spread evenly across zones. However, this is likely desireable: if a zone is
+spread evenly across zones. However, this is likely desirable: if a zone is
 overloaded or failing, we still want to schedule the requested number of pods.
 
 ### Volume affinity
```

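The context of this hunk describes zone spreading as one weighted factor among many rather than a hard constraint. As a rough illustration only (the function name and the 0..10 scoring scale below are invented, not the scheduler's actual priority code), a best-effort spread score could look like this:

```go
package main

import "fmt"

// zoneSpreadScore is a hypothetical helper: it favors zones that currently
// run fewer matching pods, but never excludes a zone outright, mirroring the
// "best effort" spreading described in the proposal.
func zoneSpreadScore(podsInZone, maxPodsInAnyZone int) int {
	if maxPodsInAnyZone == 0 {
		return 10 // no matching pods anywhere; all zones score equally
	}
	// Emptier zones score higher, but a loaded zone still gets a (lower)
	// score, so pods can land there when other zones are full or failing.
	return 10 * (maxPodsInAnyZone - podsInZone) / maxPodsInAnyZone
}

func main() {
	counts := map[string]int{"us-central1-a": 4, "us-central1-b": 1, "us-central1-c": 0}
	maxCount := 0
	for _, c := range counts {
		if c > maxCount {
			maxCount = c
		}
	}
	for zone, c := range counts {
		fmt.Printf("%s: score %d\n", zone, zoneSpreadScore(c, maxCount))
	}
}
```
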
```diff
@@ -111,7 +111,7 @@ and cheap network within each cluster.
 There is also assumed to be some degree of failure correlation across
 a cluster, i.e. whole clusters are expected to fail, at least
 occasionally (due to cluster-wide power and network failures, natural
-disasters etc). Clusters are often relatively homogenous in that all
+disasters etc). Clusters are often relatively homogeneous in that all
 compute nodes are typically provided by a single cloud provider or
 hardware vendor, and connected by a common, unified network fabric.
 But these are not hard requirements of Kubernetes.
```

```diff
@@ -129,7 +129,7 @@ The first is accomplished in this PR, while a timeline for 2. and 3. are TDB. To
   - Get: Handle a watch on leases
 * `/network/leases/subnet`:
   - Put: This is a request for a lease. If the nodecontroller is allocating CIDRs we can probably just no-op.
-* `/network/reservations`: TDB, we can probably use this to accomodate node controller allocating CIDR instead of flannel requesting it
+* `/network/reservations`: TDB, we can probably use this to accommodate node controller allocating CIDR instead of flannel requesting it
 
 The ick-iest part of this implementation is going to the the `GET /network/leases`, i.e the watch proxy. We can side-step by waiting for a more generic Kubernetes resource. However, we can also implement it as follows:
 * Watch all nodes, ignore heartbeats
```

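The surrounding proposal sketches a watch proxy that serves `GET /network/leases` by watching nodes and translating each node's controller-allocated CIDR into a flannel-style lease. Below is a minimal, hypothetical translation sketch; the `lease` type and its fields are assumptions for illustration, not flannel's real data structures:

```go
package main

import (
	"fmt"
	"net"
)

// lease is a hypothetical stand-in for a flannel subnet lease; the real
// structure lives in flannel, not in this proposal.
type lease struct {
	Subnet   *net.IPNet
	PublicIP string
}

// nodeToLease sketches the translation the proposed watch proxy would do:
// take a node's pod CIDR (allocated by the node controller) and its address,
// and expose them in lease form to the flannel daemons.
func nodeToLease(podCIDR, nodeIP string) (lease, error) {
	_, subnet, err := net.ParseCIDR(podCIDR)
	if err != nil {
		return lease{}, fmt.Errorf("invalid pod CIDR %q: %v", podCIDR, err)
	}
	return lease{Subnet: subnet, PublicIP: nodeIP}, nil
}

func main() {
	l, err := nodeToLease("10.244.1.0/24", "10.0.0.5")
	if err != nil {
		panic(err)
	}
	fmt.Printf("lease: subnet=%s publicIP=%s\n", l.Subnet, l.PublicIP)
}
```
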
```diff
@@ -152,7 +152,7 @@ This proposal is really just a call for community help in writing a Kubernetes x
 * Flannel daemon in privileged pod
 * Flannel server talks to apiserver, described in proposal above
 * HTTPs between flannel daemon/server
-* Investigate flannel server runing on every node (as done in the reference implementation mentioned above)
+* Investigate flannel server running on every node (as done in the reference implementation mentioned above)
 * Use flannel reservation mode to support node controller podcidr alloction
 
 
```

```diff
@@ -129,8 +129,8 @@ behavior is equivalent to the 1.1 behavior with scheduling based on Capacity.
 
 In the initial implementation, `SystemReserved` will be functionally equivalent to
 [`KubeReserved`](#system-reserved), but with a different semantic meaning. While KubeReserved
-designates resources set asside for kubernetes components, SystemReserved designates resources set
-asside for non-kubernetes components (currently this is reported as all the processes lumped
+designates resources set aside for kubernetes components, SystemReserved designates resources set
+aside for non-kubernetes components (currently this is reported as all the processes lumped
 together in the `/system` raw container).
 
 ## Issues
```

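The context distinguishes `KubeReserved` (set aside for kubernetes components) from `SystemReserved` (set aside for everything else on the host). Assuming the usual relationship that allocatable resources are capacity minus both reservations, a simplified sketch follows; the `resources` type is invented here, not the API's ResourceList:

```go
package main

import "fmt"

// resources is a hypothetical simplification of a resource list,
// expressed in CPU millicores and memory bytes.
type resources struct {
	CPUMilli    int64
	MemoryBytes int64
}

// allocatable sketches the relationship implied by the proposal: what is
// left for pods after carving out KubeReserved (kubernetes components) and
// SystemReserved (all other host processes).
func allocatable(capacityTotal, kubeReserved, systemReserved resources) resources {
	return resources{
		CPUMilli:    capacityTotal.CPUMilli - kubeReserved.CPUMilli - systemReserved.CPUMilli,
		MemoryBytes: capacityTotal.MemoryBytes - kubeReserved.MemoryBytes - systemReserved.MemoryBytes,
	}
}

func main() {
	capacityTotal := resources{CPUMilli: 4000, MemoryBytes: 8 << 30}
	kube := resources{CPUMilli: 200, MemoryBytes: 512 << 20}
	sys := resources{CPUMilli: 100, MemoryBytes: 256 << 20}
	fmt.Printf("allocatable: %+v\n", allocatable(capacityTotal, kube, sys))
}
```
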
```diff
@@ -159,7 +159,7 @@ according to `KubeReserved`.
 **API server expects `Allocatable` but does not receive it:** If the kubelet is older and does not
 provide `Allocatable` in the `NodeStatus`, then `Allocatable` will be
 [defaulted](../../pkg/api/v1/defaults.go) to
-`Capacity` (which will yield todays behavior of scheduling based on capacity).
+`Capacity` (which will yield today's behavior of scheduling based on capacity).
 
 ### 3rd party schedulers
 
```

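This hunk's context explains that when an older kubelet omits `Allocatable` from `NodeStatus`, it is defaulted to `Capacity`, preserving capacity-based scheduling. A trimmed-down sketch of that defaulting step; the `nodeStatus` type is a stand-in for illustration, not the real API struct:

```go
package main

import "fmt"

// nodeStatus is a hypothetical, trimmed-down stand-in for NodeStatus;
// only the two fields relevant here are modeled.
type nodeStatus struct {
	Capacity    map[string]int64
	Allocatable map[string]int64
}

// defaultAllocatable mirrors the behavior the hunk describes: when an older
// kubelet reports no Allocatable, fall back to Capacity so the scheduler
// keeps today's capacity-based behavior.
func defaultAllocatable(s *nodeStatus) {
	if s.Allocatable == nil {
		s.Allocatable = make(map[string]int64, len(s.Capacity))
		for name, qty := range s.Capacity {
			s.Allocatable[name] = qty
		}
	}
}

func main() {
	old := nodeStatus{Capacity: map[string]int64{"cpu": 4000, "memory": 8 << 30}}
	defaultAllocatable(&old)
	fmt.Printf("allocatable after defaulting: %v\n", old.Allocatable)
}
```
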
```diff
@@ -40,7 +40,7 @@ Main reason behind doing this is to understand what kind of monitoring needs to
 
 Issue https://github.com/kubernetes/kubernetes/issues/14216 was opened because @spiffxp observed a regression in scheduler performance in 1.1 branch in comparison to `old` 1.0
 cut. In the end it turned out the be caused by `--v=4` (instead of default `--v=2`) flag in the scheduler together with the flag `--logtostderr` which disables batching of
-log lines and a number of loging without explicit V level. This caused weird behavior of the whole component.
+log lines and a number of logging without explicit V level. This caused weird behavior of the whole component.
 
 Because we now know that logging may have big performance impact we should consider instrumenting logging mechanism and compute statistics such as number of logged messages,
 total and average size of them. Each binary should be responsible for exposing its metrics. An unaccounted but way too big number of days, if not weeks, of engineering time was
```

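The context proposes instrumenting the logging mechanism so each binary can report how many messages and bytes it emits, making the cost of a verbose `--v` level visible. One hypothetical way to do this in Go is to wrap the log destination in a counting writer; the names below are illustrative, not an existing Kubernetes component:

```go
package main

import (
	"fmt"
	"log"
	"os"
	"sync/atomic"
)

// countingWriter is a sketch of the suggested instrumentation: it wraps the
// log destination and tracks how many lines and bytes a component writes.
type countingWriter struct {
	out   *os.File
	lines int64
	bytes int64
}

func (w *countingWriter) Write(p []byte) (int, error) {
	atomic.AddInt64(&w.lines, 1)
	atomic.AddInt64(&w.bytes, int64(len(p)))
	return w.out.Write(p)
}

func main() {
	w := &countingWriter{out: os.Stderr}
	log.SetOutput(w) // every logged line now passes through the counter
	log.Print("scheduling pod-a")
	log.Print("scheduling pod-b")
	fmt.Printf("logged %d lines, %d bytes total\n", w.lines, w.bytes)
}
```
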
```diff
@@ -137,7 +137,7 @@ Basic ideas:
 We should monitor other aspects of the system, which may indicate saturation of some component.
 
 Basic ideas:
-- queue lenght for queues in the system,
+- queue length for queues in the system,
 - wait time for WaitGroups.
 
 <!-- BEGIN MUNGE: GENERATED_ANALYTICS -->
```

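The context suggests exposing queue length as a saturation signal. Below is a small sketch of a queue that tracks its own length so a metrics handler could scrape it periodically; the type and names are invented, not an existing Kubernetes work queue:

```go
package main

import (
	"fmt"
	"sync"
)

// instrumentedQueue is a hypothetical work queue that exposes its length
// so the value can be exported as a saturation metric.
type instrumentedQueue struct {
	mu    sync.Mutex
	items []string
}

func (q *instrumentedQueue) Add(item string) {
	q.mu.Lock()
	defer q.mu.Unlock()
	q.items = append(q.items, item)
}

func (q *instrumentedQueue) Pop() (string, bool) {
	q.mu.Lock()
	defer q.mu.Unlock()
	if len(q.items) == 0 {
		return "", false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

// Len is what a metrics exporter would read on each scrape.
func (q *instrumentedQueue) Len() int {
	q.mu.Lock()
	defer q.mu.Unlock()
	return len(q.items)
}

func main() {
	q := &instrumentedQueue{}
	q.Add("pod-a")
	q.Add("pod-b")
	fmt.Println("queue length:", q.Len())
	q.Pop()
	fmt.Println("queue length:", q.Len())
}
```
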
```diff
@@ -194,7 +194,7 @@ From the above, we know that label management must be applied:
 3. To some volume types *sometimes*
 
 Volumes should be relabeled with the correct SELinux context. Docker has this capability today; it
-is desireable for other container runtime implementations to provide similar functionality.
+is desirable for other container runtime implementations to provide similar functionality.
 
 Relabeling should be an optional aspect of a volume plugin to accommodate:
 
```

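The context asks container runtimes to relabel volumes with the correct SELinux context, something Docker can already do. As a loose illustration only (the proposal does not prescribe `chcon`, and the context string below is merely an example), a relabeling step could shell out like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

// relabelCmd sketches the kind of call an integration could make to relabel
// a host directory before handing it to a container. Using chcon here is an
// illustrative assumption, not the mechanism the proposal mandates; Docker's
// own :Z/:z volume options achieve a similar effect.
func relabelCmd(seLinuxContext, hostPath string) *exec.Cmd {
	return exec.Command("chcon", "-R", seLinuxContext, hostPath)
}

func main() {
	cmd := relabelCmd("system_u:object_r:svirt_sandbox_file_t:s0", "/var/lib/mydata")
	fmt.Println("would run:", cmd.Args)
}
```
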
```diff
@@ -178,7 +178,7 @@ can be instantiated multiple times within the same namespace, as long as a diffe
 instantiation. The resulting objects will be independent from a replica/load-balancing perspective.
 
 Generation of parameter values for fields such as Secrets will be delegated to an [admission controller/initializer/finalizer](https://github.com/kubernetes/kubernetes/issues/3585) rather than being solved by the template processor. Some discussion about a generation
-service is occuring [here](https://github.com/kubernetes/kubernetes/issues/12732)
+service is occurring [here](https://github.com/kubernetes/kubernetes/issues/12732)
 
 Labels to be assigned to all objects could also be generated in addition to, or instead of, allowing labels to be supplied in the
 Template definition.
```

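The context delegates generation of parameter values (for example, Secret data) to an admission controller/initializer rather than the template processor. A hypothetical generator for such a value might simply produce random bytes; the function name, size parameter, and encoding choice are assumptions for illustration:

```go
package main

import (
	"crypto/rand"
	"encoding/base64"
	"fmt"
)

// generateSecretValue sketches what a generation service could do for a
// template parameter left blank by the user: produce a random,
// base64-encoded value of the requested size.
func generateSecretValue(numBytes int) (string, error) {
	buf := make([]byte, numBytes)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(buf), nil
}

func main() {
	v, err := generateSecretValue(16)
	if err != nil {
		panic(err)
	}
	fmt.Println("generated parameter value:", v)
}
```
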
```diff
@@ -258,7 +258,7 @@ The events associated to `Workflow`s will be:
 
 ## Kubectl
 
-Kubectl will be modified to display workflows. More particulary the `describe` command
+Kubectl will be modified to display workflows. More particularly the `describe` command
 will display all the steps with their status. Steps will be topologically sorted and
 each dependency will be decorated with its status (wether or not step is waitin for
 dependency).
```

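The context says `kubectl describe` will show workflow steps topologically sorted, with each dependency decorated by its status. A compact sketch of that ordering; the step names and the `deps` representation are invented for illustration, not the Workflow API:

```go
package main

import "fmt"

// topoSort orders workflow steps so that every step appears after the steps
// it depends on. deps maps each step to the steps it waits for. Cycles are
// not handled in this sketch.
func topoSort(deps map[string][]string) []string {
	visited := map[string]bool{}
	var order []string
	var visit func(step string)
	visit = func(step string) {
		if visited[step] {
			return
		}
		visited[step] = true
		for _, d := range deps[step] {
			visit(d) // dependencies come first
		}
		order = append(order, step)
	}
	for step := range deps {
		visit(step)
	}
	return order
}

func main() {
	deps := map[string][]string{
		"build":  {},
		"test":   {"build"},
		"deploy": {"test"},
	}
	fmt.Println(topoSort(deps))
}
```
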