address issue #1488; clean up linewrap and some minor editing issues in the docs/design/* tree

Signed-off-by: mikebrow <brownwm@us.ibm.com>
@@ -71,7 +71,8 @@ unified view.

Here are the functionality requirements derived from the above use cases:

+ Clients of the federation control plane API server can register and deregister
  clusters.
+ Workloads should be spread to different clusters according to the
  workload distribution policy.
+ Pods are able to discover and connect to services hosted in other
@@ -90,7 +91,7 @@ Here are the functionality requirements derived from above use cases:

It’s difficult to arrive in one shot at a perfect design that implements
all the above requirements. Therefore we will take an iterative
approach to designing and building the system. This document describes
phase one of the whole work. In phase one we will cover only the
following objectives:

+ Define the basic building blocks and API objects of control plane
@@ -130,9 +131,9 @@ description of each module contained in above diagram.

The API Server in the Ubernetes control plane works just like the API
Server in K8S. It talks to a distributed key-value store to persist,
retrieve and watch API objects. This store is completely distinct
from the Kubernetes key-value stores (etcd) in the underlying
Kubernetes clusters. We still use `etcd` as the distributed
storage so customers don’t need to learn and manage a different
storage system, although it is envisaged that other storage systems
(Consul, ZooKeeper) will probably be developed and supported over
@@ -141,16 +142,16 @@ time.
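
As an illustration of this storage layer (a sketch only: the key
layout, the endpoint, and the use of today’s etcd v3 Go client are
assumptions, not part of the design), persisting and watching a cluster
object in the control plane’s dedicated `etcd` might look like:

```go
// Sketch: the federation control plane stores its own API objects in a
// dedicated etcd, separate from the etcds of the underlying clusters.
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// This etcd serves only the federation control plane (endpoint invented).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"https://ubernetes-etcd:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx := context.Background()

	// Watch all cluster objects, as the scheduler and controllers would.
	watch := cli.Watch(ctx, "/ubernetes/clusters/", clientv3.WithPrefix())

	// Persist a cluster API object under a federation-scoped key prefix.
	if _, err := cli.Put(ctx, "/ubernetes/clusters/foo",
		`{"spec":{"address":"https://foo.example.com:6443"}}`); err != nil {
		panic(err)
	}

	// The watcher observes the write above.
	resp := <-watch
	for _, ev := range resp.Events {
		fmt.Printf("%s %s -> %s\n", ev.Type, ev.Kv.Key, ev.Kv.Value)
	}
}
```
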

## Ubernetes Scheduler

The Ubernetes Scheduler schedules resources onto the underlying
Kubernetes clusters. For example, it watches for unscheduled Ubernetes
replication controllers (those that have not yet been scheduled onto
underlying Kubernetes clusters) and performs the global scheduling
work. For each unscheduled replication controller, it calls the policy
engine to decide how to split workloads among clusters. It creates
Kubernetes replication controllers on one or more underlying clusters,
and posts them back to `etcd` storage.

One subtlety worth noting here is that the scheduling decision is arrived at by
combining the application-specific request from the user (which might
include, for example, placement constraints), and the global policy specified
by the federation administrator (for example, “prefer on-premise
clusters over AWS clusters” or “spread load equally across clusters”).
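
Here is a minimal sketch of that flow, with invented types and an
invented policy (the design does not prescribe one): filter clusters
through the user’s placement constraint, apply an administrator policy
such as “prefer on-premise clusters”, and split the replicas.

```go
package main

import "fmt"

type Cluster struct {
	Name    string
	OnPrem  bool
	FreeCPU int64 // available millicores, from the cluster object's status
}

type GlobalRC struct {
	Name     string
	Replicas int64
	Accepts  func(Cluster) bool // user-supplied placement constraint (illustrative)
}

// schedule decides how many replicas of rc each cluster receives.
func schedule(rc GlobalRC, clusters []Cluster) map[string]int64 {
	// 1. Filter by the application-specific constraint.
	var ok []Cluster
	for _, c := range clusters {
		if rc.Accepts == nil || rc.Accepts(c) {
			ok = append(ok, c)
		}
	}
	// 2. Apply the global policy: prefer on-premise clusters if any qualify.
	var onPrem []Cluster
	for _, c := range ok {
		if c.OnPrem {
			onPrem = append(onPrem, c)
		}
	}
	if len(onPrem) > 0 {
		ok = onPrem
	}
	// 3. Split replicas proportionally to free CPU.
	result := map[string]int64{}
	var total int64
	for _, c := range ok {
		total += c.FreeCPU
	}
	if total == 0 {
		return result
	}
	var assigned int64
	for i, c := range ok {
		n := rc.Replicas * c.FreeCPU / total
		if i == len(ok)-1 {
			n = rc.Replicas - assigned // last cluster takes the remainder
		}
		result[c.Name] = n
		assigned += n
	}
	return result
}

func main() {
	clusters := []Cluster{
		{Name: "foo", OnPrem: true, FreeCPU: 6000},
		{Name: "bar", OnPrem: true, FreeCPU: 2000},
		{Name: "aws-east", OnPrem: false, FreeCPU: 8000},
	}
	fmt.Println(schedule(GlobalRC{Name: "web", Replicas: 8}, clusters))
	// Output: map[bar:2 foo:6] -- aws-east is skipped by the on-prem policy.
}
```
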
@@ -165,9 +166,9 @@ performs the following two kinds of work:
   corresponding API objects on the underlying K8S clusters.
1. It periodically retrieves the available resource metrics from the
   underlying K8S cluster, and updates them as object status of the
   `cluster` API object. An alternative design might be to run a pod
   in each underlying cluster that reports metrics for that cluster to
   the Ubernetes control plane. Which approach is better remains an
   open topic of discussion.
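
To make the two kinds of work concrete, here is a sketch with invented
types (`ClusterClient` and the channels are illustrative plumbing, not
the actual controller):

```go
package main

import (
	"fmt"
	"time"
)

// ClusterClient abstracts an underlying K8S cluster (illustrative).
type ClusterClient interface {
	ApplyObject(obj string) error
	AvailableResources() (cpuMillis, memBytes int64)
}

type ClusterStatus struct{ FreeCPU, FreeMem int64 }

func runClusterController(c ClusterClient, desired <-chan string, status chan<- ClusterStatus) {
	poll := time.NewTicker(100 * time.Millisecond) // poll period, shortened for the demo
	defer poll.Stop()
	for {
		select {
		case obj := <-desired:
			// Kind 1: materialize control-plane decisions on the cluster.
			if err := c.ApplyObject(obj); err != nil {
				fmt.Println("sync failed:", err)
			}
		case <-poll.C:
			// Kind 2: refresh metrics for the cluster object's status.
			cpu, mem := c.AvailableResources()
			status <- ClusterStatus{FreeCPU: cpu, FreeMem: mem}
		}
	}
}

type fakeCluster struct{}

func (fakeCluster) ApplyObject(obj string) error       { fmt.Println("applied:", obj); return nil }
func (fakeCluster) AvailableResources() (int64, int64) { return 8000, 32 << 30 }

func main() {
	desired := make(chan string, 1)
	status := make(chan ClusterStatus, 1)
	desired <- `{"kind":"ReplicationController","metadata":{"name":"web-foo"}}`
	go runClusterController(fakeCluster{}, desired, status)
	fmt.Println("status update:", <-status)
}
```
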
## Ubernetes Service Controller
@@ -187,7 +188,7 @@ Cluster is a new first-class API object introduced in this design. For
each registered K8S cluster there will be such an API resource in the
control plane. The way clients register or deregister a cluster is to
send corresponding REST requests to the following URL:
`/api/{$version}/clusters`. Because the control plane behaves like a
regular K8S client to the underlying clusters, the spec of a cluster
object contains necessary properties like the K8S cluster address and
credentials. The status of a cluster API object will contain
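
A sketch of the registration call described above (the `Cluster` fields
and JSON shape are invented; the design specifies only the URL and that
the spec carries the cluster address and credentials):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

type ClusterSpec struct {
	Address     string `json:"address"`     // underlying K8S API server URL
	Credentials string `json:"credentials"` // e.g. a kubeconfig reference (invented)
}

type Cluster struct {
	Kind string      `json:"kind"`
	Name string      `json:"name"`
	Spec ClusterSpec `json:"spec"`
}

func main() {
	c := Cluster{
		Kind: "Cluster",
		Name: "foo",
		Spec: ClusterSpec{
			Address:     "https://foo.example.com:6443",
			Credentials: "secret-ref/foo-kubeconfig",
		},
	}
	body, _ := json.Marshal(c)

	// Register: POST the cluster object to the control plane's endpoint
	// ({$version} is v1 here; host invented for illustration).
	resp, err := http.Post("https://ubernetes.example.com/api/v1/clusters",
		"application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)

	// Deregistration would be a DELETE to /api/v1/clusters/foo (not shown).
}
```
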
@@ -294,7 +295,7 @@ $version.clusterStatus

**For simplicity we didn’t introduce a separate “cluster metrics” API
object here**. The cluster resource metrics are stored in the cluster
status section, just as we do for nodes in K8S. In phase one it
only contains available CPU and memory resources. The
cluster controller will periodically poll the underlying cluster API
Server to get cluster capability. In phase one it gets the metrics by
simply aggregating metrics from all nodes. In the future we will improve
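
A sketch of that aggregation step (written against today’s client-go,
which postdates this design; the kubeconfig path is invented):

```go
package main

import (
	"context"
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client for one underlying cluster from its registered credentials.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/ubernetes/foo.kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	// Aggregate allocatable resources across all nodes.
	cpu := resource.NewQuantity(0, resource.DecimalSI)
	mem := resource.NewQuantity(0, resource.BinarySI)
	for _, n := range nodes.Items {
		cpu.Add(*n.Status.Allocatable.Cpu())
		mem.Add(*n.Status.Allocatable.Memory())
	}

	// These totals would be written into the cluster API object's status.
	fmt.Printf("available: cpu=%s memory=%s\n", cpu, mem)
}
```
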
@@ -315,7 +316,7 @@ Below is the state transition diagram.

## Replication Controller

A global workload submitted to the control plane is represented as an
Ubernetes replication controller. When a replication controller
is submitted to the control plane, clients need a way to express its
requirements or preferences on clusters. Depending on the use case
this may be complex. For example:
@@ -327,7 +328,7 @@ cases it may be complex. For example:
  (use case: workload )
+ Seventy percent of this workload should be scheduled to cluster Foo,
  and thirty percent should be scheduled to cluster Bar (use case:
  vendor lock-in avoidance). In phase one, we only introduce a
  _clusterSelector_ field to filter acceptable clusters (see the
  sketch after this list). By default there is no such selector,
  meaning any cluster is acceptable.
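
As a sketch of the phase-one selector (the type, labels, and matching
rule below are invented for illustration; the design only names the
_clusterSelector_ field and its default-empty behavior):

```go
package main

import "fmt"

type Cluster struct {
	Name   string
	Labels map[string]string
}

// ClusterSelector filters acceptable clusters by exact label match,
// mirroring how K8S label selectors behave in their simplest form.
type ClusterSelector map[string]string

func (s ClusterSelector) Matches(c Cluster) bool {
	for k, v := range s {
		if c.Labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	clusters := []Cluster{
		{Name: "foo", Labels: map[string]string{"tier": "on-prem"}},
		{Name: "aws-east", Labels: map[string]string{"tier": "cloud"}},
	}
	// No selector: every cluster is acceptable (the default case).
	var sel ClusterSelector
	for _, c := range clusters {
		fmt.Println(c.Name, "acceptable:", sel.Matches(c))
	}
	// With a selector, only matching clusters remain acceptable.
	sel = ClusterSelector{"tier": "on-prem"}
	for _, c := range clusters {
		fmt.Println(c.Name, "acceptable:", sel.Matches(c))
	}
}
```
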
@@ -376,7 +377,7 @@ clusters. How to handle this will be addressed after phase one.

The Service API object exposed by Ubernetes is similar to service
objects on Kubernetes. It defines access to a group of pods. The
Ubernetes service controller will create corresponding Kubernetes
service objects on underlying clusters. These are detailed in a
separate design document: [Federated Services](federated-services.md).

## Pod

@@ -389,7 +390,8 @@ order to keep the Ubernetes API compatible with the Kubernetes API.

## Scheduling

The diagram below shows how workloads are scheduled on the Ubernetes
control plane:

1. A replication controller is created by the client.
1. APIServer persists it into the storage.
@@ -425,8 +427,8 @@ proposed solutions like resource reservation mechanisms.

This part has been included in the section “Federated Service” of the
document
“[Ubernetes Cross-cluster Load Balancing and Service Discovery Requirements and System Design](federated-services.md)”.
Please refer to that document for details.