Mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-08-01 15:58:37 +00:00)
Zone scheduler: Update scheduler docs
There's not a huge amount of detail in the docs as to how the scheduler actually works. That's probably a good thing, both for readability and because it makes it easier to tweak the zone-spreading approach in the future, but we should mention that the scheduler does spread Pods across zones when zone information is present on the nodes.
This commit is contained in:
parent cd433c974f
commit 541ff002c0
@@ -47,7 +47,7 @@ will filter out nodes that don't have at least that much resources available (computed
 as the capacity of the node minus the sum of the resource requests of the containers that
 are already running on the node). Second, it applies a set of "priority functions"
 that rank the nodes that weren't filtered out by the predicate check. For example,
-it tries to spread Pods across nodes while at the same time favoring the least-loaded
+it tries to spread Pods across nodes and zones while at the same time favoring the least-loaded
 nodes (where "load" here is sum of the resource requests of the containers running on the node,
 divided by the node's capacity).
 Finally, the node with the highest priority is chosen
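The filter-then-rank flow described in the hunk above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the scheduler's actual API: the node/pod dictionaries, `fits`, `load`, and `schedule` names are all hypothetical.

```python
# Hypothetical sketch of the scheduler's filter-then-rank flow described above.
# All names and data shapes here are illustrative, not Kubernetes code.

def fits(node, pod):
    # Predicate check: free capacity is the node's capacity minus the
    # requests of containers already running on it.
    return node["capacity"] - node["requested"] >= pod["request"]

def load(node):
    # "Load" = sum of requests of containers on the node / node capacity.
    return node["requested"] / node["capacity"]

def schedule(nodes, pod):
    # First filter with the predicate, then rank by favoring the
    # least-loaded feasible node.
    feasible = [n for n in nodes if fits(n, pod)]
    return min(feasible, key=load)["name"]

nodes = [
    {"name": "node-a", "capacity": 4000, "requested": 3000},  # 75% loaded
    {"name": "node-b", "capacity": 4000, "requested": 1000},  # 25% loaded
]
print(schedule(nodes, {"request": 500}))  # -> node-b
```

Both nodes pass the predicate here, so the ranking step picks node-b, the less-loaded of the two.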
@@ -61,7 +61,7 @@ Currently, Kubernetes scheduler provides some practical priority functions, including:
 - `LeastRequestedPriority`: The node is prioritized based on the fraction of the node that would be free if the new Pod were scheduled onto the node. (In other words, (capacity - sum of requests of all Pods already on the node - request of Pod that is being scheduled) / capacity). CPU and memory are equally weighted. The node with the highest free fraction is the most preferred. Note that this priority function has the effect of spreading Pods across the nodes with respect to resource consumption.
 - `CalculateNodeLabelPriority`: Prefer nodes that have the specified label.
 - `BalancedResourceAllocation`: This priority function tries to put the Pod on a node such that the CPU and Memory utilization rate is balanced after the Pod is deployed.
-- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node.
+- `CalculateSpreadPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on the same node. If zone information is present on the nodes, the priority will be adjusted so that pods are spread across zones and nodes.
 - `CalculateAntiAffinityPriority`: Spread Pods by minimizing the number of Pods belonging to the same service on nodes with the same value for a particular label.

 The details of the above priority functions can be found in [plugin/pkg/scheduler/algorithm/priorities](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithm/priorities/). Kubernetes uses some, but not all, of these priority functions by default. You can see which ones are used by default in [plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go](http://releases.k8s.io/HEAD/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go). Similar as predicates, you can combine the above priority functions and assign weight factors (positive number) to them as you want (check [scheduler.md](scheduler.md) for how to customize).
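The zone-aware adjustment to `CalculateSpreadPriority` that this commit documents can be pictured with a small sketch. The scoring function and its weights below are assumptions for illustration only; the real priority function lives in the Go packages linked above.

```python
# Hypothetical illustration of zone-aware service spreading as described
# in the updated CalculateSpreadPriority docs: prefer nodes that carry
# fewer same-service Pods, counting both node-level and zone-level
# collisions. The 2x zone weight is an illustrative choice, not the
# scheduler's actual weighting.

def spread_score(node, service_pods):
    on_node = sum(1 for p in service_pods if p["node"] == node["name"])
    in_zone = sum(1 for p in service_pods if p["zone"] == node["zone"])
    # Higher score is better, so penalize collisions; zone collisions
    # weigh more so Pods spread across zones first, then across nodes.
    return -(2 * in_zone + on_node)

nodes = [
    {"name": "node-a", "zone": "us-east-1a"},
    {"name": "node-b", "zone": "us-east-1a"},
    {"name": "node-c", "zone": "us-east-1b"},
]
# One Pod of the service already runs on node-a in zone us-east-1a.
pods = [{"node": "node-a", "zone": "us-east-1a"}]
best = max(nodes, key=lambda n: spread_score(n, pods))
print(best["name"])  # -> node-c
```

With plain node-level spreading, node-b and node-c would tie; the zone term breaks the tie toward node-c in the empty zone, which is the behavior the doc change describes.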