From 37b30b54e7081e392055780028f0f50f4485da31 Mon Sep 17 00:00:00 2001
From: qiaolei
Date: Sat, 29 Aug 2015 23:54:36 +0800
Subject: [PATCH] Amend some markdown errors in federation.md

---
 docs/proposals/federation.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/proposals/federation.md b/docs/proposals/federation.md
index 34df0aee55b..371d9c306b4 100644
--- a/docs/proposals/federation.md
+++ b/docs/proposals/federation.md
@@ -237,10 +237,10 @@ It seems useful to split this into multiple sets of sub use cases:
    which feature sets like private networks, load balancing, persistent
    disks, data snapshots etc are typically consistent and explicitly
    designed to inter-operate).
-  1.1. within the same geographical region (e.g. metro) within which network
+  1. within the same geographical region (e.g. metro) within which network
      is fast and cheap enough to be almost analogous to a single data center.
-  1.1. across multiple geographical regions, where high network cost and
+  1. across multiple geographical regions, where high network cost and
      poor network performance may be prohibitive.
 1. Multiple cloud providers (typically with inconsistent feature sets, more
    limited interoperability, and typically no cheap inter-cluster
@@ -440,12 +440,13 @@ to be able to:
 There is of course a lot of detail still missing from this section,
 including discussion of:
-1. admission control,
+
+1. admission control
 1. initial placement of instances of a new service vs scheduling new instances of an existing service in response
-to auto-scaling,
+to auto-scaling
 1. rescheduling pods due to failure (response might be
-different depending on if it's failure of a node, rack, or whole AZ),
+different depending on if it's failure of a node, rack, or whole AZ)
 1. data placement relative to compute capacity,
 etc.
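For context, the patch relies on a standard Markdown convention: every item in an ordered list may use the marker `1.`, and the renderer numbers the items sequentially, whereas `1.1.` is not a valid ordered-list marker and tends to render as literal text. A minimal illustration of the convention the patch adopts (item text abbreviated from the hunk above):

```markdown
1. Single cloud provider
1. Multiple cloud providers
```

A conforming renderer displays these as items 1 and 2, and renumbers automatically if items are inserted or reordered, which is why repeating `1.` is the common style in this repository's docs.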