Rename 'portal IP' to 'cluster IP' most everywhere

This covers obvious transforms, but not --portal_net, $PORTAL_NET and
similar.
Tim Hockin
2015-05-23 13:41:11 -07:00
parent 46686616d4
commit 4318ca5a8b
43 changed files with 389 additions and 326 deletions


@@ -83,7 +83,7 @@ We want to be able to assign IP addresses externally from Docker ([Docker issue
In addition to enabling self-registration with 3rd-party discovery mechanisms, we'd like to set up DDNS automatically ([Issue #146](https://github.com/GoogleCloudPlatform/kubernetes/issues/146)). hostname, $HOSTNAME, etc. should return a name for the pod ([Issue #298](https://github.com/GoogleCloudPlatform/kubernetes/issues/298)), and gethostbyname should be able to resolve names of other pods. We probably need to set up a DNS resolver to do the latter ([Docker issue #2267](https://github.com/dotcloud/docker/issues/2267)), so that we don't need to keep /etc/hosts files up to date dynamically.
[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service portal IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service portal IP in DNS, and for that to become the preferred resolution protocol.
[Service](http://docs.k8s.io/services.md) endpoints are currently found through environment variables. Both [Docker-links-compatible](https://docs.docker.com/userguide/dockerlinks/) variables and kubernetes-specific variables ({NAME}_SERVICE_HOST and {NAME}_SERVICE_PORT) are supported, and resolve to ports opened by the service proxy. We don't actually use [the Docker ambassador pattern](https://docs.docker.com/articles/ambassador_pattern_linking/) to link containers because we don't require applications to identify all clients at configuration time, yet. While services today are managed by the service proxy, this is an implementation detail that applications should not rely on. Clients should instead use the [service IP](http://docs.k8s.io/services.md) (which the above environment variables will resolve to). However, a flat service namespace doesn't scale and environment variables don't permit dynamic updates, which complicates service deployment by imposing implicit ordering constraints. We intend to register each service's IP in DNS, and for that to become the preferred resolution protocol.
We'd also like to accommodate other load-balancing solutions (e.g., HAProxy), non-load-balanced services ([Issue #260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260)), and other types of groups (worker pools, etc.). Providing the ability to Watch a label selector applied to pod addresses would enable efficient monitoring of group membership, which could be directly consumed or synced with a discovery mechanism. Event hooks ([Issue #140](https://github.com/GoogleCloudPlatform/kubernetes/issues/140)) for join/leave events would probably make this even easier.
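To make the variable scheme concrete, here is a sketch of what a container might see for a service named `redis-master` with cluster IP 10.0.0.11 on port 6379 (the name and addresses are illustrative, and the Docker-links-compatible variables are omitted):

```bash
# Sketch: the kubernetes-specific variables for a hypothetical
# "redis-master" service, as seen from inside a container.
env | grep REDIS_MASTER_SERVICE
# REDIS_MASTER_SERVICE_HOST=10.0.0.11
# REDIS_MASTER_SERVICE_PORT=6379
```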


@@ -87,7 +87,7 @@ Some firewall software that uses iptables may not interact well with
kubernetes. If you're having trouble around networking, try disabling any
firewall or other iptables-using systems, first.
By default the IP range for service portals is 10.0.*.* - depending on your
By default the IP range for service cluster IPs is 10.0.*.* - depending on your
docker installation, this may conflict with IPs for containers. If you find
containers running with IPs in this range, edit hack/local-cluster-up.sh and
change the portal_net flag to something else.
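For example (a sketch; the exact variable name inside hack/local-cluster-up.sh is an assumption based on the $PORTAL_NET spelling in the commit message), pick any CIDR that does not overlap Docker's bridge network:

```bash
# Hypothetical edit in hack/local-cluster-up.sh: move service IPs off
# 10.0.*.* so they cannot collide with Docker-assigned container IPs.
PORTAL_NET="192.168.3.0/24"
```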


@@ -235,7 +235,7 @@ $ mesos ps
```
The number of Kubernetes pods listed earlier (from `bin/kubectl get pods`) should equal the number of active Mesos tasks listed in the previous listing (`mesos ps`).
Next, determine the internal IP address of the front end [service portal][8]:
Next, determine the internal IP address of the front end [service][8]:
```bash
$ bin/kubectl get services
@@ -268,14 +268,14 @@ Or interact with the frontend application via your browser, in 2 steps:
First, open the firewall on the master machine.
```bash
# determine the internal port for the frontend service portal
# determine the internal port for the frontend service
$ sudo iptables-save|grep -e frontend # -- port 36336 in this case
-A KUBE-PORTALS-CONTAINER -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-CONTAINER -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-HOST -d 10.10.10.149/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
-A KUBE-PORTALS-HOST -d 10.22.183.23/32 -p tcp -m comment --comment frontend -m tcp --dport 9998 -j DNAT --to-destination 10.22.183.23:36336
# open up access to the internal port for the frontend service portal
# open up access to the internal port for the frontend service
$ sudo iptables -A INPUT -i eth0 -p tcp -m state --state NEW,ESTABLISHED -m tcp \
--dport ${internal_frontend_service_port} -j ACCEPT
```
@@ -297,7 +297,7 @@ Now, you can visit the guestbook in your browser!
[5]: https://google.mesosphere.com
[6]: http://mesosphere.com/docs/getting-started/cloud/google/mesosphere/#vpn-setup
[7]: https://github.com/mesosphere/kubernetes-mesos/tree/v0.4.0/examples/guestbook
[8]: https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.11.0/docs/services.md#ips-and-portals
[8]: https://github.com/GoogleCloudPlatform/kubernetes/blob/v0.11.0/docs/services.md#ips-and-vips
[9]: mesos/k8s-firewall.png
[10]: mesos/k8s-guestbook.png
[11]: http://mesos.apache.org/


@@ -135,7 +135,7 @@ The kube-apiserver has several options.
DEPRECATED: see --insecure-port instead
**--portal-net**=<nil>
A CIDR notation IP range from which to assign portal IPs. This must not overlap with any IP ranges assigned to nodes for pods.
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
**--profiling**=true
Enable profiling via web interface host:port/debug/pprof/
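As a sketch of the `--portal-net` flag described above (the CIDR here is only an example, and the other flags a real invocation needs are omitted):

```bash
# Assign service cluster IPs from 10.0.0.0/16; this range must not
# overlap any IP range assigned to nodes for pods.
kube-apiserver --portal-net=10.0.0.0/16
```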


@@ -179,7 +179,7 @@ The kube\-apiserver has several options.
.PP
\fB\-\-portal\-net\fP=
A CIDR notation IP range from which to assign portal IPs. This must not overlap with any IP ranges assigned to nodes for pods.
A CIDR notation IP range from which to assign service cluster IPs. This must not overlap with any IP ranges assigned to nodes for pods.
.PP
\fB\-\-profiling\fP=true


@@ -42,7 +42,7 @@ applications will expose one or more network endpoints for clients to connect to
balanced or situated behind a proxy - the data from those proxies and load balancers can be used to estimate client to
server traffic for applications. This is the primary, but not sole, source of data for making decisions.
Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-portals)
Within Kubernetes a [kube proxy](http://docs.k8s.io/services.md#ips-and-vips)
running on each node directs service requests to the underlying implementation.
While the proxy provides internal inter-pod connections, there will be L3 and L7 proxies and load balancers that manage


@@ -20,6 +20,58 @@ clustered database or key-value store. We will target such workloads for our
## v1 APIs
For existing and future workloads, we want to provide a consistent, stable set of APIs, over which developers can build and extend Kubernetes. This includes input validation, a consistent API structure, clean semantics, and improved diagnosability of the system.
## APIs and core features
1. Consistent v1 API
- Status: DONE. [v1beta3](http://kubernetesio.blogspot.com/2015/04/introducing-kubernetes-v1beta3.html) was developed as the release candidate for the v1 API.
2. Multi-port services for apps which need more than one port on the same portal IP ([#1802](https://github.com/GoogleCloudPlatform/kubernetes/issues/1802))
- Status: DONE. Released in 0.15.0
3. Nominal services for applications which need one stable IP per pod instance ([#260](https://github.com/GoogleCloudPlatform/kubernetes/issues/260))
- Status: #2585 covers some design options.
4. API input is scrubbed of status fields in favor of a new API to set status ([#4248](https://github.com/GoogleCloudPlatform/kubernetes/issues/4248))
- Status: DONE
5. Input validation reporting versioned field names ([#3084](https://github.com/GoogleCloudPlatform/kubernetes/issues/3084))
- Status: in progress
6. Error reporting: Report common problems in ways that users can discover
- Status:
7. Event management: Make events usable and useful
- Status:
8. Persistent storage support ([#5105](https://github.com/GoogleCloudPlatform/kubernetes/issues/5105))
- Status: in progress
9. Allow nodes to join/leave a cluster ([#6087](https://github.com/GoogleCloudPlatform/kubernetes/issues/6087),[#3168](https://github.com/GoogleCloudPlatform/kubernetes/issues/3168))
- Status: in progress ([#6949](https://github.com/GoogleCloudPlatform/kubernetes/pull/6949))
10. Handle node death
- Status: mostly covered by nodes joining/leaving a cluster
11. Allow live cluster upgrades ([#6075](https://github.com/GoogleCloudPlatform/kubernetes/issues/6075),[#6079](https://github.com/GoogleCloudPlatform/kubernetes/issues/6079))
- Status: design in progress
12. Allow kernel upgrades
- Status: mostly covered by nodes joining/leaving a cluster, need demonstration
13. Allow rolling-updates to fail gracefully ([#1353](https://github.com/GoogleCloudPlatform/kubernetes/issues/1353))
- Status:
14. Easy .dockercfg
- Status:
15. Demonstrate cluster stability over time
- Status:
16. Kubelet use the kubernetes API to fetch jobs to run (instead of etcd) on supported platforms
- Status: DONE
## Reliability and performance
1. Restart system components in case of crash (#2884)
- Status: in progress
2. Scale to 100 nodes (#3876)
- Status: in progress
3. Scale to 30-50 pods (1-2 containers each) per node (#4188)
- Status:
4. Scheduling throughput: 99% of scheduling decisions made in less than 1s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3954)
5. Startup time: 99% of end-to-end pod startup time with prepulled images is less than 5s on 100 node, 3000 pod cluster; linear time to number of nodes and pods (#3952, #3954)
- Status:
6. API performance: 99% of API calls return in less than 1s; constant time to number of nodes and pods (#4521)
- Status:
7. Manage and report disk space on nodes (#4135)
- Status: in progress
8. API test coverage more than 85% in e2e tests
- Status:
In addition, we will provide versioning and deprecation policies for the APIs.


@@ -31,7 +31,7 @@ that is updated whenever the set of `Pods` in a `Service` changes. For
non-native applications, Kubernetes offers a virtual-IP-based bridge to Services
which redirects to the backend `Pods`.
## Defining a Service
## Defining a service
A `Service` in Kubernetes is a REST object, similar to a `Pod`. Like all of the
REST objects, a `Service` definition can be POSTed to the apiserver to create a
@@ -138,7 +138,7 @@ Accessing a `Service` without a selector works the same as if it had a selector.
The traffic will be routed to endpoints defined by the user (`1.2.3.4:80` in
this example).
## Portals and service proxies
## Virtual IPs and service proxies
Every node in a Kubernetes cluster runs a `kube-proxy`. This application
watches the Kubernetes master for the addition and removal of `Service`
@@ -199,20 +199,22 @@ disambiguated. For example:
}
```
## Choosing your own PortalIP address
## Choosing your own IP address
A user can specify their own `PortalIP` address as part of a `Service` creation
request. For example, if they already have an existing DNS entry that they
wish to replace, or legacy systems that are configured for a specific IP
address and difficult to re-configure. The `PortalIP` address that a user
A user can specify their own cluster IP address as part of a `Service` creation
request. To do this, set the `spec.clusterIP` field (called `portalIP` in
v1beta3 and earlier APIs). For example, they may already have an existing DNS
entry that they wish to replace, or legacy systems that are configured for a
specific IP address and are difficult to re-configure. The IP address that a user
chooses must be a valid IP address and within the portal_net CIDR range that is
specified by a flag to the API server. If the PortalIP value is invalid, the
specified by a flag to the API server. If the IP address value is invalid, the
apiserver returns a 422 HTTP status code to indicate that the value is invalid.
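A minimal sketch of such a request, assuming the v1 JSON shape; the service name, selector, and chosen address are illustrative:

```bash
# Sketch only: request the cluster IP 10.0.0.99, which must fall inside
# the apiserver's portal_net CIDR or the create fails with a 422.
cat <<EOF | kubectl create -f -
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "my-service" },
    "spec": {
        "clusterIP": "10.0.0.99",
        "selector": { "app": "MyApp" },
        "ports": [ { "port": 80 } ]
    }
}
EOF
```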
### Why not use round-robin DNS?
A question that pops up every now and then is why we do all this stuff with
portals rather than just use standard round-robin DNS. There are a few reasons:
virtual IPs rather than just use standard round-robin DNS. There are a few
reasons:
* There is a long history of DNS libraries not respecting DNS TTLs and
caching the results of name lookups.
@@ -221,7 +223,7 @@ portals rather than just use standard round-robin DNS. There are a few reasons:
client re-resolving DNS over and over would be difficult to manage.
We try to discourage users from doing things that hurt themselves. That said,
if enough people ask for this, we may implement it as an alternative to portals.
if enough people ask for this, we may implement it as an alternative.
## Discovering services
@@ -238,7 +240,7 @@ and simpler `{SVCNAME}_SERVICE_HOST` and `{SVCNAME}_SERVICE_PORT` variables,
where the Service name is upper-cased and dashes are converted to underscores.
For example, the Service "redis-master" which exposes TCP port 6379 and has been
allocated portal IP address 10.0.0.11 produces the following environment
allocated cluster IP address 10.0.0.11 produces the following environment
variables:
```
@@ -272,24 +274,25 @@ cluster IP.
We will soon add DNS support for multi-port `Service`s in the form of SRV
records.
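For example (a sketch that assumes the cluster DNS addon is running), a pod in the same namespace as the `redis-master` service above could simply resolve it by name:

```bash
# From inside a pod: the cluster DNS answers with the service's
# cluster IP (10.0.0.11 in the example above).
nslookup redis-master
```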
## Headless Services
## Headless services
Sometimes you don't need or want a single service IP. In this case, you can
create "headless" services by specifying `"None"` for the `PortalIP`. For such
`Service`s, a cluster IP is not allocated and service-specific environment
variables for `Pod`s are not created. DNS is configured to return multiple A
records (addresses) for the `Service` name, which point directly to the `Pod`s
backing the `Service`. Additionally, the kube proxy does not handle these
services and there is no load balancing or proxying done by the platform for
them. The endpoints controller will still create `Endpoints` records in the
API.
Sometimes you don't need or want load-balancing and a single service IP. In
this case, you can create "headless" services by specifying `"None"` for the
cluster IP (`spec.clusterIP` or `spec.portalIP` in v1beta3 and earlier APIs).
For such `Service`s, a cluster IP is not allocated and service-specific
environment variables for `Pod`s are not created. DNS is configured to return
multiple A records (addresses) for the `Service` name, which point directly to
the `Pod`s backing the `Service`. Additionally, the kube proxy does not handle
these services and there is no load balancing or proxying done by the platform
for them. The endpoints controller will still create `Endpoints` records in
the API.
This option allows developers to reduce coupling to the Kubernetes system, if
they desire, but leaves them freedom to do discovery in their own way.
Applications can still use a self-registration pattern and adapters for other
discovery systems could easily be built upon this API.
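A sketch of such a headless service (names are illustrative), created by setting the cluster IP to `"None"`:

```bash
# Sketch: no cluster IP is allocated and kube-proxy ignores the service;
# DNS returns A records that point directly at the backing pods.
cat <<EOF | kubectl create -f -
{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": { "name": "my-headless-service" },
    "spec": {
        "clusterIP": "None",
        "selector": { "app": "MyApp" },
        "ports": [ { "port": 80 } ]
    }
}
EOF
```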
## External Services
## External services
For some parts of your application (e.g. frontends) you may want to expose a
Service onto an external (outside of your cluster, maybe public internet) IP
@@ -366,7 +369,7 @@ though exactly how that works depends on the cloud provider.
## Shortcomings
We expect that using iptables and userspace proxies for portals will work at
We expect that using iptables and userspace proxies for VIPs will work at
small to medium scale, but may not scale to very large clusters with thousands
of Services. See [the original design proposal for
portals](https://github.com/GoogleCloudPlatform/kubernetes/issues/1107) for more
@@ -387,7 +390,7 @@ but the current API requires it.
In the future we envision that the proxy policy can become more nuanced than
simple round robin balancing, for example master elected or sharded. We also
envision that some `Services` will have "real" load balancers, in which case the
portal will simply transport the packets there.
VIP will simply transport the packets there.
There's a
[proposal](https://github.com/GoogleCloudPlatform/kubernetes/issues/3760) to
@@ -400,7 +403,7 @@ We intend to have first-class support for L7 (HTTP) `Service`s.
We intend to have more flexible ingress modes for `Service`s which encompass
the current `ClusterIP`, `NodePort`, and `LoadBalancer` modes and more.
## The gory details of portals
## The gory details of virtual IPs
The previous information should be sufficient for many people who just want to
use `Services`. However, there is a lot going on behind the scenes that may be
@@ -427,26 +430,25 @@ of Kubernetes that used in-memory locking) as well as checking for invalid
assignments due to administrator intervention and cleaning up any IPs
that were allocated but which no service currently uses.
### IPs and Portals
### IPs and VIPs
Unlike `Pod` IP addresses, which actually route to a fixed destination,
`Service` IPs are not actually answered by a single host. Instead, we use
`iptables` (packet processing logic in Linux) to define virtual IP addresses
which are transparently redirected as needed. We call the tuple of the
`Service` IP and the `Service` port the `portal`. When clients connect to the
`portal`, their traffic is automatically transported to an appropriate
endpoint. The environment variables and DNS for `Services` are actually
populated in terms of the portal IP and port.
which are transparently redirected as needed. When clients connect to the
VIP, their traffic is automatically transported to an appropriate endpoint.
The environment variables and DNS for `Services` are actually populated in
terms of the `Service`'s VIP and port.
As an example, consider the image processing application described above.
When the backend `Service` is created, the Kubernetes master assigns a portal
When the backend `Service` is created, the Kubernetes master assigns a virtual
IP address, for example 10.0.0.1. Assuming the `Service` port is 1234, the
portal is 10.0.0.1:1234. The master stores that information, which is then
observed by all of the `kube-proxy` instances in the cluster. When a proxy
sees a new portal, it opens a new random port, establishes an iptables redirect
from the portal to this new port, and starts accepting connections on it.
`Service` is observed by all of the `kube-proxy` instances in the cluster.
When a proxy sees a new `Service`, it opens a new random port, establishes an
iptables redirect from the VIP to this new port, and starts accepting
connections on it.
When a client connects to the portal, the iptables rule kicks in, and redirects
When a client connects to the VIP, the iptables rule kicks in, and redirects
the packets to the `Service proxy`'s own port. The `Service proxy` chooses a
backend, and starts proxying traffic from the client to the backend.
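To make that concrete, here is a rough sketch of the kind of redirect the proxy installs for the example above; the chain name matches the `iptables-save` output in the guestbook walkthrough earlier, but the node IP and the proxy's port (40000) are made up:

```bash
# Sketch: send traffic for the VIP 10.0.0.1:1234 to the port the local
# service proxy opened (40000 is an assumed random port).
NODE_IP="10.22.183.23"   # illustrative, reused from the earlier output
iptables -t nat -A KUBE-PORTALS-HOST -d 10.0.0.1/32 -p tcp \
    -m tcp --dport 1234 -j DNAT --to-destination ${NODE_IP}:40000
```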