k8s-merge-robot 52cc96d5a0 Merge pull request #24569 from williamsandrew/elb-proxy-protocol
Automatic merge from submit-queue

AWS: ELB proxy protocol support via annotation service.beta.kubernetes.io/aws-load-balancer-proxy-protocol

This is a ~~work in progress~~ branch that adds support for the Proxy Protocol with Elastic Load Balancers. The proxy protocol is documented here: http://www.haproxy.org/download/1.5/doc/proxy-protocol.txt. It allows us to pass the "real IP" address of a client to pods behind services.

As it stands now, we create an ELB policy on the load balancer that enables the proxy protocol. We then enumerate each node port assigned to the load balancer and add our newly created policy to it. The manual process is documented here: http://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/enable-proxy-protocol.html
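In API terms, that manual process boils down to two classic-ELB calls: create a policy of type `ProxyProtocolPolicyType`, then attach it to each backend port. A minimal sketch, assuming the aws-sdk-go `elb` client (the function and policy names below are illustrative, not this branch's actual code):

```
// Sketch only: roughly what enabling the proxy protocol on a classic ELB looks
// like with aws-sdk-go. enableProxyProtocol and its arguments are illustrative
// names, not the code in this PR.
package elbsketch

import (
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/service/elb"
)

func enableProxyProtocol(client *elb.ELB, lbName string, backendPorts []int64) error {
	const policyName = "k8s-proxy-protocol-policy" // illustrative policy name

	// 1) Create a policy of type ProxyProtocolPolicyType on the load balancer.
	_, err := client.CreateLoadBalancerPolicy(&elb.CreateLoadBalancerPolicyInput{
		LoadBalancerName: aws.String(lbName),
		PolicyName:       aws.String(policyName),
		PolicyTypeName:   aws.String("ProxyProtocolPolicyType"),
		PolicyAttributes: []*elb.PolicyAttribute{{
			AttributeName:  aws.String("ProxyProtocol"),
			AttributeValue: aws.String("true"),
		}},
	})
	if err != nil {
		return err
	}

	// 2) Attach the policy to each backend (node) port behind the load balancer.
	// Note: this call replaces the port's whole policy set, so real code would
	// merge with any policies already attached.
	for _, port := range backendPorts {
		_, err := client.SetLoadBalancerPoliciesForBackendServer(&elb.SetLoadBalancerPoliciesForBackendServerInput{
			LoadBalancerName: aws.String(lbName),
			InstancePort:     aws.Int64(port),
			PolicyNames:      []*string{aws.String(policyName)},
		})
		if err != nil {
			return err
		}
	}
	return nil
}
```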


Right now, I’m looking for feedback on the approach before I dive much deeper into the code. More precisely, I have questions about the following:

1) Right now I just check that a certain annotation exists on the service regardless of what its value is. Assuming we’re going to enable this feature via an annotation, what is the expected experience? This decision likely depends on the answers to the next questions.

2) Right now the implementation enables the proxy protocol on every ELB backend. The ELB API expects you to add the policy to each configured backend individually. Do we want the ability to configure the proxy protocol on a per-service-port basis? For example, if a service exposes TCP 80 and 443, would we want the ability to enable the proxy protocol only on port 443? Does this overcomplicate the implementation? If we wanted to go in this direction, we could do something like the following (a parsing sketch appears after this list):

```
{
  "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol": "tcp:80,tcp:443"
}
```

3) I avoided this because I was concerned with scope creep and our organization doesn’t need it, but could/should our implementation be adjusted to just handle ELB policies in general? I hadn’t used the ELB API until I started working on this branch so I don’t know how realistic this is. I also don't know how common this use case is as our organization has used our own load balancing setup prior to Kubernetes. This page has a couple of examples at the bottom: http://docs.aws.amazon.com/cli/latest/reference/elb/create-load-balancer-policy.html
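Regarding questions 1 and 2, here is a rough sketch of how the annotation value could be interpreted, covering both a bare "enable everywhere" value and the per-port `tcp:80,tcp:443` form floated above. The helper is hypothetical, not code from this branch:

```
package main

import (
	"fmt"
	"strconv"
	"strings"
)

const proxyProtocolAnnotation = "service.beta.kubernetes.io/aws-load-balancer-proxy-protocol"

// proxyProtocolPorts is a hypothetical helper, not the PR's code. It reports
// whether the proxy protocol should be enabled on all backend ports, or only
// on the TCP ports listed in the annotation value (e.g. "tcp:80,tcp:443").
func proxyProtocolPorts(annotations map[string]string) (all bool, ports map[int]bool) {
	value, ok := annotations[proxyProtocolAnnotation]
	if !ok {
		return false, nil // annotation absent: feature stays off
	}
	if value == "" || value == "*" {
		return true, nil // bare/wildcard value: enable on every backend port
	}
	ports = map[int]bool{}
	for _, entry := range strings.Split(value, ",") {
		parts := strings.SplitN(strings.TrimSpace(entry), ":", 2)
		if len(parts) != 2 || parts[0] != "tcp" {
			continue // ignore entries that don't look like "tcp:<port>"
		}
		if port, err := strconv.Atoi(parts[1]); err == nil {
			ports[port] = true
		}
	}
	return false, ports
}

func main() {
	all, ports := proxyProtocolPorts(map[string]string{
		proxyProtocolAnnotation: "tcp:80,tcp:443",
	})
	fmt.Println(all, ports) // false map[80:true 443:true]
}
```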

cc @justinsb


Kubernetes


Are you ...

  • Interested in learning more about using Kubernetes? Please see our user-facing documentation on kubernetes.io
  • Interested in hacking on the core Kubernetes code base? Keep reading!

Kubernetes is an open source system for managing containerized applications across multiple hosts, providing basic mechanisms for deployment, maintenance, and scaling of applications.

Kubernetes is:

  • lean: lightweight, simple, accessible
  • portable: public, private, hybrid, multi cloud
  • extensible: modular, pluggable, hookable, composable
  • self-healing: auto-placement, auto-restart, auto-replication

Kubernetes builds upon a decade and a half of experience at Google running production workloads at scale, combined with best-of-breed ideas and practices from the community.


Kubernetes is ready for Production!

With the 1.0.1 release Kubernetes is ready to serve your production workloads.

Kubernetes can run anywhere!

You can run Kubernetes on your local workstation under Vagrant, on cloud providers (e.g. GCE, AWS, Azure), and on physical hardware. Essentially, anywhere Linux runs, you can run Kubernetes. Check out the Getting Started Guides for details.

Concepts

Kubernetes works with the following concepts; a short sketch tying them together appears after the list:

Cluster
A cluster is a set of physical or virtual machines and other infrastructure resources used by Kubernetes to run your applications. Kubernetes can run anywhere! See the Getting Started Guides for instructions for a variety of services.
Node
A node is a physical or virtual machine running Kubernetes, onto which pods can be scheduled.
Pod
A pod is a colocated group of application containers with shared volumes. Pods are the smallest deployable units that can be created, scheduled, and managed with Kubernetes. They can be created individually, but it's recommended that you use a replication controller even when creating a single pod.
Replication controller
Replication controllers manage the lifecycle of pods. They ensure that a specified number of pods are running at any given time, by creating or killing pods as required.
Service
Services provide a single, stable name and address for a set of pods. They act as basic load balancers.
Label
Labels are used to organize and select groups of objects based on key:value pairs.
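To make the concepts above concrete, here is a small sketch that ties pods, a replication controller, a service, and labels together. It is a hypothetical example using the current k8s.io/api Go types rather than code from this repository:

```
// Illustrative only: builds a replication controller that keeps three nginx
// pods running, plus a service that selects them by label.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"app": "nginx"} // labels group and select objects
	replicas := int32(3)

	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas, // ensure three pods are running at any given time
			Selector: labels,    // manage any pod carrying these labels
			Template: &corev1.PodTemplateSpec{ // the pod definition to replicate
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{Name: "nginx", Image: "nginx"}},
				},
			},
		},
	}

	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "nginx"},
		Spec: corev1.ServiceSpec{
			Selector: labels, // one stable name and address for all pods with app=nginx
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}

	fmt.Println(rc.Name, svc.Name)
}
```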

Documentation

Kubernetes documentation is organized into several categories.

Community, discussion, contribution, and support

See which companies are committed to driving quality in Kubernetes on our community page.

Do you want to help "shape the evolution of technologies that are container packaged, dynamically scheduled and microservices oriented?"

You should consider joining the Cloud Native Computing Foundation. For details about who's involved and how Kubernetes plays a role, read their announcement.

Code of conduct

Participation in the Kubernetes community is governed by the Kubernetes Code of Conduct.

Are you ready to add to the discussion?

We have presence on:

You can also view recordings of past events and presentations on our Media page.

For Q&A, our threads are at:

Want to do more than just 'discuss' Kubernetes?

If you're interested in being a contributor and want to get involved in developing Kubernetes, start in the Kubernetes Developer Guide and also review the contributor guidelines.

Support

There are many channels you can use to reach us; following the process below helps us get you the help you need as efficiently as possible.

If you need support, start with the troubleshooting guide and work your way through the process that we've outlined.

That said, if you have questions, reach out to us one way or another. We don't bite!

Community resources:

  • Awesome-kubernetes: You can find more projects, tools and articles related to Kubernetes on the awesome-kubernetes list. Add your project there and help us make it better.

Instructive & educational resources for the Kubernetes community. By the community.

