mirror of https://github.com/k3s-io/kubernetes.git (synced 2025-09-13)
This guide uses a simple nginx server to demonstrate proof of concept. The same …
## Exposing pods to the cluster
We did this in a previous example, but let's do it once again and focus on the networking perspective. Create an nginx pod, and note that it has a container port specification:
```yaml
$ cat nginxrc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-nginx
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
```
This makes it accessible from any node in your cluster. Check the nodes the pod is running on:
```shell
$ kubectl create -f ./nginxrc.yaml
$ kubectl get pods -l app=nginx -o wide
...
my-nginx-t26zt   1/1       Running   0          2h        e2e-test-beeps-minion-
```
Check your pods' IPs:
```shell
$ kubectl get pods -l app=nginx -o json | grep podIP
"podIP": "10.245.0.15",
"podIP": "10.245.0.14",
```
You should be able to ssh into any node in your cluster and curl both IPs. Note that the containers are *not* using port 80 on the node, nor are there any special NAT rules to route traffic to the pods. This means you can run multiple nginx pods on the same node, all using the same containerPort, and access them from any other pod or node in your cluster by IP. Like Docker, ports can still be published to the host node's interface(s), but the need for this is radically diminished because of the networking model.
You can read more about [how we achieve this](../admin/networking.md#how-to-achieve-this) if you’re curious.
So we have pods running nginx in a flat, cluster-wide address space. In theory, …
A Kubernetes Service is an abstraction which defines a logical set of Pods running somewhere in your cluster that all provide the same functionality. When created, each Service is assigned a unique IP address (also called the clusterIP). This address is tied to the lifespan of the Service and will not change while the Service is alive. Pods can be configured to talk to the Service, knowing that communication to the Service will be automatically load-balanced to some pod that is a member of the Service.
You can create a Service for your 2 nginx replicas with the following yaml:
```yaml
$ cat nginxsvc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxsvc
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: nginx
```
This specification will create a Service which targets TCP port 80 on any Pod with the `app=nginx` label, and expose it on an abstracted Service port (`targetPort` is the port the container accepts traffic on; `port` is the abstracted Service port, which can be any port other pods use to access the Service). Check your Service:
```shell
$ kubectl get svc
NAME       LABELS      SELECTOR    IP(S)          PORT(S)
...
nginxsvc   app=nginx   app=nginx   10.0.116.146   80/TCP
```
As mentioned previously, a Service is backed by a group of pods. These pods are exposed through `endpoints`. The Service's selector is evaluated continuously and the results are POSTed to an Endpoints object, also named `nginxsvc`. When a pod dies, it is automatically removed from the endpoints, and new pods matching the Service's selector are automatically added to the endpoints. Check the endpoints, and note that the IPs are the same as those of the pods created in the first step:
```shell
$ kubectl describe svc nginxsvc
Name:              nginxsvc
...
$ kubectl get ep
NAME       ENDPOINTS
nginxsvc   10.245.0.14:80,10.245.0.15:80
```
You should now be able to curl the nginx Service on `10.0.116.146:80` from any node in your cluster. Note that the Service IP is completely virtual; it never hits the wire. If you're curious about how this works, you can read more about the [service proxy](services.md#virtual-ips-and-service-proxies).
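For intuition: kube-proxy, which at this point in Kubernetes' history is a userspace proxy, watches Services and installs NAT rules on every node so the virtual IP is answered locally and forwarded to a proxy port it owns. A sketch of the kind of rule involved, in `iptables-save` form (the chain name is accurate for this era, but the pairing of this VIP with proxy port 40417 is an invented example):

```
# Illustrative NAT rule of the kind kube-proxy (userspace mode) installs:
# traffic to the Service VIP:port is redirected to a locally chosen proxy port,
# and the proxy then load-balances across the endpoints.
-A KUBE-PORTALS-CONTAINER -d 10.0.116.146/32 -p tcp --dport 80 -j REDIRECT --to-ports 40417
```

This is why the Service IP "never hits the wire": packets are rewritten on the node before they leave it.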
## Accessing the Service
Kubernetes supports 2 primary modes of finding a Service: environment variables and DNS.
### Environment Variables
When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
```shell
$ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
KUBERNETES_SERVICE_HOST=10.0.0.1
KUBERNETES_SERVICE_PORT=443
```
Note there's no mention of your Service. This is because you created the replicas before the Service. Another disadvantage of doing this is that the scheduler might put both pods on the same machine, which will take your entire Service down if that machine dies. We can do this the right way by killing the 2 pods and waiting for the replication controller to recreate them. This time around the Service exists *before* the replicas. This will give you scheduler-level Service spreading of your pods (provided all your nodes have equal capacity), as well as the right environment variables:
```shell
$ kubectl scale rc my-nginx --replicas=0; kubectl scale rc my-nginx --replicas=2;
$ kubectl get pods -l app=nginx -o wide
...
NGINXSVC_SERVICE_PORT=80
```
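These variables follow a fixed naming convention (`<SERVICE_NAME>_SERVICE_HOST` / `_SERVICE_PORT`, upper-cased, dashes to underscores), so a client can build the Service URL from its environment without hardcoding the IP. A small sketch; the exported values are copied from this example rather than read from a real pod, where the kubelet would have set them already:

```shell
# Simulate the variables the kubelet injects for a Service named nginxsvc
# (example values; inside a real pod these are already present):
export NGINXSVC_SERVICE_HOST=10.0.116.146
export NGINXSVC_SERVICE_PORT=80

# Build the URL a client would use to reach the Service:
URL="http://${NGINXSVC_SERVICE_HOST}:${NGINXSVC_SERVICE_PORT}"
echo "$URL"
```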
### DNS
Kubernetes offers a DNS cluster addon Service that uses SkyDNS to automatically assign DNS names to other Services. You can check if it's running on your cluster:
```shell
$ kubectl get services kube-dns --namespace=kube-system
NAME       LABELS    SELECTOR           IP(S)       PORT(S)
kube-dns   <none>    k8s-app=kube-dns   10.0.0.10   53/UDP
                                                    53/TCP
```
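The names the addon assigns are mechanical: a Service is reachable as `<service>.<namespace>` from any pod, with a fully qualified form under the cluster's DNS domain. A sketch of deriving the name for this example (the `cluster.local` suffix is the common default, an assumption about your cluster's configuration):

```shell
# Derive the DNS name for the nginxsvc Service in the default namespace.
SVC=nginxsvc
NS=default
NAME="${SVC}.${NS}"   # resolvable from any pod; the full suffix depends on the cluster DNS domain
echo "$NAME"          # e.g. fully qualified: ${NAME}.cluster.local on a default setup
```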
If it isn't running, you can [enable it](../../cluster/addons/dns/README.md#how-do-i-configure-it). The rest of this section assumes you have a Service with a long-lived IP (nginxsvc) and a DNS server that has assigned a name to that IP (the kube-dns cluster addon), so you can talk to the Service from any pod in your cluster using standard methods (e.g. gethostbyname). Let's create another pod to test this:
```yaml
$ cat curlpod.yaml
...
    name: curlcontainer
  restartPolicy: Always
```
And perform a lookup of the nginx Service:
```shell
$ kubectl create -f ./curlpod.yaml
default/curlpod
...
```

Till now we have only accessed the nginx server from within the cluster. Before …
* A [secret](secrets.md) that makes the certificates accessible to pods
You can acquire all these from the [nginx https example](../../examples/https-nginx/README.md), in short:
```shell
$ make keys secret KEY=/tmp/nginx.key CERT=/tmp/nginx.crt SECRET=/tmp/secret.json
$ kubectl create -f /tmp/secret.json
...
nginxsecret   Opaque   2
```
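If you don't want to use the example's Makefile, the `keys` step amounts to generating a self-signed certificate and key pair. A rough equivalent (an assumption about what the Makefile target does, not its exact recipe; the CN is chosen here to match the Service name):

```shell
# Generate a self-signed certificate and key for the nginxsvc Service.
# -nodes leaves the key unencrypted so nginx can read it without a passphrase.
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout /tmp/nginx.key -out /tmp/nginx.crt \
  -subj "/CN=nginxsvc/O=nginxsvc"
```

Clients inside the cluster will still need `-k`/`--insecure` (or the CA cert) when curling, since the certificate is self-signed.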
Now modify your nginx replicas to start an https server using the certificate in the secret, and modify the Service to expose both ports (80 and 443):
```yaml
$ cat nginx-app.yaml
apiVersion: v1
...
        - mountPath: /etc/nginx/ssl
          name: secret-volume
```
Noteworthy points about the nginx-app manifest:
- It contains both rc and service specification in the same file.
- The [nginx server](../../examples/https-nginx/default.conf) serves http traffic on port 80 and https traffic on port 443, and the nginx Service exposes both ports.

```shell
...
replicationcontrollers/my-nginx
```
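For context, the referenced `default.conf` boils down to a single server block listening on both ports, with the certificates read from the secret volume mounted at `/etc/nginx/ssl`. A minimal sketch of such a configuration (an approximation, not the exact file from the example; the file names match the keys generated above):

```nginx
server {
    listen 80;
    listen 443 ssl;

    # Certificates come from the secret volume mounted at /etc/nginx/ssl.
    ssl_certificate     /etc/nginx/ssl/nginx.crt;
    ssl_certificate_key /etc/nginx/ssl/nginx.key;

    location / {
        root  /usr/share/nginx/html;
        index index.html;
    }
}
```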
At this point you can reach the nginx server from any node.
```shell
$ kubectl get pods -o json | grep -i podip
"podIP": "10.1.0.80",
...
$ curl https://104.197.63.17:30645 -k
...
```
Let's now recreate the Service to use a cloud load balancer. Just change the `Type` of Service in the nginx-app.yaml from `NodePort` to `LoadBalancer`:
```shell
$ kubectl delete rc,svc -l app=nginx
$ kubectl create -f ./nginx-app.yaml
```
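The edit itself is a single line in the Service portion of nginx-app.yaml; a fragment for orientation (surrounding fields elided; note the field is spelled `type` in the v1 API):

```yaml
spec:
  type: LoadBalancer   # was: NodePort
```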