Remove colon from end of doc heading.
@@ -261,7 +261,7 @@ Kubelet volume plugin API will be changed so that a volume plugin receives the s
 a volume along with the volume spec. This will allow volume plugins to implement setting the
 security context of volumes they manage.

-## Community work:
+## Community work

 Several proposals / upstream patches are notable as background for this proposal:

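For readers skimming this hunk, a purely hypothetical Go sketch of the API shape the surrounding proposal describes — a volume plugin receiving the security context together with the volume spec. The type and method names below are assumptions for illustration; the real Kubelet plugin interfaces live in pkg/volume and differ.

```go
// Hypothetical sketch only: illustrates a plugin receiving the volume's
// security context alongside its spec, as the proposal describes.
package volumeplugin

// VolumeSecurityContext is an assumed stand-in for the security settings
// (e.g. SELinux options, fsGroup) that would be handed to the plugin.
type VolumeSecurityContext struct {
	SELinuxLevel string
	FSGroup      *int64
}

// Spec is an assumed stand-in for the existing volume spec argument.
type Spec struct {
	Name string
}

// Builder shows the proposed shape of the call: SetUp gets the security
// context together with the spec, so the plugin can chown/relabel the
// volume it manages.
type Builder interface {
	SetUp(spec Spec, secCtx VolumeSecurityContext) error
}
```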
@@ -32,7 +32,7 @@ While Kubernetes today is not primarily a multi-tenant system, the long term evo

 ## Use cases

-### Roles:
+### Roles

 We define "user" as a unique identity accessing the Kubernetes API server, which may be a human or an automated process. Human users fall into the following categories:

@@ -46,7 +46,7 @@ Automated process users fall into the following categories:
 2. k8s infrastructure user - the user that kubernetes infrastructure components use to perform cluster functions with clearly defined roles


-### Description of roles:
+### Description of roles

 * Developers:
   * write pod specs.
@@ -37,7 +37,7 @@ kubectl -s http://localhost:8080 run nginx --image=nginx --port=80

 now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.

-### Expose it as a service:
+### Expose it as a service
 ```sh
 kubectl expose rc nginx --port=80
 ```
@@ -26,7 +26,7 @@ For each worker node, there are three steps:
 ### Set up Flanneld on the worker node
 As before, the Flannel daemon is going to provide network connectivity.

-#### Set up a bootstrap docker:
+#### Set up a bootstrap docker
 As previously, we need a second instance of the Docker daemon running to bootstrap the flannel networking.

 Run:
@@ -88,7 +88,7 @@ kubectl -s http://localhost:8080 run-container nginx --image=nginx --port=80

 now run ```docker ps``` you should see nginx running. You may need to wait a few minutes for the image to get pulled.

-### Expose it as a service:
+### Expose it as a service
 ```sh
 kubectl expose rc nginx --port=80
 ```
@@ -64,7 +64,7 @@ There is a specific `cluster/rackspace` directory with the scripts for the follo
 3. The master server and additional nodes will be created via the `nova` CLI. A `cloud-config.yaml` is generated and provided as user-data with the entire configuration for the systems.
 4. We then boot as many nodes as defined via `$NUM_MINIONS`.

-## Some notes:
+## Some notes
 - The scripts expect `eth2` to be the cloud network that the containers will communicate across.
 - A number of the items in `config-default.sh` are overridable via environment variables.
 - For older versions please either:
@@ -56,7 +56,7 @@ There is a short window after a new master acquires the lease, during which data

 5. When the API server makes the corresponding write to etcd, it includes it in a transaction that does a compare-and-swap on the "current master" entry (old value == new value == host:port and sequence number from the replica that sent the mutating operation). This basically guarantees that if we elect the new master, all transactions coming from the old master will fail. You can think of this as the master attaching a "precondition" of its belief about who is the latest master.

-## Open Questions:
+## Open Questions
 * Is there a desire to keep track of all nodes for a specific component type?


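The compare-and-swap described in point 5 of that hunk can be pictured with a short, hedged sketch using the etcd v3 Go client. The key names, the value layout (host:port plus sequence number), and the client version are assumptions made for illustration, not the proposal's actual implementation.

```go
// Illustrative sketch: a mutating write guarded by a compare-and-swap on an
// assumed "current master" key. If another replica has since become master,
// the transaction fails and the stale master's write is rejected.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	// What this replica believes the "current master" entry holds:
	// host:port plus a sequence number (layout assumed for the sketch).
	believedMaster := "apiserver-1:8080#42"

	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	resp, err := cli.Txn(ctx).
		// Precondition: the master entry still matches this replica's belief.
		If(clientv3.Compare(clientv3.Value("/ha/current-master"), "=", believedMaster)).
		// Only then apply the mutating write forwarded by that master.
		Then(clientv3.OpPut("/registry/pods/default/nginx", "serialized-object")).
		Commit()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("write accepted:", resp.Succeeded)
}
```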
@@ -90,7 +90,7 @@ $ kubectl create -f secret.json
 $ kubectl describe secret mysecretname
 ```

-#### To delete/invalidate a service account token:
+#### To delete/invalidate a service account token
 ```
 kubectl delete secret mysecretname
 ```
@@ -97,7 +97,7 @@ You should now be able to curl the nginx Service on `10.0.208.159:80` from any n

 Kubernetes supports 2 primary modes of finding a Service - environment variables and DNS. The former works out of the box while the latter requires the [kube-dns cluster addon](../../cluster/addons/dns/README.md).

-### Environment Variables:
+### Environment Variables
 When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. This introduces an ordering problem. To see why, inspect the environment of your running nginx pods:
 ```shell
 $ kubectl exec my-nginx-6isf4 -- printenv | grep SERVICE
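As a rough sketch of what the environment-variable mode looks like from inside a client pod — assuming the Service is named nginxsvc, as in the surrounding walkthrough, so the kubelet injects NGINXSVC_SERVICE_HOST and NGINXSVC_SERVICE_PORT:

```go
// Minimal sketch: consume the kubelet-injected Service environment variables.
// If the pod was started before the Service existed, these variables are
// absent — the ordering problem the quoted text mentions.
package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	host := os.Getenv("NGINXSVC_SERVICE_HOST")
	port := os.Getenv("NGINXSVC_SERVICE_PORT")
	if host == "" || port == "" {
		fmt.Println("service env vars not set; was this pod created before the Service?")
		return
	}
	resp, err := http.Get(fmt.Sprintf("http://%s:%s/", host, port))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("nginx answered with", resp.Status)
}
```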
@@ -119,7 +119,7 @@ KUBERNETES_SERVICE_HOST=10.0.0.1
 NGINXSVC_SERVICE_PORT=80
 ```

-### DNS:
+### DNS
 Kubernetes offers a DNS cluster addon Service that uses skydns to automatically assign dns names to other Services. You can check if it’s running on your cluster:
 ```shell
 $ kubectl get services kube-dns --namespace=kube-system
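By contrast, the DNS mode needs no injected variables. A hedged sketch of a lookup from inside the cluster, assuming a Service named nginxsvc in the default namespace and the usual service.namespace.svc.cluster-domain naming (the cluster domain may differ):

```go
// Minimal sketch: resolve a Service by its DNS name via the kube-dns addon.
package main

import (
	"fmt"
	"net"
)

func main() {
	// Name pattern and cluster domain are assumptions for this sketch.
	addrs, err := net.LookupHost("nginxsvc.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed (is the kube-dns addon running?):", err)
		return
	}
	fmt.Println("nginxsvc resolves to:", addrs)
}
```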
@@ -206,7 +206,7 @@ aws ec2 create-volume --availability-zone eu-west-1a --size 10 --volume-type gp2
 Make sure the zone matches the zone you brought up your cluster in. (And also check that the size and EBS volume
 type are suitable for your use!)

-#### AWS EBS Example configuration:
+#### AWS EBS Example configuration

 ```yaml
 apiVersion: v1
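The YAML example this hunk truncates can also be pictured in Go types. This is only a hedged sketch using the current k8s.io/api/core/v1 structs (not necessarily what this document's vintage used), with a placeholder volume ID and mount path:

```go
// Sketch: a pod that mounts a pre-created EBS volume, mirroring the idea of
// the YAML configuration above. Volume ID and paths are placeholders.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	pod := v1.Pod{
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "app",
				Image: "nginx",
				VolumeMounts: []v1.VolumeMount{{
					Name:      "ebs-data",
					MountPath: "/data",
				}},
			}},
			Volumes: []v1.Volume{{
				Name: "ebs-data",
				VolumeSource: v1.VolumeSource{
					AWSElasticBlockStore: &v1.AWSElasticBlockStoreVolumeSource{
						VolumeID: "vol-0123456789abcdef0", // placeholder, not from the original doc
						FSType:   "ext4",
					},
				},
			}},
		},
	}
	fmt.Printf("pod mounts EBS volume %s at /data\n", pod.Spec.Volumes[0].AWSElasticBlockStore.VolumeID)
}
```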