Merge pull request #11424 from lavalamp/mungePreformatted
Munge preformatted
@@ -116,11 +116,13 @@ To permit an action Policy with an unset namespace applies regardless of namespace

Other implementations can be developed fairly easily.
The APIserver calls the Authorizer interface:

```go
type Authorizer interface {
	Authorize(a Attributes) error
}
```

to determine whether or not to allow each API action.

An authorization plugin is a module that implements this interface.
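
For illustration, a minimal sketch of such a plugin follows. This is not code from the tree: the `Attributes` accessor names are assumptions standing in for the real interface.

```go
// Package authzexample sketches a custom authorization plugin. The
// Attributes accessors below are assumed for illustration; they are not
// the apiserver's real interface.
package authzexample

import "errors"

type Attributes interface {
	GetUserName() string
	GetNamespace() string
}

// namespaceAuthorizer permits an action only when the requesting user
// owns the namespace the action targets.
type namespaceAuthorizer struct {
	owners map[string]string // namespace -> owning user
}

// Authorize implements the Authorizer interface: a nil error permits the
// action, a non-nil error denies it.
func (a *namespaceAuthorizer) Authorize(attrs Attributes) error {
	if a.owners[attrs.GetNamespace()] == attrs.GetUserName() {
		return nil
	}
	return errors.New("forbidden: user does not own this namespace")
}
```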
@@ -62,6 +62,7 @@ To avoid running into cloud provider quota issues, when creating a cluster with
To prevent memory leaks or other resource issues in [cluster addons](../../cluster/addons/) from consuming all the resources available on a node, Kubernetes sets resource limits on addon containers to limit the CPU and memory resources they can consume (see PRs [#10653](https://github.com/GoogleCloudPlatform/kubernetes/pull/10653/files) and [#10778](https://github.com/GoogleCloudPlatform/kubernetes/pull/10778/files)).

For example:

```YAML
containers:
  - image: gcr.io/google_containers/heapster:v0.15.0
```
@@ -40,6 +40,7 @@ You may also visit [troubleshooting document](../troubleshooting.md) for more information
The first thing to debug in your cluster is whether your nodes are all registered correctly.

Run

```
kubectl get nodes
```
@@ -131,6 +131,7 @@ for ```${NODE_IP}``` on each machine.

#### Validating your cluster
Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate this with

```
etcdctl member list
```
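
You can also script this check. The sketch below lists members over etcd 2.x's HTTP members API; the localhost endpoint and default client port are assumptions, not something configured above.

```go
// A sketch that validates the cluster by listing members via etcd 2.x's
// HTTP API. The endpoint assumes etcd's default client port on localhost.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	resp, err := http.Get("http://127.0.0.1:2379/v2/members")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var body struct {
		Members []struct {
			Name     string   `json:"name"`
			PeerURLs []string `json:"peerURLs"`
		} `json:"members"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&body); err != nil {
		panic(err)
	}
	// For the three-node setup above, expect three entries here.
	for _, m := range body.Members {
		fmt.Println(m.Name, m.PeerURLs)
	}
}
```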
@@ -209,11 +210,12 @@ master election. On each of the three apiserver nodes, we run a small utility application that implements a master
election protocol using etcd "compare and swap". If the apiserver node wins the election, it starts the master component it is managing (e.g. the scheduler); if it
loses the election, it ensures that any master components running on the node (e.g. the scheduler) are stopped.
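
As a sketch of the protocol (not the actual podmaster code), the loop below shows how a compare-and-swap primitive with a TTL yields a lease-based election; the `casClient` interface is a hypothetical stand-in for a real etcd client.

```go
// Package election sketches the lease-based election loop described
// above. casClient is a hypothetical stand-in for a real etcd client.
package election

import "time"

type casClient interface {
	// CompareAndSwap atomically writes value to key with a TTL, but only
	// if the key currently holds prevValue (or is absent when prevValue
	// is empty). It reports whether the write succeeded.
	CompareAndSwap(key, value, prevValue string, ttl time.Duration) bool
}

// Run contends for the lease forever, starting the managed component
// while this node holds the lease and stopping it while it does not.
func Run(etcd casClient, key, self string, start, stop func()) {
	for {
		won := etcd.CompareAndSwap(key, self, "", 10*time.Second) || // take a free lease
			etcd.CompareAndSwap(key, self, self, 10*time.Second) // or renew our own
		if won {
			start()
		} else {
			stop()
		}
		time.Sleep(5 * time.Second)
	}
}
```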

-In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](proposals/high-availability.md)
+In the future, we expect to more tightly integrate this lease-locking into the scheduler and controller-manager binaries directly, as described in the [high availability design proposal](../proposals/high-availability.md)

### Installing configuration files

First, create empty log files on each node, so that Docker will mount the files instead of creating new directories:

```
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
```
@@ -244,7 +246,7 @@ set the ```--apiserver``` flag to your replicated endpoint.

## Vagrant up!

-We indeed have an initial proof of concept tester for this, which is available [here](../examples/high-availability/).
+We indeed have an initial proof of concept tester for this, which is available [here](../../examples/high-availability/).

It implements the major concepts (with a few minor reductions for simplicity) of the podmaster HA implementation, alongside a quick smoke test using k8petstore.
@@ -152,6 +152,7 @@ outbound internet access. A Linux bridge (called `cbr0`) is configured to exist
on that subnet, and is passed to Docker's `--bridge` flag.

We start Docker with:

```
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```
@@ -102,6 +102,7 @@ setting the Status of the condition, has not heard from the
node recently (currently 40 seconds).
Node condition is represented as a JSON object. For example,
the following conditions mean the node is in a sane state:

```json
"conditions": [
  {
```
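
To make the shape of this data concrete, here is a minimal sketch that decodes a conditions array and reports whether the node is Ready. The `type` and `status` field names are assumptions based on the v1 NodeCondition schema, since the example above is truncated.

```go
// A sketch that decodes the conditions array and decides whether a node
// is healthy. The "type"/"status" field names are assumed from the v1
// NodeCondition schema, not taken from the truncated example above.
package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

func nodeReady(conditions []byte) (bool, error) {
	var conds []nodeCondition
	if err := json.Unmarshal(conditions, &conds); err != nil {
		return false, err
	}
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil // no Ready condition reported
}

func main() {
	raw := []byte(`[{"type": "Ready", "status": "True"}]`)
	ready, err := nodeReady(raw)
	if err != nil {
		panic(err)
	}
	fmt.Println("node ready:", ready)
}
```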
@@ -133,6 +134,7 @@ or from your pool of physical or virtual machines. What this means is that when
Kubernetes creates a node, it is really just creating an object that represents the node in its internal state.
After creation, Kubernetes will check whether the node is valid or not.
For example, if you try to create a node from the following content:

```json
{
  "kind": "Node",
```
@@ -204,6 +206,7 @@ Making a node unschedulable will prevent new pods from being scheduled to that
node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:

```
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
```
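
The same effect can be had against the REST API directly. The sketch below is illustrative only: it assumes an apiserver reachable on the insecure local port, and that `unschedulable` sits under `spec` in the v1 schema.

```go
// A sketch of marking a node unschedulable via a JSON merge patch.
// Assumes an apiserver on the insecure local port and that
// unschedulable lives under spec in the v1 schema.
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	patch := []byte(`{"spec": {"unschedulable": true}}`)
	req, err := http.NewRequest("PATCH",
		"http://localhost:8080/api/v1/nodes/10.1.2.3",
		bytes.NewReader(patch))
	if err != nil {
		panic(err)
	}
	req.Header.Set("Content-Type", "application/merge-patch+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```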
@@ -222,6 +225,7 @@ processes not in containers.

If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:

```
apiVersion: v1
kind: Pod
```
@@ -236,6 +240,7 @@ spec:

```
        cpu: 100m
        memory: 100Mi
```

Set the `cpu` and `memory` values to the amount of resources you want to reserve.
Place the file in the manifest directory (`--config=DIR` flag of kubelet). Do this
on each kubelet where you want to reserve resources.
@@ -84,6 +84,7 @@ This means the resource must have a fully-qualified name (i.e. mycompany.org/shi

## Viewing and Setting Quotas
Kubectl supports creating, updating, and viewing quotas:

```
$ kubectl namespace myspace
$ cat <<EOF > quota.json
```
@@ -48,6 +48,7 @@ Each salt-minion service is configured to interact with the **salt-master** service

```
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```

The salt-master is contacted by each salt-minion and, depending upon the machine information presented, provisions the machine as either a kubernetes-master or kubernetes-minion with all the required capabilities needed to run Kubernetes.

If you are running the Vagrant-based environment, the **salt-api** service is running on the kubernetes-master. It is configured to enable the vagrant user to introspect the salt cluster in order to find out about machines in the Vagrant environment via a REST API.
@@ -109,6 +109,7 @@ $ kubectl describe secret mysecretname

#### To delete/invalidate a service account token

```
kubectl delete secret mysecretname
```