Merge pull request #11548 from a-robinson/admin

Improve admin docs syntax highlighting

Commit: e3521a8e74
@@ -68,14 +68,14 @@ To prevent memory leaks or other resource issues in [cluster addons](../../clust

For example:

-```YAML
-containers:
-  - image: gcr.io/google_containers/heapster:v0.15.0
-    name: heapster
-    resources:
-      limits:
-        cpu: 100m
-        memory: 200Mi
+```yaml
+containers:
+  - image: gcr.io/google_containers/heapster:v0.15.0
+    name: heapster
+    resources:
+      limits:
+        cpu: 100m
+        memory: 200Mi
```

These limits, however, are based on data collected from addons running on 4-node clusters (see [#10335](https://github.com/GoogleCloudPlatform/kubernetes/issues/10335#issuecomment-117861225)). The addons consume a lot more resources when running on large deployment clusters (see [#5880](https://github.com/GoogleCloudPlatform/kubernetes/issues/5880#issuecomment-113984085)). So, if a large cluster is deployed without adjusting these values, the addons may continuously get killed because they keep hitting the limits.

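For a larger cluster the same block simply needs bigger figures; the values below are purely illustrative and not taken from the source, and would have to be sized to each addon's observed usage:

```yaml
# Illustrative only: enlarged limits for a big cluster; tune to measured usage
containers:
  - image: gcr.io/google_containers/heapster:v0.15.0
    name: heapster
    resources:
      limits:
        cpu: 500m
        memory: 1Gi
```
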
@@ -62,7 +62,7 @@ KUBE_API_VERSIONS env var controls the API versions that are supported in the cl

You can use the kube-version-change utility to convert config files between different API versions.

-```
+```console
$ hack/build-go.sh cmd/kube-version-change
$ _output/local/go/bin/kube-version-change -i myPod.v1beta3.yaml -o myPod.v1.yaml
```

@@ -44,7 +44,7 @@ The first thing to debug in your cluster is if your nodes are all registered cor

Run

-```
+```sh
kubectl get nodes
```

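If a node is missing from that list or shows an unexpected status, inspecting it directly is the usual next step; a minimal sketch, where the node name is a placeholder rather than anything from the source:

```sh
# Hypothetical follow-up: show conditions, capacity, and events for one node
kubectl describe nodes my-node-1
```
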
@@ -73,7 +73,7 @@ To test whether `etcd` is running correctly, you can try writing a value to a
test key. On your master VM (or somewhere with firewalls configured such that
you can talk to your cluster's etcd), try:

-```
+```sh
curl -fs -X PUT "http://${host}:${port}/v2/keys/_test"
```

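To confirm the write actually landed, and to clean up afterwards, the same etcd v2 keys API can be read and deleted; a sketch assuming the same `${host}` and `${port}` variables:

```sh
# Read the test key back, then remove it (etcd v2 keys API)
curl -fs "http://${host}:${port}/v2/keys/_test"
curl -fs -X DELETE "http://${host}:${port}/v2/keys/_test"
```
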
@@ -141,13 +141,13 @@ for ```${NODE_IP}``` on each machine.

Once you copy this into all three nodes, you should have a clustered etcd set up. You can validate with

-```
+```sh
etcdctl member list
```

and

-```
+```sh
etcdctl cluster-health
```

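If etcdctl is run from a machine that is not itself an etcd member, it has to be pointed at the cluster explicitly; a sketch assuming the etcd 2.x etcdctl, whose `--peers` flag and the endpoint shown here are illustrative and may differ by version:

```sh
# Illustrative: query a specific etcd member rather than localhost
etcdctl --peers http://10.0.0.1:4001 member list
etcdctl --peers http://10.0.0.1:4001 cluster-health
```
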
@@ -179,7 +179,7 @@ Once you have replicated etcd set up correctly, we will also install the apiserv

First you need to create the initial log file, so that Docker mounts a file instead of a directory:

-```
+```sh
touch /var/log/kube-apiserver.log
```

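The touched file is then bind-mounted into the apiserver container so writes land in the pre-created file; the pod-spec fragment below is only a sketch of that idea, and every name other than the log path (container name, image, volume name) is an assumption, not taken from the source:

```yaml
# Sketch: hostPath volume so the container writes to the pre-created log file
containers:
  - name: kube-apiserver
    image: gcr.io/google_containers/hyperkube:v1.0.1   # image and tag are illustrative
    volumeMounts:
      - name: logfile
        mountPath: /var/log/kube-apiserver.log
volumes:
  - name: logfile
    hostPath:
      path: /var/log/kube-apiserver.log
```
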
@@ -231,7 +231,7 @@ In the future, we expect to more tightly integrate this lease-locking into the s

First, create empty log files on each node, so that Docker will mount the files not make new directories:

-```
+```sh
touch /var/log/kube-scheduler.log
touch /var/log/kube-controller-manager.log
```

@@ -155,7 +155,7 @@ on that subnet, and is passed to docker's `--bridge` flag.

We start Docker with:

-```
+```sh
DOCKER_OPTS="--bridge=cbr0 --iptables=false --ip-masq=false"
```

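Those flags assume the `cbr0` bridge already exists and owns the node's pod subnet; a sketch of creating it by hand, where the `10.244.1.1/24` address stands in for an illustrative per-node CIDR and is not from the source:

```sh
# Illustrative: create the cbr0 bridge and assign it the node's pod subnet
brctl addbr cbr0
ip addr add 10.244.1.1/24 dev cbr0
ip link set dev cbr0 up
```
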
@@ -172,14 +172,14 @@ masquerade (aka SNAT - to make it seem as if packets came from the `Node`
itself) traffic that is bound for IPs outside the GCE project network
(10.0.0.0/8).

-```
+```sh
iptables -t nat -A POSTROUTING ! -d 10.0.0.0/8 -o eth0 -j MASQUERADE
```

Lastly we enable IP forwarding in the kernel (so the kernel will process
packets for bridged containers):

-```
+```sh
sysctl net.ipv4.ip_forward=1
```

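A sysctl set this way does not survive a reboot; a sketch of persisting it, where the drop-in file name is an assumption:

```sh
# Illustrative: persist IP forwarding across reboots
echo "net.ipv4.ip_forward=1" > /etc/sysctl.d/99-kubernetes.conf
sysctl --system
```
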
@@ -209,7 +209,7 @@ node, but will not affect any existing pods on the node. This is useful as a
preparatory step before a node reboot, etc. For example, to mark a node
unschedulable, run this command:

-```
+```sh
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": true}'
```

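Reverting the change once maintenance is done is the same command with the value flipped; this simply mirrors the syntax shown above:

```sh
# Re-enable scheduling on the node
kubectl replace nodes 10.1.2.3 --patch='{"apiVersion": "v1", "unschedulable": false}'
```
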
@@ -228,7 +228,7 @@ processes not in containers.

If you want to explicitly reserve resources for non-Pod processes, you can create a placeholder
pod. Use the following template:

-```
+```yaml
apiVersion: v1
kind: Pod
metadata:

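The hunk cuts the template off after `metadata:`; a complete illustrative placeholder pod, in which the pod name, image tag, and reservation sizes are assumptions rather than values from the source, might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-reserver          # illustrative name
spec:
  containers:
    - name: sleep-forever
      image: gcr.io/google_containers/pause:0.8.0   # image tag is illustrative
      resources:
        limits:
          cpu: 100m                # illustrative reservation
          memory: 100Mi
```
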
@@ -89,7 +89,7 @@ This means the resource must have a fully-qualified name (i.e. mycompany.org/shi

Kubectl supports creating, updating, and viewing quotas

-```
+```console
$ kubectl namespace myspace
$ cat <<EOF > quota.json
{

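The hunk truncates quota.json after the opening brace; a complete illustrative ResourceQuota, where the quota name and the specific hard limits are assumptions and not from the source, could look like:

```json
{
  "apiVersion": "v1",
  "kind": "ResourceQuota",
  "metadata": {
    "name": "quota"
  },
  "spec": {
    "hard": {
      "cpu": "20",
      "memory": "1Gi",
      "pods": "10"
    }
  }
}
```
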
@@ -45,7 +45,7 @@ The **salt-minion** service runs on the kubernetes-master node and each kubernet

Each salt-minion service is configured to interact with the **salt-master** service hosted on the kubernetes-master via the **master.conf** file [(except on GCE)](#standalone-salt-configuration-on-gce).

-```
+```console
[root@kubernetes-master] $ cat /etc/salt/minion.d/master.conf
master: kubernetes-master
```

@@ -66,7 +66,7 @@ All remaining sections that refer to master/minion setups should be ignored for

Security is not enabled on the salt-master, and the salt-master is configured to auto-accept incoming requests from minions. It is not recommended to use this security configuration in production environments without deeper study. (In some environments this isn't as bad as it might sound if the salt master port isn't externally accessible and you trust everyone on your network.)

-```
+```console
[root@kubernetes-master] $ cat /etc/salt/master.d/auto-accept.conf
open_mode: True
auto_accept: True

@@ -78,7 +78,7 @@ Each minion in the salt cluster has an associated configuration that instructs t

An example file is presented below using the Vagrant based environment.

-```
+```console
[root@kubernetes-master] $ cat /etc/salt/minion.d/grains.conf
grains:
  etcd_servers: $MASTER_IP

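The grains.conf above is cut off by the hunk; an illustrative fuller version for a master node is sketched below, where every key other than `etcd_servers` is an assumption about typical grains rather than content from the source:

```yaml
# Illustrative grains.conf for a master node; keys beyond etcd_servers are assumed
grains:
  etcd_servers: $MASTER_IP
  master_ip: $MASTER_IP
  roles:
    - kubernetes-master
```
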
@@ -93,7 +93,7 @@ account. To create additional API tokens for a service account, create a secret
of type `ServiceAccountToken` with an annotation referencing the service
account, and the controller will update it with a generated token:

-```
+```json
secret.json:
{
  "kind": "Secret",

@@ -105,14 +105,16 @@ secret.json:
  }
  "type": "kubernetes.io/service-account-token"
}
```

-$ kubectl create -f ./secret.json
-$ kubectl describe secret mysecretname
+```sh
+kubectl create -f ./secret.json
+kubectl describe secret mysecretname
+```

#### To delete/invalidate a service account token

-```
+```sh
kubectl delete secret mysecretname
```

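Since the two hunks above only show fragments of the referenced secret.json, a complete illustrative version is sketched below; the secret and service account names are placeholders, and the `kubernetes.io/service-account.name` annotation is the one the token controller reads:

```json
{
  "kind": "Secret",
  "apiVersion": "v1",
  "metadata": {
    "name": "mysecretname",
    "annotations": {
      "kubernetes.io/service-account.name": "myserviceaccount"
    }
  },
  "type": "kubernetes.io/service-account-token"
}
```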