Improve markdown highlighting in coreos getting started guides.

Alex Robinson 2015-07-18 19:20:04 -07:00
parent 261e347e60
commit 11d0fd1515
2 changed files with 27 additions and 27 deletions


@@ -56,7 +56,7 @@ In this guide I will demonstrate how to deploy a Kubernetes cluster to Azure clo
 To get started, you need to checkout the code:
-```
+```sh
 git clone https://github.com/GoogleCloudPlatform/kubernetes
 cd kubernetes/docs/getting-started-guides/coreos/azure/
 ```
@@ -65,13 +65,13 @@ You will need to have [Node.js installed](http://nodejs.org/download/) on you ma
 First, you need to install some of the dependencies with
-```
+```sh
 npm install
 ```
 Now, all you need to do is:
-```
+```sh
 ./azure-login.js -u <your_username>
 ./create-kubernetes-cluster.js
 ```
@@ -82,7 +82,7 @@ This script will provision a cluster suitable for production use, where there is
 Once the creation of Azure VMs has finished, you should see the following:
-```
+```console
 ...
 azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_1c1496016083b4_ssh_conf <hostname>`
 azure_wrapper/info: The hosts in this deployment are:
@@ -92,7 +92,7 @@ azure_wrapper/info: Saved state into `./output/kube_1c1496016083b4_deployment.ym
 Let's login to the master node like so:
-```
+```sh
 ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
 ```
@@ -100,7 +100,7 @@ ssh -F ./output/kube_1c1496016083b4_ssh_conf kube-00
 Check there are 2 nodes in the cluster:
-```
+```console
 core@kube-00 ~ $ kubectl get nodes
 NAME      LABELS                   STATUS
 kube-01   environment=production   Ready
@@ -111,7 +111,7 @@ kube-02 environment=production Ready
 Let's follow the Guestbook example now:
-```
+```sh
 cd guestbook-example
 kubectl create -f examples/guestbook/redis-master-controller.yaml
 kubectl create -f examples/guestbook/redis-master-service.yaml
@@ -123,7 +123,7 @@ kubectl create -f examples/guestbook/frontend-service.yaml
 You need to wait for the pods to get deployed, run the following and wait for `STATUS` to change from `Unknown`, through `Pending` to `Running`.
-```
+```sh
 kubectl get pods --watch
 ```
@@ -131,7 +131,7 @@ kubectl get pods --watch
 Eventually you should see:
-```
+```console
 NAME             READY   STATUS    RESTARTS   AGE
 frontend-8anh8   1/1     Running   0          1m
 frontend-8pq5r   1/1     Running   0          1m
@@ -149,14 +149,14 @@ You will need to open another terminal window on your machine and go to the same
 First, lets set the size of new VMs:
-```
+```sh
 export AZ_VM_SIZE=Large
 ```
 Now, run scale script with state file of the previous deployment and number of nodes to add:
-```
+```console
-./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
+core@kube-00 ~ $ ./scale-kubernetes-cluster.js ./output/kube_1c1496016083b4_deployment.yml 2
 ...
 azure_wrapper/info: Saved SSH config, you can use it like so: `ssh -F ./output/kube_8f984af944f572_ssh_conf <hostname>`
 azure_wrapper/info: The hosts in this deployment are:
@@ -175,7 +175,7 @@ azure_wrapper/info: Saved state into `./output/kube_8f984af944f572_deployment.ym
 Back on `kube-00`:
-```
+```console
 core@kube-00 ~ $ kubectl get nodes
 NAME      LABELS                   STATUS
 kube-01   environment=production   Ready
@@ -188,7 +188,7 @@ You can see that two more nodes joined happily. Let's scale the number of Guestb
 First, double-check how many replication controllers there are:
-```
+```console
 core@kube-00 ~ $ kubectl get rc
 CONTROLLER    CONTAINER(S)   IMAGE(S)                                    SELECTOR        REPLICAS
 frontend      php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend   3
@@ -198,7 +198,7 @@ redis-slave slave kubernetes/redis-slave:v2 name=r
 As there are 4 nodes, let's scale proportionally:
-```
+```console
 core@kube-00 ~ $ kubectl scale --replicas=4 rc redis-slave
 scaled
 core@kube-00 ~ $ kubectl scale --replicas=4 rc frontend
@@ -207,7 +207,7 @@ scaled
 Check what you have now:
-```
+```console
 core@kube-00 ~ $ kubectl get rc
 CONTROLLER    CONTAINER(S)   IMAGE(S)                                    SELECTOR        REPLICAS
 frontend      php-redis      kubernetes/example-guestbook-php-redis:v2   name=frontend   4
@@ -217,7 +217,7 @@ redis-slave slave kubernetes/redis-slave:v2 name=r
 You now will have more instances of front-end Guestbook apps and Redis slaves; and, if you look up all pods labeled `name=frontend`, you should see one running on each node.
-```
+```console
 core@kube-00 ~/guestbook-example $ kubectl get pods -l name=frontend
 NAME             READY   STATUS    RESTARTS   AGE
 frontend-8anh8   1/1     Running   0          3m
@@ -244,7 +244,7 @@ You should probably try deploy other [example apps](../../../../examples/) or wr
 If you don't wish care about the Azure bill, you can tear down the cluster. It's easy to redeploy it, as you can see.
-```
+```sh
 ./destroy-cluster.js ./output/kube_8f984af944f572_deployment.yml
 ```


@@ -52,14 +52,14 @@ Use the [master.yaml](cloud-configs/master.yaml) and [node.yaml](cloud-configs/n
 #### Provision the Master
-```
+```sh
 aws ec2 create-security-group --group-name kubernetes --description "Kubernetes Security Group"
 aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 22 --cidr 0.0.0.0/0
 aws ec2 authorize-security-group-ingress --group-name kubernetes --protocol tcp --port 80 --cidr 0.0.0.0/0
 aws ec2 authorize-security-group-ingress --group-name kubernetes --source-security-group-name kubernetes
 ```
-```
+```sh
 aws ec2 run-instances \
 --image-id <ami_image_id> \
 --key-name <keypair> \
@@ -71,7 +71,7 @@ aws ec2 run-instances \
 #### Capture the private IP address
-```
+```sh
 aws ec2 describe-instances --instance-id <master-instance-id>
 ```
@@ -81,7 +81,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
 #### Provision worker nodes
-```
+```sh
 aws ec2 run-instances \
 --count 1 \
 --image-id <ami_image_id> \
@@ -98,7 +98,7 @@ aws ec2 run-instances \
 #### Provision the Master
-```
+```sh
 gcloud compute instances create master \
 --image-project coreos-cloud \
 --image <gce_image_id> \
@@ -110,7 +110,7 @@ gcloud compute instances create master \
 #### Capture the private IP address
-```
+```sh
 gcloud compute instances list
 ```
@@ -120,7 +120,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
 #### Provision worker nodes
-```
+```sh
 gcloud compute instances create node1 \
 --image-project coreos-cloud \
 --image <gce_image_id> \
@@ -140,7 +140,7 @@ run `gcloud compute ssh master --ssh-flag="-R 8080:127.0.0.1:8080"`.
 #### Create the master config-drive
-```
+```sh
 mkdir -p /tmp/new-drive/openstack/latest/
 cp master.yaml /tmp/new-drive/openstack/latest/user_data
 hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o master.iso /tmp/new-drive
@@ -158,7 +158,7 @@ Edit `node.yaml` and replace all instances of `<master-private-ip>` with the pri
 #### Create the node config-drive
-```
+```sh
 mkdir -p /tmp/new-drive/openstack/latest/
 cp node.yaml /tmp/new-drive/openstack/latest/user_data
 hdiutil makehybrid -iso -joliet -joliet-volume-name "config-2" -joliet -o node.iso /tmp/new-drive