From 995a7aef293231659347d7aee0b44129d920f444 Mon Sep 17 00:00:00 2001 From: Tim Hockin Date: Mon, 20 Jul 2015 09:40:32 -0700 Subject: [PATCH] Collected markdown fixes around syntax. --- docs/admin/admission-controllers.md | 2 +- docs/admin/salt.md | 2 +- .../admission_control_resource_quota.md | 1 - docs/design/event_compression.md | 1 - docs/getting-started-guides/aws-coreos.md | 6 ++--- .../fedora/fedora_manual_config.md | 1 - .../logging-elasticsearch.md | 4 --- docs/getting-started-guides/scratch.md | 7 ----- docs/getting-started-guides/ubuntu.md | 11 -------- docs/user-guide/debugging-services.md | 1 - docs/user-guide/kubeconfig-file.md | 1 - docs/user-guide/persistent-volumes/README.md | 1 - examples/cassandra/README.md | 1 - examples/celery-rabbitmq/README.md | 2 +- examples/elasticsearch/README.md | 8 ------ examples/explorer/README.md | 2 +- examples/glusterfs/README.md | 12 ++++----- examples/k8petstore/dev/README | 4 +-- examples/meteor/README.md | 12 +++++---- examples/nfs/README.md | 2 +- examples/phabricator/README.md | 26 +++++++++---------- examples/rethinkdb/README.md | 2 +- examples/spark/README.md | 14 +++++----- 23 files changed, 43 insertions(+), 80 deletions(-) diff --git a/docs/admin/admission-controllers.md b/docs/admin/admission-controllers.md index 08ffa22e18d..7c74c5618a9 100644 --- a/docs/admin/admission-controllers.md +++ b/docs/admin/admission-controllers.md @@ -158,7 +158,7 @@ Yes. For Kubernetes 1.0, we strongly recommend running the following set of admission control plug-ins (order matters): -```shell +``` --admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota ``` diff --git a/docs/admin/salt.md b/docs/admin/salt.md index f91f2237732..39aa7c63c55 100644 --- a/docs/admin/salt.md +++ b/docs/admin/salt.md @@ -109,7 +109,7 @@ These keys may be leveraged by the Salt sls files to branch behavior. In addition, a cluster may be running a Debian based operating system or Red Hat based operating system (Centos, Fedora, RHEL, etc.). As a result, it's important to sometimes distinguish behavior based on operating system using if branches like the following. -``` +```jinja {% if grains['os_family'] == 'RedHat' %} // something specific to a RedHat environment (Centos, Fedora, RHEL) where you may use yum, systemd, etc. 
{% else %} diff --git a/docs/design/admission_control_resource_quota.md b/docs/design/admission_control_resource_quota.md index 1cc81771f65..c86577ac6b3 100644 --- a/docs/design/admission_control_resource_quota.md +++ b/docs/design/admission_control_resource_quota.md @@ -100,7 +100,6 @@ type ResourceQuotaList struct { // Items is a list of ResourceQuota objects Items []ResourceQuota `json:"items"` } - ``` ## AdmissionControl plugin: ResourceQuota diff --git a/docs/design/event_compression.md b/docs/design/event_compression.md index aea04e41f67..bfa2c5d60a3 100644 --- a/docs/design/event_compression.md +++ b/docs/design/event_compression.md @@ -103,7 +103,6 @@ Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 Thu, 12 Feb 2015 01:13:05 +0000 Thu, 12 Feb 2015 01:13:12 +0000 4 monitoring-heapster-controller-oh43e Pod failedScheduling {scheduler } Error scheduling: no minions available to schedule pods Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey BoundPod implicitly required container POD pulled {kubelet kubernetes-minion-4.c.saad-dev-vms.internal} Successfully pulled image "kubernetes/pause:latest" Thu, 12 Feb 2015 01:13:20 +0000 Thu, 12 Feb 2015 01:13:20 +0000 1 kibana-logging-controller-gziey Pod scheduled {scheduler } Successfully assigned kibana-logging-controller-gziey to kubernetes-minion-4.c.saad-dev-vms.internal - ``` This demonstrates what would have been 20 separate entries (indicating scheduling failure) collapsed/compressed down to 5 entries. diff --git a/docs/getting-started-guides/aws-coreos.md b/docs/getting-started-guides/aws-coreos.md index a1b8c13a4bf..ce1ef3fa135 100644 --- a/docs/getting-started-guides/aws-coreos.md +++ b/docs/getting-started-guides/aws-coreos.md @@ -117,7 +117,7 @@ Gather the public and private IPs for the master node: aws ec2 describe-instances --instance-id ``` -``` +```json { "Reservations": [ { @@ -131,7 +131,6 @@ aws ec2 describe-instances --instance-id }, "PublicIpAddress": "54.68.97.117", "PrivateIpAddress": "172.31.9.9", -... ``` #### Update the node.yaml cloud-config @@ -222,7 +221,7 @@ Gather the public IP address for the worker node. aws ec2 describe-instances --filters 'Name=private-ip-address,Values=' ``` -``` +```json { "Reservations": [ { @@ -235,7 +234,6 @@ aws ec2 describe-instances --filters 'Name=private-ip-address,Values=' "Name": "running" }, "PublicIpAddress": "54.68.97.117", -... ``` Visit the public IP address in your browser to view the running pod. diff --git a/docs/getting-started-guides/fedora/fedora_manual_config.md b/docs/getting-started-guides/fedora/fedora_manual_config.md index 7d0f538f185..ed9958116b1 100644 --- a/docs/getting-started-guides/fedora/fedora_manual_config.md +++ b/docs/getting-started-guides/fedora/fedora_manual_config.md @@ -165,7 +165,6 @@ $ kubectl create -f ./node.json $ kubectl get nodes NAME LABELS STATUS fed-node name=fed-node-label Unknown - ``` Please note that in the above, it only creates a representation for the node diff --git a/docs/getting-started-guides/logging-elasticsearch.md b/docs/getting-started-guides/logging-elasticsearch.md index 8fee27c8e09..8ae7f17b92e 100644 --- a/docs/getting-started-guides/logging-elasticsearch.md +++ b/docs/getting-started-guides/logging-elasticsearch.md @@ -67,7 +67,6 @@ NAME ZONE SIZE_GB TYPE STATUS kubernetes-master-pd us-central1-b 20 pd-ssd READY Created [https://www.googleapis.com/compute/v1/projects/kubernetes-satnam/regions/us-central1/addresses/kubernetes-master-ip]. 
+++ Logging using Fluentd to elasticsearch - ``` The node level Fluentd collector pods and the Elasticsearech pods used to ingest cluster logs and the pod for the Kibana @@ -86,7 +85,6 @@ kibana-logging-v1-bhpo8 1/1 Running 0 2h kube-dns-v3-7r1l9 3/3 Running 0 2h monitoring-heapster-v4-yl332 1/1 Running 1 2h monitoring-influx-grafana-v1-o79xf 2/2 Running 0 2h - ``` Here we see that for a four node cluster there is a `fluent-elasticsearch` pod running which gathers @@ -137,7 +135,6 @@ KubeUI is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/ Grafana is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana Heapster is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-heapster InfluxDB is running at https://146.148.94.154/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb - ``` Before accessing the logs ingested into Elasticsearch using a browser and the service proxy URL we need to find out @@ -204,7 +201,6 @@ $ curl --header "Authorization: Bearer JsUe2Z3cXqa17UQqQ8qWGGf4nOSLwSnp" --insec }, "tagline" : "You Know, for Search" } - ``` Note that you need the trailing slash at the end of the service proxy URL. Here is an example of a search: diff --git a/docs/getting-started-guides/scratch.md b/docs/getting-started-guides/scratch.md index f7c4bab5b0b..ba09cf95a42 100644 --- a/docs/getting-started-guides/scratch.md +++ b/docs/getting-started-guides/scratch.md @@ -661,13 +661,11 @@ Next, verify that kubelet has started a container for the apiserver: ```console $ sudo docker ps | grep apiserver: 5783290746d5 gcr.io/google_containers/kube-apiserver:e36bf367342b5a80d7467fd7611ad873 "/bin/sh -c '/usr/lo'" 10 seconds ago Up 9 seconds k8s_kube-apiserver.feb145e7_kube-apiserver-kubernetes-master_default_eaebc600cf80dae59902b44225f2fc0a_225a4695 - ``` Then try to connect to the apiserver: ```console - $ echo $(curl -s http://localhost:8080/healthz) ok $ curl -s http://localhost:8080/api @@ -677,7 +675,6 @@ $ curl -s http://localhost:8080/api "v1" ] } - ``` If you have selected the `--register-node=true` option for kubelets, they will now being self-registering with the apiserver. @@ -689,7 +686,6 @@ Otherwise, you will need to manually create node objects. Complete this template for the scheduler pod: ```json - { "kind": "Pod", "apiVersion": "v1", @@ -719,7 +715,6 @@ Complete this template for the scheduler pod: ] } } - ``` Optionally, you may want to mount `/var/log` as well and redirect output there. @@ -746,7 +741,6 @@ Flags to consider using with controller manager. Template for controller manager pod: ```json - { "kind": "Pod", "apiVersion": "v1", @@ -802,7 +796,6 @@ Template for controller manager pod: ] } } - ``` diff --git a/docs/getting-started-guides/ubuntu.md b/docs/getting-started-guides/ubuntu.md index 1443810de52..bd466d90304 100644 --- a/docs/getting-started-guides/ubuntu.md +++ b/docs/getting-started-guides/ubuntu.md @@ -97,8 +97,6 @@ export NUM_MINIONS=${NUM_MINIONS:-3} export SERVICE_CLUSTER_IP_RANGE=11.1.1.0/24 export FLANNEL_NET=172.16.0.0/16 - - ``` The first variable `nodes` defines all your cluster nodes, MASTER node comes first and separated with blank space like ` ` @@ -124,13 +122,11 @@ After all the above variable being set correctly. We can use below command in cl The scripts is automatically scp binaries and config files to all the machines and start the k8s service on them. The only thing you need to do is to type the sudo password when promoted. 
The current machine name is shown below like. So you will not type in the wrong password. ```console - Deploying minion on machine 10.10.103.223 ... [sudo] password to copy files and start minion: - ``` If all things goes right, you will see the below message from console @@ -143,7 +139,6 @@ You can also use `kubectl` command to see if the newly created k8s is working co For example, use `$ kubectl get nodes` to see if all your nodes are in ready status. It may take some time for the nodes ready to use like below. ```console - NAME LABELS STATUS 10.10.103.162 kubernetes.io/hostname=10.10.103.162 Ready @@ -151,8 +146,6 @@ NAME LABELS STATUS 10.10.103.223 kubernetes.io/hostname=10.10.103.223 Ready 10.10.103.250 kubernetes.io/hostname=10.10.103.250 Ready - - ``` Also you can run kubernetes [guest-example](../../examples/guestbook/) to build a redis backend cluster on the k8s. @@ -165,7 +158,6 @@ After the previous parts, you will have a working k8s cluster, this part will te The configuration of dns is configured in cluster/ubuntu/config-default.sh. ```sh - ENABLE_CLUSTER_DNS=true DNS_SERVER_IP="192.168.3.10" @@ -173,7 +165,6 @@ DNS_SERVER_IP="192.168.3.10" DNS_DOMAIN="cluster.local" DNS_REPLICAS=1 - ``` The `DNS_SERVER_IP` is defining the ip of dns server which must be in the service_cluster_ip_range. @@ -183,11 +174,9 @@ The `DNS_REPLICAS` describes how many dns pod running in the cluster. After all the above variable have been set. Just type the below command ```console - $ cd cluster/ubuntu $ KUBERNETES_PROVIDER=ubuntu ./deployAddons.sh - ``` After some time, you can use `$ kubectl get pods` to see the dns pod is running in the cluster. Done! diff --git a/docs/user-guide/debugging-services.md b/docs/user-guide/debugging-services.md index 2151719bce9..ecfac13f1d7 100644 --- a/docs/user-guide/debugging-services.md +++ b/docs/user-guide/debugging-services.md @@ -195,7 +195,6 @@ or: ```console u@pod$ echo $HOSTNAMES_SERVICE_HOST - ``` So the first thing to check is whether that `Service` actually exists: diff --git a/docs/user-guide/kubeconfig-file.md b/docs/user-guide/kubeconfig-file.md index b9e7c69f3a0..9a1bb6f3078 100644 --- a/docs/user-guide/kubeconfig-file.md +++ b/docs/user-guide/kubeconfig-file.md @@ -151,7 +151,6 @@ users: myself: username: admin password: secret - ``` and a kubeconfig file that looks like this diff --git a/docs/user-guide/persistent-volumes/README.md b/docs/user-guide/persistent-volumes/README.md index c09a8b59aa8..572eea99496 100644 --- a/docs/user-guide/persistent-volumes/README.md +++ b/docs/user-guide/persistent-volumes/README.md @@ -75,7 +75,6 @@ They just know they can rely on their claim to storage and can manage its lifecy Claims must be created in the same namespace as the pods that use them. ```console - $ kubectl create -f docs/user-guide/persistent-volumes/claims/claim-01.yaml $ kubectl get pvc diff --git a/examples/cassandra/README.md b/examples/cassandra/README.md index 16b8af47332..16fa040fa29 100644 --- a/examples/cassandra/README.md +++ b/examples/cassandra/README.md @@ -287,7 +287,6 @@ UN 10.244.3.3 51.28 KB 256 51.0% dafe3154-1d67-42e1-ac1d-78e For those of you who are impatient, here is the summary of the commands we ran in this tutorial. 
```sh - # create a service to track all cassandra nodes kubectl create -f examples/cassandra/cassandra-service.yaml diff --git a/examples/celery-rabbitmq/README.md b/examples/celery-rabbitmq/README.md index 3f43ffcf25d..f1e31719574 100644 --- a/examples/celery-rabbitmq/README.md +++ b/examples/celery-rabbitmq/README.md @@ -83,7 +83,7 @@ spec: To start the service, run: -```shell +```sh $ kubectl create -f examples/celery-rabbitmq/rabbitmq-service.yaml ``` diff --git a/examples/elasticsearch/README.md b/examples/elasticsearch/README.md index da697914b14..4accb3a1b9c 100644 --- a/examples/elasticsearch/README.md +++ b/examples/elasticsearch/README.md @@ -111,7 +111,6 @@ metadata: namespace: NAMESPACE data: token: "TOKEN" - ``` Replace `NAMESPACE` with the actual namespace to be used and `TOKEN` with the basic64 encoded @@ -126,7 +125,6 @@ $ kubectl config view ... $ echo yGlDcMvSZPX4PyP0Q5bHgAYgi1iyEHv2 | base64 eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK= - ``` resulting in the file: @@ -139,7 +137,6 @@ metadata: namespace: mytunes data: token: "eUdsRGNNdlNaUFg0UHlQMFE1YkhnQVlnaTFpeUVIdjIK=" - ``` which can be used to create the secret in your namespace: @@ -147,7 +144,6 @@ which can be used to create the secret in your namespace: ```console kubectl create -f examples/elasticsearch/apiserver-secret.yaml --namespace=mytunes secrets/apiserver-secret - ``` Now you are ready to create the replication controller which will then create the pods: @@ -155,7 +151,6 @@ Now you are ready to create the replication controller which will then create th ```console $ kubectl create -f examples/elasticsearch/music-rc.yaml --namespace=mytunes replicationcontrollers/music-db - ``` It's also useful to have a [service](../../docs/user-guide/services.md) with an load balancer for accessing the Elasticsearch @@ -184,7 +179,6 @@ Let's create the service with an external load balancer: ```console $ kubectl create -f examples/elasticsearch/music-service.yaml --namespace=mytunes services/music-server - ``` Let's see what we've got: @@ -301,7 +295,6 @@ music-db-u1ru3 1/1 Running 0 38s music-db-wnss2 1/1 Running 0 1m music-db-x7j2w 1/1 Running 0 1m music-db-zjqyv 1/1 Running 0 1m - ``` Let's check to make sure that these 10 nodes are part of the same Elasticsearch cluster: @@ -359,7 +352,6 @@ $ curl 104.197.12.157:9200/_nodes?pretty=true | grep name "name" : "mytunes-db" "vm_name" : "OpenJDK 64-Bit Server VM", "name" : "eth0", - ``` diff --git a/examples/explorer/README.md b/examples/explorer/README.md index b3ccabd23d1..432cc27f867 100644 --- a/examples/explorer/README.md +++ b/examples/explorer/README.md @@ -46,7 +46,7 @@ Currently, you can look at: Example from command line (the DNS lookup looks better from a web browser): -``` +```console $ kubectl create -f examples/explorer/pod.json $ kubectl proxy & Starting to serve on localhost:8001 diff --git a/examples/glusterfs/README.md b/examples/glusterfs/README.md index 9a908bc960d..f1cc96cfa46 100644 --- a/examples/glusterfs/README.md +++ b/examples/glusterfs/README.md @@ -63,13 +63,13 @@ The "IP" field should be filled with the address of a node in the Glusterfs serv Create the endpoints, -```shell +```sh $ kubectl create -f examples/glusterfs/glusterfs-endpoints.json ``` You can verify that the endpoints are successfully created by running -```shell +```sh $ kubectl get endpoints NAME ENDPOINTS glusterfs-cluster 10.240.106.152:1,10.240.79.157:1 @@ -79,7 +79,7 @@ glusterfs-cluster 10.240.106.152:1,10.240.79.157:1 The following *volume* spec in 
[glusterfs-pod.json](glusterfs-pod.json) illustrates a sample configuration. -```js +```json { "name": "glusterfsvol", "glusterfs": { @@ -98,13 +98,13 @@ The parameters are explained as the followings. Create a pod that has a container using Glusterfs volume, -```shell +```sh $ kubectl create -f examples/glusterfs/glusterfs-pod.json ``` You can verify that the pod is running: -```shell +```sh $ kubectl get pods NAME READY STATUS RESTARTS AGE glusterfs 1/1 Running 0 3m @@ -115,7 +115,7 @@ $ kubectl get pods glusterfs -t '{{.status.hostIP}}{{"\n"}}' You may ssh to the host (the hostIP) and run 'mount' to see if the Glusterfs volume is mounted, -```shell +```sh $ mount | grep kube_vol 10.240.106.152:kube_vol on /var/lib/kubelet/pods/f164a571-fa68-11e4-ad5c-42010af019b7/volumes/kubernetes.io~glusterfs/glusterfsvol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) ``` diff --git a/examples/k8petstore/dev/README b/examples/k8petstore/dev/README index 3b495ea7034..25c8d7b143e 100644 --- a/examples/k8petstore/dev/README +++ b/examples/k8petstore/dev/README @@ -6,13 +6,13 @@ Now start a local redis instance -``` +```sh redis-server ``` And run the app -``` +```sh export GOPATH=~/Development/k8hacking/k8petstore/web-server/ cd $GOPATH/src/main/ ## Now, you're in the local dir to run the app. Go get its depenedencies. diff --git a/examples/meteor/README.md b/examples/meteor/README.md index 2b1849d2a46..b56aec8fe3b 100644 --- a/examples/meteor/README.md +++ b/examples/meteor/README.md @@ -56,14 +56,14 @@ billing](https://developers.google.com/console/help/new/#billing). Authenticate with gcloud and set the gcloud default project name to point to the project you want to use for your Kubernetes cluster: -```shell +```sh gcloud auth login gcloud config set project ``` Next, start up a Kubernetes cluster: -```shell +```sh wget -q -O - https://get.k8s.io | bash ``` @@ -193,7 +193,7 @@ image is based on the Node.js official image. It then installs Meteor and copies in your apps' code. The last line specifies what happens when your app container is run. -``` +```sh ENTRYPOINT MONGO_URL=mongodb://$MONGO_SERVICE_HOST:$MONGO_SERVICE_PORT /usr/local/bin/node main.js ``` @@ -216,7 +216,8 @@ As mentioned above, the mongo container uses a volume which is mapped to a persistent disk by Kubernetes. In [`mongo-pod.json`](mongo-pod.json) the container section specifies the volume: -``` +```json +{ "volumeMounts": [ { "name": "mongo-disk", @@ -227,7 +228,8 @@ section specifies the volume: The name `mongo-disk` refers to the volume specified outside the container section: -``` +```json +{ "volumes": [ { "name": "mongo-disk", diff --git a/examples/nfs/README.md b/examples/nfs/README.md index 343b58c0c69..6572215b06d 100644 --- a/examples/nfs/README.md +++ b/examples/nfs/README.md @@ -45,7 +45,7 @@ into another one. The nfs server pod creates a privileged container, so if you are using a Salt based KUBERNETES_PROVIDER (**gce**, **vagrant**, **aws**), you have to enable the ability to create privileged containers by API. 
-```shell +```sh #At the root of Kubernetes source code $ vi cluster/saltbase/pillar/privilege.sls diff --git a/examples/phabricator/README.md b/examples/phabricator/README.md index 8ca60330a27..3068566c7bd 100644 --- a/examples/phabricator/README.md +++ b/examples/phabricator/README.md @@ -41,7 +41,7 @@ The example combines a web frontend and an external service that provides MySQL This example assumes that you have a basic understanding of kubernetes [services](../../docs/user-guide/services.md) and that you have forked the repository and [turned up a Kubernetes cluster](../../docs/getting-started-guides/): -```shell +```sh $ cd kubernetes $ hack/dev-build-and-up.sh ``` @@ -56,7 +56,7 @@ In the remaining part of this example we will assume that your instance is named To start Phabricator server use the file [`examples/phabricator/phabricator-controller.json`](phabricator-controller.json) which describes a [replication controller](../../docs/user-guide/replication-controller.md) with a single [pod](../../docs/user-guide/pods.md) running an Apache server with Phabricator PHP source: -```js +```json { "kind": "ReplicationController", "apiVersion": "v1", @@ -98,13 +98,13 @@ To start Phabricator server use the file [`examples/phabricator/phabricator-cont Create the phabricator pod in your Kubernetes cluster by running: -```shell +```sh $ kubectl create -f examples/phabricator/phabricator-controller.json ``` Once that's up you can list the pods in the cluster, to verify that it is running: -```shell +```sh kubectl get pods ``` @@ -117,7 +117,7 @@ phabricator-controller-9vy68 1/1 Running 0 1m If you ssh to that machine, you can run `docker ps` to see the actual pod: -```shell +```sh me@workstation$ gcloud compute ssh --zone us-central1-b kubernetes-minion-2 $ sudo docker ps @@ -148,7 +148,7 @@ gcloud sql instances patch phabricator-db --authorized-networks 130.211.141.151 To automate this process and make sure that a proper host is authorized even if pod is rescheduled to a new machine we need a separate pod that periodically lists pods and authorizes hosts. Use the file [`examples/phabricator/authenticator-controller.json`](authenticator-controller.json): -```js +```json { "kind": "ReplicationController", "apiVersion": "v1", @@ -184,7 +184,7 @@ To automate this process and make sure that a proper host is authorized even if To create the pod run: -```shell +```sh $ kubectl create -f examples/phabricator/authenticator-controller.json ``` @@ -195,7 +195,7 @@ A Kubernetes 'service' is a named load balancer that proxies traffic to one or m The pod that you created in Step One has the label `name=phabricator`. The selector field of the service determines which pods will receive the traffic sent to the service. Since we are setting up a service for an external application we also need to request external static IP address (otherwise it will be assigned dynamically): -```shell +```sh $ gcloud compute addresses create phabricator --region us-central1 Created [https://www.googleapis.com/compute/v1/projects/myproject/regions/us-central1/addresses/phabricator]. 
NAME REGION ADDRESS STATUS @@ -204,7 +204,7 @@ phabricator us-central1 107.178.210.6 RESERVED Use the file [`examples/phabricator/phabricator-service.json`](phabricator-service.json): -```js +```json { "kind": "Service", "apiVersion": "v1", @@ -228,14 +228,14 @@ Use the file [`examples/phabricator/phabricator-service.json`](phabricator-servi To create the service run: -```shell +```sh $ kubectl create -f examples/phabricator/phabricator-service.json phabricator ``` To play with the service itself, find the external IP of the load balancer: -```shell +```sh $ kubectl get services phabricator -o template --template='{{(index .status.loadBalancer.ingress 0).ip}}{{"\n"}}' ``` @@ -243,7 +243,7 @@ and then visit port 80 of that IP address. **Note**: You may need to open the firewall for port 80 using the [console][cloud-console] or the `gcloud` tool. The following command will allow traffic from any source to instances tagged `kubernetes-minion`: -```shell +```sh $ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --target-tags kubernetes-minion ``` @@ -251,7 +251,7 @@ $ gcloud compute firewall-rules create phabricator-node-80 --allow=tcp:80 --targ To turn down a Kubernetes cluster: -```shell +```sh $ cluster/kube-down.sh ``` diff --git a/examples/rethinkdb/README.md b/examples/rethinkdb/README.md index a493d1ae4d3..b39c58a0a18 100644 --- a/examples/rethinkdb/README.md +++ b/examples/rethinkdb/README.md @@ -134,7 +134,7 @@ The external load balancer allows us to access the service from outside via an e Note that you may need to create a firewall rule to allow the traffic, assuming you are using Google Compute Engine: -``` +```console $ gcloud compute firewall-rules create rethinkdb --allow=tcp:8080 ``` diff --git a/examples/spark/README.md b/examples/spark/README.md index 74ac2844a89..42944e717f1 100644 --- a/examples/spark/README.md +++ b/examples/spark/README.md @@ -63,7 +63,7 @@ cluster. Use the [`examples/spark/spark-master.json`](spark-master.json) file to create a [pod](../../docs/user-guide/pods.md) running the Master service. -```shell +```sh $ kubectl create -f examples/spark/spark-master.json ``` @@ -71,13 +71,13 @@ Then, use the [`examples/spark/spark-master-service.json`](spark-master-service. create a logical service endpoint that Spark workers can use to access the Master pod. -```shell +```sh $ kubectl create -f examples/spark/spark-master-service.json ``` ### Check to see if Master is running and accessible -```shell +```sh $ kubectl get pods NAME READY STATUS RESTARTS AGE [...] @@ -87,7 +87,7 @@ spark-master 1/1 Running 0 25 Check logs to see the status of the master. -```shell +```sh $ kubectl logs spark-master starting org.apache.spark.deploy.master.Master, logging to /opt/spark-1.4.0-bin-hadoop2.6/sbin/../logs/spark--org.apache.spark.deploy.master.Master-1-spark-master.out @@ -122,13 +122,13 @@ The Spark workers need the Master service to be running. Use the [`examples/spark/spark-worker-controller.json`](spark-worker-controller.json) file to create a [replication controller](../../docs/user-guide/replication-controller.md) that manages the worker pods. -```shell +```sh $ kubectl create -f examples/spark/spark-worker-controller.json ``` ### Check to see if the workers are running -```shell +```sh $ kubectl get pods NAME READY STATUS RESTARTS AGE [...] @@ -148,7 +148,7 @@ $ kubectl logs spark-master Get the address and port of the Master service. 
-```shell +```sh $ kubectl get service spark-master NAME LABELS SELECTOR IP(S) PORT(S) spark-master name=spark-master name=spark-master 10.0.204.187 7077/TCP