Merge pull request #21634 from jayunit100/spark-README

Auto commit by PR queue bot
This commit is contained in:
k8s-merge-robot 2016-02-20 23:59:59 -08:00
commit 60c6edaee7


@@ -38,23 +38,25 @@ Following this example, you will create a functional [Apache
Spark](http://spark.apache.org/) cluster using Kubernetes and
[Docker](http://docker.io).
You will set up a Spark master service and a set of Spark workers using Spark's [standalone mode](http://spark.apache.org/docs/latest/spark-standalone.html).
For the impatient expert, jump straight to the [tl;dr](#tldr)
section.
### Sources
The Docker images are heavily based on https://github.com/mattf/docker-spark and are curated in https://github.com/kubernetes/application-images/tree/master/spark.
## Step Zero: Prerequisites
This example assumes:
- You have a Kubernetes cluster installed and running.
- You have installed the ```kubectl``` command line tool somewhere in your path.
- A spark-master service, once it spins up, will be automatically discoverable by your Kubernetes DNS implementation as 'spark-master'.
For details, you can look at the Dockerfiles in the Sources section.
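As a quick sanity check of these prerequisites (a sketch only; the exact output and the DNS addon's service name and namespace vary by cluster), you can verify that ```kubectl``` is on your path, can reach the cluster, and that a cluster DNS service exists:

```console
$ kubectl version
$ kubectl cluster-info
$ kubectl get services --namespace=kube-system
```

If the last command lists a DNS service (commonly named `kube-dns`), service names such as 'spark-master' should be resolvable from inside pods.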
## Step One: Start your Master service
@@ -191,7 +193,7 @@ Zeppelin needs the Master service to be running.
### Check to see if Zeppelin is running
```console
$ kubectl get pods -l component=zeppelin
NAME READY STATUS RESTARTS AGE
zeppelin-controller-ja09s 1/1 Running 0 53s
```
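To confirm the DNS-based discovery assumed in the prerequisites, one approach (a sketch; the pod name is taken from the output above, and it assumes the image in that pod ships an `nslookup` binary) is to resolve the Master service's name from inside a running pod:

```console
$ kubectl exec zeppelin-controller-ja09s -- nslookup spark-master
```

If the name does not resolve, check that the Master service was created and that your cluster's DNS addon is running.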